Using Page Tables, E.g., Page Table Structures, Etc. (epo) Patents (Class 711/E12.059)
  • Publication number: 20120084593
    Abstract: A method for providing applications with a current time value includes receiving a trap for an application to access a time memory page, creating, in a memory map corresponding to the application, a mapping between an address space of the application and the time memory page in response to the trap, accessing, based on the trap, a hardware clock to obtain a time value, and updating the time memory page with the time value. The application reads the time value from the time memory page using the memory map.
    Type: Application
    Filed: October 5, 2010
    Publication date: April 5, 2012
    Applicant: ORACLE INTERNATIONAL CORPORATION
    Inventors: David Dice, Timothy Paul Marsland
  • Publication number: 20120072658
    Abstract: Provided are a program, control method, and control device that can shorten start-up time. On a computer system equipped with a Memory Management Unit (MMU), page table entries in the MMU table are rewritten so that a page fault will occur at every page, for all the pages necessary for the operation of a software program. Upon start-up, the stored memory image is loaded into the RAM to be accessed in page units, in response to the page faults that occur. Loading of unnecessary pages is not executed, so the start-up time can be shortened by that amount. This program, control method, and control device can be applied to personal computers and to electronic devices equipped with built-in computers.
    Type: Application
    Filed: March 5, 2010
    Publication date: March 22, 2012
    Inventor: Kenichi Hashimoto
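The entry above (20120072658) describes demand loading of a stored memory image: every page is initially marked not-present so that its first access faults, and only faulted pages are loaded. A minimal Python sketch of that idea, using invented names and a simulated fault handler rather than a real MMU:
```python
# Sketch of fault-driven loading of a stored memory image: pages are loaded
# only when first touched, so unused pages never cost start-up time.
# All names here are illustrative, not taken from the patent.

PAGE_SIZE = 4096

class LazyImage:
    def __init__(self, image: bytes):
        self.image = image                      # stored memory image on "disk"
        self.ram = {}                           # page number -> bytes actually loaded
        self.loads = 0                          # count of page-fault loads

    def _fault(self, page_no: int) -> None:
        """Simulated page-fault handler: load one page from the image."""
        start = page_no * PAGE_SIZE
        self.ram[page_no] = self.image[start:start + PAGE_SIZE]
        self.loads += 1

    def read(self, addr: int) -> int:
        page_no, offset = divmod(addr, PAGE_SIZE)
        if page_no not in self.ram:             # "not present" -> page fault
            self._fault(page_no)
        return self.ram[page_no][offset]

if __name__ == "__main__":
    image = bytes(range(256)) * 256             # a 64 KiB image (16 pages)
    mem = LazyImage(image)
    mem.read(0)                                 # touches page 0 only
    mem.read(5 * PAGE_SIZE + 17)                # touches page 5 only
    print(f"pages loaded: {mem.loads} of {len(image) // PAGE_SIZE}")
```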
  • Publication number: 20120072694
    Abstract: A storage system and method is provided including physical storage devices controlled by storage control devices constituting a storage control layer operatively coupled to the physical storage devices and hosts. The storage control layer includes a first virtual layer interfacing with the hosts, operable to represent a logical address space available to said hosts and characterized by an Internal Virtual Address Space (IVAS); a second virtual layer characterized by a Physical Virtual Address Space (PVAS), interfacing with the physical storage devices, and operable to represent an available storage space; and an allocation module operatively coupled to the first and second virtual layers and providing mapping between IVAS and PVAS. Each address in PVAS is configured to have a corresponding address in IVAS. The allocation module facilitates management of IVAS and PVAS, enabling separation of a process of deleting a certain logical object into processes performing changes in IVAS and PVAS, respectively.
    Type: Application
    Filed: August 11, 2011
    Publication date: March 22, 2012
    Applicant: INFINIDAT LTD.
    Inventors: Yechiel YOCHAI, Leo CORRY, Haim KOPYLOVITZ, Ido BEN-TSION
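The IVAS/PVAS split in 20120072694 can be illustrated with a small sketch: a host-facing internal virtual address space maps onto a physical virtual address space, so deleting a logical object can be split into an immediate IVAS change and a separate PVAS change. The structures and names below are assumptions for illustration, not the patent's implementation.
```python
# Illustrative two-layer mapping: host addresses -> IVAS -> PVAS -> device.
# Deleting an object can then be split into an immediate IVAS unmap and a
# deferred PVAS reclaim. Structure and names are assumptions, not the patent's.

class AllocationModule:
    def __init__(self):
        self.ivas_to_pvas = {}      # first virtual layer -> second virtual layer
        self.pvas_to_phys = {}      # second virtual layer -> physical extent
        self.free_pvas = []         # PVAS addresses awaiting reclaim

    def allocate(self, ivas_addr: int, pvas_addr: int, phys_extent: tuple) -> None:
        self.ivas_to_pvas[ivas_addr] = pvas_addr
        self.pvas_to_phys[pvas_addr] = phys_extent

    def delete_ivas(self, ivas_addr: int) -> None:
        """Phase 1: remove the host-visible mapping only."""
        pvas_addr = self.ivas_to_pvas.pop(ivas_addr)
        self.free_pvas.append(pvas_addr)

    def reclaim_pvas(self) -> None:
        """Phase 2 (can run later): release the underlying storage."""
        while self.free_pvas:
            self.pvas_to_phys.pop(self.free_pvas.pop())

mod = AllocationModule()
mod.allocate(ivas_addr=0x1000, pvas_addr=0x20, phys_extent=("disk3", 4096))
mod.delete_ivas(0x1000)        # host no longer sees the object
mod.reclaim_pvas()             # storage freed in a separate step
print(mod.ivas_to_pvas, mod.pvas_to_phys)   # both empty
```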
  • Patent number: 8140820
    Abstract: A data processing apparatus has address translation circuitry which is responsive to an access request specifying a virtual address, to perform a multi-stage address translation process to produce, via at least one intermediate address, a physical address in memory corresponding to the virtual address. The address translation circuitry references a storage unit, with each entry of the storage unit storing address translation information for one or more virtual addresses. Each entry has a field indicating whether the address translation information is consolidated address translation information or partial address translation information. If when processing an access request, it is determined that the relevant entry in the storage unit provides consolidated address translation information, the address translation circuitry produces a physical address directly from the consolidated address translation information.
    Type: Grant
    Filed: May 21, 2008
    Date of Patent: March 20, 2012
    Assignee: ARM Limited
    Inventors: David Hennah Mansell, Richard Roy Grisenthwaite
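The consolidated-versus-partial distinction in patent 8140820 amounts to a per-entry flag: a consolidated entry yields the physical address in one step, while a partial entry still needs the remaining translation stage. A toy Python sketch of that lookup path, with made-up table contents:
```python
# Sketch of a translation cache whose entries are either "consolidated"
# (virtual -> physical directly) or "partial" (virtual -> intermediate only).
# Table contents and the second-stage function are made-up examples.

PAGE = 4096

class Entry:
    def __init__(self, frame: int, consolidated: bool):
        self.frame = frame                  # physical or intermediate frame number
        self.consolidated = consolidated

# toy second-stage translation: intermediate frame -> physical frame
stage2 = {7: 70, 8: 80}

tlb = {
    0x1: Entry(frame=42, consolidated=True),    # VA page 0x1 -> PA frame 42 directly
    0x2: Entry(frame=7, consolidated=False),    # VA page 0x2 -> intermediate frame 7
}

def translate(vaddr: int) -> int:
    vpage, offset = divmod(vaddr, PAGE)
    entry = tlb[vpage]
    if entry.consolidated:
        pframe = entry.frame                    # one-step translation
    else:
        pframe = stage2[entry.frame]            # finish the remaining stage
    return pframe * PAGE + offset

print(hex(translate(0x1 * PAGE + 0x10)))        # consolidated entry
print(hex(translate(0x2 * PAGE + 0x10)))        # partial entry, extra stage
```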
  • Patent number: 8140781
    Abstract: The invention relates generally to computer memory access. Embodiments of the invention provide a multi-level page-walk apparatus and method that enable I/O devices to execute multi-level page-walks with an out-of-order memory controller. In embodiments of the invention, the multi-level page-walk apparatus includes a demotion-based priority grant arbiter, a page-walk tracking queue, a page-walk completion queue, and a command packetizer.
    Type: Grant
    Filed: December 31, 2007
    Date of Patent: March 20, 2012
    Assignee: Intel Corporation
    Inventors: Chee Hak Teh, Arthur D Hunter
  • Publication number: 20120059973
    Abstract: Some embodiments of the present invention include a memory management unit (MMU) configured to, in response to a write access targeting a guest page mapping of a guest virtual page number (GVPN) to a guest physical page number (GPPN) within a guest page table, identify a shadow page mapping that associates the GVPN with a physical page number (PPN). The MMU is also configured to determine whether a traced write indication is associated with the shadow page mapping and, if so, record update information identifying the targeted guest page mapping. The update information is used to reestablish coherence between the guest page mapping and the shadow page mapping. The MMU is further configured to perform the write access.
    Type: Application
    Filed: November 15, 2011
    Publication date: March 8, 2012
    Applicant: VMWARE, INC.
    Inventors: Keith ADAMS, Sahil RIHAN
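A rough Python sketch of the traced-write bookkeeping described in 20120059973: a write that targets a guest page-table page whose shadow mapping carries a traced-write indication is recorded so the shadow mapping can be resynchronized later. Structures and names are illustrative only.
```python
# Sketch of "traced write" bookkeeping for shadow paging: a write that hits a
# traced guest page-table page is recorded so the shadow mapping can later be
# resynchronized. All structures and names are illustrative.

class ShadowMMU:
    def __init__(self):
        self.shadow = {}        # GVPN -> (PPN, traced flag)
        self.pending = []       # recorded updates awaiting resync

    def map(self, gvpn: int, ppn: int, traced: bool = False) -> None:
        self.shadow[gvpn] = (ppn, traced)

    def guest_pagetable_write(self, gvpn: int, new_gppn: int) -> None:
        """Guest updates its own page-table entry for GVPN."""
        ppn, traced = self.shadow.get(gvpn, (None, False))
        if traced:
            # remember which guest mapping changed; resync happens later
            self.pending.append((gvpn, new_gppn))
        # ... the write itself is then allowed to proceed ...

mmu = ShadowMMU()
mmu.map(gvpn=0x10, ppn=0x99, traced=True)
mmu.guest_pagetable_write(gvpn=0x10, new_gppn=0x55)
print(mmu.pending)     # one recorded update: the shadow entry for GVPN 0x10 needs resync
```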
  • Publication number: 20120060012
    Abstract: A virtual memory management unit can implement various techniques for managing paging space. The virtual memory management unit can monitor a number of unallocated large sized pages and can determine when the number of unallocated large sized pages drops below a page threshold. Unallocated contiguous smaller-sized pages can be aggregated to obtain unallocated larger-sized pages, which can then be allocated to processes as required to improve efficiency of disk I/O operations. Allocated smaller-sized pages can also be reorganized to obtain the unallocated contiguous smaller-sized pages that can then be aggregated to yield the larger-sized pages. Furthermore, content can also be compressed before being written to the paging space to reduce the number of pages that are to be allocated to processes. This can enable efficient management of the paging space without terminating processes.
    Type: Application
    Filed: September 3, 2010
    Publication date: March 8, 2012
    Applicant: International Business Machines Corporation
    Inventors: Bret R. Olszewski, Basu Vaidyanathan
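The aggregation step in 20120060012 can be sketched as coalescing aligned runs of free small pages into large pages once the free-large-page count falls below a threshold. The sizes and structures below are arbitrary illustrative choices:
```python
# Sketch of aggregating contiguous free small pages into large pages when the
# free-large-page count drops below a threshold. Sizes and names are examples.

SMALL_PER_LARGE = 16    # e.g. 16 x 4 KiB small pages -> one 64 KiB large page

def aggregate(free_small: set, free_large: set, threshold: int) -> None:
    """Coalesce aligned runs of free small pages into free large pages."""
    if len(free_large) >= threshold:
        return
    for base in sorted(free_small):
        if base % SMALL_PER_LARGE:
            continue                                   # not aligned to a large page
        run = set(range(base, base + SMALL_PER_LARGE))
        if run <= free_small:                          # the whole run is free
            free_small.difference_update(run)
            free_large.add(base // SMALL_PER_LARGE)
            if len(free_large) >= threshold:
                return

small = set(range(0, 16)) | {20, 21, 33}   # one full aligned run plus stragglers
large = set()
aggregate(small, large, threshold=1)
print(large, sorted(small))                # {0} [20, 21, 33]
```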
  • Publication number: 20120054407
    Abstract: Embodiments of the invention provide object-based tier management to improve the allocation of objects to different media of different speeds based on access characteristics such as access frequency. One embodiment is directed to a method of managing object-based data in an information system which includes an application server and a storage system. The method comprises receiving a write command including a first data to be written into a virtual volume; identifying an object to which the first data corresponds; checking if a second data corresponding to the object has been stored in the virtual volume; if the second data has been stored in a page of the virtual volume, checking if the page which stores the second data has a vacancy area; and if the page has a vacancy area, writing the first data in the page which stores the second data.
    Type: Application
    Filed: August 30, 2010
    Publication date: March 1, 2012
    Applicant: HITACHI, LTD.
    Inventors: Shinichi HAYASHI, Keiichi MATSUZAWA, Toshio OTANI
  • Publication number: 20120054466
    Abstract: A computer implemented method optimizes memory page sizes during runtime. A process is identified from a policy file. The policy file contains at least one policy based threshold. A resource usage profiler monitors the process during runtime. The resource usage profiler determines whether the process exceeds the set of stated desired policies from the at least one policy based threshold. If the process exceeds the set of stated desired policies from the set of policy based thresholds, a performance projection for the process is executed to determine whether the process would experience a performance benefit from a different page size. Responsive to determining that the process would experience the performance benefit from the different page size, the page size for the process is changed.
    Type: Application
    Filed: August 27, 2010
    Publication date: March 1, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Saravanan Devendran, Kiran Grover
  • Publication number: 20120054425
    Abstract: In one embodiment, a processor includes an address generation unit having a memory context logic to determine whether a memory context identifier associated with an address of a memory access request corresponds to an agent memory context identifier for the processor, and to handle the memory address request based on the determination. Other embodiments are described and claimed.
    Type: Application
    Filed: August 31, 2010
    Publication date: March 1, 2012
    Inventor: Ramon Matas
  • Publication number: 20120030406
    Abstract: A system and method is illustrated for comparing a target memory address and a local memory size using a hypervisor module that resides upon a compute blade, the comparison based upon a unit of digital information for the target memory address and an additional unit of digital information for the local memory size. Additionally, the system and method utilizes swapping of a local virtual memory page with a remote virtual memory page using a swapping module that resides on the hypervisor module, the swapping based upon the comparing of the target memory address and the local memory size. Further, the system and method is implemented to transmit the local virtual memory page to a memory blade using a transmission module that resides upon the compute blade.
    Type: Application
    Filed: June 29, 2009
    Publication date: February 2, 2012
    Inventors: Jichuan Chang, Kevin Lim, Partha Ranganathan
  • Publication number: 20120030407
    Abstract: A system and related method of operation for migrating the memory of a virtual machine from one NUMA node to another. Once the VM is migrated to a new node, migration of memory pages is performed while giving priority to the most utilized pages, so that access to these pages becomes local as soon as possible. Various heuristics are described to enable different implementations for different situations or scenarios.
    Type: Application
    Filed: October 11, 2011
    Publication date: February 2, 2012
    Applicant: VMWARE, INC.
    Inventors: Vivek PANDEY, Ole AGESEN, Alex GARTHWAITE, Carl WALDSPURGER, Rajesh VENKATASUBRAMANIAN
  • Publication number: 20120017031
    Abstract: A method for transferring guest physical memory from a source host to a destination host during live migration of a virtual machine (VM) involves creating a file on a shared datastore, the file on the shared datastore being accessible to both the source host and the destination host. Pages of the guest physical memory are transferred from the source host to the destination host over a network connection and pages of the guest physical memory are written to the file so that the destination host can retrieve the written guest physical pages from the file.
    Type: Application
    Filed: July 13, 2011
    Publication date: January 19, 2012
    Applicant: VMware, Inc.
    Inventors: Ali MASHTIZADEH, Gabriel TARASUK-LEVIN
  • Publication number: 20120017064
    Abstract: An information processing apparatus is disclosed which is connected to a network and which includes: an address translation section configured such that when a virtual address assigned to a virtual storage area is held in an address translation module and associated therein with network node information designating the location of a storage portion connected to the network and with a physical address in the storage portion, the address translation section translates the virtual address into the network node information and the physical address based on the address translation module; and an access communication section configured such that based on the network node information and the physical address acquired by the address translation section, the access communication section accesses one of a plurality of storage areas held by the storage portion connected to the network, the accessed storage area being designated by the physical address.
    Type: Application
    Filed: July 7, 2011
    Publication date: January 19, 2012
    Applicant: SONY CORPORATION
    Inventor: Yasuki Sasaki
  • Publication number: 20120017028
    Abstract: A mechanism for random cache line selection in virtualization systems is disclosed. A method includes maintaining a secondary data structure representing a plurality of memory pages, the secondary data structure indexed by a subset of each memory page, determining an index of a received new memory page by utilizing a subset of the new memory page that is a same size and at a same offset as the subset of each memory page, comparing the index of the new memory page with the indices of the secondary data structure for a match, utilizing a main data structure to perform a full page memory comparison with the new memory page if a match is found in the secondary data structure, and updating at least one of the size of the subset, the number of subsets, and the offsets of the subsets used to index the memory page.
    Type: Application
    Filed: July 13, 2010
    Publication date: January 19, 2012
    Inventor: Michael Tsirkin
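The two-level check in 20120017028 indexes pages by a small fixed-offset subset of their bytes and confirms any subset match with a full-page comparison. A minimal Python sketch, with arbitrary subset size and offset:
```python
# Sketch of candidate lookup by a small page subset, confirmed by a full-page
# compare only when the cheap index matches. Offsets/sizes are arbitrary choices.

SUBSET_OFFSET = 128     # where the sampled bytes start within the page
SUBSET_SIZE = 16        # how many bytes are sampled

def subset_key(page: bytes) -> bytes:
    return page[SUBSET_OFFSET:SUBSET_OFFSET + SUBSET_SIZE]

class PageIndex:
    def __init__(self):
        self.by_subset = {}     # secondary structure: subset -> list of page ids
        self.pages = {}         # main structure: page id -> full page data

    def insert(self, page_id: int, data: bytes) -> None:
        self.pages[page_id] = data
        self.by_subset.setdefault(subset_key(data), []).append(page_id)

    def find_duplicate(self, data: bytes):
        for candidate in self.by_subset.get(subset_key(data), []):
            if self.pages[candidate] == data:      # full comparison on candidates
                return candidate
        return None

idx = PageIndex()
idx.insert(1, bytes(4096))
idx.insert(2, b"\x01" * 4096)
print(idx.find_duplicate(bytes(4096)))     # 1  (subset hit, full compare confirms)
print(idx.find_duplicate(b"\x02" * 4096))  # None
```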
  • Publication number: 20120017027
    Abstract: Page data of a virtual machine is represented for efficient save and restore operations. One form of representation applies to each page with an easily identifiable pattern. The page is described, saved, and restored in terms of metadata reflective of the pattern rather than a complete page of data reflecting the pattern. During a save or restore operation, however, the metadata of the page is represented, but not the page data. Another form of representation applies to each page sharing a canonical instance of a complex pattern that is instantiated in memory during execution, and explicitly saved and restored. Each page sharing the canonical page is saved and restored as a metadata reference, without the need to actually save redundant copies of the page data.
    Type: Application
    Filed: July 13, 2010
    Publication date: January 19, 2012
    Applicant: VMWARE, INC.
    Inventors: Yury BASKAKOV, Alexander Thomas GARTHWAITE, Jesse POOL, Carl A. WALDSPURGER, Rajesh VENKATASUBRAMANIAN, Ishan BANERJEE
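The first form of representation in 20120017027 can be sketched as saving a pattern-filled page as a few bytes of metadata and regenerating it on restore. A toy Python version, handling only single-byte patterns for illustration:
```python
# Sketch of saving a patterned page as metadata (the repeating byte) instead of
# the full page, and regenerating it on restore. Purely illustrative.

PAGE_SIZE = 4096

def save_page(page: bytes):
    """Return either ('pattern', byte) metadata or ('data', page)."""
    if len(set(page)) == 1:                       # page is one repeated byte
        return ("pattern", page[0])
    return ("data", page)

def restore_page(record) -> bytes:
    kind, payload = record
    if kind == "pattern":
        return bytes([payload]) * PAGE_SIZE       # regenerate from metadata
    return payload

zero_page = bytes(PAGE_SIZE)
record = save_page(zero_page)
print(record)                                      # ('pattern', 0) -- a few bytes
assert restore_page(record) == zero_page
```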
  • Publication number: 20120017029
    Abstract: Example methods, apparatus, and articles of manufacture to share memory spaces for access by hardware and software in a virtual machine environment are disclosed. A disclosed example method involves enabling a sharing of a memory page of a source domain executing on a first virtual machine with a destination domain executing on a second virtual machine. The example method also involves mapping the memory page to an address space of the destination domain and adding an address translation entry for the memory page in a table. In addition, the example method involves sharing the memory page with a hardware device for direct memory access of the memory page by the hardware device.
    Type: Application
    Filed: July 16, 2010
    Publication date: January 19, 2012
    Inventors: Jose Renato G. Santos, Yoshio Turner
  • Publication number: 20120011341
    Abstract: What is provided is a load page table entry address function defined for a machine architecture of a computer system. In one embodiment, a machine instruction is obtained which contains an opcode indicating that a load page table entry address function is to be performed. The machine instruction contains an M field, a first field identifying a first general register, and a second field identifying a second general register. Based on the contents of the M field, an initial origin address of a hierarchy of address translation tables having at least one segment table is obtained. Based on the obtained initial origin address, dynamic address translation is performed until a page table entry is obtained. The page table entry address is saved in the identified first general register.
    Type: Application
    Filed: September 16, 2011
    Publication date: January 12, 2012
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Dan F. Greiner, Lisa C. Heller, Damian L. Osisek, Erwin Pfeffer, Timothy J. Slegel, Gustav E. Sittmann
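The walk behind 20120011341 descends a hierarchy of translation tables using successive index fields of the virtual address and stops at the page table entry, reporting that entry's location rather than the final translated data address. The Python sketch below invents its own field widths and table layout; it is not any real architecture's format.
```python
# Sketch of a top-down walk through a table hierarchy that stops at the page
# table entry and reports where that entry lives. Field widths and the table
# layout are invented for illustration; they are not the architected formats.

LEVELS = 3          # e.g. two upper tables plus a page table (toy 3-level split)
BITS_PER_LEVEL = 4  # each level indexed by 4 bits of the virtual page number
PAGE_BITS = 12

# tables[table_id][index] -> next table_id (inner levels) or frame (last level)
tables = {
    "root": {0: "mid0"},
    "mid0": {3: "pt0"},
    "pt0":  {5: 0x2A},
}

def load_pte_address(vaddr: int, origin: str):
    vpn = vaddr >> PAGE_BITS
    table = origin
    for level in range(LEVELS):
        shift = (LEVELS - 1 - level) * BITS_PER_LEVEL
        index = (vpn >> shift) & ((1 << BITS_PER_LEVEL) - 1)
        if level == LEVELS - 1:
            return (table, index)          # "address" of the page table entry
        table = tables[table][index]       # descend to the next table
    raise AssertionError("unreachable")

vaddr = (0 << 20) | (3 << 16) | (5 << 12) | 0x123
print(load_pte_address(vaddr, "root"))     # ('pt0', 5)
```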
  • Publication number: 20110320681
    Abstract: Memory management of processing systems running in a virtual computer environment and of processes running in an operating system environment includes identifying a usage pattern of a page in memory. The usage pattern is identified by tracking operations conducted with respect to the page. The memory management also includes designating the page as a candidate for sharing when the usage pattern reflects that a number of updates made to the page does not exceed a predefined threshold value. The candidate page is allocated to a first process or virtual machine. The memory management also includes sharing access to the candidate page with a second process or virtual machine, when content in the candidate page matches content of a page allocated for the second process or virtual machine, by mapping the second process or virtual machine to an address space of the candidate page.
    Type: Application
    Filed: June 28, 2010
    Publication date: December 29, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Christian Borntraeger, Christian Ehrhardt, Carsten Otte, Martin Schwidefsky, Ulrich Weigand
  • Publication number: 20110307651
    Abstract: A non-volatile memory data address translation scheme is described that utilizes a hierarchal address translation system that is stored in the non-volatile memory itself. Embodiments of the present invention utilize a hierarchal address data and translation system wherein the address translation data entries are stored in one or more data structures/tables in the hierarchy, one or more of which can be updated in-place multiple times without having to overwrite data. This hierarchal address translation data structure and multiple update of data entries in the individual tables/data structures allow the hierarchal address translation data structure to be efficiently stored in a non-volatile memory array without markedly inducing write fatigue or adversely affecting the lifetime of the part. The hierarchal address translation of embodiments of the present invention also allows for an address translation layer that does not have to be resident in system RAM for operation.
    Type: Application
    Filed: August 23, 2011
    Publication date: December 15, 2011
    Inventor: Wanmo WONG
  • Patent number: 8078827
    Abstract: A method for caching of page translations for virtual machines includes managing a number of virtual machines using a guest page table of a guest operating system, which provides a first translation from a guest-virtual memory address to a first guest-physical memory address or an invalid entry, and a host page table of a host operating system, which provides a second translation from the first guest-physical memory address to a host-physical memory address or an invalid entry, and managing a cache page table, wherein the cache page table selectively provides a third translation from the guest-virtual memory address to the host-physical memory address, a second guest-physical memory address or an invalid entry.
    Type: Grant
    Filed: July 5, 2007
    Date of Patent: December 13, 2011
    Assignee: International Business Machines Corporation
    Inventors: Volkmar Uhlig, Leendert van Doorn
  • Publication number: 20110296135
    Abstract: There is provided a computer-executed method of freeing memory. One exemplary method comprises receiving a message from a user process. The message may specify a virtual address for a memory segment. The virtual address may be mapped to the memory segment. The memory segment may comprise a physical page. The method may further comprise identifying the physical page based on the virtual address. Additionally, the method may comprise freeing the physical page without unmapping the memory segment.
    Type: Application
    Filed: May 27, 2010
    Publication date: December 1, 2011
    Inventors: Viral S. Shah, Sudhir R. Shetiya, Anoop Sharma, Lars B. Plum
  • Publication number: 20110296136
    Abstract: Two translation lookaside buffers may be provided for simpler operation in some embodiments. A hardware managed lookaside buffer may handle traditional operations. A software managed lookaside buffer may be particularly involved in locking particular translations. As a result, the software's job is made simpler since it has a relatively simpler, software managed translation lookaside buffer to manage for locking translations.
    Type: Application
    Filed: August 3, 2011
    Publication date: December 1, 2011
    Inventors: Dennis M. O'Connor, Stephen J. Strazdus
  • Publication number: 20110283048
    Abstract: This disclosure is related to systems and methods for a structured mapping system for a memory device, such as a solid state data storage device. In one example, a data storage device may include a multi-level address mapping system. The multi-level address mapping system may be implemented completely independent of a host computer and a host computer operating system. Also, the multi-level mapping system may be stored to allow each level, or subsets of each level, to be re-written independently of the other levels or the other subsets.
    Type: Application
    Filed: May 11, 2010
    Publication date: November 17, 2011
    Applicant: SEAGATE TECHNOLOGY LLC
    Inventors: Timothy R. Feldman, Brett A. Cook, Jonathan W. Haines, Wayne H. Vinson
  • Publication number: 20110283071
    Abstract: In a digital system with a processor coupled to a paged memory system, the memory system may be dynamically configured using a memory compaction manager in order to allow portions of the memory to be placed in a low power mode. As applications are executed by the processor, program instructions are copied from a non-volatile memory coupled to the processor into pages of the paged memory system under control of an operating system. Pages in the paged memory system that are not being used by the processor are periodically identified. The paged memory system is compacted by copying pages that are being used by the processor from a second region of the paged memory into a first region of the paged memory. The second region may be placed in a low power mode when it contains no pages that are being used by the processor.
    Type: Application
    Filed: June 15, 2010
    Publication date: November 17, 2011
    Inventors: Satoshi Yokoya, Philippe Gentric, Alain Michel Breton, Steven Charles Goss, Steven Richard Jahnke
  • Publication number: 20110283049
    Abstract: Embodiments of the invention are directed to optimizing the selection of memory blocks for garbage collection to maximize the amount of memory freed by garbage collection operations. The systems and methods disclosed herein provide for the efficient selection of optimal or near-optimal garbage collection candidate blocks, with the most optimal selection defined as block(s) with the most invalid pages. In one embodiment, a controller classifies memory blocks into various invalid block pools by the amount of invalid pages each block contains. When garbage collection is performed, the controller selects a block from a non-empty pool of blocks with the highest minimum amount of invalid pages. The pools facilitate the optimal or near-optimal selection of garbage collection candidate blocks in an efficient manner and the data structure of the pools can be implemented with bitmasks, which take minimal space in memory.
    Type: Application
    Filed: May 12, 2010
    Publication date: November 17, 2011
    Applicant: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventors: HO-FAN KANG, ALAN CHINGTAO KAN
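Patent application 20110283049's pooled selection can be sketched as classifying blocks by invalid-page count into pools and taking a garbage-collection candidate from the highest non-empty pool. The pool boundaries below are arbitrary examples:
```python
# Sketch of classifying blocks into pools by invalid-page count and selecting a
# garbage-collection candidate from the highest non-empty pool. Pool boundaries
# here are arbitrary examples.

POOL_FLOORS = [0, 32, 64, 96, 128]   # minimum invalid pages per pool

def pool_for(invalid_pages: int) -> int:
    """Index of the highest pool whose floor this block meets."""
    idx = 0
    for i, floor in enumerate(POOL_FLOORS):
        if invalid_pages >= floor:
            idx = i
    return idx

def pick_gc_candidate(blocks: dict):
    """blocks: block id -> number of invalid pages. Returns a block id or None."""
    pools = {i: [] for i in range(len(POOL_FLOORS))}
    for block_id, invalid in blocks.items():
        pools[pool_for(invalid)].append(block_id)
    for i in reversed(range(len(POOL_FLOORS))):       # highest pool first
        if pools[i]:
            return pools[i][0]
    return None

blocks = {"A": 10, "B": 70, "C": 130, "D": 65}
print(pick_gc_candidate(blocks))    # 'C' -- it sits in the >=128 invalid-page pool
```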
  • Publication number: 20110283040
    Abstract: An approach identifies an amount of high order bits used to store a memory address in a memory address field that is included in a memory. This approach calculates at least one minimum number of low order bits not used to store the address with the calculation being based on the identified amount of high order bits. The approach retrieves a data element from one of the identified minimum number of low order bits of the address field and also retrieves a second data element from one of the identified minimum number of low order bits of the address field.
    Type: Application
    Filed: May 13, 2010
    Publication date: November 17, 2011
    Applicant: International Business Machines Corporation
    Inventors: Sundeep Chadha, Cathy May, Naresh Nayar, Randal Craig Swanberg
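The low-order-bit reuse in 20110283040 relies on address alignment: bits that are always zero in an aligned address can carry extra data and are masked off before the address is used. A minimal Python illustration with an assumed 8-byte alignment:
```python
# Sketch of stashing flag bits in the low-order bits of an aligned address and
# masking them off before use. The alignment and flag meanings are examples.

ALIGNMENT = 8                       # addresses are 8-byte aligned
LOW_MASK = ALIGNMENT - 1            # the low 3 bits are always zero in the address

def pack(address: int, flags: int) -> int:
    assert address % ALIGNMENT == 0 and flags <= LOW_MASK
    return address | flags          # flags ride in the unused low bits

def unpack(field: int):
    return field & ~LOW_MASK, field & LOW_MASK

field = pack(0x1000, flags=0b101)
address, flags = unpack(field)
print(hex(field), hex(address), bin(flags))   # 0x1005 0x1000 0b101
```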
  • Publication number: 20110276741
    Abstract: For a virtual memory of a virtualized computer system in which a virtual page is mapped to a guest physical page which is backed by a machine page and in which a shadow page table entry directly maps the virtual page to the machine page, reverse mappings of guest physical pages are optimized by removing the reverse mappings of certain immutable guest physical pages. An immutable guest physical memory page is identified, and existing reverse mappings corresponding to the immutable guest physical page are removed. New reverse mappings corresponding to the identified immutable guest physical page are no longer added.
    Type: Application
    Filed: July 19, 2011
    Publication date: November 10, 2011
    Applicant: VMWARE, INC.
    Inventors: Pratap SUBRAHMANYAM, Garrett SMITH
  • Publication number: 20110276744
    Abstract: Described is using flash memory, RAM-based data structures and mechanisms to provide a flash store for caching data items (e.g., key-value pairs) in flash pages. A RAM-based index maps data items to flash pages, and a RAM-based write buffer maintains data items to be written to the flash store, e.g., when a full page can be written. A recycle mechanism makes used pages in the flash store available by destaging a data item to a hard disk or reinserting it into the write buffer, based on its access pattern. The flash store may be used in a data deduplication system, in which the data items comprise chunk-identifier, metadata pairs, in which each chunk-identifier corresponds to a hash of a chunk of data. The RAM and flash are accessed with the chunk-identifier (e.g., as a key) to determine whether a chunk is a new chunk or a duplicate.
    Type: Application
    Filed: May 5, 2010
    Publication date: November 10, 2011
    Applicant: Microsoft Corporation
    Inventors: Sudipta Sengupta, Biplob Kumar Debnath, Jin Li
  • Publication number: 20110271014
    Abstract: Illustrated is a system and method for identifying a memory page that is accessible via a common physical address, the common physical address simultaneously accessed by a hypervisor remapping the physical address to a machine address, and the physical address used as part of a DMA operation generated by an I/O device that is programmed by a VM. It also includes transmitting data associated with the memory page as part of a memory disaggregation regime, the memory disaggregation regime to include an allocation of an additional memory page, on a remote memory device, to which the data will be written. It further includes updating a P2M translation table associated with the hypervisor, and an IOMMU translation table associated with the I/O device, to reflect a mapping from the physical address to a machine address associated with the remote memory device and used to identify the additional memory page.
    Type: Application
    Filed: April 29, 2010
    Publication date: November 3, 2011
    Inventors: Yoshio Turner, Jose Renato Santos, Jichuan Chang
  • Patent number: 8046523
    Abstract: Provided are a flash memory management apparatus and method which divide blocks of a memory into data blocks and i-node blocks and respectively specify storage paths of data, which is stored in the data blocks, in the i-node blocks in order to easily access pieces of the data by searching the i-node blocks. The flash memory management apparatus includes a map search module searching for a map block located at a preset position of a memory among blocks that form the memory and extracting storage paths of one or more i-node blocks; a path search module searching for storage paths of data specified in the i-node blocks based on the extraction result; and a data management module accessing the data through a storage path of the data and performing a transaction on the data.
    Type: Grant
    Filed: February 13, 2007
    Date of Patent: October 25, 2011
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Joo-young Hwang, Kyu-ho Park, Seung-ho Lim
  • Publication number: 20110258364
    Abstract: A method includes forming a memory device through providing an array of non-volatile memory cells including one or more non-volatile memory cell(s) and an array of volatile memory cells including one or more volatile memory cell(s) on a substrate. The method also includes appropriately programming an address translation logic associated with the memory device through a set of registers associated therewith to enable configurable mapping of an address associated with a sector of the memory device to any memory address space location in a computing system associated with the memory device. The address translation logic is configured to enable translation of an external virtual address associated with the sector of the memory device to a physical address associated therewith.
    Type: Application
    Filed: April 20, 2010
    Publication date: October 20, 2011
    Applicant: CHIP MEMORY TECHNOLOGY, INC.
    Inventor: Wingyu Leung
  • Publication number: 20110252218
    Abstract: A method for a storage controller to write a data block to one of a plurality of storage components is provided. The storage controller receives a write request from a host computer, and determines at least a portion of the data block includes a Logical Block Address (LBA) that is not currently mapped to a physical page of storage. The storage controller calculates availability for each storage component within the plurality of storage components, and selects the storage component with a highest calculated availability from the plurality of storage components. The storage controller next determines a next available physical page within the selected storage component. Finally, the storage controller writes the at least a portion of the data block including LBAs that are not currently mapped to a physical page of storage to the next available physical page.
    Type: Application
    Filed: October 5, 2010
    Publication date: October 13, 2011
    Applicant: DOT HILL SYSTEMS CORPORATION
    Inventor: Ian Robert Davies
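The allocation path in 20110252218 can be sketched as scoring each storage component's availability, picking the highest, and mapping the unmapped LBA to that component's next free physical page. The availability metric below (fraction of pages still free) is an assumption for illustration:
```python
# Sketch of choosing the storage component with the highest availability and
# mapping an unmapped LBA to its next free physical page. The availability
# metric and structures are illustrative stand-ins.

class Component:
    def __init__(self, name: str, total_pages: int):
        self.name = name
        self.next_free = 0
        self.total = total_pages

    def availability(self) -> float:
        return (self.total - self.next_free) / self.total   # fraction still free

lba_map = {}                      # LBA -> (component name, physical page)

def write(lba: int, components: list) -> None:
    if lba in lba_map:            # already mapped; overwrite handling omitted
        return
    best = max(components, key=Component.availability)
    page = best.next_free
    best.next_free += 1
    lba_map[lba] = (best.name, page)

parts = [Component("ssd0", 100), Component("ssd1", 200)]
for lba in range(3):
    write(lba, parts)
print(lba_map)    # {0: ('ssd0', 0), 1: ('ssd1', 0), 2: ('ssd1', 1)}
```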
  • Publication number: 20110246698
    Abstract: One embodiment includes a personal computer device comprising at least one machine configured to execute a primary user operating system and at least one appliance operating system independent from the primary user operating system. The personal computer device also including a system memory including a first portion of the system memory configured to be used by the primary user operating system; and a second portion of the system memory configured to be sequestered from the primary user operating system.
    Type: Application
    Filed: June 17, 2011
    Publication date: October 6, 2011
    Inventors: Ulhas Warrier, Ram Chary, Hani Elgebaly
  • Publication number: 20110238886
    Abstract: Systems and methods are provided for handling uncorrectable errors that may occur during garbage collection of an index page or block in non-volatile memory.
    Type: Application
    Filed: March 23, 2010
    Publication date: September 29, 2011
    Applicant: Apple Inc.
    Inventors: Daniel J. Post, Vadim Khmelnitsky
  • Patent number: 8028118
    Abstract: Embodiments of the invention provide methods and apparatus for increasing the number of page attributes specified by a page table while minimizing an increase in size of the page table. According to embodiments of the invention, attribute index bits may be included within a page table and may be used to determine page attributes stored within an attribute index. Additionally, embodiments of the invention provide a plurality of new page attributes.
    Type: Grant
    Filed: December 5, 2007
    Date of Patent: September 27, 2011
    Assignee: International Business Machines Corporation
    Inventors: Timothy Hume Heil, James Allen Rose, Andrew Henry Wottreng
  • Publication number: 20110231597
    Abstract: A data access method for accessing a non-volatile memory module is provided. The data access method includes configuring a plurality of logical addresses and grouping the logical addresses into logical blocks to map to the physical blocks of the non-volatile memory module, and a host system formats the logical addresses into one partition by using a file system and the partition stores at least one file and a file description block corresponding to the file. The data access method further includes searching an end mark corresponding to entry values of the file description block, setting logical addresses storing the end mark as default pattern addresses, and setting values stored in the logical addresses as default values corresponding to the default pattern addresses. Accordingly, the data access method can divide one partition into a write protect area and a writable area by updating data stored in the default pattern addresses.
    Type: Application
    Filed: May 4, 2010
    Publication date: September 22, 2011
    Applicant: PHISON ELECTRONICS CORP.
    Inventors: Ming-Fu Lai, Ying-Fu Chao, Kheng-Chong Tan
  • Publication number: 20110231631
    Abstract: An aspect of the invention relates to a method of managing data location of plural files in a storage system having a mixed volume which includes plural pages having a fixed page size, the pages belonging to different tiers. The method comprises mapping pages of different tiers to storage devices of different speeds in the storage system, the storage devices including at least a high speed storage device corresponding to a high tier page and a low speed storage device corresponding to a low tier page; and for each file that is a large file which is larger in size than the page size, performing sub-file tiered management on the large file to assign the large file among pages of different tiers according to access characteristics of different portions of the large file by matching the access characteristics of each portion of the large file with a corresponding tier of the assigned page of the mixed volume.
    Type: Application
    Filed: March 16, 2010
    Publication date: September 22, 2011
    Applicant: HITACHI, LTD.
    Inventors: Keiichi MATSUZAWA, Yasunori KANEDA
  • Publication number: 20110225342
    Abstract: A system described herein includes a receiver component that receives an indication that at least one page in virtual memory is free and the at least one page in virtual memory is classified as short-lived memory, wherein the virtual memory is accessible to at least one virtual machine executing on a computing device. The system also includes a cache updater component that dynamically updates a cache to include the at least one page, wherein the cache is accessible to the at least one virtual machine.
    Type: Application
    Filed: March 10, 2010
    Publication date: September 15, 2011
    Inventors: Parag Sharma, Ripal Babubhai Nathuji, Mehmet Iyigun, Yevgeniy M. Bak
  • Publication number: 20110225389
    Abstract: Memory address translation circuitry 14 performs a top down page table walk operation to translate a virtual memory address VA to a physical memory address PA using translation data stored in a hierarchy of translation tables 28, 32, 36, 38, 40, 42. A page size variable S is used to control the memory address translation circuitry 14 to operate with different sizes S of pages of physical memory addresses, pages of virtual memory addresses and translation tables. These different sizes may be all 4 kB or all 64 kB. The system may support multiple virtual machine execution environments. These virtual machine execution environments can independently set their own page size variable, as can an associated hypervisor 62.
    Type: Application
    Filed: March 14, 2011
    Publication date: September 15, 2011
    Applicant: ARM LIMITED
    Inventor: Richard Roy Grisenthwaite
  • Patent number: 8019964
    Abstract: What is provided is an enhanced dynamic address translation facility. In one embodiment, a virtual address to be translated and an initial origin address of any one of a region first table, a region second table, a region third table, or a segment table are obtained. Based on the obtained initial origin address, a segment table entry is obtained which contains format control and DAT protection fields. If the format control field is enabled, a segment-frame absolute address of a large block of data in main storage is obtained from the translation table entry. The segment-frame absolute address is combined with a page index portion and a byte index portion of the virtual address to form a translated address of the desired block of data. If the DAT protection field is not enabled, fetches and stores are permitted to the desired block of data addressed by the translated address.
    Type: Grant
    Filed: January 11, 2008
    Date of Patent: September 13, 2011
    Assignee: International Business Machines Corporation
    Inventors: Dan F. Greiner, Charles W. Gainey, Jr., Lisa C. Heller, Damian L. Osisek, Erwin Pfeffer, Timothy J. Slegel, Charles F. Webb
  • Publication number: 20110219172
    Abstract: A non-volatile memory access method and system, and a non-volatile memory controller are provided for accessing a plurality of physical blocks in a non-volatile memory chip, and each physical block has a plurality of physical pages. The method includes determining whether there is enough space in a first physical block to write a plurality of specific physical pages when data stored in one of the specific physical pages are to be updated; and writing valid data and data to be updated into the first physical block when the first physical block has enough space to write the specific physical pages.
    Type: Application
    Filed: April 28, 2010
    Publication date: September 8, 2011
    Applicant: PHISON ELECTRONICS CORP.
    Inventor: Ming-Hui Lin
  • Publication number: 20110209028
    Abstract: Systems and methods are disclosed for remapping codewords for storage in a non-volatile memory, such as flash memory. In some embodiments, a controller that manages the non-volatile memory may prepare a codeword using a suitable error correcting code. The controller can store a first portion of the codeword in a lower page of the non-volatile memory and may store a second portion of the codeword in an upper page of the non-volatile memory. Because upper and lower pages may have different resiliencies to error-causing phenomena, remapping codewords in this manner may even out the bit error rates of the codewords (which would otherwise have a more bimodal distribution).
    Type: Application
    Filed: February 24, 2010
    Publication date: August 25, 2011
    Applicant: Apple Inc.
    Inventors: Daniel J. Post, Kenneth Herman
  • Publication number: 20110202740
    Abstract: Apparatus for data processing 2 is provided with processing circuitry 8 which operates in one or more secure modes 40 and one or more non-secure modes 42. When operating in a non-secure mode, one or more regions of the memory are inaccessible. A memory management unit 24 is responsive to page table data to manage accesses to the memory which includes a secure memory 22 and a non-secure memory 6. Secure page table data 36, 38 is used when operating in one of the secure modes. A page table entry within the hierarchy of page tables of the secure page table data includes a table security field 68, 72 indicating whether or not a further page table pointed to by that page table entry is stored within the secure memory 22 or the non-secure memory 6. If any of the page tables associated with a memory access are stored within the non-secure memory 6, then the memory access is marked with a table attribute bit NST indicating that the memory access should be treated as non-secure.
    Type: Application
    Filed: February 17, 2010
    Publication date: August 18, 2011
    Applicant: ARM Limited
    Inventor: Richard Roy Grisenthwaite
  • Publication number: 20110197100
    Abstract: A non-volatile redundant verifiable indication of data storage status is provided with respect to data storage operations conducted with respect to removable data storage media, and the indication is stored in an auxiliary non-volatile memory of the data storage media, such that the indication stays with the media. At least one state value indicating the status of the data storage operation is written to one page of the auxiliary non-volatile memory, and a redundancy check is provided with respect to at least the written state value of the one page of the auxiliary non-volatile memory; and the same state value is written to a second page of the auxiliary non-volatile memory, and a redundancy check is provided with respect to at least the written state value of the second page of the auxiliary non-volatile memory. The redundancy checks indicate the validity of the state values.
    Type: Application
    Filed: February 10, 2010
    Publication date: August 11, 2011
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: PAUL M. GRECO, GLEN A. JAQUETTE, PAUL J. SEGER
  • Publication number: 20110197035
    Abstract: A data storage device including a storing medium to shingle write and a controller to access the storing medium so that data is sequentially written on the storing medium using a mapping table based on Logical Block Address (LBA) included in a write command.
    Type: Application
    Filed: February 8, 2011
    Publication date: August 11, 2011
    Applicant: Samsung Electronics Co., Ltd
    Inventors: Se-wook Na, In Sik Ryu
  • Patent number: 7987314
    Abstract: It is possible to eliminate the defect that a long time is required for writing into a semiconductor memory card, which results from the fact that, with enlargement of its capacity, the external data management size differs from the internal data management size in the semiconductor memory card. A partial physical block corresponding to the externally managed size is used regardless of the size of the physical block in a non-volatile memory device. Data are written in partial physical block units and an erase block is assured in physical block units, thereby enabling the write rate to be increased.
    Type: Grant
    Filed: August 26, 2004
    Date of Patent: July 26, 2011
    Assignee: Panasonic Corporation
    Inventor: Toshiyuki Honda
  • Publication number: 20110179490
    Abstract: A code injection attack detecting apparatus and method are provided. The code injection attack may be detected based on characteristics occurring when a malicious code injected by the code injection attack is executed. For example, the code injection attack detecting apparatus and method may detect that a code injection attack occurs when a buffer miss is detected, a page corresponding to an address is updated, a mode of the page corresponding to the address is in user mode, and/or the page corresponding to the address is inserted by an external input.
    Type: Application
    Filed: September 20, 2010
    Publication date: July 21, 2011
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Weon Il Jin, Hwan Joon Kim, Eun Ah Kim, Gyungho Lee
  • Publication number: 20110179249
    Abstract: The invention provides a method for handling data read out from a memory. In one embodiment, a controller corresponding to the memory comprises a ping-pong buffer. First, a first sector read time period required by the memory to read and output a data sector to the ping-pong buffer is calculated. A second sector read time period required by a host to read a data sector from the ping-pong buffer is calculated. A page switch time period required by the memory to switch a target read page is obtained. A total sector number is determined according to the first sector read time period, the second sector read time period, and the page switch time period. When the memory outputs data to the ping-pong buffer, a first buffer and a second buffer of the ping-pong buffer are switched to receive the data output by the memory according to the total sector number.
    Type: Application
    Filed: June 24, 2010
    Publication date: July 21, 2011
    Applicant: SILICON MOTION, INC.
    Inventor: Wei-Yi Hsiao
  • Patent number: 7984229
    Abstract: A cache design is described in which corresponding accesses to tag and information arrays are phased in time, and in which tags are retrieved (typically speculatively) from a tag array without benefit of an effective address calculation subsequently used for a corresponding retrieval from an information array. In some exploitations, such a design may allow cycle times (and throughput) of a memory subsystem to more closely match demands of some processor and computation system architectures. Our techniques seek to allow early (indeed speculative) retrieval from the tag array without delays that would otherwise be associated with calculation of an effective address eventually employed for a corresponding retrieval from the information array. Speculation can be resolved using the eventually calculated effective address or using separate functionality. In some embodiments, we use calculated effective addresses for way selection based on tags retrieved from the tag array.
    Type: Grant
    Filed: March 9, 2007
    Date of Patent: July 19, 2011
    Assignee: Freescale Semiconductor, Inc.
    Inventors: Ravindraraj Ramaraju, Ambica Ashok, David R. Bearden, Prashant U. Kenkare