Patents Examined by Mehdi Namazi
  • Patent number: 7519787
    Abstract: A method for storage management is provided which displays the material used to determine whether a thin provisioning volume or a logical unit (LU) should be used for storage provisioning. The method is executed in a computer system having one or more host computers, one or more storage subsystems, and a management computer. The storage subsystem includes a physical disk and a disk controller. The disk controller provides the host computer with the thin provisioning volume. The management computer obtains the allocated capacity from the disk controller and the host-recognized capacity from the host computer. By subtracting the obtained allocated capacity from the host-recognized capacity, the management computer calculates an improved capacity. By dividing the calculated improved capacity by the obtained host-recognized capacity, the management computer calculates an improvement ratio and displays it.
    Type: Grant
    Filed: July 19, 2006
    Date of Patent: April 14, 2009
    Assignee: Hitachi, Ltd.
    Inventors: Kazuki Yamane, Takeshi Arisaka, Hiroyuki Tanaka
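    The capacity arithmetic in this abstract reduces to two operations; a minimal sketch follows, with made-up example values rather than anything from the patent.
    ```python
    # Illustrative sketch of the improvement-ratio calculation described in
    # the abstract above; the capacities are made-up example values.
    def improvement_ratio(host_recognized_gb: float, allocated_gb: float) -> float:
        """Improved capacity = host-recognized capacity - allocated capacity;
        improvement ratio = improved capacity / host-recognized capacity."""
        improved = host_recognized_gb - allocated_gb
        return improved / host_recognized_gb

    # Example: a 100 GB thin provisioning volume with only 30 GB actually allocated.
    print(f"improvement ratio: {improvement_ratio(100.0, 30.0):.0%}")  # -> 70%
    ```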
  • Patent number: 7519784
    Abstract: A computer implemented method, data processing system, and computer usable code are provided for reclaiming backup data storage space in memory. The process receives a selection to reclaim a set of memory locations associated with a set of backup copies of a selected file. The process searches a plurality of memory locations for the set of memory locations associated with the set of backup copies. The process then removes the data associated with the set of backup copies from the set of memory locations to form a set of reclaimed memory locations. The set of reclaimed memory locations are unoccupied by data associated with the set of backup copies of the selected file.
    Type: Grant
    Filed: March 31, 2006
    Date of Patent: April 14, 2009
    Assignee: Lenovo Singapore Pte. Ltd.
    Inventors: Philip Lee Childs, Lee Christopher Highsmith, Christopher Scott Long
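    A toy sketch of the reclaim flow described above; the backup index and location map are assumed structures, not anything specified by the patent.
    ```python
    # Toy model of reclaiming backup storage for one selected file; the
    # index and storage dictionaries are assumptions for illustration.
    def reclaim_backups(selected_file: str, backup_index: dict, storage: dict) -> list:
        """Find the memory locations holding backup copies of selected_file,
        remove their data, and return the set of reclaimed locations."""
        locations = backup_index.pop(selected_file, [])   # search for the associated locations
        for loc in locations:
            storage.pop(loc, None)                        # remove the backup data; location is now unoccupied
        return locations

    storage = {0: b"v1", 1: b"v2", 2: b"other"}
    index = {"report.doc": [0, 1]}
    print(reclaim_backups("report.doc", index, storage))  # [0, 1]; only {2: b'other'} remains
    ```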
  • Patent number: 7516280
    Abstract: A pulsed arbitration system has a partial-address coincidence detector with a partial-address collision flag as an output. An active global word line detector and disable pulse generator receives the partial-address collision flag, a decoded row address, and an internal write pulse as inputs, and generates a disable pulse for the interfering global word line of the colliding read port.
    Type: Grant
    Filed: March 15, 2005
    Date of Patent: April 7, 2009
    Assignee: Cypress Semiconductor Corporation
    Inventor: Stefan-Cristian Rezeanu
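    The arbitration described here is circuit-level logic; the following is only a software model of the decision it makes, with assumed field names.
    ```python
    # Software model of the pulsed arbitration idea: if a read and a write hit
    # the same partial (row) address while a write pulse is active, flag the
    # collision and disable the interfering global word line of the read port.
    def arbitrate(write_row: int, read_row: int, write_pulse: bool) -> dict:
        collision = (write_row == read_row)            # partial-address coincidence detector
        disable_read_gwl = collision and write_pulse   # disable pulse for the colliding read port
        return {"collision_flag": collision, "disable_read_gwl": disable_read_gwl}

    print(arbitrate(write_row=0x1A, read_row=0x1A, write_pulse=True))
    # {'collision_flag': True, 'disable_read_gwl': True}
    ```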
  • Patent number: 7516288
    Abstract: A method and apparatus for transferring files between a primary storage system and a backup and restore system are described. The system generates collapsed extents which are used to specify data to be backed up to a backup and restore system. The backup and restore system backs up data based on the collapsed extents but records all extents included in the collapsed extents, enabling the system to facilitate restoration of the data at a later point in time.
    Type: Grant
    Filed: December 19, 2007
    Date of Patent: April 7, 2009
    Assignee: EMC Corporation
    Inventors: Ananthan K. Pillai, Madhav Mutalik, Cara Garber, Ajay Shekhar
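    The abstract does not spell out how extents are collapsed; a plausible sketch is to merge adjacent or overlapping (offset, length) extents for the backup request while keeping the original list for later restore, as below.
    ```python
    # Hypothetical sketch of collapsing extents: merge adjacent or overlapping
    # (offset, length) extents into fewer, larger ones for the backup request,
    # while the original extents are recorded separately to support restore.
    def collapse_extents(extents):
        merged = []
        for off, length in sorted(extents):
            if merged and off <= merged[-1][0] + merged[-1][1]:   # touches or overlaps the previous extent
                prev_off, prev_len = merged[-1]
                merged[-1] = (prev_off, max(prev_len, off + length - prev_off))
            else:
                merged.append((off, length))
        return merged

    original = [(0, 4), (4, 4), (16, 8)]     # recorded as-is for restoration
    print(collapse_extents(original))        # [(0, 8), (16, 8)] sent to the backup system
    ```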
  • Patent number: 7512757
    Abstract: A computer system includes a first storage system connected to a first host computer, a second storage system connected to a second host computer, and a third storage system connected to the first and second storage systems. The second storage system sets a transfer setting before an occurrence of a failure, the transfer setting being provided with a dedicated storage area to be used for transferring data to the third storage system by asynchronous copy in response to a failure at the first host computer. Before the start of data transfer between the second storage system and the third storage system to be executed after an occurrence of the failure, the second storage system checks the dedicated storage area, the data transfer line, and the transfer setting information, and, if an abnormal state is detected, reports the abnormal state to the host computer as information attached to the transfer setting.
    Type: Grant
    Filed: July 14, 2006
    Date of Patent: March 31, 2009
    Assignee: Hitachi, Ltd.
    Inventors: Yuri Hiraiwa, Nobuhiro Maki, Takeyuki Imazu
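    A minimal sketch of the pre-transfer check described above; the individual check names and the report format are assumptions.
    ```python
    # Illustrative pre-transfer check: before the second storage system starts
    # asynchronous copy to the third system after a failure, it verifies the
    # dedicated storage area, the data transfer line, and the transfer setting
    # information, and reports any abnormal state to the host computer.
    def check_before_transfer(dedicated_area_ok: bool, transfer_line_ok: bool,
                              transfer_setting_ok: bool) -> list:
        abnormal = []
        if not dedicated_area_ok:
            abnormal.append("dedicated storage area abnormal")
        if not transfer_line_ok:
            abnormal.append("data transfer line abnormal")
        if not transfer_setting_ok:
            abnormal.append("transfer setting information abnormal")
        return abnormal   # attached to the transfer setting when reported

    print(check_before_transfer(True, False, True) or "all checks passed, start asynchronous copy")
    ```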
  • Patent number: 7509469
    Abstract: An asynchronously pipelined SDRAM has separate pipeline stages that are controlled by asynchronous signals. Rather than using a clock signal to synchronize data at each stage, an asynchronous signal is used to latch data at every stage. The asynchronous control signals are generated within the chip and are optimized to the different latency stages. Longer latency stages require larger delay elements, while shorter latency stages require shorter delay elements. The data is synchronized to the clock at the end of the read data path before being read out of the chip. Because the data has been latched at each pipeline stage, it suffers from less skew than would be seen in a conventional wave pipeline architecture. Furthermore, since the stages are independent of the system clock, the read data path can be run at any CAS latency as long as the re-synchronizing output is built to support it.
    Type: Grant
    Filed: February 12, 2007
    Date of Patent: March 24, 2009
    Assignee: MOSAID Technologies Incorporated
    Inventor: Ian Mes
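    The latency trade-off above is easy to see numerically; the sketch below uses invented delays and clock period, and only models the final re-synchronization step.
    ```python
    # Toy timing arithmetic for the asynchronous read pipeline: each stage adds
    # its own (assumed) delay, and only the output stage re-synchronizes the
    # data to the clock. All numbers here are invented for illustration.
    import math

    CLOCK_PERIOD_NS = 5.0                  # assumed 200 MHz clock
    STAGE_DELAYS_NS = [3.2, 6.1, 4.4]      # assumed asynchronous delay elements, one per stage

    def resynchronized_latency_cycles(stage_delays, clock_period):
        """Total asynchronous path delay, rounded up to whole clock cycles by
        the re-synchronizing output stage."""
        return math.ceil(sum(stage_delays) / clock_period)

    print(resynchronized_latency_cycles(STAGE_DELAYS_NS, CLOCK_PERIOD_NS), "cycles")  # 3 cycles
    ```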
  • Patent number: 7506122
    Abstract: A first software entity occupies a portion of a linear address space of a second software entity and prevents the second software entity from accessing the memory of the first software entity. For example, in one embodiment of the invention, the first software entity is a virtual machine monitor (VMM), which supports a virtual machine (VM), the second software entity. The VMM sometimes directly executes guest instructions from the VM and, at other times, the VMM executes binary translated instructions derived from guest instructions. When executing binary translated instructions, the VMM uses memory segmentation to protect its memory. When directly executing guest instructions, the VMM may use either memory segmentation or a memory paging mechanism to protect its memory. When the memory paging mechanism is active during direct execution, the protection from the memory segmentation mechanism may be selectively deactivated to improve the efficiency of the virtual computer system.
    Type: Grant
    Filed: October 1, 2007
    Date of Patent: March 17, 2009
    Assignee: VMware, Inc.
    Inventors: Ole Agesen, Jeffrey W. Sheldon
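    A highly simplified, conceptual sketch of the protection choice described above (not VMware's code): segmentation guards the VMM region while binary-translated code runs, and during direct execution the segment-based protection can be dropped when paging-based protection is active.
    ```python
    # Conceptual sketch only: which mechanism protects the VMM's portion of the
    # linear address space, depending on execution mode. Names are assumptions.
    from enum import Enum, auto

    class ExecMode(Enum):
        BINARY_TRANSLATION = auto()
        DIRECT_EXECUTION = auto()

    def vmm_protection(mode: ExecMode, paging_protection_active: bool) -> set:
        if mode is ExecMode.BINARY_TRANSLATION:
            return {"segmentation"}        # translated code always runs behind segment limits
        if paging_protection_active:
            return {"paging"}              # segmentation may be selectively deactivated
        return {"segmentation"}            # direct execution without paging protection

    print(vmm_protection(ExecMode.DIRECT_EXECUTION, paging_protection_active=True))  # {'paging'}
    ```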
  • Patent number: 7506095
    Abstract: A method for providing execute-in-place functionality in a data processing system. In one embodiment, the method includes determining whether a file system driver that manages a file system containing a file provides a file system direct-access interface. Execute-in-place functionality is used in response to determining both that the file system driver provides the file system direct-access interface and that a device driver provides a device direct-access interface. The file system direct-access interface is used to provide the execute-in-place functionality in response to determining that the file system is configured to enable execute-in-place functionality.
    Type: Grant
    Filed: April 4, 2006
    Date of Patent: March 17, 2009
    Assignee: International Business Machines Corporation
    Inventors: Carsten Otte, Ulrich Weigand
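    The decision in this abstract is essentially a pair of capability checks plus a configuration flag; a minimal sketch, with invented attribute names, follows.
    ```python
    # Sketch of the execute-in-place (XIP) decision; the driver objects and the
    # "has_direct_access" attribute are assumptions for illustration.
    def use_execute_in_place(fs_driver, device_driver, fs_xip_enabled: bool) -> bool:
        """Use XIP only if the file system driver provides a file system
        direct-access interface, the device driver provides a device
        direct-access interface, and the file system is configured for XIP."""
        return (getattr(fs_driver, "has_direct_access", False)
                and getattr(device_driver, "has_direct_access", False)
                and fs_xip_enabled)

    class _Driver:                       # stand-in objects for the two drivers
        has_direct_access = True

    print(use_execute_in_place(_Driver(), _Driver(), fs_xip_enabled=True))  # True
    ```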
  • Patent number: 7506113
    Abstract: Shifts in the apparent charge stored on a floating gate (or other charge storing element) of a non-volatile memory cell can occur because of the coupling of an electric field based on the charge stored in adjacent floating gates (or other adjacent charge storing elements). To compensate for this coupling, the read or programming process for a given memory cell can take into account the programmed state of an adjacent memory cell. To determine whether compensation is needed, a process can be performed that includes sensing information about the programmed state of an adjacent memory cell (e.g., on an adjacent bit line or other location).
    Type: Grant
    Filed: July 20, 2006
    Date of Patent: March 17, 2009
    Assignee: SanDisk Corporation
    Inventor: Yan Li
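    A toy numerical sketch of the compensation idea: the read compare level for a cell is nudged according to the state sensed on an adjacent cell. The coupling factor and voltages below are invented, not values from the patent.
    ```python
    # Toy sketch of floating-gate coupling compensation; the per-state coupling
    # factor and read voltage are made-up numbers for illustration only.
    COUPLING_PER_NEIGHBOR_STATE = 0.05   # assumed apparent shift (V) per neighbor state step

    def compensated_read_level(nominal_read_v: float, neighbor_state: int) -> float:
        """Raise the read compare level to cancel the apparent charge coupled
        from a more heavily programmed adjacent cell."""
        return nominal_read_v + COUPLING_PER_NEIGHBOR_STATE * neighbor_state

    print(f"{compensated_read_level(1.20, neighbor_state=3):.2f} V")   # 1.35 V (neighbor heavily programmed)
    ```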
  • Patent number: 7506096
    Abstract: A method of emulating segment addressing by a processor that includes initiating a Virtual Machine Monitor in a kernel mode; initiating a Virtual Machine in a user mode; forming a dynamically mapped table in Virtual Machine Monitor space, the dynamically mapped table corresponding to a table of segment descriptors of the Virtual Machine; populating the dynamically mapped table with descriptors that raise exceptions upon an attempt by the Virtual Machine to address a corresponding segment; and mapping a descriptor to the dynamically mapped table upon the Virtual Machine's use of that descriptor.
    Type: Grant
    Filed: October 4, 2006
    Date of Patent: March 17, 2009
    Assignee: Parallels Software International, Inc.
    Inventors: Alexey B. Koryakin, Nikolay N. Dobrovolskiy, Andrey A. Omelyanchuk, Alexander G. Tormasov, Serguei M. Beloussov, Stanislav S. Protassov
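    A rough software analogy of the dynamically mapped table (Python objects, not x86 descriptor tables): every slot starts as a "raise an exception" placeholder, and a real descriptor is mapped in only when the Virtual Machine first uses it.
    ```python
    # Software analogy of the dynamically mapped descriptor table: unmapped
    # entries fault on use and are filled in lazily. Illustrative only; real
    # segment descriptors and exception handling are not modeled.
    class SegmentFault(Exception):
        pass

    class DynamicDescriptorTable:
        def __init__(self, guest_table: dict):
            self.guest_table = guest_table      # the Virtual Machine's own segment descriptors
            self.mapped = {}                    # VMM-side dynamically mapped copies

        def load(self, selector: int) -> dict:
            if selector not in self.mapped:     # placeholder behaves like a faulting descriptor
                if selector not in self.guest_table:
                    raise SegmentFault(f"no descriptor for selector {selector:#x}")
                self.mapped[selector] = dict(self.guest_table[selector])   # map on first use
            return self.mapped[selector]

    table = DynamicDescriptorTable({0x10: {"base": 0, "limit": 0xFFFFF}})
    print(table.load(0x10))                     # mapped lazily on first use
    ```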
  • Patent number: 7506112
    Abstract: A bitmap manager creates a cached copy of a bitmap and a shadow copy of the bitmap. The contents of the shadow copy are examined, as is the bitmap cache, to determine when it is necessary to write bitmap data to persistent storage. Extra bits are set or left set in the bitmap shadow copy to minimize the frequency of having to write bitmap data to persistent storage.
    Type: Grant
    Filed: July 14, 2006
    Date of Patent: March 17, 2009
    Assignee: Sun Microsystems, Inc.
    Inventors: Wai C. Yim, Simon Crosland, Philip J. Newton
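    A small sketch of the "extra bits" idea: setting whole groups of bits in the shadow copy means most later cache updates are already covered and need no write to persistent storage. The 8-bit grouping is an assumption.
    ```python
    # Sketch of a cached bitmap plus a shadow copy; extra bits are set in the
    # shadow (at an assumed 8-bit granularity) so that repeated nearby updates
    # do not each force a write of bitmap data to persistent storage.
    GROUP = 8   # assumed coarsening granularity

    class ShadowedBitmap:
        def __init__(self, nbits: int):
            self.cache = [False] * nbits     # exact in-memory bitmap
            self.shadow = [False] * nbits    # coarse copy mirroring persistent storage

        def set_bit(self, i: int) -> bool:
            """Set bit i; return True only if persistent storage must be updated."""
            self.cache[i] = True
            if self.shadow[i]:
                return False                 # an extra bit was already set: no flush needed
            start = (i // GROUP) * GROUP
            for j in range(start, min(start + GROUP, len(self.shadow))):
                self.shadow[j] = True        # set the whole group, leaving extra bits set
            return True                      # shadow changed: write it out once

    bm = ShadowedBitmap(32)
    print([bm.set_bit(i) for i in (3, 4, 5, 9)])   # [True, False, False, True]
    ```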
  • Patent number: 7493459
    Abstract: The present invention discloses a method of enhancing system performance, applicable to a computer system capable of executing a snapshot process. The method includes counting the number of times the computer system has executed the snapshot process. If that number is smaller than two, data made by the single snapshot process is updated with the data that are to be changed in the computer system; otherwise, only data made by the latest one of the snapshot processes is updated with the data that are to be changed, instead of updating data made by all the snapshot processes. This saves system resources and increases the system performance of the computer system.
    Type: Grant
    Filed: February 27, 2006
    Date of Patent: February 17, 2009
    Assignee: Inventec Corporation
    Inventor: Chih-Wei Chen
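    A compact sketch of the update policy as read from the abstract: when data is about to change, the old copy is preserved only in the most recent snapshot rather than in every snapshot. The snapshot representation below is an assumption.
    ```python
    # Illustrative copy-on-write policy: preserve the block being changed only
    # in the latest snapshot, not in all snapshots taken so far. The list-of-
    # dicts snapshot layout is invented for this sketch.
    def preserve_before_change(snapshots, block: int, old_data: bytes) -> None:
        if not snapshots:
            return
        latest = snapshots[-1]                   # only the latest snapshot is updated
        latest.setdefault(block, old_data)       # older snapshots are left untouched

    snaps = [{}, {}, {}]                         # three snapshots taken earlier
    preserve_before_change(snaps, block=7, old_data=b"old")
    print(snaps)                                 # [{}, {}, {7: b'old'}]
    ```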
  • Patent number: 7493464
    Abstract: A sparse matrix paging system is provided that dynamically allocates memory resources on demand. In some cases, this is accomplished by dynamically allocating memory resources, preferably only after a page has been requested. Such a sparse matrix paging system may allow a platform with a large linear address space to more efficiently execute on a platform with a smaller linear address space. Preferably, the sparse matrix paging system only indexes those pages that are actually requested and used in the address space on main store pages or backing store pages. Further, the backing store is preferably not involved unless the total address space allocated by the operating system exceeds the available main store pages.
    Type: Grant
    Filed: January 11, 2007
    Date of Patent: February 17, 2009
    Assignee: Unisys Corporation
    Inventor: James W. Adcock
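    A tiny sketch of on-demand allocation in the spirit of this abstract: a page is indexed and backed only when it is first touched. The page size and dict-based index are assumptions.
    ```python
    # Sketch of a sparse, demand-allocated address space: pages are created
    # only when first accessed, so untouched regions cost no memory.
    PAGE_SIZE = 4096

    class SparseAddressSpace:
        def __init__(self):
            self.pages = {}                                # only touched pages are indexed

        def write(self, addr: int, value: int) -> None:
            page = self.pages.setdefault(addr // PAGE_SIZE, bytearray(PAGE_SIZE))
            page[addr % PAGE_SIZE] = value

        def read(self, addr: int) -> int:
            page = self.pages.get(addr // PAGE_SIZE)
            return page[addr % PAGE_SIZE] if page else 0   # unallocated pages read as zero

    space = SparseAddressSpace()
    space.write(10 * 2**30, 0x7F)                          # a write 10 GiB into the space
    print(space.read(10 * 2**30), len(space.pages))        # 127 1 -> only one page allocated
    ```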
  • Patent number: 7490198
    Abstract: This invention provides a memory card (1) that is to be used as a storage medium in a host apparatus that can record and reproduce data. The memory card has a first memory (12-1), a second memory (12-2), a first switch (13) for changing over one memory to the other, and a second switch (14) for connecting and disconnecting an insertion/removal detecting terminal INS. The first and second switches operate as a slide switch (6), provided on the housing, is operated. The first switch has a contact for selecting the first memory, a contact for selecting the second memory, and a contact located between these two contacts, for selecting neither memory. The second switch connects the terminal INS to the ground while the first switch remains connected to the contact for selecting the first memory or the contact for selecting the second memory. The second switch opens the terminal INS while the first switch remains connected to the contact for selecting neither memory.
    Type: Grant
    Filed: October 7, 2003
    Date of Patent: February 10, 2009
    Assignee: Sony Corporation
    Inventors: Takumi Okaue, Shigeo Araki, Junko Sasaki, Kenichi Nakanishi
  • Patent number: 7487314
    Abstract: A first software entity occupies a portion of a linear address space of a second software entity and prevents the second software entity from accessing the memory of the first software entity. For example, in one embodiment of the invention, the first software entity is a virtual machine monitor (VMM), which supports a virtual machine (VM), the second software entity. The VMM sometimes directly executes guest instructions from the VM and, at other times, the VMM executes binary translated instructions derived from guest instructions. When executing binary translated instructions, the VMM uses memory segmentation to protect its memory. When directly executing guest instructions, the VMM may use either memory segmentation or a memory paging mechanism to protect its memory. When the memory paging mechanism is active during direct execution, the protection from the memory segmentation mechanism may be selectively deactivated to improve the efficiency of the virtual computer system.
    Type: Grant
    Filed: October 1, 2007
    Date of Patent: February 3, 2009
    Assignee: VMware, Inc.
    Inventors: Ole Agesen, Jeffrey W. Sheldon
  • Patent number: 7487313
    Abstract: A first software entity occupies a portion of a linear address space of a second software entity and prevents the second software entity from accessing the memory of the first software entity. For example, in one embodiment of the invention, the first software entity is a virtual machine monitor (VMM), which supports a virtual machine (VM), the second software entity. The VMM sometimes directly executes guest instructions from the VM and, at other times, the VMM executes binary translated instructions derived from guest instructions. When executing binary translated instructions, the VMM uses memory segmentation to protect its memory. When directly executing guest instructions, the VMM may use either memory segmentation or a memory paging mechanism to protect its memory. When the memory paging mechanism is active during direct execution, the protection from the memory segmentation mechanism may be selectively deactivated to improve the efficiency of the virtual computer system.
    Type: Grant
    Filed: October 1, 2007
    Date of Patent: February 3, 2009
    Assignee: VMware, Inc.
    Inventors: Ole Agesen, Jeffrey W. Sheldon
  • Patent number: 7478212
    Abstract: Techniques are described herein that may be used to de-fragment a first region of memory. For example, de-fragmenting may include identifying multiple accessed memory locations in the first memory region and copying the accessed memory locations, using data mover logic, in a contiguous order to a second memory region.
    Type: Grant
    Filed: March 28, 2006
    Date of Patent: January 13, 2009
    Assignee: Intel Corporation
    Inventor: Andrew Grover
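    A condensed sketch of the two steps named in the abstract: identify the accessed (live) locations in the first region, then copy them in contiguous order into a second region. The "None means free" convention is an assumption.
    ```python
    # Sketch of the de-fragmentation steps: gather the accessed (live) entries
    # of the first region and copy them, packed contiguously, into the second.
    def defragment(first_region: list, second_region: list) -> int:
        live = [x for x in first_region if x is not None]   # identify accessed locations
        second_region[:len(live)] = live                    # copy in contiguous order
        return len(live)

    src = ["a", None, "b", None, None, "c"]
    dst = [None] * 6
    print(defragment(src, dst), dst)   # 3 ['a', 'b', 'c', None, None, None]
    ```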
  • Patent number: 7478197
    Abstract: In a computer system with a memory hierarchy, when a high-level cache supplies a data copy to a low-level cache, the shared copy can be either volatile or non-volatile. When the data copy is later replaced from the low-level cache, if the data copy is non-volatile, it needs to be written back to the high-level cache; otherwise it can be simply flushed from the low-level cache. The high-level cache can employ a volatile-prediction mechanism that adaptively determines whether a volatile copy or a non-volatile copy should be supplied when the high-level cache needs to send data to the low-level cache. An exemplary volatile-prediction mechanism suggests use of a non-volatile copy if the cache line has been accessed consecutively by the low-level cache. Further, the low-level cache can employ a volatile-promotion mechanism that adaptively changes a data copy from volatile to non-volatile according to some promotion policy, or changes a data copy from non-volatile to volatile according to some demotion policy.
    Type: Grant
    Filed: July 18, 2006
    Date of Patent: January 13, 2009
    Assignee: International Business Machines Corporation
    Inventors: Xiaowei Shen, Man Cheuk Ng, Aaron Christoph Sawdey
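    A small sketch of the example volatile-prediction rule in the abstract: supply a non-volatile copy when the same cache line has been accessed consecutively by the low-level cache, otherwise a volatile copy. The counter structure and threshold are assumptions.
    ```python
    # Sketch of the example volatile-prediction mechanism: consecutive accesses
    # to the same line earn a non-volatile copy (must be written back when
    # replaced); otherwise a volatile copy is supplied (can simply be flushed).
    from collections import defaultdict

    class VolatilePredictor:
        def __init__(self):
            self.streak = defaultdict(int)
            self.last_line = None

        def supply(self, line: int) -> str:
            self.streak[line] = self.streak[line] + 1 if line == self.last_line else 1
            self.last_line = line
            return "non-volatile" if self.streak[line] >= 2 else "volatile"

    p = VolatilePredictor()
    print([p.supply(l) for l in (0x40, 0x40, 0x80)])   # ['volatile', 'non-volatile', 'volatile']
    ```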
  • Patent number: 7475203
    Abstract: Methods and systems are disclosed that relate to the nondestructive erasure of data in a data storage system. An exemplary method includes providing a program that can generate instructions, which may be interpreted by the back end of the data storage system, to overwrite data on a disk drive.
    Type: Grant
    Filed: March 28, 2006
    Date of Patent: January 6, 2009
    Assignee: EMC Corporation
    Inventors: Robert S. Petrillo, Jr., Derek Keith Richardson, James M. Whynot, Gilbert K. Alipui
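    A bare-bones sketch of generating overwrite instructions for a range of disk blocks; the instruction tuples and the pass patterns are invented, and nothing here models the data storage system's back-end interpreter.
    ```python
    # Hypothetical generator of overwrite instructions for erasing a block
    # range; the (block, pattern) format and the patterns are assumptions.
    def overwrite_instructions(start_block: int, num_blocks: int,
                               patterns=(b"\x00", b"\xff", b"\x00")):
        """Yield instructions a storage back end could interpret to overwrite
        each block once per pass."""
        for pattern in patterns:
            for block in range(start_block, start_block + num_blocks):
                yield (block, pattern)

    for instr in list(overwrite_instructions(100, 2))[:4]:
        print(instr)   # (100, b'\x00'), (101, b'\x00'), (100, b'\xff'), (101, b'\xff')
    ```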
  • Patent number: 7472219
    Abstract: A data-storage apparatus, a data-storage method, and a recording/reproducing system are provided which effectively use the time that elapses before data can be written to a recording medium, such as disc-seeking time and disc-rotation standby time, thereby raising the speed of data transfer. A hybrid storage apparatus has two storage areas, i.e., a disc and a nonvolatile solid-state memory. The disc and the memory each have a disc cache area, a system area, and a user area. When data is transferred from the host apparatus, the first super cluster is written into the cache area of the nonvolatile solid-state memory, which can be accessed at high speed. While that data is being written, the head is moved to a prescribed position. Any data arriving after the head has been moved to this position is written into the cache area.
    Type: Grant
    Filed: July 18, 2006
    Date of Patent: December 30, 2008
    Assignee: Sony Corporation
    Inventors: Tetsuya Tamura, Hajime Nishimura, Takeshi Sasa, Kazuya Suzuki
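    A simplified sketch of the write path described above: data arriving while the head is still seeking goes to the nonvolatile solid-state cache; data arriving once the head is in position goes to the disc cache area. The boolean flag stands in for real seek and rotation timing.
    ```python
    # Simplified model of the hybrid write path; names and the head-position
    # flag are assumptions standing in for actual drive state.
    def route_write(chunk: bytes, head_in_position: bool, nvm_cache: list, disc_cache: list) -> str:
        if not head_in_position:
            nvm_cache.append(chunk)      # fast solid-state cache absorbs the early data
            return "nvm"
        disc_cache.append(chunk)         # head has arrived: write to the disc cache area
        return "disc"

    nvm, disc = [], []
    print(route_write(b"cluster-0", False, nvm, disc),   # 'nvm'  (head still seeking)
          route_write(b"cluster-1", True, nvm, disc))    # 'disc' (head in position)
    ```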