Patents Examined by Matthew Kim
-
Patent number: 7543133
Abstract: A computer system having low memory access latency. In one embodiment, the computer system includes a network and one or more processing nodes connected via the network, wherein each processing node includes a plurality of processors and a shared memory connected to each of the processors. The shared memory includes a cache. Each processor includes a scalar processing unit, a vector processing unit and means for operating the scalar processing unit independently of the vector processing unit. Processors on one node can load data directly from and store data directly to shared memory on another processing node via the network.
Type: Grant
Filed: August 18, 2003
Date of Patent: June 2, 2009
Assignee: Cray Inc.
Inventor: Steven L. Scott
-
Patent number: 7500055
Abstract: A system and method are disclosed for eliminating many of the transactional performance limitations in current digital media server systems by augmenting those existing systems with an adaptable cache. In a preferred embodiment, the adaptable cache is a compact storage device that can persist data and deliver it at an accelerated rate, as well as act as an intelligent controller and director of that data. Incorporating such an adaptable cache between existing storage devices and an external network interface of a media server, or at the network interface itself, significantly overcomes the transactional limitations of the storage devices, increasing performance and throughput for the overall digital media system. The adaptable cache of the present system and method may preferably be integrated directly into the storage and delivery pipelines, utilizing the native communications busses and protocols of those subsystems.
Type: Grant
Filed: June 27, 2003
Date of Patent: March 3, 2009
Assignee: Beach Unlimited LLC
Inventors: Richard T. Oesterreicher, Craig Murphy, Brian Eng, Brad Jackson
-
Patent number: 7451281
Abstract: One embodiment provides a method of providing a user with information more quickly than could be achieved by obtaining the information from a storage source. The method comprises receiving a request from a user for stored data accessible by the user, the stored data maintained on one or more disks accessible one at a time, wherein the disks have a latency between the time of the request and the time the information is actually available, and obtaining a first portion of the requested data from a source other than where the requested data is stored, the amount of the first portion spanning the latency period.
Type: Grant
Filed: June 5, 2003
Date of Patent: November 11, 2008
Assignee: Hewlett-Packard Development Company, L.P.
Inventor: D. Mitchel Hanks
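The latency-spanning idea lends itself to a short illustration. The C sketch below is a loose interpretation under assumed numbers (an 8 ms disk latency and a 512 KB/s delivery rate); none of the names or constants come from the patent. It serves the first portion of a request from a fast in-memory copy sized to cover the disk's latency window while the disk read is started for the remainder.

```c
/* Hypothetical sketch of the latency-spanning idea in patent 7451281:
 * the first portion of a requested file is served from a faster source
 * (here, an in-memory prefetch buffer) sized to cover the disk latency,
 * while the remainder is fetched from disk. Names and sizes are
 * illustrative assumptions, not taken from the patent. */
#include <stdio.h>

#define DISK_LATENCY_MS   8      /* assumed worst-case seek + spin-up  */
#define STREAM_RATE_KBPS  512    /* assumed delivery rate to the user  */
/* Enough data to keep delivering during the latency window. */
#define HEAD_BYTES ((DISK_LATENCY_MS * STREAM_RATE_KBPS * 1024) / 1000)

struct cached_head {
    char name[64];
    unsigned char head[HEAD_BYTES];  /* first portion kept in fast memory */
    size_t head_len;
};

/* Serve a request: emit the cached head immediately, while the rest of
 * the range is handed to the (slow) disk path started in parallel. */
static void serve_request(const struct cached_head *c, size_t total_len)
{
    size_t from_cache = c->head_len < total_len ? c->head_len : total_len;
    printf("deliver %zu bytes of '%s' from the fast source now\n",
           from_cache, c->name);
    if (total_len > from_cache)
        printf("disk read covers bytes %zu..%zu while the head is consumed\n",
               from_cache, total_len - 1);
}

int main(void)
{
    struct cached_head c = { "movie.bin", {0}, HEAD_BYTES };
    serve_request(&c, 1 << 20);
    return 0;
}
```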
-
Patent number: 7409510
Abstract: Techniques are provided for performing a copy operation. An instant virtual copy operation is issued from a first portion of data to a primary mirroring portion of data, wherein the primary mirroring portion of data corresponds to a secondary mirroring portion of data, and wherein the primary mirroring portion of data and the secondary mirroring portion of data are in a mirroring relationship. The mirroring relationship is transitioned to a duplex pending state in response to determining that the mirroring relationship is in a full duplex state. When the mirroring relationship is in a duplex pending state, each block of data involved in the instant virtual copy operation is transferred from the primary mirroring portion of data to the secondary mirroring portion of data.
Type: Grant
Filed: May 27, 2004
Date of Patent: August 5, 2008
Assignee: International Business Machines Corporation
Inventors: Sam Clark Werner, Gail Andrea Spear, Warren Keith Stanley, Robert Francis Bartfai, William Frank Micka
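A condensed model of the described state handling might look like the C sketch below. The state names mirror the abstract; the function, granularity, and data layout are assumptions for illustration rather than IBM's implementation.

```c
/* Simplified model of the copy flow in the abstract: when an instant
 * virtual copy targets the primary of a full-duplex mirror, the mirror
 * relationship drops to "duplex pending" and every block touched by the
 * copy is then sent to the secondary. Names and granularity are assumed. */
#include <stdio.h>

enum mirror_state { SIMPLEX, FULL_DUPLEX, DUPLEX_PENDING };

struct mirror { enum mirror_state state; };

static void instant_virtual_copy(struct mirror *m,
                                 const int *blocks, int nblocks)
{
    if (m->state == FULL_DUPLEX)
        m->state = DUPLEX_PENDING;   /* transition described in the abstract */

    if (m->state == DUPLEX_PENDING)
        for (int i = 0; i < nblocks; i++)
            printf("transfer block %d from primary to secondary\n", blocks[i]);
}

int main(void)
{
    struct mirror m = { FULL_DUPLEX };
    int blocks[] = { 10, 11, 12 };   /* blocks involved in the copy */
    instant_virtual_copy(&m, blocks, 3);
    return 0;
}
```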
-
Patent number: 7404040
Abstract: Packet data received by a network controller is parsed, and at least a portion of a received packet is stored by the network controller both in a host memory of a system and in a cache memory of the central processing unit of the system. Other embodiments are described and claimed.
Type: Grant
Filed: December 30, 2004
Date of Patent: July 22, 2008
Assignee: Intel Corporation
Inventors: John Anthony Ronciak, Christopher David Leech, Prafulla Shashikant Deuskar, Jesse C. Brandeburg, Patrick L. Connor
-
Patent number: 7401177
Abstract: A data storage device includes a memory including a plurality of memory banks; a data storage processor that initially arranges data in the plurality of memory banks based on an access pattern, including a plurality of desired pixels of data to be read simultaneously, so as to store pixels of data located between access candidates constituting the access pattern in an identical memory bank; and a data read and storage processor that reads the data initially arranged in the plurality of memory banks of the memory. The data read and storage processor reads a pixel of data from a memory bank and stores the read pixel of data in the memory bank in which pixels of data in an adjacent range are stored, the ranges being defined by the locations of the access candidates of the access pattern and by the direction in which the access pattern is moved.
Type: Grant
Filed: March 30, 2005
Date of Patent: July 15, 2008
Assignee: Sony Corporation
Inventors: Naoki Takeda, Tetsujiro Kondo, Kenji Takahashi, Hiroshi Sato, Tsutomu Ichikawa, Hiroki Tetsukawa, Masaki Handa
-
Patent number: 7392339
Abstract: A “partial PRECHARGE command” is used to precharge a fraction of the banks in a multi-bank DRAM. In a first implementation the command precharges one half of the banks. In a second implementation the command precharges one quarter of the banks. The power drawn by the upper or lower bank precharge on the eight-bank DRAM is the same as the power drawn by an “all bank” precharge on a four-bank DRAM, without requiring the precharge period to be extended.
Type: Grant
Filed: December 10, 2003
Date of Patent: June 24, 2008
Assignee: Intel Corporation
Inventor: Howard S. David
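One plausible way to picture a partial precharge is as a bank mask carried with the command, as in the C sketch below; the mask encoding and scope names are invented for illustration and are not taken from the patent.

```c
/* Illustrative sketch (not the patent's implementation): encode a
 * "partial precharge" as a bank mask so only half or a quarter of an
 * eight-bank DRAM is precharged at once, mirroring the abstract's point
 * that precharging 4 of 8 banks draws about what an all-bank precharge
 * draws on a 4-bank part. */
#include <stdio.h>
#include <stdint.h>

#define NUM_BANKS 8

enum precharge_scope { PRE_ALL, PRE_HALF_LOWER, PRE_HALF_UPPER, PRE_QUARTER0 };

static uint8_t precharge_mask(enum precharge_scope s)
{
    switch (s) {
    case PRE_ALL:        return 0xFF;   /* banks 0-7 */
    case PRE_HALF_LOWER: return 0x0F;   /* banks 0-3 */
    case PRE_HALF_UPPER: return 0xF0;   /* banks 4-7 */
    case PRE_QUARTER0:   return 0x03;   /* banks 0-1 */
    }
    return 0;
}

int main(void)
{
    uint8_t m = precharge_mask(PRE_HALF_UPPER);
    int banks = 0;
    for (uint8_t x = m; x; x >>= 1)     /* count banks covered by the mask */
        banks += x & 1;
    printf("precharging %d of %d banks (mask 0x%02X)\n", banks, NUM_BANKS, m);
    return 0;
}
```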
-
Patent number: 7389400
Abstract: An apparatus and method selectively invalidate entries in an address translation cache instead of invalidating all, or nearly all, entries. One or more translation mode bits are provided in each entry in the address translation cache. These translation mode bits may be set according to the addressing mode used to create the cache entry. One or more “hint bits” are defined in an instruction that allow specifying which of the entries in the address translation cache are selectively preserved during an invalidation operation according to the value(s) of the translation mode bit(s). In the alternative, multiple instructions may be defined to preserve entries in the address translation cache that have specified addressing modes. In this manner, more intelligence is used to recognize that some entries in the address translation cache may be valid after a task or partition switch, and may therefore be retained, while other entries in the address translation cache are invalidated.
Type: Grant
Filed: December 15, 2005
Date of Patent: June 17, 2008
Assignee: International Business Machines Corporation
Inventors: Michael J. Corrigan, Paul LuVerne Godtland, Joaquin Hinojosa, Cathy May, Naresh Nayar, Edward John Silha
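A simple software model can illustrate the selective-invalidation rule: each entry carries translation mode bits, and an invalidation request carries hint bits naming the modes to preserve. The C sketch below is such a model under assumed field sizes and names; it does not reflect the actual hardware.

```c
/* A minimal software model (an assumption, not the patented hardware) of
 * selective invalidation: each translation entry records the translation
 * mode used to create it, and an invalidate request carries "hint" bits
 * naming which modes to preserve. Hinted entries survive; the rest drop. */
#include <stdio.h>
#include <stdint.h>

#define TLB_ENTRIES 8

struct xlate_entry {
    uint64_t vpn;    /* virtual page number                       */
    uint64_t rpn;    /* real (physical) page number               */
    uint8_t  mode;   /* translation mode used to create the entry */
    uint8_t  valid;
};

/* Invalidate all entries except those whose mode bit is set in hint_mask. */
static void invalidate_selective(struct xlate_entry *tlb, uint8_t hint_mask)
{
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid && !((1u << tlb[i].mode) & hint_mask))
            tlb[i].valid = 0;
}

int main(void)
{
    struct xlate_entry tlb[TLB_ENTRIES] = {
        { 0x10, 0x1000, 0, 1 },   /* entry created in addressing mode 0 */
        { 0x20, 0x2000, 1, 1 },   /* entry created in addressing mode 1 */
    };
    invalidate_selective(tlb, 1u << 1);   /* preserve only mode-1 entries */
    for (int i = 0; i < TLB_ENTRIES; i++)
        if (tlb[i].valid)
            printf("kept vpn 0x%llx (mode %d)\n",
                   (unsigned long long)tlb[i].vpn, tlb[i].mode);
    return 0;
}
```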
-
Patent number: 7389390
Abstract: In a method of operating a microprocessor system provided with safety functions, which comprises two or more processor cores (1, 2) and periphery elements (5, 7) on a common chip carrier, to which the cores can have write or read access, a distinction is made between algorithms for safety-critical functions and algorithms for comfort functions. Further described are a microprocessor system suitable for implementing the method, and the use of the same, in which the processor cores are connected to periphery elements (5, 6, 7, 8, 9, 10) by way of bus systems (3, 4), and bus driver circuits (19) can transmit bus information from one bus to another with the provision of at least one address comparator (18).
Type: Grant
Filed: May 6, 2002
Date of Patent: June 17, 2008
Assignee: Continental Teves AG & Co. oHG
Inventor: Bernhard Giers
-
Patent number: 7383403
Abstract: In one embodiment, a processor comprises a plurality of instruction buffers, an instruction cache coupled to supply instructions to the plurality of instruction buffers, and a cache miss unit coupled to the instruction cache. Each of the plurality of instruction buffers is configured to store instructions fetched for a respective thread of a plurality of threads. The cache miss unit is configured to monitor cache misses in the instruction cache. In particular, the cache miss unit is configured to detect which of the plurality of threads experience a cache miss to a cache line. Responsive to a return of the cache line for storage in the instruction cache, the cache miss unit is configured to concurrently cause at least one instruction from the cache line to be stored in each of the plurality of instruction buffers that corresponds to one of the plurality of threads which experienced the cache miss to the cache line.
Type: Grant
Filed: June 30, 2004
Date of Patent: June 3, 2008
Assignee: Sun Microsystems, Inc.
Inventors: Jama I. Barreh, Manish Shah, Robert T. Golla
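The per-line miss tracking can be pictured as a small record holding a bitmask of waiting threads, as in the illustrative C sketch below; the structure and names are assumptions, not Sun's design.

```c
/* Hypothetical model of the miss-tracking idea: one pending-miss record
 * per outstanding cache line keeps a bitmask of threads that missed on
 * that line, so when the line returns, an instruction can be pushed into
 * every waiting thread's instruction buffer at once. */
#include <stdio.h>
#include <stdint.h>

#define NUM_THREADS 4

struct pending_miss {
    uint64_t line_addr;
    uint32_t waiters;   /* bit i set => thread i missed on this line */
    int      active;
};

static void record_miss(struct pending_miss *m, uint64_t line, int thread)
{
    if (!m->active) { m->line_addr = line; m->waiters = 0; m->active = 1; }
    m->waiters |= 1u << thread;
}

/* Called when the cache line comes back from memory. */
static void line_fill(struct pending_miss *m, uint64_t line)
{
    if (!m->active || m->line_addr != line)
        return;
    for (int t = 0; t < NUM_THREADS; t++)
        if (m->waiters & (1u << t))
            printf("push first instruction of line 0x%llx into buffer %d\n",
                   (unsigned long long)line, t);
    m->active = 0;
}

int main(void)
{
    struct pending_miss m = {0};
    record_miss(&m, 0x4000, 0);
    record_miss(&m, 0x4000, 2);   /* a second thread misses on the same line */
    line_fill(&m, 0x4000);
    return 0;
}
```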
-
Patent number: 7383387
Abstract: Systems and techniques are described for using a memory cache of predetermined size to map values in a source file to a result file. In general, in one implementation, the technique includes determining values in the source file that are called for in the result file. The called-for values are ordered in a hierarchical order of usage in the result file, from a first called-for value towards a last called-for value. The source file is sequentially parsed to locate called-for values, and the values are stored in memory cache locations. The called-for value with the lowest priority in the cache may be replaced by a newly found called-for value having a higher priority.
Type: Grant
Filed: December 13, 2002
Date of Patent: June 3, 2008
Assignee: SAP AG
Inventor: Dmitry Yankovsky
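The replacement rule reads like a fixed-size cache keyed by usage priority. The C sketch below is a rough rendering of that rule under invented names and a small slot count; it is not SAP's implementation.

```c
/* A rough sketch of the replacement rule in the abstract: the cache holds
 * called-for values with a priority reflecting their order of usage in the
 * result file, and a newly located value may replace the lowest-priority
 * resident entry if its own priority is higher. Names are illustrative. */
#include <stdio.h>

#define CACHE_SLOTS 4

struct slot {
    char value[32];
    int  priority;   /* lower number = used earlier in the result file */
    int  used;
};

/* Offer a value found while parsing the source file to the cache. */
static void offer(struct slot *cache, const char *value, int priority)
{
    int worst = -1;
    for (int i = 0; i < CACHE_SLOTS; i++) {
        if (!cache[i].used) { worst = i; break; }
        if (worst < 0 || cache[i].priority > cache[worst].priority)
            worst = i;
    }
    /* Keep the new value only if the slot is free or holds a lower-priority
     * (later-used, larger-numbered) entry. */
    if (!cache[worst].used || cache[worst].priority > priority) {
        snprintf(cache[worst].value, sizeof cache[worst].value, "%s", value);
        cache[worst].priority = priority;
        cache[worst].used = 1;
    }
}

int main(void)
{
    struct slot cache[CACHE_SLOTS] = {0};
    offer(cache, "customer_name", 3);
    offer(cache, "order_id", 1);          /* higher priority: needed first */
    for (int i = 0; i < CACHE_SLOTS; i++)
        if (cache[i].used)
            printf("slot %d: %s (priority %d)\n",
                   i, cache[i].value, cache[i].priority);
    return 0;
}
```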
-
Patent number: 7383414
Abstract: A method of managing memory-mapped input/output (I/O) for a run-time environment is disclosed, in which opaque references are used for accessing information blocks included in files used in a dynamic run-time environment. The information block is stored in a shared memory space of pages that are each aligned on respective boundaries having addresses that are each some multiple of two raised to an integer power. The opaque reference used for the dynamic run-time environment includes at least an index, or page number reference, into a page map of references to the pages of the shared memory space, and an offset value indicating an offset into the referenced page for the beginning of the storage of the information block. Control bits of the opaque reference indicate information such as the mapping mode, e.g., read-only, read-write, or private. Pages which are modified by a process may be written back to a backing store of the file based on control bits which indicate that a page has been modified.
Type: Grant
Filed: May 28, 2004
Date of Patent: June 3, 2008
Assignee: Oracle International Corporation
Inventors: Robert Lee, Harlan Sexton
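Such an opaque reference can be pictured as a packed word holding control bits, a page-map index, and a page offset. The C sketch below assumes a 32-bit reference with 4 KiB pages and two mode bits; the field widths and names are illustrative choices, not Oracle's layout.

```c
/* Illustrative packing of an "opaque reference" as the abstract describes:
 * a page-map index, an offset within the page, and control bits for the
 * mapping mode. The field widths are assumptions chosen for a 4 KiB page
 * and a 32-bit reference, not values taken from the patent. */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SHIFT  12                          /* 4 KiB pages */
#define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)
#define MODE_SHIFT  30                          /* two mode bits at the top */

enum map_mode { MAP_RO = 0, MAP_RW = 1, MAP_PRIVATE = 2 };

static uint32_t make_ref(uint32_t page_index, uint32_t offset, enum map_mode m)
{
    return ((uint32_t)m << MODE_SHIFT)
         | (page_index << PAGE_SHIFT)
         | (offset & OFFSET_MASK);
}

/* Resolve a reference through a page map of page base addresses. */
static void *resolve(uint32_t ref, void **page_map)
{
    uint32_t page_index = (ref >> PAGE_SHIFT)
                        & ((1u << (MODE_SHIFT - PAGE_SHIFT)) - 1);
    uint32_t offset = ref & OFFSET_MASK;
    return (char *)page_map[page_index] + offset;
}

int main(void)
{
    static char page0[1 << PAGE_SHIFT], page1[1 << PAGE_SHIFT];
    void *page_map[] = { page0, page1 };
    uint32_t ref = make_ref(1, 0x40, MAP_RW);
    printf("ref 0x%08x -> %p (expected page1+0x40 = %p)\n",
           ref, resolve(ref, page_map), (void *)(page1 + 0x40));
    return 0;
}
```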
-
Patent number: 7383415
Abstract: In one embodiment, a processor comprises at least one translation lookaside buffer (TLB) and a control unit coupled to the TLB. The control unit is configured to track whether or not at least one update to the TLB is pending for at least one of a plurality of strands. Each strand comprises hardware to support a different thread of a plurality of concurrently activatable threads in the processor. The strands share the TLB, and the control unit is configured to delay a demap operation issued from one of the strands responsive to the pending update, if any.
Type: Grant
Filed: September 9, 2005
Date of Patent: June 3, 2008
Assignee: Sun Microsystems, Inc.
Inventors: Paul J. Jordan, Manish K. Shah, Gregory F. Grohoski
-
Patent number: 7380073
Abstract: Computer-implemented systems and methods for handling access to one or more resources. Executable entities that are running substantially concurrently provide access requests to an operating system (OS). One or more traps of the OS are avoided to improve resource-accessing performance through use of information stored in a shared locking mechanism. The shared locking mechanism indicates the overall state of the locking process, such as the number of processes waiting to retrieve data from a resource and/or whether a writer process is waiting to access the resource.
Type: Grant
Filed: November 26, 2003
Date of Patent: May 27, 2008
Assignee: SAS Institute Inc.
Inventor: Charles S. Shorb
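A familiar way to realize trap-avoiding shared locking is a single atomic lock word that encodes the reader count and a writer flag, so uncontended paths never enter the OS. The C11 sketch below takes that approach; the layout is an assumption for illustration and is not presented as SAS's mechanism.

```c
/* A condensed sketch of trap-avoiding shared locking along the lines the
 * abstract describes: the lock word itself records the overall state
 * (reader count plus a writer flag), so uncontended acquisitions are
 * plain atomic operations and never trap into the OS. This layout is an
 * assumption for illustration only. */
#include <stdatomic.h>
#include <stdio.h>

#define WRITER_BIT 0x80000000u   /* high bit: a writer holds or wants the lock */

static atomic_uint lock_word;    /* low bits: number of active readers */

static void read_lock(void)
{
    for (;;) {
        unsigned cur = atomic_load(&lock_word);
        if (cur & WRITER_BIT)    /* writer present: spin on shared state */
            continue;
        if (atomic_compare_exchange_weak(&lock_word, &cur, cur + 1))
            return;
    }
}

static void read_unlock(void) { atomic_fetch_sub(&lock_word, 1); }

static void write_lock(void)
{
    /* Announce the writer, then wait for the reader count to drain. */
    atomic_fetch_or(&lock_word, WRITER_BIT);
    while ((atomic_load(&lock_word) & ~WRITER_BIT) != 0)
        ;                        /* still no OS trap, just a spin */
}

static void write_unlock(void) { atomic_fetch_and(&lock_word, ~WRITER_BIT); }

int main(void)
{
    read_lock();
    printf("readers active: %u\n", atomic_load(&lock_word) & ~WRITER_BIT);
    read_unlock();
    write_lock();
    printf("writer holds lock\n");
    write_unlock();
    return 0;
}
```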
-
Patent number: 7376787
Abstract: A disk array system includes storage devices, a storage device control unit, a connection unit connected with the storage device control unit, channel control units, a shared memory, and a cache memory. Each channel control unit includes a first processor for converting file data, received through a local area network outside of the disk array system to which the channel control units belong, into block data and requesting storage of the converted data in the plurality of storage devices, and a second processor for transferring the block data to the storage devices through the connection unit and the storage device control unit in response to a request given from the first processor; each channel control unit is connected with the connection unit and the local area network.
Type: Grant
Filed: June 8, 2006
Date of Patent: May 20, 2008
Assignee: Hitachi, Ltd.
Inventors: Hiroshi Ogasawara, Homare Kanie, Nobuyuki Saika, Yutaka Takata, Shinichi Nakayama
-
Patent number: 7376806
Abstract: Data management systems, such as those used in disk control units, employ memory entry lists to help keep track of user data. The present invention provides improved performance of entry list maintenance. Much of the protocol employed to conduct such maintenance is preferably performed by hardware-based logic, thereby freeing other system resources to execute other processes. New entries to the memory list are allowed only at predetermined addresses, and entries are updated by writing a predetermined data pattern to a previously allocated address. Optionally, improved error detection, such as a longitudinal redundancy check, may also be performed in an efficient manner during entry list maintenance to assure the integrity of the list.
Type: Grant
Filed: November 17, 2004
Date of Patent: May 20, 2008
Assignee: International Business Machines Corporation
Inventors: Ronald J. Chapman, Gary W. Batchelor, Michael T. Benhase, Kenneth W. Todd
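The longitudinal redundancy check mentioned in the abstract is, in its simplest form, an XOR accumulation over an entry's bytes. The C sketch below shows that generic check over a made-up entry layout; it illustrates only the kind of integrity check involved, not IBM's list format.

```c
/* Generic longitudinal redundancy check (LRC) over an entry's bytes: a
 * simple XOR accumulation. The entry layout here is a made-up example,
 * shown only to illustrate the kind of check the abstract mentions. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct list_entry {
    uint32_t next;      /* address/index of the next entry   */
    uint32_t payload;   /* handle to the user data            */
    uint8_t  lrc;       /* check byte over the fields above   */
};

static uint8_t compute_lrc(const void *data, size_t len)
{
    const uint8_t *p = data;
    uint8_t lrc = 0;
    for (size_t i = 0; i < len; i++)
        lrc ^= p[i];
    return lrc;
}

int main(void)
{
    struct list_entry e = { 42, 0xABCD, 0 };
    e.lrc = compute_lrc(&e, offsetof(struct list_entry, lrc));
    int ok = compute_lrc(&e, offsetof(struct list_entry, lrc)) == e.lrc;
    printf("entry LRC 0x%02X, integrity %s\n", (unsigned)e.lrc,
           ok ? "ok" : "bad");
    return 0;
}
```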
-
Patent number: 7376802
Abstract: The present invention relates to a memory arrangement having a controller and having at least one memory device. Data signals, control signals and address signals can be transferred between the controller and the memory device. The memory arrangement is designed in such a way that the data signals can be transferred via data signal lines between the controller and the memory device. The memory arrangement is furthermore designed in such a way that the control signals and the address signals can likewise be transferred via the data signal lines between the controller and the memory device.
Type: Grant
Filed: May 21, 2004
Date of Patent: May 20, 2008
Assignee: Infineon Technologies AG
Inventors: Georg Braun, Hermann Ruckerbauer, Maksim Kuzmenka, Siva Raghuram
-
Patent number: 7373466
Abstract: A method and apparatus for filtering memory probe activity for writes in a distributed shared memory computer. In one embodiment, the method may include assigning an uncached directory state to a cache data block in response to evicting the cache data block. In another embodiment, the method may include assigning a remote directory state to a cache data block in response to evicting the cache data block and storing it in a remote cache. In a third embodiment, the method may include assigning a pairwise-shared directory state in response to a second processor node initiating a load operation to a cache data block in a modified cache state in a first processor node. In a fourth embodiment, the method may include assigning a migratory directory state in response to a processor node initiating a store operation to a cache data block in a pairwise-shared cache state.
Type: Grant
Filed: April 7, 2004
Date of Patent: May 13, 2008
Assignee: Advanced Micro Devices, Inc.
Inventor: Patrick N. Conway
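The four embodiments read naturally as transitions in a directory state machine. The C sketch below encodes one reading of those transitions; the event names and transition table are an interpretation of the abstract, not AMD's protocol logic.

```c
/* Illustrative directory-state model reflecting the four embodiments in
 * the abstract: entries move to UNCACHED or REMOTE on eviction, to
 * PAIRWISE_SHARED when a second node loads a modified block, and to
 * MIGRATORY when a node stores to a pairwise-shared block. This table is
 * a reading of the abstract, not the actual coherence protocol. */
#include <stdio.h>

enum dir_state { INVALID, MODIFIED, UNCACHED, REMOTE, PAIRWISE_SHARED, MIGRATORY };
enum dir_event { EVICT, EVICT_TO_REMOTE_CACHE, REMOTE_LOAD, REMOTE_STORE };

static enum dir_state next_state(enum dir_state s, enum dir_event e)
{
    switch (e) {
    case EVICT:                 return UNCACHED;
    case EVICT_TO_REMOTE_CACHE: return REMOTE;
    case REMOTE_LOAD:           return s == MODIFIED ? PAIRWISE_SHARED : s;
    case REMOTE_STORE:          return s == PAIRWISE_SHARED ? MIGRATORY : s;
    }
    return s;
}

int main(void)
{
    enum dir_state s = MODIFIED;
    s = next_state(s, REMOTE_LOAD);    /* a second node loads the block */
    s = next_state(s, REMOTE_STORE);   /* a node then stores to it      */
    printf("final directory state: %s\n",
           s == MIGRATORY ? "MIGRATORY" : "other");
    return 0;
}
```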
-
Patent number: 7373476
Abstract: A discrepancy between a management range of a user on a management computer and a management range of the user in a storage is detected with respect to a volume held in the storage. Storage management information of the management computer stores a correspondence between an identifier of a volume group and an identifier of the user. When a plurality of volumes are designated as a managed object of the user, the management computer references the storage management information and obtains, from the storage, an identifier of the volume group to which the plurality of designated volumes belong. Next, the management computer references the storage management information and determines whether or not the obtained volume group is in the management range of the same user.
Type: Grant
Filed: July 12, 2004
Date of Patent: May 13, 2008
Assignee: Hitachi, Ltd.
Inventors: Masayasu Asano, Takayuki Nagai, Yasuyuki Mimatsu
-
Patent number: 7373463
Abstract: An integrated circuit and an antifraud method implementing at least one operation involving at least one secret quantity, and functionally including, upstream and downstream of the operator, at least one source register and at least one destination register, respectively, and including means for loading a random number at least into the destination register.
Type: Grant
Filed: February 11, 2004
Date of Patent: May 13, 2008
Assignee: STMicroelectronics S.A.
Inventors: Yannick Teglia, Pierre-Yvan Liardet