Predicting, Look-ahead Patents (Class 711/204)
  • Patent number: 6681178
    Abstract: When map data necessary for route guiding is pre-read from a map recording medium 101, and accumulated in a data buffer, the quantity or the ratio of the pre-read data with respect to data required for the entire distance is notified by an image or a voice to a user. Thus, the user can know a progress in the pre-reading operation of the map data.
    Type: Grant
    Filed: January 14, 2002
    Date of Patent: January 20, 2004
    Assignee: Mitsubishi Denki Kabushiki Kaisha
    Inventors: Koichi Inoue, Masatsugu Norimoto
  • Patent number: 6678815
    Abstract: An apparatus and method for reducing power consumption in a processor front end are provided. The processor includes an instruction cache, a TLB, and a branch predictor. For sequential code execution, the instruction cache is disabled unless the next instruction fetch will cross a cache line boundary, thus reducing unnecessary accesses to the instruction cache. The TLB is disabled unless the next instruction fetch will cross a page boundary, thus reducing unnecessary TLB look-ups. For code branching, the branch predictor is configured to include, for each target address, an indication of whether the target address is in the same page as the corresponding branch address. When a branch occurs so as to cause access to a given entry in the branch predictor, the TLB is disabled if the target address is in the same page as the branch address.
    Type: Grant
    Filed: June 27, 2000
    Date of Patent: January 13, 2004
    Assignee: Intel Corporation
    Inventors: Gregory S. Mathews, Edward T. Grochowski, Chih-Hung Chung
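The boundary checks this abstract describes can be sketched in a few lines. This is an illustrative model, not Intel's implementation; the 64-byte line size, 4 KiB page size, and 16-byte fetch width are assumptions.

```python
# Sketch of the front-end gating in patent 6678815: during sequential
# execution, the instruction cache is re-accessed only when the next
# fetch crosses a cache-line boundary, and the TLB is consulted only
# when it crosses a page boundary. Sizes below are assumptions.

LINE_SIZE = 64      # assumed cache-line size in bytes
PAGE_SIZE = 4096    # assumed page size in bytes

def crosses_line(pc: int, fetch_width: int) -> bool:
    """True if the next sequential fetch leaves the current cache line."""
    return (pc // LINE_SIZE) != ((pc + fetch_width) // LINE_SIZE)

def crosses_page(pc: int, fetch_width: int) -> bool:
    """True if the next sequential fetch leaves the current page."""
    return (pc // PAGE_SIZE) != ((pc + fetch_width) // PAGE_SIZE)

def front_end_enables(pc: int, fetch_width: int = 16):
    """Return (enable_icache, enable_tlb) for the next sequential fetch."""
    return crosses_line(pc, fetch_width), crosses_page(pc, fetch_width)
```

A fetch at the end of a page enables both structures; a fetch in the middle of a line enables neither, which is where the power saving comes from.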
  • Patent number: 6675279
    Abstract: A behavioral memory mechanism for performing fetch prediction within a data processing system is disclosed. The data processing system includes a processor, a real memory, an address converter, a fetch prediction means, and an address translator. The real memory has multiple real address locations, and each of the real address locations is associated with a corresponding one of many virtual address locations. The virtual address locations are divided into two non-overlapping regions, namely, an architecturally visible virtual memory region and a behavioral virtual memory region. The address converter converts an effective address to an architecturally visible virtual address and a behavioral virtual address. The architecturally visible virtual address is utilized to access the architecturally visible virtual memory region of the virtual memory and the behavioral virtual address is utilized to access the behavioral virtual memory region of the virtual memory.
    Type: Grant
    Filed: October 16, 2001
    Date of Patent: January 6, 2004
    Assignee: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, William J. Starke
  • Patent number: 6675282
    Abstract: A system and method for storing only one copy of a data block that is shared by two or more processes is described. In one embodiment, a global/non-global predictor predicts whether a data block, specified by a linear address, is shared or not shared by two or more processes. If the data block is predicted to be non-shared, then a portion of the linear address referencing the data block is combined with a process identifier that is unique to form a global/non-global linear address. If the data block is predicted to be shared, then the global/non-global linear address is the linear address itself. If the prediction as to whether or not the data block is shared is incorrect, then the actual value of whether or not the data block is shared is used in computing a corrected global/non-global linear address.
    Type: Grant
    Filed: February 12, 2003
    Date of Patent: January 6, 2004
    Assignee: Intel Corporation
    Inventors: Herbert H. J. Hum, Stephan J. Jourdan, Deborrah Marr, Per H. Hammarlund
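The address formation in this abstract can be sketched as follows. The bit widths and the XOR combining step are assumptions for illustration, not the patented circuit: a 32-bit linear address whose top 8 bits are folded with a process ID when the block is predicted non-shared, so private blocks from different processes get distinct addresses while shared blocks keep a single copy.

```python
# Sketch of global/non-global linear-address formation (patent 6675282).
# PID placement and widths are illustrative assumptions.

PID_SHIFT = 24  # assumed: PID folded into the top 8 address bits

def gng_linear_address(linear: int, pid: int, predicted_shared: bool) -> int:
    if predicted_shared:
        return linear                      # shared: one copy, address as-is
    return linear ^ (pid << PID_SHIFT)     # non-shared: make it per-process

def corrected_address(linear: int, pid: int,
                      predicted_shared: bool, actually_shared: bool) -> int:
    """On a misprediction, recompute using the actual shared/non-shared value."""
    if predicted_shared == actually_shared:
        return gng_linear_address(linear, pid, predicted_shared)
    return gng_linear_address(linear, pid, actually_shared)
```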
  • Patent number: 6675280
    Abstract: A method and apparatus for identifying virtual addresses in a cache line. To differentiate candidate virtual addresses from data values and random bit patterns, the upper bits of an address-sized word in the cache line are compared with the upper bits of the cache line's effective address. If the upper bits of the address-sized word match the upper bits of the effective address, the address-sized word is identified as a candidate virtual address.
    Type: Grant
    Filed: November 30, 2001
    Date of Patent: January 6, 2004
    Assignee: Intel Corporation
    Inventors: Robert N. Cooksey, Stephan J. Jourdan
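The upper-bits comparison described above is simple enough to sketch directly. The 32-bit word size and the 12-bit comparison width are assumptions; the patent leaves the exact width to the implementation.

```python
# Sketch of candidate-virtual-address identification (patent 6675280):
# an address-sized word in a cache line is treated as a candidate pointer
# if its upper bits match the upper bits of the line's effective address.

WORD_BITS = 32
COMPARE_BITS = 12  # assumed number of upper bits compared

def is_candidate_va(word: int, effective_addr: int) -> bool:
    shift = WORD_BITS - COMPARE_BITS
    return (word >> shift) == (effective_addr >> shift)

def candidates_in_line(line_words, effective_addr):
    """Return the words in a cache line that look like virtual addresses."""
    return [w for w in line_words if is_candidate_va(w, effective_addr)]
```

Small integers and random bit patterns rarely share their upper bits with the line's own address, so the filter cheaply separates likely pointers from data.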
  • Patent number: 6662274
    Abstract: A method for creating a mark stack for use in a moving garbage collection algorithm (MGCA) is described. The algorithm of the present invention creates a mark stack to implement an MGCA. The algorithm allows efficient use of cache memory prefetch features to reduce the time required to complete the mark stack and thus the time required for garbage collection. Instructions are issued to prefetch data objects that will be examined in the future, so that by the time the scan pointer reaches a data object, the cache lines for that object are already filled. At some point after a data object is prefetched, the address locations of its associated data objects are likewise prefetched. Finally, the associated data objects located at the previously fetched addresses are themselves prefetched. This reduces garbage collection time by continually supplying the garbage collector with a stream of preemptively prefetched data objects that require scanning.
    Type: Grant
    Filed: June 20, 2001
    Date of Patent: December 9, 2003
    Assignee: Intel Corporation
    Inventors: Sreenivas Subramoney, Richard L. Hudson
  • Patent number: 6662286
    Abstract: Memory corruption can be suppressed. When data stored in a random access area are read, the read data (physical block) are retrieved by a logical block number, and the newest data are read by referring to an incremental counter of the data having that logical block number. When data are stored in the random access area, the incremental counter and the logical block number of the data already stored in the random access area are consulted, a physical block determined to be unnecessary is assigned as a write buffer, and the data are then written to this write buffer.
    Type: Grant
    Filed: July 1, 1998
    Date of Patent: December 9, 2003
    Assignee: Sony Corporation
    Inventors: Susumu Kusakabe, Masayuki Takada
  • Patent number: 6647464
    Abstract: A system and method are disclosed which provide a cache structure that allows early access to the cache structure's data. A cache design is disclosed that, in response to receiving a memory access request, begins an access to a cache level's data before a determination has been made as to whether a true hit has been achieved for such cache level. That is, a cache design is disclosed that enables cache data to be speculatively accessed before a determination is made as to whether a memory address required to satisfy a received memory access request is truly present in the cache. In a preferred embodiment, the cache is implemented to make a determination as to whether a memory address required to satisfy a received memory access request is truly present in the cache structure (i.e., whether a “true” cache hit is achieved), although such a determination is not made before the cache data begins to be accessed.
    Type: Grant
    Filed: February 18, 2000
    Date of Patent: November 11, 2003
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Reid James Riedlinger, Dean A. Mulla, Tom Grutkowski
  • Patent number: 6647491
    Abstract: The inventive mechanism provides fast profiling and effective trace selection. The inventive mechanism partitions the work between hardware and software. The hardware automatically detects which code is executed very frequently, e.g. which code is hot code. The hardware also maintains the branch history information. When the hardware determines that a section or block of code is hot code, the hardware sends a signal to the software. The software then forms the trace from the hot code, and uses the branch history information in making branch predictions.
    Type: Grant
    Filed: October 1, 2001
    Date of Patent: November 11, 2003
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Wei C. Hsu, Manuel Benitez
  • Patent number: 6647473
    Abstract: A snapshot system capable of capturing snapshots of multiple volumes wherein the snapshots are coordinated. A snapshot manager determines which volumes are to be involved in a snapshot operation, and issues a message to the file system for each volume involved, each message including information sufficient to identify the volumes involved in the snapshot operation. Each file system passes its respective message down to a coordinator mechanism. The coordinator mechanism coordinates the state of each of the volumes, such as by holding writes thereto, to put each volume into a quiescent state, and then enables the snapshot to be captured. When the snapshots are captured, a snapshot set will include snapshots that are coordinated across the multiple volumes. The coordinator mechanism releases any writes being held for the volumes involved.
    Type: Grant
    Filed: February 16, 2000
    Date of Patent: November 11, 2003
    Assignee: Microsoft Corporation
    Inventors: David P. Golds, Norbert P. Kusters, Brian D. Andrew, Daniel E. Lovinger, Supriya Wickrematillake
  • Patent number: 6643739
    Abstract: A way prediction scheme for a partitioned cache is based on the contents of instructions that use indirect addressing to access data items in memory. The contents of indirect-address instructions are directly available for use, without a memory address computation, and a prediction scheme based on this directly available information is particularly well suited for a pipeline architecture. Indirect addressing instructions also provide a higher-level abstraction of memory accesses, and are likely to be more indicative of relationships among data items, as compared to the absolute address of the data items. In a preferred embodiment, the base register that is contained in the indirect address instruction provides an index to a way-prediction table for an n-way associative cache.
    Type: Grant
    Filed: March 13, 2001
    Date of Patent: November 4, 2003
    Assignee: Koninklijke Philips Electronics N.V.
    Inventors: Jan-Willem Van De Waerdt, Paul Stravers
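The way-prediction table keyed by base register can be sketched as below. The table size, the direct indexing, and the last-hit update policy are assumptions; the point is that the base-register number is available from the instruction itself, before any address computation.

```python
# Sketch of base-register-indexed way prediction (patent 6643739) for an
# n-way set-associative cache. Sizes and update policy are assumptions.

class WayPredictor:
    def __init__(self, num_base_regs: int = 32, num_ways: int = 4):
        self.num_ways = num_ways
        self.table = [0] * num_base_regs   # predicted way per base register

    def predict(self, base_reg: int) -> int:
        """Predict the way to probe first, before the address is computed."""
        return self.table[base_reg]

    def update(self, base_reg: int, actual_way: int) -> None:
        """After a misprediction, remember the way that actually hit."""
        self.table[base_reg] = actual_way
```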
  • Patent number: 6625712
    Abstract: The present invention relates to a method of producing a memory management table that controls memories capable of retaining data when power is cut off, and that manages identifier information of the memory areas which are the data storage destinations designated by a logical address issued by a host device. After an initializing process, the host device is immediately notified of the canceling of the busy state, without production of the memory management table. Alternatively, only a part of the memory management table is produced and the host device is notified of the canceling of the busy state. After that, until the host device issues a process request, or while the host device is issuing a process request, the incomplete part of the memory management table is completed. Thus, the memory management table can be completed without keeping the host device in the busy state.
    Type: Grant
    Filed: March 9, 2001
    Date of Patent: September 23, 2003
    Assignee: Fujitsu Limited
    Inventors: Shogo Shibazaki, Takeshi Nagase
  • Patent number: 6625696
    Abstract: An apparatus and method for predicting quantities of data that will be requested by requesting devices capable of initiating requests for data from storage devices, in which predictions of the quantities that will be requested are made based on past patterns of quantities of data requested.
    Type: Grant
    Filed: March 31, 2000
    Date of Patent: September 23, 2003
    Assignee: Intel Corporation
    Inventor: Theodore L. Willke, II
  • Patent number: 6615337
    Abstract: In one illustrative embodiment, an apparatus for controlling a translation lookaside buffer is provided. The apparatus comprises a translation unit, a buffer, and a comparator. The translation unit is adapted to initiate a table walk process to convert a virtual memory address to a physical address. The buffer is adapted to store pending memory access requests previously processed by the translation unit. The comparator is adapted to determine if a physical address generated by the table walk process of the translation unit conflicts with a physical address of at least one of the pending memory access requests, and deliver a control signal to the translation unit for canceling the table walk process in response to determining that a conflict exists.
    Type: Grant
    Filed: August 9, 2001
    Date of Patent: September 2, 2003
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Michael Clark, Michael A. Filippo, Benjamin Sander, Greg Smaus
  • Patent number: 6609180
    Abstract: N_Port_Name information capable of distinctly identifying a host computer has been set in a microprocessor 42 of a storage controller 40 prior to start-up of host computers 10, 20, 30. Upon startup of the host computers 10, 20, 30, when the storage controller 40 receives an issued frame, the microprocessor 42 performs a comparison to determine whether the N_Port_Name information stored in the frame has already been set in the microprocessor 42 and registered in the N_Port_Name list within a control table it maintains. When the comparison results in a match, processing based on the frame instruction continues; if the comparison fails to match, the request is rejected.
    Type: Grant
    Filed: March 13, 2001
    Date of Patent: August 19, 2003
    Assignee: Hitachi, Ltd.
    Inventors: Akemi Sanada, Toshio Nakano, Hidehiko Iwasaki, Masahiko Sato, Kenji Muraoka, Kenichi Takamoto, Masaaki Kobayashi
  • Patent number: 6594730
    Abstract: An embodiment of the present invention provides a memory controller that includes a plurality of transaction queues and an arbiter, a prefetch cache in communication with the arbiter, and a prefetch queue in communication with the prefetch cache. The prefetch queue also may be provided in communication with each of the transaction queues for the purpose of determining whether the transaction queues are operating in a congested state.
    Type: Grant
    Filed: August 3, 1999
    Date of Patent: July 15, 2003
    Assignee: Intel Corporation
    Inventors: Herbert H J Hum, Andrew V. Anderson
  • Patent number: 6581140
    Abstract: A system provides a method and apparatus for accessing information in a cache in a data processing system. The system optimizes a speed-critical path within the cache system by using a prediction scheme. The prediction scheme subdivides the address range of address bits and compares the portions separately. A comparison of a critical portion of the address, along with a prediction bit, are used to generate a prediction.
    Type: Grant
    Filed: July 3, 2000
    Date of Patent: June 17, 2003
    Assignee: Motorola, Inc.
    Inventors: Steven C. Sullivan, Michael D. Snyder, Magnus K. Bruce
  • Patent number: 6581151
    Abstract: A speculative store forwarding apparatus in a pipelined microprocessor that supports paged virtual memory is disclosed. The apparatus includes comparators that compare only the physical page index of load data with the physical page indexes of store data pending in store buffers to detect a potential storehit. If the indexes match, forwarding logic speculatively forwards the newest storehit data based on the index compare. The index compare is performed in parallel with a TLB lookup of the virtual page number of the load data, which produces a load physical page address. The load physical page address is compared with the store data physical page addresses to verify that the speculatively forwarded storehit data is in the same page as the load data. If the physical page addresses mismatch, the apparatus stalls the pipeline in order to correct the erroneous speculative forward. The microprocessor stalls until the correct data is fetched.
    Type: Grant
    Filed: July 18, 2001
    Date of Patent: June 17, 2003
    Assignee: IP-First, LLC
    Inventors: G. Glenn Henry, Rodney E. Hooker
  • Patent number: 6578130
    Abstract: A method and apparatus for prefetching data in computer systems that tracks the number of prefetches currently active and compares that number to a preset maximum number of allowable prefetches to determine if additional prefetches should currently be performed. By limiting the number of prefetches being performed at any given time, the use of system resources for prefetching can be controlled, and thus system performance can be optimized.
    Type: Grant
    Filed: October 18, 2001
    Date of Patent: June 10, 2003
    Assignee: International Business Machines Corporation
    Inventors: Brian David Barrick, Michael John Mayfield, Brian Patrick Hanley
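The throttling scheme above reduces to a counter compared against a preset maximum. A minimal sketch, with the limit value chosen arbitrarily for illustration:

```python
# Sketch of prefetch throttling (patent 6578130): a counter tracks
# prefetches currently in flight, and new prefetches are issued only
# while the count is below a preset maximum.

class PrefetchThrottle:
    def __init__(self, max_active: int = 4):
        self.max_active = max_active
        self.active = 0

    def try_issue(self) -> bool:
        """Issue a prefetch only if we are under the allowed maximum."""
        if self.active < self.max_active:
            self.active += 1
            return True
        return False

    def complete(self) -> None:
        """A prefetch finished; free its slot."""
        self.active -= 1
```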
  • Publication number: 20030101326
    Abstract: An instruction-set-aware method for reducing transitions on an irredundant address bus comprises receiving a first address for communication to a memory on an irredundant address bus. The method retrieves an instruction from a memory location indicated by the first address, transmits the instruction on a data bus, and determines a category of the instruction. The method predicts a second address based, at least in part, on the first address, the instruction, and the category of the instruction.
    Type: Application
    Filed: January 14, 2003
    Publication date: May 29, 2003
    Applicant: Fujitsu Limited
    Inventors: Farzan Fallah, Yazdan Aghaghiri, Massoud Pedram
  • Patent number: 6571318
    Abstract: A processor is described which includes a stride detect table. The stride detect table includes one or more entries, each entry used to track a potential stride pattern. Additionally, each entry includes a confidence counter. The confidence counter may be incremented each time another address in the pattern is detected, and thus may be indicative of the strength of the pattern (e.g., the likelihood of the pattern repeating). At a first threshold of the confidence counter, prefetching of the next address in the pattern (the most recent address plus the stride) may be initiated. At a second, greater threshold, a more aggressive prefetching may be initiated (e.g. the most recent address plus twice the stride). In some implementations, the prefetch mechanism including the stride detect table may replace a prefetch buffer and prefetch logic in the memory controller.
    Type: Grant
    Filed: March 2, 2001
    Date of Patent: May 27, 2003
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Benjamin T. Sander, William A. Hughes, Sridhar P. Subramanian, Teik-Chung Tan
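The two-threshold confidence scheme above can be sketched with a single table entry. The threshold values and the reset-on-mismatch policy are assumptions; the patent describes a multi-entry table with the same per-entry behavior.

```python
# Sketch of a stride detector with a confidence counter, in the spirit
# of patent 6571318. At THRESH1 we prefetch addr + stride; at the
# greater THRESH2 we prefetch more aggressively (addr + 2*stride).

THRESH1, THRESH2 = 2, 4  # assumed confidence thresholds

class StrideEntry:
    def __init__(self):
        self.last = None
        self.stride = 0
        self.confidence = 0

    def access(self, addr: int):
        """Observe an address; return the list of prefetch addresses to issue."""
        if self.last is not None and addr - self.last == self.stride != 0:
            self.confidence += 1            # pattern repeating: grow confidence
        else:
            self.stride = addr - self.last if self.last is not None else 0
            self.confidence = 0             # new or broken pattern: retrain
        self.last = addr
        if self.confidence >= THRESH2:
            return [addr + self.stride, addr + 2 * self.stride]
        if self.confidence >= THRESH1:
            return [addr + self.stride]
        return []
```

Feeding the entry a stream of addresses 64 bytes apart, prefetching begins after the second repeat and becomes two-ahead after the fourth.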
  • Patent number: 6564313
    Abstract: The invention contemplates a system and method for efficient instruction prefetching based on the termination of loops. A computer system may be contemplated herein, wherein the computer system may include a semiconductor memory device, a cache memory device and a prefetch unit. The system may also include a memory bus to couple the semiconductor memory device to the prefetch unit. The system may further include a circuit coupled to the memory bus. The circuit may detect a branch instruction within the sequence of instructions, such that the branch instruction may target a loop construct. A circuit may also be contemplated herein. The circuit may include a detector coupled to detect a loop within a sequence of instructions. The circuit may also include one or more counting devices coupled to the detector. A first counting device may count a number of clock cycles associated with a set of instructions within a loop construct.
    Type: Grant
    Filed: December 20, 2001
    Date of Patent: May 13, 2003
    Assignee: LSI Logic Corporation
    Inventor: Asheesh Kashyap
  • Patent number: 6560690
    Abstract: A system and method for storing only one copy of a data block that is shared by two or more processes is described. In one embodiment, a global/non-global predictor predicts whether a data block, specified by a linear address, is shared or not shared by two or more processes. If the data block is predicted to be non-shared, then a portion of the linear address referencing the data block is combined with a process identifier that is unique to form a global/non-global linear address. If the data block is predicted to be shared, then the global/non-global linear address is the linear address itself. If the prediction as to whether or not the data block is shared is incorrect, then the actual value of whether or not the data block is shared is used in computing a corrected global/non-global linear address.
    Type: Grant
    Filed: December 29, 2000
    Date of Patent: May 6, 2003
    Assignee: Intel Corporation
    Inventors: Herbert H. J. Hum, Stephan J. Jourdan, Deborrah Marr, Per H. Hammarlund
  • Patent number: 6560689
    Abstract: A prevalidation content addressable memory, CAM, is used to pre-decode a virtual address region extension and enable it for use by a translation look-aside buffer, TLB. The prevalidation CAM removes the region extensions stored in region registers from a serial TLB look-up path.
    Type: Grant
    Filed: March 31, 2000
    Date of Patent: May 6, 2003
    Assignee: Intel Corporation
    Inventors: Gregory S. Mathews, Gary Hammond
  • Patent number: 6553476
    Abstract: A storage apparatus has input means for inputting an input/output execution time prediction request from an external system, and determining means for predicting the execution time of the input/output request. In response to the prediction request input from the external system, the storage apparatus predicts the execution time of the input/output request and returns a response to the external system.
    Type: Grant
    Filed: February 9, 1998
    Date of Patent: April 22, 2003
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Yasushi Ayaki, Junichi Komeno, Toshiharu Koshino, Yoshitaka Yaguchi, Tsukasa Yoshiura, Yuji Nagaishi
  • Patent number: 6553483
    Abstract: In an enhanced virtual renaming scheme within a processor, multiple logical registers may be mapped to a single physical register. A value cache determines whether a new value generated pursuant to program instructions matches values associated with previously executed instructions. If so, the logical register associated with the newly executed instruction shares the physical register. Also, deadlock-prevention measures may be integrated into a register allocation unit in a manner that “steals” a physical register from a younger executed instruction when a value from an older instruction is generated in a processor core.
    Type: Grant
    Filed: November 29, 1999
    Date of Patent: April 22, 2003
    Assignee: Intel Corporation
    Inventors: Stephan J. Jourdan, Ronny Ronen, Michael Bekerman
  • Publication number: 20030074540
    Abstract: A behavioral memory mechanism for performing fetch prediction within a data processing system is disclosed. The data processing system includes a processor, a real memory, an address converter, a fetch prediction means, and an address translator. The real memory has multiple real address locations, and each of the real address locations is associated with a corresponding one of many virtual address locations. The virtual address locations are divided into two non-overlapping regions, namely, an architecturally visible virtual memory region and a behavioral virtual memory region. The address converter converts an effective address to an architecturally visible virtual address and a behavioral virtual address. The architecturally visible virtual address is utilized to access the architecturally visible virtual memory region of the virtual memory and the behavioral virtual address is utilized to access the behavioral virtual memory region of the virtual memory.
    Type: Application
    Filed: October 16, 2001
    Publication date: April 17, 2003
    Applicant: International Business Machines Corporation
    Inventors: Ravi K. Arimilli, William J. Starke
  • Patent number: 6542891
    Abstract: The present invention is a computer implemented method and system for minimizing contention for a shared resource between a plurality of processes executing computer instructions that are associated with said shared resource. The method analyzes at least one of said processes of computer instructions and determines whether at least one of said processes modifies said shared resource. If at least one of said processes does not modify said shared resource, the method controls access to said shared resource by at least one said process.
    Type: Grant
    Filed: January 29, 1999
    Date of Patent: April 1, 2003
    Assignee: International Business Machines Corporation
    Inventors: Larry Wayne Loen, John Matthew Santosuosso
  • Publication number: 20030056062
    Abstract: A preemptive write back controller is described. The present invention is well suited for a cache, main memory, or other temporarily private data storage that implements a write back strategy. The preemptive write back controller includes a list of the lines, pages, words, memory locations, or sets of memory locations potentially requiring a write back (i.e., those which previously experienced a write operation into them) in a write back cache, write back main memory, or other write back temporarily private data storage. Thus, the preemptive write back controller can initiate or force a preemptive cleaning of these lines, pages, words, memory locations, or sets of memory locations.
    Type: Application
    Filed: September 14, 2001
    Publication date: March 20, 2003
    Inventor: Manohar K. Prabhu
  • Patent number: 6535962
    Abstract: A data processing system includes a processor having a first level cache and a prefetch engine. Coupled to the processor are a second level cache and a third level cache and a system memory. Prefetching of cache lines is performed into each of the first, second, and third level caches by the prefetch engine. Prefetch requests from the prefetch engine to the second and third level caches are performed over a private prefetch request bus, which is separate from the bus system that transfers data from the various cache levels to the processor.
    Type: Grant
    Filed: November 8, 1999
    Date of Patent: March 18, 2003
    Assignee: International Business Machines Corporation
    Inventors: Michael John Mayfield, Francis Patrick O'Connell, David Scott Ray
  • Patent number: 6523096
    Abstract: N_Port_Name information capable of distinctly identifying a host computer has been set in a microprocessor 42 of a storage controller 40 prior to start-up of host computers 10, 20, 30. Upon startup of the host computers 10, 20, 30, when the storage controller 40 receives an issued frame, the microprocessor 42 performs a comparison to determine whether the N_Port_Name information stored in the frame has already been set in the microprocessor 42 and registered in the N_Port_Name list within a control table it maintains. When the comparison results in a match, processing based on the frame instruction continues; if the comparison fails to match, the request is rejected.
    Type: Grant
    Filed: March 13, 2001
    Date of Patent: February 18, 2003
    Assignee: Hitachi, Ltd.
    Inventors: Akemi Sanada, Toshio Nakano, Hidehiko Iwasaki, Masahiko Sato, Kenji Muraoka, Kenichi Takamoto, Masaaki Kobayashi
  • Patent number: 6496917
    Abstract: A multiprocessor system includes a plurality of central processing units (CPUs) connected to one another by a system bus. Each CPU includes a cache controller to communicate with its cache, and a primary memory controller to communicate with its primary memory. When there is a cache miss in a CPU, the cache controller routes an address request for primary memory directly to the primary memory via the CPU as a speculative request without access the system bus, and also issues the address request to the system bus to facilitate data coherency. The speculative request is queued in the primary memory controller, which in turn retrieves speculative data from a specified primary memory address. The CPU monitors the system bus for a subsequent transaction that requests the specified data in the primary memory. If the subsequent transaction requesting the specified data is a read transaction that corresponds to the speculative address request, the speculative request is validated and becomes non-speculative.
    Type: Grant
    Filed: February 7, 2000
    Date of Patent: December 17, 2002
    Assignee: Sun Microsystems, Inc.
    Inventors: Rajasekhar Cherabuddi, Kevin B. Normoyle, Brian J. McGee, Meera Kasinathan, Anup Sharma, Sutikshan Bhutani
  • Patent number: 6480942
    Abstract: A synchronized FIFO memory circuit includes a random access memory and a FIFO controller having a decreased critical-path length. The synchronized FIFO circuit comprises a first counter for counting a number representing a Read Pointer, a second counter for counting a number representing a Write Pointer, a third counter for holding and managing the number of remaining empty entries in the FIFO memory circuit, and comparison means for comparing the value of the third counter with a constant value. Write Ready, Read Ready, Full, Empty, Almost Full and Almost Empty which are status signals of the FIFO memory circuit are produced at a high speed by comparison carried out by the comparison means without using a subtractor.
    Type: Grant
    Filed: May 27, 1999
    Date of Patent: November 12, 2002
    Assignee: Sony Corporation
    Inventor: Koji Hirairi
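The three-counter arrangement above can be sketched in software. The depth and "almost" margin are assumptions; the key point is that every status flag is a comparison of the empty-entry counter against a constant, with no subtraction of read and write pointers.

```python
# Sketch of the counter-based FIFO status logic of patent 6480942:
# a third counter holds the number of remaining empty entries, so
# Full, Empty, Almost Full and Almost Empty are simple comparisons.

class CountingFifo:
    def __init__(self, depth: int = 8, margin: int = 2):
        self.depth, self.margin = depth, margin
        self.read_ptr = 0           # first counter: Read Pointer
        self.write_ptr = 0          # second counter: Write Pointer
        self.empty_entries = depth  # third counter: remaining empty entries

    def write(self) -> None:
        self.write_ptr = (self.write_ptr + 1) % self.depth
        self.empty_entries -= 1

    def read(self) -> None:
        self.read_ptr = (self.read_ptr + 1) % self.depth
        self.empty_entries += 1

    # status flags: comparisons against constants, no subtractor needed
    def full(self):         return self.empty_entries == 0
    def empty(self):        return self.empty_entries == self.depth
    def almost_full(self):  return self.empty_entries <= self.margin
    def almost_empty(self): return self.empty_entries >= self.depth - self.margin
```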
  • Patent number: 6470438
    Abstract: In one embodiment of the invention, each data value which is provided to a non-tagged, n-way cache is hashed with a number of bits which correspond to the data value, thereby producing a hashed data value. Preferably, the bits which are hashed with the data value are address bits. The hashed data value is then written into one or more ways of the cache using index hashing. A cache hit signal is produced using index hashing and voting. In a cache where data values assume only a few different values, or in a cache where many data values which are written to the cache tend to assume a small number of data values, data hashing helps to reduce false hits by insuring that the same data values will produce different hashed data values when the same data values are associated with different addresses. In another embodiment of the invention, data values which are provided to a non-tagged, n-way cache are written into the cache in a non-count form.
    Type: Grant
    Filed: February 22, 2000
    Date of Patent: October 22, 2002
    Assignee: Hewlett-Packard Company
    Inventor: James E McCormick, Jr.
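The data-hashing step above can be illustrated with XOR, which is one natural choice because it is self-inverse; the XOR operator and the 16-bit address mask are assumptions, not the patent's specified hash.

```python
# Sketch of the data-hashing idea in patent 6470438: a value written to
# a non-tagged n-way cache is combined with bits of its address, so the
# same data value at different addresses yields different hashed values,
# reducing false hits.

ADDR_BITS_MASK = 0xFFFF  # assumed: low 16 address bits participate

def hash_data(value: int, addr: int) -> int:
    return value ^ (addr & ADDR_BITS_MASK)

def unhash_data(hashed: int, addr: int) -> int:
    """XOR is self-inverse, so the original value is recovered on read."""
    return hashed ^ (addr & ADDR_BITS_MASK)
```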
  • Publication number: 20020144063
    Abstract: A shared-memory system includes processing modules communicating with each other through a network. Each of the processing modules includes a processor, a cache, and a memory unit that is locally accessible by the processor and remotely accessible via the network by all other processors. A home directory records states and locations of data blocks in the memory unit. A prediction facility that contains reference history information of the data blocks predicts a next requester of a number of the data blocks that have been referenced recently. The next requester is informed by the prediction facility of the current owner of the data block. As a result, the next requester can issue a request to the current owner directly without an additional hop through the home directory.
    Type: Application
    Filed: March 29, 2001
    Publication date: October 3, 2002
    Inventors: Jih-Kwon Peir, Konrad Lai
  • Patent number: 6453411
    Abstract: The inventive mechanism has a run-time optimization system (RTOS) embedded in hardware. When the code is first moved into Icache, a threshold value is set into a counter associated with the instruction or instruction bundle of the particular cache line of the Icache. Each time the instruction or instruction bundle is executed and retired, the counter is decremented by one. When the counter reaches zero, a trap is generated to inform that the code is hot. A trace selector will form a trace starting from the hot instruction (or instruction bundle) from the Icache line. The Icache maintains branch history information for the instructions in each cache line which is used to determine whether a branch should be predicted as taken or fall through. After the trace is formed, it is optimized and stored into a trace memory portion of the physical memory. The mapping between the original code of the trace and the optimized trace in the trace memory is maintained in a mapping table.
    Type: Grant
    Filed: February 18, 1999
    Date of Patent: September 17, 2002
    Assignee: Hewlett-Packard Company
    Inventors: Wei C. Hsu, Manuel Benitez
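The hot-code detection step above can be modeled in a few lines of software (names and the threshold value are assumptions): each Icache line carries a countdown counter set when the line is filled; retiring an instruction from the line decrements it, and reaching zero flags the line as hot, which in hardware would raise the trap to the trace selector.

```python
THRESHOLD = 3   # illustrative; the real threshold is set per cache line

class ICacheLine:
    def __init__(self, addr: int):
        self.addr = addr
        self.counter = THRESHOLD   # initialized when code is moved into Icache
        self.hot = False

    def retire(self) -> None:
        """Called once per retired instruction (bundle) from this line."""
        if not self.hot:
            self.counter -= 1
            if self.counter == 0:
                self.hot = True    # would generate the trap to the trace selector

line = ICacheLine(0x1000)
for _ in range(THRESHOLD):
    line.retire()
assert line.hot
```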
  • Patent number: 6442658
    Abstract: The present invention comprises a system for delivering an interactive multimedia work from a storage device, for example a hard disk drive, a CD-ROM drive, a network server, etc. to a playback device, for example a personal computer, in a manner that provides improved performance regardless of the playback sequence selected by a user. In one embodiment of the present invention, for each segment of an interactive multimedia work, a probability factor is assigned to each possible alternative succeeding segment. In addition a retrieval and delivery time cost factor is also assigned to each possible succeeding segment. In one embodiment of the invention, the time cost factor for each resource is assigned a fixed value. In another embodiment, the time cost factor is recalculated periodically to reflect changes in location and status of resources. The probability and time cost factor for each possible succeeding segment are combined to produce a relative priority ranking.
    Type: Grant
    Filed: May 1, 2000
    Date of Patent: August 27, 2002
    Assignee: Macromedia, Inc.
    Inventors: V. Bruce Hunt, Ken Day, Harry R. Chesley
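A hedged sketch of the priority ranking described above: each candidate successor segment's probability and time cost are combined into a rank. The combining rule used here (probability divided by cost, so likely-and-cheap segments rank first) is an assumption; the abstract only says the two factors are combined.

```python
def rank_successors(candidates):
    """candidates: list of (segment_id, probability, time_cost) tuples.

    Returns the candidates ordered by relative priority, highest first.
    """
    return sorted(candidates,
                  key=lambda c: c[1] / c[2],   # high probability, low cost wins
                  reverse=True)

ranked = rank_successors([("a", 0.7, 1.0),    # likely and moderately cheap
                          ("b", 0.2, 0.5),
                          ("c", 0.1, 0.8)])
assert [seg for seg, _, _ in ranked] == ["a", "b", "c"]
```

The delivery system would then retrieve segments in this order, so the segment the user is most likely to select next is already buffered.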
  • Patent number: 6438656
    Abstract: A method of operating a multi-level memory hierarchy of a computer system and apparatus embodying the method, wherein instructions issue having an explicit prefetch request directly from an instruction sequence unit to a prefetch unit of the processing unit. The invention applies to values that are either operand data or instructions. In a preferred embodiment, two prefetch units are used, the first prefetch unit being hardware independent and dynamically monitoring one or more active streams associated with operations carried out by a core of the processing unit, and the second prefetch unit being aware of the lower level storage subsystem and sending with the prefetch request an indication that a prefetch value is to be loaded into a lower level cache of the processing unit.
    Type: Grant
    Filed: July 30, 1999
    Date of Patent: August 20, 2002
    Assignee: International Business Machines Corporation
    Inventors: Ravi Kumar Arimilli, Leo James Clark, John Steven Dodson, Guy Lynn Guthrie, William John Starke
  • Patent number: 6438627
    Abstract: An apparatus is disclosed for predicting and making available in advance certain information, namely the address signals from an expansion bus, so as to relax the timing requirement of the burst transfer cycle. A decoder responds to the control signals of the expansion bus to detect the start and the end of a burst transfer cycle. The decoder controls a counter, which stores the initial address signals of the expansion bus at the start of the burst transfer cycle and predicts the initial address signals by incrementing the address signals during the burst transfer cycle. A multiplexer couples either the predicted address signal to the multiplexer output during the burst transfer cycle or the address signal of the EISA bus to the multiplexer output when the computer system is not performing the EMB burst transfer cycle. In another aspect of the present invention, the low order address signal of the bus is predicted using a second counter.
    Type: Grant
    Filed: May 12, 1998
    Date of Patent: August 20, 2002
    Assignee: Compaq Information Technologies Group, L.P.
    Inventors: Brian S. Hausauer, Siamak Tavallaei
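The counter-based address prediction above reduces to a latch-and-increment loop; a toy model follows. The class name and the 4-byte stride are illustrative assumptions, not taken from the patent.

```python
class BurstAddressPredictor:
    """Models the counter that predicts expansion-bus addresses in a burst."""

    def __init__(self):
        self.counter = None

    def start_burst(self, addr: int) -> None:
        self.counter = addr          # latch the initial address of the burst

    def next_predicted(self) -> int:
        self.counter += 4            # predict the next beat (4-byte stride assumed)
        return self.counter

p = BurstAddressPredictor()
p.start_burst(0x100)
assert p.next_predicted() == 0x104
assert p.next_predicted() == 0x108
```

Because the predicted address is available before the bus drives the real one, the downstream logic gains setup time, which is the timing relaxation the abstract describes.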
  • Patent number: 6437531
Abstract: There is provided an external watchdog timer 30 for detecting a temporary runaway of the microcomputer body 20 and resetting it according to a pulse output of the microcomputer body 20. Further, an internal watchdog timer 70 operated independently from a control section of CPU of the microcomputer body 20 is incorporated as a program. When a pulse signal is not outputted from the microcomputer body 20 for more than a predetermined period of time, the external watchdog timer 30 is disabled, and impedance of the output terminal 32 is made high and the backup circuit 60 is set in an operating condition, so that the windowpane can be opened and closed by a manual operation switch 11.
    Type: Grant
    Filed: June 29, 2000
    Date of Patent: August 20, 2002
    Assignee: Yazaki Corporation
    Inventor: Yoshihiro Kawamura
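The escalation logic can be summarized as a small state classifier (a sketch under assumed timeout values, not the patented circuit): a short lapse in heartbeat pulses lets the external watchdog reset the microcomputer, while a lapse exceeding the longer internal-watchdog period disables the external timer and engages the manual backup path.

```python
EXTERNAL_TIMEOUT = 2   # silent ticks before the external watchdog resets (assumed)
DISABLE_AFTER = 6      # silent ticks before backup mode engages (assumed)

def watchdog_state(last_pulse_tick: int, now: int) -> str:
    """Classify the system state from the time since the last heartbeat."""
    silent = now - last_pulse_tick
    if silent >= DISABLE_AFTER:
        return "backup"   # external WDT disabled; manual switch path active
    if silent >= EXTERNAL_TIMEOUT:
        return "reset"    # external WDT resets the microcomputer body
    return "ok"

assert watchdog_state(0, 1) == "ok"
assert watchdog_state(0, 3) == "reset"
assert watchdog_state(0, 7) == "backup"
```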
  • Patent number: 6427207
Abstract: An apparatus is presented for expediting the execution of dependent micro instructions in a pipeline microprocessor having design characteristics (complexity, power, and timing) that are not significantly impacted by the number of stages in the microprocessor's pipeline. In contrast to conventional result distribution schemes where an intermediate result is distributed to multiple pipeline stages, the present invention provides a cache for storage of multiple intermediate results. The cache is accessed by a dependent micro instruction to retrieve required operands. The apparatus includes a result forwarding cache, result update logic, and operand configuration logic. The result forwarding cache stores the intermediate results. The result update logic receives the intermediate results as they are generated and enters the intermediate results into the result forwarding cache.
    Type: Grant
    Filed: July 20, 2001
    Date of Patent: July 30, 2002
    Assignee: I.P. First L.L.C.
    Inventors: Gerard M. Col, G. Glenn Henry
  • Patent number: 6418525
    Abstract: A method and apparatus for storing and utilizing set prediction information regarding which set of a set-associative memory will be accessed for enhancing performance of the set-associative memory and reducing power consumption. The set prediction information is stored in various locations including a branch target buffer, instruction cache and operand history table to decrease latency for accesses to set-associative instruction and data caches.
    Type: Grant
    Filed: January 29, 1999
    Date of Patent: July 9, 2002
    Assignee: International Business Machines Corporation
    Inventors: Mark J. Charney, Philip G. Emma, Daniel A. Prener, Thomas R. Puzak
  • Patent number: 6418530
    Abstract: The inventive mechanism provides fast profiling and effective trace selection. The inventive mechanism partitions the work between hardware and software. The hardware automatically detects which code is executed very frequently, e.g. which code is hot code. The hardware also maintains the branch history information. When the hardware determines that a section or block of code is hot code, the hardware sends a signal to the software. The software then forms the trace from the hot code, and uses the branch history information in making branch predictions.
    Type: Grant
    Filed: February 18, 1999
    Date of Patent: July 9, 2002
    Assignee: Hewlett-Packard Company
    Inventors: Wei C. Hsu, Manuel Benitez
  • Publication number: 20020087802
Abstract: A processor includes a cache that has lines to store data. The processor also includes prefetch bits each of which is associated with one of the cache lines. The processor further includes a prefetch manager that calculates prefetch data as if a cache miss occurred whenever a cache request results in a cache hit to a cache line that is associated with a prefetch bit that is set. In a further embodiment, the prefetch manager prefetches data into the cache based on the distance between cache misses for an instruction.
    Type: Application
    Filed: March 30, 2001
    Publication date: July 4, 2002
    Inventors: Khalid Al-Dajani, Mohammad A. Abdallah
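A rough software model of the prefetch-bit mechanism (function names, the dict-based cache, and the fixed distance are all assumptions): a hit to a line whose prefetch bit is set is treated like a miss for prefetch purposes, so the prefetcher keeps running ahead of the access stream instead of going quiet once data is resident.

```python
PREFETCH_DISTANCE = 2   # assumed fixed here; the patent derives it from miss spacing

def access(line_addr: int, cache: dict):
    """cache maps line -> prefetch bit. Returns the line prefetched, if any."""
    trigger = line_addr not in cache or cache[line_addr]  # miss, or marked hit
    cache[line_addr] = False                 # line is now demand-referenced
    if trigger:
        nxt = line_addr + PREFETCH_DISTANCE
        cache[nxt] = True                    # prefetched line carries a set bit
        return nxt
    return None

c = {}
assert access(0, c) == 2        # cold miss triggers a prefetch
assert access(2, c) == 4        # hit on a prefetched line keeps the stream going
assert access(0, c) is None     # ordinary hit with the bit clear: no prefetch
```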
  • Patent number: 6415380
Abstract: A processor having a data providing unit comprises a first table for holding the address of a store instruction indexed by a data address at which data value is stored by the store instruction, a second table for holding the address of the store instruction indexed by a subsequent load instruction, a data storing unit for holding data indexed by the address of the store instruction, and a data providing controller. The data providing controller retrieves the load instruction and the store instruction, both instructions referencing the same data address, from the first and second tables, and retrieves data which are employed by the store instruction corresponding to the load instruction from the data storing unit, based on the address of the load instruction, and provides the data for the processor as predictive data to which access by the load instruction is predicted.
    Type: Grant
    Filed: January 27, 1999
    Date of Patent: July 2, 2002
    Assignee: Kabushiki Kaisha Toshiba
    Inventor: Toshinori Sato
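The three structures above can be sketched as dictionaries (all names assumed): the first table maps a data address to the PC of the store that last wrote it, the second table links a load PC to its correlated store PC, and the data store holds each store's last value. A trained load is then predicted from the value its correlated store produced.

```python
table1 = {}      # data address -> store instruction address
table2 = {}      # load instruction address -> store instruction address
data_store = {}  # store instruction address -> last stored data value

def execute_store(store_pc: int, data_addr: int, value: int) -> None:
    table1[data_addr] = store_pc
    data_store[store_pc] = value

def train_load(load_pc: int, data_addr: int) -> None:
    """Link a load to the store that last wrote its data address."""
    if data_addr in table1:
        table2[load_pc] = table1[data_addr]

def predict_load(load_pc: int):
    """Predict the load's value from its correlated store, if one is known."""
    store_pc = table2.get(load_pc)
    return data_store.get(store_pc)

execute_store(0x10, 0xA0, 42)
train_load(0x30, 0xA0)               # load at 0x30 correlates with store at 0x10
execute_store(0x10, 0xA0, 99)        # the same store writes a new value
assert predict_load(0x30) == 99      # prediction tracks the store's latest data
```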
  • Patent number: 6412059
Abstract: In order to immediately respond to an access request from a processor with reduced power consumption, requested information is read out from a cache memory 31 or information buffers 42-1 to 42-n and supplied to the processor when comparators 34-3 to 34-10 output hit signals.
    Type: Grant
    Filed: October 1, 1999
    Date of Patent: June 25, 2002
    Assignee: NEC Corporation
    Inventor: Hideki Matsuyama
  • Patent number: 6412046
Abstract: A method and apparatus automatically and easily verifies a cache line prefetch mechanism. The verification method includes a strict definition of which cache lines should be prefetched and which cache lines should not. The method also emphasizes unusual operating conditions. For example, by exercising boundary conditions, the method stresses situations in which a microprocessor or chip is likely to produce errors. The method can verify prefetch without having to access or view any internal signals or buses inside the chip. The method can be adopted in any system-level verification methodology in simulation, emulation, or actual hardware. The method can be used in a system-level test set up along with a chip-level test set up without requiring knowledge of the internal state of the chip. In this case, checking is done at the chip boundary. The method is automated and performs strict checks on overprefetch, underprefetch, and the relative order in which fetch and prefetches must occur.
    Type: Grant
    Filed: May 1, 2000
    Date of Patent: June 25, 2002
    Assignee: Hewlett Packard Company
    Inventors: Debendra Das Sharma, Kevin Hauck, Daniel F. Li
  • Patent number: 6385703
    Abstract: A computer system that includes a host processor (HP), a system memory (SM), and a host bridge coupled to the HP and SM is provided. The host bridge asserts a first read request to the SM and, prior to availability of snoop results in connection with the first read request, the host bridge asserts a following second read request to the SM.
    Type: Grant
    Filed: December 3, 1998
    Date of Patent: May 7, 2002
    Assignee: Intel Corporation
    Inventors: Narendra S. Khandekar, David D. Lent, Zohar Bogin
  • Publication number: 20020032845
Abstract: A method and apparatus for sequentially generating a set of addresses, defined over a plurality of indices, for a multi-dimensional array stored in a memory for the condition where at least one of the address indices is fixed, is performed by simple addition, OR-ing and AND-ing. An accumulator or counter initially holds an arbitrary binary value composed of a set of binary indices corresponding to the address indices. This binary value is logically OR-ed with a first mask value having binary indices selected in value in relation to the fixed address indices. The resultant is logically AND-ed with a second mask value having binary indices selected in value in relation to the fixed address indices, and this operation produces a first address of the set. The same resultant is incremented and the incremented value is delivered to the accumulator for the cycle to be repeated.
    Type: Application
    Filed: October 28, 1998
    Publication date: March 14, 2002
    Inventors: Douglas Robert McGregor, William Paul Cockshott
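The OR/AND masking cycle can be worked through concretely. Under one reading of the abstract (an assumption, since the mask construction is not fully specified there): the OR mask forces the fixed index's bit-field to all ones, so that incrementing carries straight through it, and the AND mask restores the fixed field to its fixed value while keeping the free index bits. The example below assumes a 4x4 array (2 bits per index, address = row*4 + col) with the row index fixed at 2.

```python
FREE_MASK = 0b0011    # column index bits (the free index)
OR_MASK = 0b1100      # row bit-field forced to ones so carries pass through it
ROW_VALUE = 0b1000    # row fixed at 2, i.e. bits 2..3 = 10
AND_MASK = FREE_MASK | ROW_VALUE   # keep free bits, restore the fixed row

def address_stream(acc: int, n: int) -> list:
    """Generate n addresses from an arbitrary initial accumulator value."""
    out = []
    for _ in range(n):
        t = acc | OR_MASK
        out.append(t & AND_MASK)   # address with the fixed field restored
        acc = t + 1                # incremented resultant feeds the accumulator
    return out

# Row 2 of the 4x4 array occupies addresses 8..11; the stream wraps around.
assert address_stream(0, 5) == [8, 9, 10, 11, 8]
```

The accumulator's initial value is irrelevant to the free-index sequence, which is why the abstract can start from an arbitrary binary value.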
  • Patent number: 6353879
Abstract: A data processing system 2 is provided with a processor core 4 that issues virtual addresses VA that are translated to mapped addresses MA by an address translation circuit 6 based upon a predicted address mapping. The mapped address MA is used for a memory access within a memory system 8. The mapped address MA starts to be used before a mapping validity circuit 6 has determined whether or not the predicted translation was valid. Accordingly, if the predicted address translation turns out to be invalid, then the memory access is aborted. The state of the processor core is preserved either by stretching the processor clock signal or by continuing the processor clock signal and stalling the processor 4. The memory system 8 then restarts the memory access with the correct translated address.
    Type: Grant
    Filed: February 19, 1999
    Date of Patent: March 5, 2002
    Assignee: Arm Limited
    Inventors: Peter Guy Middleton, David Michael Bull