With Shared Cache (epo) Patents (Class 711/E12.038)
  • Publication number: 20100281221
    Abstract: A method, circuit arrangement, and design structure for prefetching data for responding to a memory request, in a shared memory computing system of the type that includes a plurality of nodes, is provided. Prefetching data comprises receiving, in response to a first memory request by a first node, presence data for a memory region associated with the first memory request from a second node that sources data requested by the first memory request, and selectively prefetching at least one cache line from the memory region based on the received presence data. Responding to a memory request comprises tracking presence data associated with memory regions associated with cached cache lines in the first node, and, in response to a memory request by a second node, forwarding the tracked presence data for a memory region associated with the memory request to the second node.
    Type: Application
    Filed: April 30, 2009
    Publication date: November 4, 2010
    Applicant: International Business Machines Corporation
    Inventors: Jason F. Cantin, Steven R. Kunkel
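The presence-data mechanism above can be sketched in a few lines. This is an illustrative model only: the bitmap encoding, region size, and all names are assumptions, not anything the publication specifies.

```python
# Sketch of presence-based region prefetching (hypothetical encoding).
LINES_PER_REGION = 8

class Node:
    def __init__(self):
        self.cache = set()  # line addresses held by this node

    def presence_bits(self, region):
        """Bitmap of lines from `region` currently cached here."""
        base = region * LINES_PER_REGION
        return [1 if base + i in self.cache else 0
                for i in range(LINES_PER_REGION)]

    def prefetch_from(self, source, region, requested_line):
        """Fetch the requested line, then selectively prefetch the
        rest of the region based on the source's presence data."""
        bits = source.presence_bits(region)
        base = region * LINES_PER_REGION
        self.cache.add(requested_line)
        for i, present in enumerate(bits):
            line = base + i
            if present and line != requested_line:
                self.cache.add(line)  # selective prefetch
        return bits
```

The point of the scheme is that the sourcing node already knows which neighboring lines it holds, so the requester can prefetch only lines likely to be useful instead of blindly fetching the whole region.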
  • Publication number: 20100281220
    Abstract: A method, circuit arrangement, and design structure utilize a lock prediction data structure to control ownership of a cache line in a shared memory computing system. In a first node among the plurality of nodes, lock prediction data in a hardware-based lock prediction data structure for a cache line associated with a first memory request is updated in response to that first memory request, wherein at least a portion of the lock prediction data is predictive of whether the cache line is associated with a release operation. The lock prediction data is then accessed in response to a second memory request associated with the cache line and issued by a second node and a determination is made as to whether to transfer ownership of the cache line from the first node to the second node based at least in part on the accessed lock prediction data.
    Type: Application
    Filed: April 30, 2009
    Publication date: November 4, 2010
    Applicant: International Business Machines Corporation
    Inventors: Jason F. Cantin, Steven R. Kunkel
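A minimal software model of the lock-prediction idea follows. The 2-bit saturating counter and the threshold are illustrative assumptions; the publication only says the structure is predictive of whether the line is associated with a release operation.

```python
# Sketch of a hardware-style lock predictor (hypothetical 2-bit scheme).
class LockPredictor:
    """Per-line saturating counter: high values predict that the line
    is part of a lock and a release operation will follow soon."""
    def __init__(self, max_count=3, threshold=2):
        self.table = {}          # line address -> counter value
        self.max = max_count
        self.threshold = threshold

    def update(self, line, saw_release):
        c = self.table.get(line, 0)
        c = min(c + 1, self.max) if saw_release else max(c - 1, 0)
        self.table[line] = c

    def should_transfer(self, line):
        """Defer ownership transfer while a release is predicted, so
        the current holder can finish its critical section first."""
        return self.table.get(line, 0) < self.threshold
```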
  • Patent number: 7827361
    Abstract: An embodiment of a method of controlling access to a computing resource within a shared computing environment begins with a first step of determining a plurality of controller functions for a plurality of operating ranges for workloads accessing the computing resource. Each of the controller functions comprises a mathematical operator which takes an input and provides an output. The method continues by iteratively performing second through fifth steps. In the second step, the method measures performance parameters for the workloads to determine a performance parameter vector for the workloads. In the third step, the method compares the performance parameter vector to a reference performance parameter vector to determine an error parameter. In the fourth step, the method applies a particular controller function selected from the plurality of controller functions to the error parameter to determine a target throughput for each of the workloads.
    Type: Grant
    Filed: October 21, 2004
    Date of Patent: November 2, 2010
    Assignee: Hewlett-Packard Development Company, L.P.
    Inventors: Magnus Karlsson, Christos Karamanolis
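The second-through-fourth steps can be illustrated with a toy gain-scheduled controller. The proportional gains, ranges, and the latency-based error signal are all assumptions for illustration; the patent leaves the controller functions abstract.

```python
# Sketch of range-dependent controller selection (illustrative
# piecewise-proportional controllers, hypothetical gains).
def make_controller(gain):
    return lambda error: gain * error

# One controller function per operating range of the error magnitude.
CONTROLLERS = [
    ((0.0, 1.0), make_controller(0.5)),   # small error: gentle
    ((1.0, 10.0), make_controller(2.0)),  # large error: aggressive
]

def target_throughput(current, measured_latency, reference_latency):
    # Step 3: compare measured performance to the reference.
    error = measured_latency - reference_latency
    # Step 4: apply the controller selected for this operating range.
    for (lo, hi), f in CONTROLLERS:
        if lo <= abs(error) < hi:
            return current - f(error)   # back off when latency is high
    return current
```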
  • Publication number: 20100274911
    Abstract: Managing information related to an entity. The method includes storing a cache of data particular to an entity. The cache of data is related to the entity and controlled by the entity. The data in the cache of data is organized into a number of distinct subject matters. Access is provided to a portion of the data to a third party. Access is provided based on the third party being a service provider providing services related to the one of the distinct subject matters. Access is provided while restricting access to other portions of the data to the third party. Additional data is received from the third party. The additional data is added from the third party to the cache of data and the additional data is organized into the one of the distinct subject matters such that the additional data is also related to and controlled by the entity.
    Type: Application
    Filed: April 27, 2009
    Publication date: October 28, 2010
    Inventor: Larry R. Laycock
  • Publication number: 20100275209
    Abstract: A scalable locking system is described herein that allows processors to access shared data with reduced cache contention to increase parallelism and scalability. The system provides a reader/writer lock implementation that uses randomization and spends extra space to spread possible contention over multiple cache lines. The system avoids updates to a single shared location in acquiring/releasing a read lock by spreading the lock count over multiple sub-counts in multiple cache lines, and hashing thread identifiers to those cache lines. Carefully crafted invariants allow the use of partially lock-free code in the common path of acquisition and release of a read lock. A careful protocol allows the system to reuse space allocated for a read lock for subsequent locking to avoid frequent reallocating of read lock data structures. The system also provides fairness for write-locking threads and uses object pooling techniques to reduce costs associated with the lock data structures.
    Type: Application
    Filed: April 28, 2009
    Publication date: October 28, 2010
    Applicant: Microsoft Corporation
    Inventor: David L. Detlefs
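The count-striping idea at the heart of this abstract can be sketched as follows. This is a simplified, lock-per-stripe model (the patented design is largely lock-free); the stripe count and names are assumptions.

```python
import threading

# Sketch of striped read-lock counts: the reader count is spread over
# several sub-counts and each thread hashes to one of them, so
# concurrent readers rarely contend on the same cache line.
STRIPES = 8

class StripedReadCount:
    def __init__(self):
        self.counts = [0] * STRIPES
        self.locks = [threading.Lock() for _ in range(STRIPES)]

    def _stripe(self):
        # Hash the thread identifier to a sub-count.
        return hash(threading.get_ident()) % STRIPES

    def acquire_read(self):
        i = self._stripe()
        with self.locks[i]:
            self.counts[i] += 1

    def release_read(self):
        i = self._stripe()
        with self.locks[i]:
            self.counts[i] -= 1

    def readers(self):
        """A writer must observe the total across all stripes."""
        return sum(self.counts)
```

The trade-off is exactly the one the abstract names: extra space (one sub-count per cache line) is spent to remove a single hot shared location from the read path, while the write path pays a scan over all stripes.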
  • Publication number: 20100268889
    Abstract: Techniques are generally described for creating a compiler determined map for the allocation of memory space within a cache. An example computing system is disclosed having a multicore processor with a plurality of processor cores. At least one cache may be accessible to at least two of the plurality of processor cores. A compiler determined map may separately allocate a memory space to threads of execution processed by the processor cores.
    Type: Application
    Filed: April 21, 2009
    Publication date: October 21, 2010
    Inventors: Thomas Martin Conte, Andrew Wolfe
  • Publication number: 20100268881
    Abstract: A method to associate a storage policy with a cache region is disclosed. In this method, a cache region associated with an application is created. The application runs on virtual machines, and where a first virtual machine has a local memory cache that is private to the first virtual machine. The first virtual machine additionally has a shared memory cache that is shared by the first virtual machine and a second virtual machine. Additionally, the cache region is associated with a storage policy. Here, the storage policy specifies that a first copy of an object to be stored in the cache region is to be stored in the local memory cache and that a second copy of the object to be stored in the cache region is to be stored in the shared memory cache.
    Type: Application
    Filed: July 7, 2010
    Publication date: October 21, 2010
    Inventors: Galin Galchev, Frank Kilian, Oliver Luik, Dirk Marwinski, Petio G. Petev
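The storage policy described above (one copy local, one copy shared) can be modeled in a few lines. Class and method names here are illustrative assumptions, not the actual API.

```python
# Sketch of a cache region whose storage policy keeps one copy in a
# VM-private local cache and one in the cache shared across VMs.
class CacheRegion:
    def __init__(self, local_cache, shared_cache):
        self.local = local_cache    # private to this virtual machine
        self.shared = shared_cache  # visible to all virtual machines

    def put(self, key, obj):
        self.local[key] = obj       # first copy: local memory cache
        self.shared[key] = obj      # second copy: shared memory cache

    def get(self, key):
        # Prefer the fast local copy; fall back to the shared one.
        return self.local.get(key, self.shared.get(key))
```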
  • Publication number: 20100268891
    Abstract: Techniques are generally described for a multi-core processor with a plurality of processor cores. At least one cache is accessible to at least two of the plurality of processor cores. The multi-core processor can be configured for separately allocating a memory space within the cache to the individual processor cores accessing the cache.
    Type: Application
    Filed: April 21, 2009
    Publication date: October 21, 2010
    Inventors: Thomas Martin Conte, Andrew Wolfe
  • Patent number: 7814270
    Abstract: A storage system is arranged to speed up operation and easily duplicate data without requiring a large cache memory capacity, even when many host computers are connected to the storage system. This storage system includes channel adapters, disk drives, disk adapters, and network switches. Further, the front side cache memories connected with the channel adapters and the back side cache memories connected with the disk adapters are provided as a two-layered cache system. When a request for writing data is given to the storage system by the host computer, the data is written in both the front side cache memory and the back side cache memory. The write data is duplicated by placing it in one of the front side cache memories and one of the back side cache memories, or in two of the back side cache memories.
    Type: Grant
    Filed: June 15, 2007
    Date of Patent: October 12, 2010
    Assignee: Hitachi, Ltd.
    Inventor: Kentaro Shimada
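The duplicated write path can be sketched as below. The address-based choice of which front-side and back-side cache receives the copy is an assumption for illustration; the patent only requires that a write lands in one cache of each layer (or two back-side caches).

```python
# Sketch of the two-layer write path: a write lands in one front-side
# cache (behind a channel adapter) and one back-side cache (behind a
# disk adapter), yielding two copies without mirroring every cache.
class TwoLayerCacheSystem:
    def __init__(self, n_front, n_back):
        self.front = [dict() for _ in range(n_front)]
        self.back = [dict() for _ in range(n_back)]

    def write(self, block, data):
        f = block % len(self.front)   # front cache chosen by address
        b = block % len(self.back)    # back cache chosen by address
        self.front[f][block] = data
        self.back[b][block] = data    # duplicate for resilience

    def copies(self, block):
        return sum(block in c for c in self.front + self.back)
```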
  • Publication number: 20100257319
    Abstract: A cache system can change a cache capacity in a unit of a plurality of divided memory areas. Cache access to at least one memory area among the divided memory areas is restricted in the debug mode. Access history information concerning access in the debug mode is stored in the memory area to which the cache access is restricted.
    Type: Application
    Filed: March 5, 2010
    Publication date: October 7, 2010
    Applicant: KABUSHIKI KAISHA TOSHIBA
    Inventor: Hiroyuki USUI
  • Patent number: 7809891
    Abstract: A system and method for managing a data cache in a central processing unit (CPU) of a database system. A method executed by a system includes the processing steps of adding an ID of a page p into a page holder queue of the data cache, executing a memory barrier store-load operation on the CPU, and looking-up page p in the data cache based on the ID of the page p in the page holder queue. The method further includes the steps of, if page p is found, accessing the page p from the data cache, and adding the ID of the page p into a least-recently-used queue.
    Type: Grant
    Filed: April 9, 2007
    Date of Patent: October 5, 2010
    Assignee: SAP AG
    Inventor: Ivan Schreter
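The claimed sequence of steps can be sketched directly. Note the barrier here is only symbolic: a real implementation uses a CPU store-load memory-barrier instruction so that an evictor scanning the holder queue cannot free the page between the lookup and the access.

```python
import threading
from collections import deque

# Sketch of the claimed protocol: publish the page ID in the holder
# queue, issue a store-load barrier, then look the page up.
class DataCache:
    def __init__(self):
        self.pages = {}               # page ID -> page contents
        self.holder_queue = deque()   # pages pinned by readers
        self.lru_queue = deque()      # least-recently-used order
        self._barrier = threading.Lock()  # stand-in for the barrier

    def access(self, page_id):
        self.holder_queue.append(page_id)   # step 1: announce intent
        with self._barrier:                 # step 2: store-load barrier
            pass
        page = self.pages.get(page_id)      # step 3: look up page p
        if page is not None:
            self.lru_queue.append(page_id)  # step 4: record recency
        self.holder_queue.remove(page_id)
        return page
```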
  • Publication number: 20100241809
    Abstract: A processor according to an exemplary of the invention includes a first initialization unit which reads a first program for checking a reliability of the processor into a cache memory and executes the first program when the processor is started up, and a second initialization unit which reads a second program for checking a reliability of the cache memory into a predetermined memory area and executes the second program when the second initialization unit receives a notification indicating the completion of the establishment of a communication path between the predetermined memory area and the processor from another processor which exists in a partition in which the processor is added.
    Type: Application
    Filed: February 2, 2010
    Publication date: September 23, 2010
    Inventor: DAISUKE AGEISHI
  • Publication number: 20100235581
    Abstract: A method of caching data in a global cache distributed amongst a plurality of computing devices, comprising providing a global cache for caching data accessible to interconnected client devices, where each client contributes a portion of its main memory to the global cache. Each client also maintains an ordering of data that it has in its cache portion. When a remote reference for a cached datum is made, both the supplying client and the requesting client adjust their orderings to reflect the fact that multiple copies of the requested datum now likely exist in the global cache.
    Type: Application
    Filed: March 10, 2009
    Publication date: September 16, 2010
    Inventors: Eric A. Anderson, Christopher E. Hoover, Xiaozhou Li, Alistair Veitch
  • Publication number: 20100223431
    Abstract: In a multi-core processor of a shared-memory type, deterioration in the data processing capability caused by competitions of memory accesses from a plurality of processors is suppressed effectively. In a memory access controlling system for controlling accesses to a cache memory in a data read-ahead process when the multi-core processor of a shared-memory type performs a task including a data read-ahead thread for executing data read-ahead and a parallel execution thread for performing an execution process in parallel with the data read-ahead, the system includes a data read-ahead controller which controls an interval between data read-ahead processes in the data read-ahead thread adaptive to a data flow which varies corresponding to an input value of the parallel process in the parallel execution thread. By controlling the interval between the data read-ahead processes, competitions of memory accesses in the multi-core processor are suppressed.
    Type: Application
    Filed: February 4, 2008
    Publication date: September 2, 2010
    Inventor: Kosuke Nishihara
  • Publication number: 20100185817
    Abstract: This disclosure describes, generally, methods and systems for implementing transcendent page caching. The method includes establishing a plurality of virtual machines on a physical machine. Each of the plurality of virtual machines includes a private cache, and a portion of each of the private caches is used to create a shared cache maintained by a hypervisor. The method further includes delaying the removal of the at least one of stored memory pages, storing the at least one of stored memory pages in the shared cache, and requesting, by one of the plurality of virtual machines, the at least one of the stored memory pages from the shared cache. Further, the method includes determining that the at least one of the stored memory pages is stored in the shared cache, and transferring the at least one of the stored shared memory pages to the one of the plurality of virtual machines.
    Type: Application
    Filed: January 20, 2009
    Publication date: July 22, 2010
    Applicant: Oracle International Corporation
    Inventor: Daniel Magenheimer
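The transcendent-caching flow can be modeled compactly: instead of discarding an evicted page, a virtual machine parks it in a hypervisor-maintained shared cache and may reclaim it later. All names below are illustrative, not Oracle's interface.

```python
# Sketch of transcendent page caching between VM-private caches and a
# hypervisor-maintained shared cache.
class Hypervisor:
    def __init__(self):
        self.shared_cache = {}   # built from portions of private caches

class VirtualMachine:
    def __init__(self, hypervisor, capacity):
        self.hv = hypervisor
        self.capacity = capacity
        self.private_cache = {}

    def store(self, page_id, contents):
        if len(self.private_cache) >= self.capacity:
            # "Delaying the removal": evict the oldest page into the
            # shared cache instead of dropping it.
            victim, data = next(iter(self.private_cache.items()))
            del self.private_cache[victim]
            self.hv.shared_cache[victim] = data
        self.private_cache[page_id] = contents

    def load(self, page_id):
        if page_id in self.private_cache:
            return self.private_cache[page_id]
        # Request the page from the hypervisor's shared cache.
        return self.hv.shared_cache.get(page_id)
```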
  • Publication number: 20100185818
    Abstract: A resource pool managing system and a signal processing method are provided in embodiments of the present disclosure. On the basis of the resource pool, all filters on the links share one set of operation resources and cache resources. The embodiment can support application scenarios with unequal carrier rates while mixed modes are supported, as well as application scenarios with unequal carrier filter orders. The embodiment also allows each stage of filters in the mode-mixing system to share one set of multiply-add and cache resources, unifying the dispatching of resources in one resource pool and maximizing resource utilization, and supports parameterized configuration of the link's forward-backward stages, link parameters, carrier rate, and so on.
    Type: Application
    Filed: January 6, 2010
    Publication date: July 22, 2010
    Inventor: Lanping Sheng
  • Patent number: 7752399
    Abstract: Disclosed is an information processing apparatus that has an update procedure semaphore and generation management information as management information of a shared data area that requires exclusion control. The generation management information specifies one item of generation information of the shared data area. As generation information provided for every generation, the apparatus has a reference-count measuring counter, a semaphore for updating generation information, a pointer for pointing to old generation information, and a pointer for pointing to the substance of the shared data area. In a case where the latest shared data is updated, a duplicate of the latest shared data area is created, new generation information corresponding to the duplicated shared data area is generated, data in the duplicated shared data area is updated, and generation information, which corresponds to the shared data area after the updating thereof, is registered as the latest generation information.
    Type: Grant
    Filed: March 24, 2006
    Date of Patent: July 6, 2010
    Assignee: NEC Corporation
    Inventor: Hiroaki Oyama
  • Publication number: 20100169577
    Abstract: In order to control an access request to the cache shared between a plurality of threads, a storage unit for storing a flag provided in association with each of the threads is included. If the threads enter the execution of an atomic instruction, a defined value is written to the flags stored in the storage unit. Furthermore, if the atomic instruction is completed, a defined value different from the above defined value is written, thereby indicating whether or not the threads are executing the atomic instruction. If an access request is issued from a certain thread, it is judged whether or not a thread different from the certain thread is executing the atomic instruction by referencing the flag values in the storage unit. If it is judged that another thread is executing the atomic instruction, the access request is kept on standby. This makes it possible to realize the exclusive control processing necessary for processing the atomic instruction with a simple configuration.
    Type: Application
    Filed: December 17, 2009
    Publication date: July 1, 2010
    Applicant: Fujitsu Limited
    Inventor: Naohiro Kiyota
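The flag-based gating above reduces to a small amount of state. The flag values and names below are illustrative assumptions.

```python
# Sketch of per-thread atomic-instruction flags: while any *other*
# thread's flag shows an atomic instruction in flight, a new access
# request to the shared cache is kept on standby.
ATOMIC_RUNNING, ATOMIC_DONE = 1, 0

class SharedCachePort:
    def __init__(self, n_threads):
        self.flags = [ATOMIC_DONE] * n_threads

    def begin_atomic(self, tid):
        self.flags[tid] = ATOMIC_RUNNING

    def end_atomic(self, tid):
        self.flags[tid] = ATOMIC_DONE

    def may_proceed(self, tid):
        """An access request proceeds only if no other thread is
        currently executing an atomic instruction."""
        return all(f == ATOMIC_DONE
                   for i, f in enumerate(self.flags) if i != tid)
```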
  • Publication number: 20100169576
    Abstract: A method is to implement a Scale Invariant Feature Transform algorithm in a shared memory multiprocessing system. The method comprises building differences of Gaussian (DoG) images for an input image, detecting keypoints in the DoG images, assigning orientations to the keypoints, computing keypoint descriptors, and performing matrix operations. In the method, building differences of Gaussian (DoG) images for an input image and detecting keypoints in the DoG images are executed for all scales of the input image in parallel. And, orientation assignment and keypoint descriptor computation are executed for all octaves of the input image in parallel.
    Type: Application
    Filed: December 31, 2008
    Publication date: July 1, 2010
    Inventor: Yurong Chen
  • Publication number: 20100161976
    Abstract: A system and associated method for handling a cross-platform system call with a shared page cache in a hybrid system. The hybrid system comprises a first computer system and a second computer system. Each computer system has a respective copy of the shared page cache, and validates an entry in the respective copy of the shared page cache for pages available in the respective computer system. The cross-platform system call is invoked by a first kernel to provide a kernel service to a user application in the first computer system. The cross-platform system call has a parameter referring to raw data in the first computer system. The cross-platform system call is converted to be executed in the second computer system and the raw data is copied to the second computer system only when a page fault for the raw data occurs while executing the cross-platform system call.
    Type: Application
    Filed: December 23, 2008
    Publication date: June 24, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Utz Bacher
  • Publication number: 20100153649
    Abstract: Embodiments of shared cache memories for multi-core processors are presented. In one embodiment, a cache memory comprises a group of sampling cache sets and a controller to determine a number of misses that occur in the group of sampling cache sets. The controller is operable to determine a victim cache line for a cache set based at least in part on the number of misses.
    Type: Application
    Filed: December 15, 2008
    Publication date: June 17, 2010
    Inventors: Wenlong Li, Yu Chen, Changkyu Kim, Christopher J. Hughes, Yen-Kuang Chen
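The set-sampling idea can be sketched as follows. The abstract says only that the miss count of the sampling sets influences victim selection; the specific LRU-versus-newest policy switch below is an illustrative assumption.

```python
# Sketch of set sampling: only a small group of "sampling" sets counts
# its misses, and that count steers victim selection cache-wide.
class SamplingCache:
    def __init__(self, n_sets, sample_every=8, miss_threshold=100):
        self.sampling_sets = set(range(0, n_sets, sample_every))
        self.misses = 0
        self.threshold = miss_threshold

    def record_miss(self, set_index):
        # Only misses in the sampling group are counted.
        if set_index in self.sampling_sets:
            self.misses += 1

    def pick_victim(self, lines):
        """`lines` is ordered oldest-first. Under heavy missing, evict
        the newest line (thrash protection); otherwise plain LRU."""
        if self.misses > self.threshold:
            return lines[-1]
        return lines[0]
```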
  • Publication number: 20100138612
    Abstract: A system for implementing cache sharing includes a main control unit and a plurality of service processing units, and further includes a shared cache unit respectively connected with the main control unit and the service processing units for implementing high-speed data interaction among the service processing units. A method for cache sharing is also provided. In embodiments of the present invention, based on a reliable high-speed bus, a high-speed shared cache is provided. A mutual exclusion scheme is provided in the shared cache to ensure data consistency, which not only implements high-speed data sharing but also dramatically improves system performance.
    Type: Application
    Filed: February 1, 2010
    Publication date: June 3, 2010
    Applicant: HANGZHOU H3C TECHNOLOGIES CO., LTD.
    Inventor: Zhanming WEI
  • Publication number: 20100131716
    Abstract: This invention describes an apparatus, computer architecture, memory structure, memory control, and cache memory operation method for multi-core processor. A logic core shares requests when faced with immediate cache memory units having low yield or deadly performance. The core mounts (multiple) cache unit(s) that might already be in use by other logic cores. Selected cache memory units serve multiple logic cores with the same contents. The shared cache memory unit(s) serves all the mounting cores with cache search, hit, miss, and write back functions. The method recovers a logic core whose cache memory block is not operational by sharing cache memory blocks which might already engage other logic cores. The method is used to improve reliability and performance of the remaining system.
    Type: Application
    Filed: November 21, 2008
    Publication date: May 27, 2010
    Applicant: International Business Machines Corporation
    Inventors: Karl J. Duvalsaint, Daeik Kim, Moon J. Kim
  • Publication number: 20100122033
    Abstract: An integrated memory system with a spiral cache responds to requests for values at a first external interface coupled to a particular storage location in the cache in a time period determined by the proximity of the requested values to the particular storage location. The cache supports multiple outstanding in-flight requests directed to the same address using an issue table that tracks multiple outstanding requests and control logic that applies the multiple requests to the same address in the order received by the cache memory. The cache also includes a backing store request table that tracks push-back write operations issued from the cache memory when the cache memory is full and a new value is provided from the external interface, and the control logic to prevent multiple copies of the same value from being loaded into the cache or a copy being loaded before a pending push-back has been completed.
    Type: Application
    Filed: December 17, 2009
    Publication date: May 13, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Fadi H. Gebara, Jeremy D. Schaub, Volker Strumpen
  • Publication number: 20100122026
    Abstract: Techniques are provided for using an intermediate cache to provide some of the items involved in a scan operation, while other items involved in the scan operation are provided from primary storage. Techniques are also provided for determining whether to service an I/O request for an item with a copy of the item that resides in the intermediate cache based on factors such as a) an identity of the user for whom the I/O request was submitted, b) an identity of a service that submitted the I/O request, c) an indication of a consumer group to which the I/O request maps, d) whether the I/O request is associated with an offloaded filter provided by the database server to the storage system, or e) whether the intermediate cache is overloaded. Techniques are also provided for determining whether to store items in an intermediate cache in response to the items being retrieved, based on logical characteristics associated with the requests that retrieve the items.
    Type: Application
    Filed: January 21, 2010
    Publication date: May 13, 2010
    Applicant: ORACLE INTERNATIONAL CORPORATION
    Inventors: Kothanda Umamageswaran, Juan R. Loaiza, Umesh Panchaksharaiah, Alexander Tsukerman, Timothy L. Shetler, Bharat C.V. Baddepudi, Boris Erlikhman, Kiran B. Goyal, Nilesh Choudhury, Susy Fan, Poojan Kumar, Selcuk Aya, Sue-Kyoung Lee
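The request-attribute-driven decision can be expressed as a small policy function. The factor labels follow the abstract's (a)-(e) list, but the concrete rules and the consumer-group names are illustrative assumptions.

```python
# Sketch of deciding whether an I/O request is served from the
# intermediate cache, based on logical request attributes rather than
# the address alone.
def serve_from_intermediate(request, cache_overloaded):
    if cache_overloaded:                  # factor (e): overload
        return False
    if request.get("offloaded_filter"):   # factor (d): offloaded filter
        return False
    group = request.get("consumer_group") # factor (c): consumer group
    return group in {"interactive", "oltp"}
```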
  • Publication number: 20100100686
    Abstract: In such a configuration that a port unit is provided which takes a form being shared among threads and has a plurality of entries for holding access requests, and the access requests for a cache shared by a plurality of threads being executed at the same time are controlled using the port unit, the access request issued from each thread is registered on a port section of the port unit which is assigned to the thread, thereby controlling the port unit to be divided for use in accordance with the thread configuration. In selecting the access request, the access requests are selected for each thread based on the specified priority control from among the access requests issued from the threads held in the port unit; thereafter a final access request is selected in accordance with a thread selection signal from among those selected access requests.
    Type: Application
    Filed: December 16, 2009
    Publication date: April 22, 2010
    Applicant: FUJITSU LIMITED
    Inventor: Naohiro Kiyota
  • Publication number: 20100095068
    Abstract: With a view to reducing the congestion of a pipeline for cache memory access in, for example, a multi-core system, a cache memory control device includes: a determination unit for determining whether or not a command provided from, for example, each core is to access cache memory during the execution of the command; and a path switch unit for putting a command determined as accessing the cache memory in pipeline processing, and outputting a command determined as not accessing the cache memory directly to an external unit without putting the command in the pipeline processing.
    Type: Application
    Filed: December 11, 2009
    Publication date: April 15, 2010
    Applicant: FUJITSU LIMITED
    Inventors: Koken Shimizuno, Naoya Ishimura
  • Publication number: 20100083120
    Abstract: There is provided a storage system including one or more LDEVs, one or more processors, a local memory or memories corresponding to the processor or processors, and a shared memory, which is shared by the processors, wherein control information on I/O processing or application processing is stored in the shared memory, and the processor caches a part of the control information in different storage areas on a type-by-type basis in the local memory or memories corresponding to the processor or processors in referring to the control information stored in the shared memory.
    Type: Application
    Filed: December 18, 2008
    Publication date: April 1, 2010
    Inventors: Shintaro Ito, Norio Shimozono
  • Publication number: 20100077150
    Abstract: An advanced processor comprises a plurality of multithreaded processor cores each having a data cache and instruction cache. A data switch interconnect is coupled to each of the processor cores and configured to pass information among the processor cores. A messaging network is coupled to each of the processor cores and a plurality of communication ports. In one aspect of an embodiment of the invention, the data switch interconnect is coupled to each of the processor cores by its respective data cache, and the messaging network is coupled to each of the processor cores by its respective message station. Advantages of the invention include the ability to provide high bandwidth communications between computer systems and memory in an efficient and cost-effective manner.
    Type: Application
    Filed: November 30, 2009
    Publication date: March 25, 2010
    Applicant: RMI CORPORATION
    Inventor: David T. HASS
  • Publication number: 20100049921
    Abstract: Systems and methods for distributed shared caching in a clustered file system, wherein coordination between the distributed caches, together with their coherency and concurrency management, is done based on the granularity of data segments rather than files. As a consequence, this new caching system and method provides enhanced performance in an environment of intensive access patterns to shared files.
    Type: Application
    Filed: August 25, 2008
    Publication date: February 25, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Lior Aronovich, Ron Asher
  • Publication number: 20100011167
    Abstract: A multi-core processor providing heterogeneous processor cores and a shared cache is presented.
    Type: Application
    Filed: July 6, 2009
    Publication date: January 14, 2010
    Inventors: Frank T. Hady, Mason B. Cabot, John Beck, Mark B. Rosenbluth
  • Publication number: 20100005244
    Abstract: A device and method for storing data and/or instructions in a computer system having at least two processing units and at least one first memory or memory area for data and/or instructions, wherein a second memory or memory area is included in the device, the device being designed as a cache memory system and equipped with at least two separate ports, and the at least two processing units accessing via these ports the same or different memory cells of the second memory or memory area, the data and/or instructions from the first memory system being stored temporarily in blocks.
    Type: Application
    Filed: July 25, 2006
    Publication date: January 7, 2010
    Inventors: Reinhard Weiberle, Bernd Mueller, Eberhard Boehl, Yorck Von Collani, Rainer Gmehlich
  • Publication number: 20090327616
    Abstract: A system and method for selectively transmitting probe commands and reducing network traffic. Directory entries are maintained to filter probe command and response traffic for certain coherent transactions. Rather than storing directory entries in a dedicated directory storage, directory entries may be stored in designated locations of a shared cache memory subsystem, such as an L3 cache. Directory entries are stored within the shared cache memory subsystem to provide indications of lines (or blocks) that may be cached in exclusive-modified, owned, shared, shared-one, or invalid coherency states. The absence of a directory entry for a particular line may imply that the line is not cached anywhere in a computing system.
    Type: Application
    Filed: June 30, 2008
    Publication date: December 31, 2009
    Inventors: Patrick Conway, Kevin Michael Lepak
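The probe-filter directory can be modeled minimally. The state names follow the abstract (exclusive-modified, owned, shared, shared-one, invalid); the class structure is an illustrative assumption.

```python
# Sketch of the probe filter: directory entries live in designated
# shared-cache (e.g. L3) locations, and the *absence* of an entry
# implies the line is cached nowhere, so no probes are broadcast.
class ProbeFilterDirectory:
    STATES = {"EM", "O", "S", "S1", "I"}   # states named in the abstract

    def __init__(self):
        self.entries = {}    # line address -> coherency state

    def record(self, line, state):
        assert state in self.STATES
        self.entries[line] = state

    def needs_probe(self, line):
        """No entry (or an invalid entry) means the line is not cached
        in the system, so the request can skip probing entirely."""
        return self.entries.get(line, "I") != "I"
```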
  • Publication number: 20090313436
    Abstract: A cache region can be created in a cache in response to receiving a cache region creation request from an application. A storage request from the application can identify the cache region and one or more objects to be stored in the cache region. Those objects can be stored in the cache region in response to receiving the storage request.
    Type: Application
    Filed: May 14, 2009
    Publication date: December 17, 2009
    Applicant: Microsoft Corporation
    Inventors: Muralidhar Krishnaprasad, Anil Nori, Subramanian Muralidhar, Sudhir Mohan Jorwekar, Lakshmi Suresh Goduguluru
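The region-based cache interface described above is simple enough to sketch end to end. Method names are illustrative, not Microsoft's actual API.

```python
# Sketch of the region-based cache API: an application creates a named
# region, then directs storage requests at that region.
class Cache:
    def __init__(self):
        self.regions = {}

    def create_region(self, name):
        # Cache region creation request from the application.
        self.regions.setdefault(name, {})

    def put(self, region, key, obj):
        # Storage request identifying the region and the object.
        self.regions[region][key] = obj

    def get(self, region, key):
        return self.regions[region].get(key)
```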
  • Publication number: 20090292881
    Abstract: A method and a system for processor nodes configurable to operate in various distributed shared memory topologies. The processor node may be coupled to a first local memory. The first processor node may include a first local arbiter, which may be configured to perform one or more of a memory node decode or a coherency check on the first local memory. The processor node may also include a switch coupled to the first local arbiter for enabling and/or disabling the first local arbiter. Thus one or more processor nodes may be coupled together in various distributed shared memory configurations, depending on the configuration of their respective switches.
    Type: Application
    Filed: May 20, 2008
    Publication date: November 26, 2009
    Inventors: Ramaswamy Sivaramakrishnan, Stephen E. Phillips
  • Publication number: 20090271572
    Abstract: In one embodiment, the present invention includes a method for determining if a state of data is indicative of a first class of data, re-classifying the data from a second class to the first class based on the determination, and moving the data to a first portion of a shared cache associated with a first requester unit based on the re-classification. Other embodiments are described and claimed.
    Type: Application
    Filed: July 2, 2009
    Publication date: October 29, 2009
    Inventors: Christopher J. Hughes, Yen-Kuang Chen
  • Publication number: 20090235356
    Abstract: A system and method of determining an answer in an expert system having an inference engine and a knowledge database includes transmitting a query or sub-queries to a plurality of sub-expert systems, each comprising an associated inference engine and an associated knowledge database; receiving a sub-answer from each sub-expert system which has been inferred by the inference engine based upon knowledge in the knowledge database; transmitting the sub-answers to the expert system using the inference engine thereof to infer an answer to the query based upon knowledge in the knowledge database and the sub-answers received from the sub-expert systems; and transmitting the answer. A system for managing data includes a computer interface with a database arrangement that stores domain-related information, and which communicates with an inference engine that infers query results based upon the domain-related information and partial answers obtained from knowledge databases.
    Type: Application
    Filed: February 19, 2009
    Publication date: September 17, 2009
    Applicant: CLEAR BLUE SECURITY, LLC
    Inventors: Robert JENSEN, Dennis THOMSEN
  • Publication number: 20090235014
    Abstract: A computing system and a memory device are provided. The memory device includes a first memory having a first storage capacity, a second memory having a second storage capacity greater than the first storage capacity, and a controller to provide an external host with an address space corresponding to a third storage capacity, the third storage capacity being less than a sum of the first storage capacity and second storage capacity, wherein the controller, where data requested from the external host is stored in the first memory, transmits the requested data to the external host from the first memory, and where the requested data is not stored in the first memory, transmits the requested data to the external host from the second memory.
    Type: Application
    Filed: July 21, 2008
    Publication date: September 17, 2009
    Inventors: Keun Soo YIM, Jae Cheol Son, Bong Young Chung
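The read path in this abstract (serve from the first, smaller memory on a hit; otherwise serve from the second, larger memory) can be sketched as a tiny behavioral model. The fill-on-miss step and all names here are illustrative assumptions.

```python
# Sketch of a two-tier memory device: requested data is returned from
# the small first memory when present, otherwise from the larger
# second memory.

class TieredMemoryDevice:
    def __init__(self):
        self.fast = {}  # first memory: small capacity
        self.slow = {}  # second memory: larger capacity

    def write(self, addr, data):
        self.slow[addr] = data

    def read(self, addr):
        if addr in self.fast:
            return self.fast[addr]       # hit in first memory
        data = self.slow[addr]           # fall back to second memory
        self.fast[addr] = data           # fill on miss (an assumption)
        return data

dev = TieredMemoryDevice()
dev.write(0xA0, b"hello")
print(dev.read(0xA0))   # served from the second memory, then cached
print(0xA0 in dev.fast) # subsequent reads hit the first memory
```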
  • Publication number: 20090222625
    Abstract: A data processing apparatus and method are provided for detecting cache misses. The data processing apparatus has processing logic for executing a plurality of program threads, and a cache for storing data values for access by the processing logic. When access to a data value is required while executing a first program thread, the processing logic issues an access request specifying an address in memory associated with that data value, and the cache is responsive to the address to perform a lookup procedure to determine whether the data value is stored in the cache. Indication logic is provided which in response to an address portion of the address provides an indication as to whether the data value is stored in the cache, this indication being produced before a result of the lookup procedure is available, and the indication logic only issuing an indication that the data value is not stored in the cache if that indication is guaranteed to be correct.
    Type: Application
    Filed: September 13, 2005
    Publication date: September 3, 2009
    Inventors: Mrinmoy Ghosh, Emre Özer, Stuart David Biles
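The key property in this abstract is that the early indication logic only reports "not in cache" when that answer is guaranteed correct. One way to realize such a guarantee, sketched here purely as an illustration (the counting-filter scheme and all names are assumptions, not the patent's design), is a per-bucket count of cached lines keyed by an address portion: a zero count proves a miss, while a nonzero count gives no early answer.

```python
# Sketch of a guaranteed-correct early miss indicator: a counter per
# address-portion bucket tracks how many cached lines map to it.
# Zero guarantees a miss; nonzero means the full lookup must decide.

class MissIndicator:
    def __init__(self, buckets=256):
        self.buckets = buckets
        self.counts = [0] * buckets

    def _portion(self, addr):
        return addr % self.buckets  # the address portion (assumed form)

    def on_fill(self, addr):
        self.counts[self._portion(addr)] += 1

    def on_evict(self, addr):
        self.counts[self._portion(addr)] -= 1

    def definitely_miss(self, addr):
        # Issues a "miss" indication only if it cannot be wrong.
        return self.counts[self._portion(addr)] == 0

mi = MissIndicator()
mi.on_fill(0x40)
print(mi.definitely_miss(0x40))  # False: a line maps here, do the lookup
print(mi.definitely_miss(0x41))  # True: guaranteed not cached
```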
  • Publication number: 20090216951
    Abstract: A system, method, and computer program product for handling shared cache lines to allow forward progress among processors in a multi-processor environment is provided. A counter and a threshold are provided a processor of the multi-processor environment, such that the counter is incremented for every exclusive cross interrogate (XI) reject that is followed by an instruction completion, and reset on an exclusive XI acknowledgement. If the XI reject counter reaches a preset threshold value, the processor's pipeline is drained by blocking instruction issue and prefetching attempts, creating a window for an exclusive XI from another processor to be honored, after which normal instruction processing is resumed. Configuring the preset threshold value as a programmable value allows for fine-tuning of system performance.
    Type: Application
    Filed: February 22, 2008
    Publication date: August 27, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Chung-Lung Kevin Shum, Charles F. Webb
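The counter-and-threshold protocol in this abstract can be sketched as a small state machine. The threshold value and all names below are illustrative assumptions; the real mechanism is implemented in processor hardware.

```python
# Behavioral sketch of the exclusive-XI reject counter: increment on
# each XI reject followed by instruction completion, reset on an XI
# acknowledgement, and drain the pipeline at the threshold so another
# processor's exclusive XI can be honored.

class XIRejectMonitor:
    def __init__(self, threshold=2):
        self.threshold = threshold  # programmable, for fine-tuning
        self.count = 0
        self.draining = False

    def on_xi_reject_then_completion(self):
        self.count += 1
        if self.count >= self.threshold:
            # Block instruction issue and prefetching, creating a
            # window for the other processor's exclusive XI.
            self.draining = True

    def on_xi_ack(self):
        # Exclusive XI acknowledged: resume normal processing.
        self.count = 0
        self.draining = False

m = XIRejectMonitor(threshold=2)
m.on_xi_reject_then_completion()
print(m.draining)  # False: below threshold
m.on_xi_reject_then_completion()
print(m.draining)  # True: pipeline drains
m.on_xi_ack()
print(m.draining)  # False: normal processing resumes
```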
  • Publication number: 20090216915
    Abstract: A method for merging data including receiving a request from an input/output device to merge a data, wherein a merge of the data includes a manipulation of the data, determining if the data exists in a local cache memory that is in local communication with the input/output device, fetching the data to the local cache memory from a remote cache memory or a main memory if the data does not exist in the local cache memory, merging the data according to the request to obtain a merged data, and storing the merged data in the local cache, wherein the merging of the data is performed without using a memory controller within a control flow or a data flow of the merging of the data. A corresponding system and computer program product.
    Type: Application
    Filed: February 25, 2008
    Publication date: August 27, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Deanna P. Dunn, Robert J. Sonnelitter, III, Gary E. Strait
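The merge flow in this abstract (fetch into the local cache only if absent, manipulate per the I/O request, store the result locally, with no memory controller in the control or data flow) can be sketched as follows. The specific manipulation shown, a bitwise OR, and all names are illustrative assumptions.

```python
# Sketch of the I/O merge flow: the data is fetched into the local
# cache from a remote cache or main memory only when absent, merged
# per the request, and stored back in the local cache.

def merge(local_cache, remote, addr, mask):
    if addr not in local_cache:
        # Fetch from remote cache or main memory on a local miss.
        local_cache[addr] = remote[addr]
    # The requested manipulation (bitwise OR is an assumed example);
    # no memory controller participates in this flow.
    local_cache[addr] |= mask
    return local_cache[addr]

local, remote = {}, {0x10: 0b0001}
print(merge(local, remote, 0x10, 0b0100))  # 5: fetched, then merged locally
```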
  • Publication number: 20090210069
    Abstract: A multicore processor for industrial control provides for the execution of separate operating systems on the cores under control of one of the cores to tailor the operating system to optimum execution of different applications of industrial control and communication. One core may provide for a reduced instruction set for execution of industrial control programs with the remaining cores providing a general-purpose instruction set.
    Type: Application
    Filed: April 29, 2009
    Publication date: August 20, 2009
    Inventors: Ronald E. Schultz, Scot A. Tutkovics, Richard J. Grgic, James J. Kay, James W. Kenst, Daniel W. Clark
  • Publication number: 20090193198
    Abstract: A method of preventing lockout and stalling conditions in a multi-node system having a plurality of nodes which includes initiating a processor request to a shared level of cache in a requesting node, performing a fabric coherency establishment sequence on the plurality of nodes, issuing a speculative memory fetch request to a memory, detecting a conflict on one of the plurality of nodes and communicating the conflict back to the requesting node within the system, canceling the speculative memory fetch request issued, and repeating the fabric coherency establishment sequence in the system until the point of conflict is resolved, without issuing another speculative memory fetch request. The subsequent memory fetch request is only issued after determining the state of line within the system, after the successful completion of the multi-node fabric coherency establishment sequence.
    Type: Application
    Filed: January 29, 2008
    Publication date: July 30, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Vesselina K. Papazova, Michael A. Blake, Pak-kin Mak, Arthur J. O'Neill, Jr., Craig R. Waters
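The sequence in this abstract can be traced with a small simulation: one speculative memory fetch is issued with the first coherency attempt, cancelled when a conflict is detected, and the coherency sequence repeats without re-issuing a speculative fetch until it completes. The function name, the conflict model, and the log strings below are all illustrative assumptions.

```python
# Sketch of the lockout-avoidance sequence: issue one speculative
# fetch, cancel it on conflict, then repeat only the coherency
# sequence until it succeeds, issuing the real fetch at the end.

def resolve_request(conflict_rounds):
    """conflict_rounds: number of coherency attempts that detect a
    conflict before the fabric sequence completes cleanly."""
    speculative_issued = False
    log = []
    for attempt in range(conflict_rounds + 1):
        if attempt == 0:
            log.append("issue speculative fetch")
            speculative_issued = True
        if attempt < conflict_rounds:  # conflict detected on a node
            if speculative_issued:
                log.append("cancel speculative fetch")
                speculative_issued = False
            # Repeat without issuing another speculative fetch.
            log.append("repeat coherency sequence")
        else:
            log.append("sequence complete; issue final fetch")
    return log

print(resolve_request(2))  # only one speculative fetch ever issued
```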
  • Publication number: 20090164732
    Abstract: A cache memory system, which is individually connected to each of a plurality of arithmetic units that access a shared memory to carry out parallel processing, includes: a data array that has a plurality of blocks that are composed of a plurality of words; a storage unit that, with respect to a block, which stores data in at least one of the words, among the plurality of blocks, stores an address group of the shared memory that is placed in correspondence with that block; a write unit that, when an address from said arithmetic unit is not in the storage unit at the time of writing of data from the arithmetic unit, allocates any of the plurality of blocks as a block for writing, places any word in that block for writing in correspondence with the address, and writes the data from the arithmetic unit to the word; a word state storage unit that stores word state information for specifying a word, into which the data from the arithmetic unit have been written, in association with an address that has been placed i
    Type: Application
    Filed: December 16, 2008
    Publication date: June 25, 2009
    Applicant: NEC Corporation
    Inventor: Yasushi KANOH
  • Publication number: 20090157965
    Abstract: Software indicates to hardware of a processing system that its storage modification to a particular cache line is done, and will not be doing any modification for the time being. With this indication, the processor actively releases its exclusive ownership by updating its line ownership from exclusive to read-only (or shared) in its own cache directory and in the storage controller (SC). By actively giving up the exclusive rights, another processor can immediately be given exclusive ownership to that said cache line without waiting on any processor's explicit cross invalidate acknowledgement. This invention also describes the hardware design needed to provide this support.
    Type: Application
    Filed: December 12, 2007
    Publication date: June 18, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Chung-Lung Kevin Shum, Kathryn Marie Jackson, Charles Franklin Webb
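The ownership-downgrade protocol in this abstract can be sketched with a toy directory entry: software hints that its stores are done, hardware downgrades the line from exclusive to read-only, and a later exclusive request from another processor is granted without waiting on a cross-invalidate acknowledgement. All names below are illustrative assumptions.

```python
# Sketch of software-initiated release of exclusive ownership.

class DirectoryEntry:
    def __init__(self, owner, state):
        self.owner = owner
        self.state = state  # 'exclusive' or 'read-only'

def store_done_hint(entry):
    # Software indicates its storage modification is done for now;
    # hardware downgrades the line in its cache directory and in the
    # storage controller.
    if entry.state == 'exclusive':
        entry.state = 'read-only'

def request_exclusive(entry, requester):
    if entry.state == 'read-only':
        # Granted immediately: no wait on an explicit cross-invalidate
        # acknowledgement from the previous owner.
        entry.owner, entry.state = requester, 'exclusive'
        return True
    return False  # still exclusively held; must cross-invalidate

e = DirectoryEntry(owner=0, state='exclusive')
print(request_exclusive(e, 1))  # False: exclusive rights not released
store_done_hint(e)
print(request_exclusive(e, 1))  # True: ownership transfers at once
```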
  • Publication number: 20090144388
    Abstract: A computer network with distributed shared memory, including a clustered memory cache aggregated from and comprised of physical memory locations on a plurality of physically distinct computing systems. The clustered memory cache is accessible by a plurality of clients on the computer network and is configured to perform page caching of data items accessed by the clients. The network also includes a policy engine operatively coupled with the clustered memory cache, where the policy engine is configured to control where data items are cached in the clustered memory cache.
    Type: Application
    Filed: November 6, 2008
    Publication date: June 4, 2009
    Applicant: RNA NETWORKS, INC.
    Inventors: Jason P. Gross, Ranjit B. Pandit, Clive G. Cook, Thomas H. Matson
  • Publication number: 20090138220
    Abstract: A directory-based coherency method, system and program are provided for intervening a requested cache line from a plurality of candidate memory sources in a multiprocessor system on the basis of the sensed temperature or power dissipation value at each memory source. By providing temperature or power dissipation sensors in each of the candidate memory sources (e.g., at cores, cache memories, memory controller, etc.) that share a requested cache line, control logic may be used to determine which memory source should source the cache line by using the power sensor signals to signal only the memory source with acceptable power dissipation to provide the cache line to the requester.
    Type: Application
    Filed: November 28, 2007
    Publication date: May 28, 2009
    Inventors: Robert H. Bell, JR., Louis B. Capps, JR., Thomas E. Cook, Michael J. Shapiro, Naresh Nayar
  • Publication number: 20090138660
    Abstract: A snoop coherency method, system and program are provided for intervening a requested cache line from a plurality of candidate memory sources in a multiprocessor system on the basis of the sensed temperature or power dissipation value at each memory source. By providing temperature or power dissipation sensors in each of the candidate memory sources (e.g., at cores, cache memories, memory controller, etc.) that share a requested cache line, control logic may be used to determine which memory source should source the cache line by using the power sensor signals to signal only the memory source with acceptable power dissipation to provide the cache line to the requester.
    Type: Application
    Filed: November 28, 2007
    Publication date: May 28, 2009
    Inventors: Robert H. Bell, JR., Louis B. Capps, JR., Thomas E. Cook, Michael J. Shapiro, Naresh Nayar
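The selection logic shared by the two abstracts above (directory-based and snoop variants of the same idea) can be sketched as a filter over candidate memory sources: only a source whose sensed power dissipation is acceptable is signaled to provide the cache line. The threshold, the tie-break by lowest power, and all names below are illustrative assumptions.

```python
# Sketch of power-aware cache-line sourcing: among candidate sources
# holding the requested line, signal only one with acceptable power
# dissipation to intervene.

def pick_source(candidates, power_limit):
    """candidates: list of (source_name, sensed_power_watts) for the
    cores, caches, or controllers that share the requested line."""
    acceptable = [c for c in candidates if c[1] <= power_limit]
    if not acceptable:
        return None  # no acceptable source (fallback is an assumption)
    # Prefer the coolest acceptable source (an assumed tie-break).
    return min(acceptable, key=lambda c: c[1])[0]

sources = [("core0_L2", 9.5), ("core1_L2", 4.2), ("L3", 6.0)]
print(pick_source(sources, power_limit=8.0))  # core1_L2
```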
  • Publication number: 20090132059
    Abstract: A multicore processor for industrial control provides for the execution of separate operating systems on the cores under control of one of the cores to tailor the operating system to optimum execution of different applications of industrial control and communication. One core may provide for a reduced instruction set for execution of industrial control programs with the remaining cores providing a general-purpose instruction set.
    Type: Application
    Filed: November 13, 2008
    Publication date: May 21, 2009
    Inventors: Ronald E. Schultz, Scot A. Tutkovics, Richard J. Grgic, James J. Kay, James W. Kenst, Daniel W. Clark
  • Publication number: 20090083490
    Abstract: A system to improve data store throughput for a shared-cache of a multiprocessor structure that may include a controller to find and compare a last data store address for a last data store with a next data store address for a next data store. The system may also include a main pipeline to receive the last data store, and to receive the next data store if the next data store address differs substantially from the last data store address. The system may further include a store pipeline to receive the next data store if the next data store address is substantially similar to the last data store address.
    Type: Application
    Filed: September 26, 2007
    Publication date: March 26, 2009
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Derrin M. Berger, Michael F. Fee, Pak-kin Mak
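The routing decision in this abstract (send a store down a dedicated store pipeline when its address is substantially similar to the last store's, otherwise down the main pipeline) can be sketched as an address comparison. The line-granularity mask used to define "substantially similar" and all names below are illustrative assumptions.

```python
# Sketch of dual-pipeline store routing for a shared cache: compare
# the next data store address with the last one and route accordingly.

LINE_MASK = ~0x7F  # compare at 128-byte line granularity (assumed)

def route_store(last_addr, next_addr):
    if last_addr is not None and (last_addr & LINE_MASK) == (next_addr & LINE_MASK):
        # Substantially similar address: dedicated store pipeline.
        return "store_pipeline"
    # Substantially different address: main pipeline.
    return "main_pipeline"

print(route_store(0x1000, 0x1040))  # store_pipeline: same line
print(route_store(0x1000, 0x2000))  # main_pipeline: different line
```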