Addressing Of Memory Level In Which Access To Desired Data Or Data Block Requires Associative Addressing Means, E.g., Cache, Etc. (epo) Patents (Class 711/E12.017)

  • Publication number: 20100318743
    Abstract: When a user interface cursor hovers over a user interface item, a determination is made as to whether the user interface item has an associated screentip. If the user interface item has an associated screentip, the text associated with the screentip is identified, a translated text string is located for that text, and the translated text string is displayed in the screentip. If the user interface item does not have an associated screentip, a determination is made as to whether the user interface item contains a text string; if it does, and a translated text string corresponding to that text is available, the translated text string is displayed in a screentip for the user interface item.
    Type: Application
    Filed: June 10, 2009
    Publication date: December 16, 2010
    Applicant: Microsoft Corporation
    Inventors: Colin Fitzpatrick, John Patrick Whelan, Robert Patrick Doyle, John Gerard Lane, Barry McHugh, Terry Farrell, Paul Barnes, Andre Michael McQuaid, David Mowatt
  • Publication number: 20100318839
    Abstract: In a nonvolatile memory array, data is stored in multi-level cells (MLC) as upper-page data and lower-page data. Safe copies of both upper-page and lower-page data are stored in on-chip cache during programming. If a write fail occurs, data is recovered from on-chip cache. The controller does not have to maintain safe copies of data.
    Type: Application
    Filed: June 16, 2009
    Publication date: December 16, 2010
    Applicant: SANDISK CORPORATION
    Inventors: Chris Nga Yee Avila, Jonathan Hsu, Alexander Kwok-Tung Mak, Chen Jian, Grishma Shailesh Shah
  • Publication number: 20100318858
    Abstract: A method for validating SRS registry transaction data includes receiving OLTP transaction data from a first database, parsing the OLTP transaction data, and comparing the parsed OLTP transaction data to one or more of a set of profiles. Each of the one or more of the set of profiles includes metadata in XML files. The method also includes caching the parsed OLTP transaction data in a first data cache, receiving log data associated with the OLTP transaction data, and caching the log data in a second data cache. The method further includes correlating the parsed transaction data cached in the first data cache with the log data cached in the second data cache.
    Type: Application
    Filed: June 15, 2009
    Publication date: December 16, 2010
    Applicant: VeriSign, Inc.
    Inventors: Tarik R. Essawi, Nageswararao Chigurupati
  • Publication number: 20100318740
    Abstract: A "dynamic data cache module" is inserted between the archiving subsystem (e.g., a relational database writing API) and the tag data flow from the acquisition server. Client data requests must then always be routed through the dynamic data cache module. The module is able to manage not only tag data coming from real-time acquisition (i.e., keeping the last n values of tag data in the cache) but also "chunks" of data covering a different time span. For this usage, the cache is size-limited, and a least recently used (LRU) algorithm may be used to free up space when needed.
    Type: Application
    Filed: May 10, 2010
    Publication date: December 16, 2010
    Applicant: SIEMENS AKTIENGESELLSCHAFT
    Inventor: Corradino Guerrasio
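The size-limited cache with LRU eviction described in 20100318740 maps naturally onto a few lines of code. Below is a minimal Python sketch, not the patented implementation; the class name, capacity handling, and entry keys are all illustrative assumptions.

```python
from collections import OrderedDict

class TagDataCache:
    """Size-limited tag-data cache with LRU eviction, as the abstract
    describes; names and capacity handling are illustrative."""

    def __init__(self, max_entries: int):
        self.max_entries = max_entries
        self._entries = OrderedDict()  # key -> tag data chunk

    def put(self, key, chunk):
        # Insert or refresh an entry, then evict least recently used
        # entries until the cache fits its size limit again.
        if key in self._entries:
            self._entries.move_to_end(key)
        self._entries[key] = chunk
        while len(self._entries) > self.max_entries:
            self._entries.popitem(last=False)  # drop the LRU entry

    def get(self, key):
        # A hit counts as a "use", so move the entry to the MRU end.
        if key not in self._entries:
            return None
        self._entries.move_to_end(key)
        return self._entries[key]
```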
  • Publication number: 20100312999
    Abstract: One or more of the present techniques provide a compute engine buffer configured to maneuver data and increase the efficiency of a compute engine. One such compute engine buffer is connected to a compute engine which performs operations on operands retrieved from the buffer, and stores results of the operations to the buffer. Such a compute engine buffer includes a compute buffer having storage units which may be electrically connected or isolated, based on the size of the operands to be stored and the configuration of the compute engine. The compute engine buffer further includes a data buffer, which may be a simple buffer. Operands may be copied to the data buffer before being copied to the compute buffer, which may save additional clock cycles for the compute engine, further increasing the compute engine efficiency.
    Type: Application
    Filed: June 4, 2009
    Publication date: December 9, 2010
    Applicant: MICRON TECHNOLOGY, INC.
    Inventor: Robert Walker
  • Publication number: 20100313061
    Abstract: A method provides exception handling for a computer system. As an error in the computer system's hardware is detected, an exception vector pertaining to the hardware error is determined, and execution flow is transferred to a dispatcher that corresponds to the exception vector. A specific instance of a plurality of instances of a main exception handler is selected, and that instance is executed. The actual exception handler thus contains two distinct parts: a dispatcher, which is unique and preferably resides in a safe memory region, and a main exception handler, multiple copies of which reside in an unsafe memory region.
    Type: Application
    Filed: May 25, 2010
    Publication date: December 9, 2010
    Applicant: IBM CORPORATION
    Inventors: Thomas Huth, Jan Kunigk, Joerg-Stephan Vogt
  • Publication number: 20100312966
    Abstract: Embodiments of the present disclosure provide methods and systems for securely installing software on a computing device, such as a mobile device. In one embodiment, the device executes an installer that securely installs the software. In order to perform installations securely, the installer configures one or more secure containers for the software and installs the software exclusively in these containers. In some embodiments, the installer randomly determines the identifiers for the containers. These identifiers remain unknown to the software to be installed. Instead, an installation framework maintains the correspondence between an application and its container. Other methods and apparatuses are also described.
    Type: Application
    Filed: June 3, 2009
    Publication date: December 9, 2010
    Applicant: APPLE INC.
    Inventors: Dallas De Atley, Simon Cooper
  • Publication number: 20100312957
    Abstract: A translation look-aside buffer (TLB) has a TAG memory for determining if a desired translated address is stored in the TLB. A TAG portion is compared to the contents of the TAG memory without requiring a read of the TAG memory, because the TAG memory has a storage portion that is constructed as a CAM. For each row of the CAM, a match determination is made that indicates whether the TAG portion is the same as the contents of that row. A decoder decodes an index portion and provides an output for each row. On a per-row basis, the output of the decoder is logically combined with the hit/miss signals to determine if there is a hit for the TAG memory. If there is a hit, a translated address corresponding to the index portion of the address is output as the selected translated address.
    Type: Application
    Filed: June 9, 2009
    Publication date: December 9, 2010
    Inventors: Ravindraraj Ramaraju, Jogendra C. Sarker, Vu N. Tran
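The CAM-based TLB hit logic in 20100312957 combines a per-row tag match with a decoded index, so no read of the TAG memory is needed before the compare. A behavioral Python sketch follows; the data layout (parallel lists of tags and translations) is an assumption made for illustration.

```python
def tlb_lookup(tag, index, tag_cam, translations):
    """Behavioral sketch of the CAM-based TLB lookup: every row compares
    the TAG portion in parallel, a decoder one-hots the index portion,
    and a per-row AND of the two decides the hit."""
    for row, stored_tag in enumerate(tag_cam):
        match = (stored_tag == tag)        # CAM match line for this row
        selected = (row == index)          # decoder output for this row
        if match and selected:             # per-row hit determination
            return translations[row]       # translated address on a hit
    return None                            # TLB miss
```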
  • Publication number: 20100313079
    Abstract: A method and an apparatus are described that instruct a compiler server to build or otherwise obtain compiled code corresponding to a compilation request received from an application. The compiler server may be configured to compile source code for a plurality of independent applications, each running in a separate process, using a plurality of independent compilers, each running in a separate compiler process. A search may be performed in a cache for compiled code that satisfies a compilation request received from an application. A reply message including the compiled code can be provided to the application, wherein the compiled code is either compiled in direct response to the request or obtained from the cache if the search identifies compiled code there that satisfies the compilation request.
    Type: Application
    Filed: June 3, 2009
    Publication date: December 9, 2010
    Inventors: Robert Beretta, Nicholas William Burns, Nathaniel Begeman, Phillip Kent Miller, Geoffrey Grant Stahl
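The cache-first compile service in 20100313079 can be summarized as: derive a key from the compilation request, return cached compiled code on a hit, and compile directly in response to the request otherwise. A hedged Python sketch, with the key derivation and the compile_fn hook as assumptions:

```python
import hashlib

def handle_compile_request(source: str, options: tuple, cache: dict, compile_fn):
    """Sketch of a cache-first compile service; the key scheme and the
    compile_fn hook are illustrative assumptions."""
    key = hashlib.sha256(repr((source, options)).encode()).hexdigest()
    compiled = cache.get(key)
    if compiled is None:
        # No cached code satisfies the request: compile directly in
        # response to it and remember the result for later requests.
        compiled = compile_fn(source, options)
        cache[key] = compiled
    return {"compiled_code": compiled}   # reply message for the client
```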
  • Patent number: 7848331
    Abstract: A method for processing a packet includes receiving the packet, which comprises a header, and traversing a flow table comprising a plurality of flow table entries (FTEs). For each FTE encountered during the traversal, a packet matching function associated with the FTE is obtained and applied to the header to determine whether the packet matches the FTE. If the packet matches the FTE, the packet is sent to one of a plurality of receive rings (RRs) or to a first sub-flow table associated with the FTE, and the traversal of the flow table stops. If the packet does not match the FTE, the traversal of the flow table continues.
    Type: Grant
    Filed: July 20, 2006
    Date of Patent: December 7, 2010
    Assignee: Oracle America, Inc.
    Inventors: Kais Belgaied, Nicolas G. Droux, Sunay Tripathi
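The flow-table traversal claimed in 7848331 is essentially a first-match walk in which a matching entry either names a receive ring or descends into a sub-flow table. A Python sketch under that reading follows; the FTE class and its fields are illustrative, not the patent's structures.

```python
class FTE:
    """One flow table entry: a matching predicate plus either a receive
    ring or a sub-flow table as its destination (an illustrative
    reading of the claim language)."""
    def __init__(self, matches, receive_ring=None, sub_table=None):
        self.matches = matches            # packet-matching function
        self.receive_ring = receive_ring
        self.sub_table = sub_table        # nested flow table, if any

def classify(packet_header, flow_table):
    for fte in flow_table:                # traverse entries in order
        if fte.matches(packet_header):    # apply the matching function
            if fte.sub_table is not None: # descend into the sub-flow table
                return classify(packet_header, fte.sub_table)
            return fte.receive_ring       # stop traversal on a match
    return None                           # no entry matched
```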
  • Publication number: 20100306476
    Abstract: Instruction fetch unit (IFU) verification is improved by dynamically monitoring the current state of the IFU model and detecting any predetermined states of interest. The instruction address sequence is automatically modified to force a selected address to be fetched next by the IFU model. The instruction address sequence may be modified by inserting one or more new instruction addresses, or by jumping to a non-sequential address in the instruction address sequence. In exemplary implementations, the selected address is a corresponding address for an existing instruction already loaded in the IFU cache, or differs only in a specific field from such an address. The instruction address control is preferably accomplished without violating any rules of the processor architecture by sending a flush signal to the IFU model and overwriting an address register corresponding to a next address to be fetched.
    Type: Application
    Filed: June 2, 2009
    Publication date: December 2, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Akash V. Giri, Darin M. Greene, Alan G. Singletary
  • Publication number: 20100306234
    Abstract: Methods, systems, and media are provided for synchronizing information across multiple environments of a synchronization system. A search query is received into a frontend infrastructure of a first synchronization environment. The frontend infrastructure checks a local cache manager to see if results already exist for the search query. If existing results are not found, then one or more backend search engines of the first synchronization environment are utilized for the search query. The search results from the backend search engines are saved into the local cache manager of the first synchronization environment. A cache sync notification is created to identify the contents and location of the actual saved results. The cache sync notification is saved in a cache synchronization service located within the first synchronization environment, and broadcast to all other synchronization environments within the synchronization system. The actual results can be retrieved from any other synchronization environment.
    Type: Application
    Filed: May 28, 2009
    Publication date: December 2, 2010
    Applicant: MICROSOFT CORPORATION
    Inventors: JUNHUA WANG, GAURAV SAREEN, YANBIAO ZHAO
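The synchronization flow in 20100306234 (check the local cache manager, fall back to the backend search engines, save the results, then broadcast a cache sync notification naming their contents and location) might look like the following Python sketch; every interface here (backends.search, sync_service.save, peer.notify) is an assumed stand-in.

```python
def handle_query(query, local_cache, backends, sync_service, peers):
    """Sketch of the cross-environment flow in the abstract; all
    interfaces are assumptions, not the patented system."""
    results = local_cache.get(query)
    if results is not None:
        return results                       # existing results found
    results = backends.search(query)         # backend engines do the work
    local_cache[query] = results             # save into the local cache
    # Identify the contents and location of the saved results.
    notification = {"query": query, "location": "env-1"}
    sync_service.save(notification)          # persist in the sync service
    for peer in peers:                       # broadcast to the other
        peer.notify(notification)            # synchronization environments
    return results
```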
  • Publication number: 20100306263
    Abstract: Apparatuses and methods to perform pattern matching are presented. In one embodiment, an apparatus comprises a memory to store a first pattern table comprising information indicative of whether a byte of input data matches a pattern and whether to ignore other matches of the pattern that occur in the remaining bytes of the input data. The apparatus further comprises one-byte match logic, coupled to the memory, to determine, based on the information in the first pattern table, a one-byte match event with respect to the input data. The apparatus further comprises a control unit to filter the other matches of the pattern based on the information of the first pattern table.
    Type: Application
    Filed: May 29, 2009
    Publication date: December 2, 2010
    Inventors: David K. Cassetti, Sanjeev Jain, Christopher F. Clark, Lokpraveen Bhupathy Mosur
  • Publication number: 20100306503
    Abstract: A microprocessor includes a cache memory, an instruction set having first and second prefetch instructions each configured to instruct the microprocessor to prefetch a cache line of data from a system memory into the cache memory, and a memory subsystem configured to execute the first and second prefetch instructions. For the first prefetch instruction the memory subsystem is configured to forego prefetching the cache line of data from the system memory into the cache memory in response to a predetermined set of conditions. For the second prefetch instruction the memory subsystem is configured to complete prefetching the cache line of data from the system memory into the cache memory in response to the predetermined set of conditions.
    Type: Application
    Filed: May 17, 2010
    Publication date: December 2, 2010
    Applicant: VIA TECHNOLOGIES, INC.
    Inventors: G. Glenn Henry, Colin Eddy, Rodney E. Hooker
  • Publication number: 20100306469
    Abstract: A processing apparatus externally receives a processing request and executes the requested processing. The processing apparatus transmits the result of the processing to the processing request source if the connection to the processing request source is maintained until the requested processing is executed. The processing apparatus stores the result of the processing in a memory if the connection to the processing request source is broken before the requested processing ends. The processing apparatus transmits the processing result stored in the memory to the processing request source if, when a processing request is received, the requested processing has already been executed and its result is stored in the memory.
    Type: Application
    Filed: April 26, 2010
    Publication date: December 2, 2010
    Applicant: CANON KABUSHIKI KAISHA
    Inventors: Makiko Ishiguro, Shingo Iwasaki
  • Publication number: 20100299484
    Abstract: An apparatus detects a load-store collision within a microprocessor between a load operation and an older store operation, each of which accesses data in the same cache line. Load and store byte masks specify which bytes contain the data specified by the load and store operations within the word of the cache line in which each operation's data begins. Load and store word masks specify which words of the cache line contain the data specified by the load and store operations. Combinatorial logic uses the load and store byte masks to detect the load-store collision if the data specified by the load and store operations begin in the same cache line word, and uses the load and store word masks to detect the load-store collision if they do not begin in the same cache line word.
    Type: Application
    Filed: October 20, 2009
    Publication date: November 25, 2010
    Applicant: VIA Technologies, Inc.
    Inventors: Rodney E. Hooker, Colin Eddy
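The collision test in 20100299484 reduces to two mask intersections, selected by whether the load and store begin in the same cache-line word. A Python sketch with integer bitmasks, whose encoding is an assumption:

```python
def load_store_collision(load_start_word, store_start_word,
                         load_byte_mask, store_byte_mask,
                         load_word_mask, store_word_mask):
    """Bitmask sketch of the collision check: masks are integers whose
    set bits mark the bytes (within the starting word) or words (within
    the cache line) an access touches; the encoding is an assumption."""
    if load_start_word == store_start_word:
        # Same starting word: byte-granularity overlap decides it.
        return (load_byte_mask & store_byte_mask) != 0
    # Different starting words: word-granularity overlap decides it.
    return (load_word_mask & store_word_mask) != 0
```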
  • Publication number: 20100299483
    Abstract: An apparatus extracts instructions from a stream of undifferentiated instruction bytes in a microprocessor having an instruction set architecture in which the instructions are variable length. Decoders generate an associated start/end mark for each instruction byte of a line from a first queue of entries each storing a line of instruction bytes. A second queue has entries each storing a line received from the first queue along with the associated start/end marks.
    Type: Application
    Filed: October 1, 2009
    Publication date: November 25, 2010
    Applicant: VIA Technologies, Inc.
    Inventor: Thomas C. McDonald
  • Publication number: 20100299479
    Abstract: A method includes, for each memory location in a set of memory locations associated with a thread, setting an indication associated with the memory location to request a signal if data from the memory location is evicted from a cache, and, in response to the signal, reloading the set of memory locations into the cache.
    Type: Application
    Filed: September 17, 2009
    Publication date: November 25, 2010
    Inventors: Mark Buxton, Ernie Brickell, Quinn A. Jacobson, Hong Wang, Baiju Patel
  • Publication number: 20100299480
    Abstract: A method and system of executing stack-based memory reference code. At least some of the illustrated embodiments are methods comprising waking a computer system from a reduced power operational state in which a memory controller loses at least some configuration information, executing memory reference code that utilizes a stack (wherein the memory reference code configures the main memory controller), and passing control of the computer system to an operating system. The time between executing a first instruction after waking the computer system and passing control to the operating system takes less than 200 milliseconds.
    Type: Application
    Filed: August 2, 2010
    Publication date: November 25, 2010
    Inventors: Louis B. Hobson, Mark A. Piwonka
  • Publication number: 20100299558
    Abstract: An apparatus includes a cache memory for storing user data and control information of the apparatus, a nonvolatile memory, and a processor. When a power failure occurs, the processor saves the user data and the control information stored in the cache memory into the nonvolatile memory. When power is restored, the processor restores the data stored in the nonvolatile memory into the cache memory, and erases the data in the nonvolatile memory after restoring it. If another power failure occurs while the data stored in the nonvolatile memory is being erased, the processor erases any control information still remaining in the nonvolatile memory and saves into the nonvolatile memory the updated control information stored in the cache memory along with the user data that has already been erased from the nonvolatile memory.
    Type: Application
    Filed: May 12, 2010
    Publication date: November 25, 2010
    Applicant: FUJITSU LIMITED
    Inventors: Mihoko TOJO, Hidefumi Kobayashi, Yusuke Oota, Satoshi Hayashi, Keiichi Umezawa
  • Patent number: 7840752
    Abstract: A database engine is provided with memory management policies to dynamically configure an area of memory, called a buffer pool, in which data pages are held during processing. The data pages are also buffered as an I/O (input/output) stream when read from and written to a persistent storage medium, such as a hard disk, through use of a system file cache managed by the computer's operating system. The memory management policies cap the amount of memory used within the buffer pool to minimize the number of data pages that are double-buffered (i.e., held in both the buffer pool and the system file cache). In addition, trimming data pages from the buffer pool after the database engine completes all pending operations and requests frees additional memory and further minimizes the number of processes associated with the database.
    Type: Grant
    Filed: October 30, 2006
    Date of Patent: November 23, 2010
    Assignee: Microsoft Corporation
    Inventors: Norbert Hu, Sethu M. Kalavakur, Anthony F. Voellm
  • Publication number: 20100293325
    Abstract: A system comprises a plurality of modules, each module comprising a plurality of integrated circuit devices coupled to a module bus and a channel interface that communicates with a memory controller. At least a first module has a portion of its total module address space composed of first-type memory cells having a first maximum access speed, and at least a second module has a portion of its total module address space composed of second-type memory cells having a second maximum access speed slower than the first.
    Type: Application
    Filed: June 18, 2010
    Publication date: November 18, 2010
    Applicant: Cypress Semiconductor Corporation
    Inventor: Dinesh Maheshwari
  • Publication number: 20100293331
    Abstract: A storage system, which is coupled to a computer, includes a storage device, a controller, a plurality of cache memory units, and a connecting unit. Each of the plurality of cache memory units includes: a cache memory for storing data; an auxiliary storage device for holding the content of data even after power is shut down; and a cache controller for controlling the input/output of data to/from the cache memory and the auxiliary storage device. The cache controller stores data from the cache memory, divided into a plurality of parts, into a plurality of the auxiliary storage devices included in the plurality of cache memory units.
    Type: Application
    Filed: October 8, 2008
    Publication date: November 18, 2010
    Inventors: Masanori Fujii, Tsukasa Nishimura, Sumihiro Miura, Yoshinori Okubo
  • Publication number: 20100293312
    Abstract: Described embodiments provide a system having a plurality of processor cores and common memory in direct communication with the cores. A source processing core communicates with a task destination core by generating a task message for the task destination core. The task source core transmits the task message directly to a receiving processing core adjacent to the task source core. If the receiving processing core is not the task destination core, the receiving processing core passes the task message unchanged to a processing core adjacent the receiving processing core. If the receiving processing core is the task destination core, the task destination core processes the message.
    Type: Application
    Filed: May 18, 2010
    Publication date: November 18, 2010
    Inventors: David P. Sonnier, William G. Burroughs, Narender R. Vangati, Deepak Mital, Robert J. Munoz
  • Publication number: 20100293330
    Abstract: One or more transition images are displayed during a transition period between a display of slides within a presentation. The displayed transition images include images of different slides that are contained within the presentation. The transition images provide the audience with a glimpse of slides that are displayed within the presentation. For example, the transition images may include images from previous and future slides that are contained within the presentation. The transition images may also be cached in order to more efficiently display the transition images during the transition period.
    Type: Application
    Filed: May 14, 2009
    Publication date: November 18, 2010
    Applicant: Microsoft Corporation
    Inventors: Christopher Michael Maloney, Daly Chang, Esther Ho, Jason Zhao, Mark Pearson
  • Publication number: 20100293336
    Abstract: A system and method for increasing cache size is provided. Generally, the system contains a storage device having storage blocks therein and a memory. A processor is also provided, which is configured by the memory to perform the steps of: categorizing storage blocks within the storage device as within a first category if the storage blocks are available to the system for storing data when needed; categorizing storage blocks as within a second category if the storage blocks contain application data; and categorizing storage blocks as within a third category if the storage blocks are storing cached data and are available for storing application data when no first-category storage blocks are available to the system.
    Type: Application
    Filed: May 18, 2009
    Publication date: November 18, 2010
    Inventors: Derry Shribman, Ofer Vilenski
  • Publication number: 20100293340
    Abstract: A wake-and-go mechanism is provided for a data processing system. The wake-and-go mechanism is configured to issue a look-ahead load command on a system bus to read a data value from a target address and perform a comparison operation to determine whether the data value at the target address indicates that an event for which a thread is waiting has occurred. In response to the comparison resulting in a determination that the event has not occurred, the wake-and-go engine populates a wake-and-go storage array with the target address and snoops the target address on the system bus without data exclusivity. In response to the comparison resulting in a determination that the event has occurred, the wake-and-go engine issues a load command on the system bus to read the data value from the target address with data exclusivity.
    Type: Application
    Filed: February 1, 2008
    Publication date: November 18, 2010
    Inventors: Ravi K. Arimilli, Satya P. Sharma, Randal C. Swanberg
  • Publication number: 20100293345
    Abstract: Described embodiments provide a memory system including a plurality of addressable memory arrays. Data in the arrays is accessed by receiving a logical address of data in the addressable memory array and computing a hash value based on at least a part of the logical address. One of the addressable memory arrays is selected based on the hash value. Data in the selected addressable memory array is accessed using a physical address based on at least part of the logical address not used to compute the hash value. The hash value is generated by a hash function to provide essentially random selection of each of the addressable memory arrays.
    Type: Application
    Filed: May 18, 2010
    Publication date: November 18, 2010
    Inventors: David P. Sonnier, Michael R. Betker
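The addressing scheme in 20100293345 splits the logical address into a hashed part that selects an array and a remainder that becomes the physical address within it. A minimal Python sketch; the bit-field split and the XOR-fold standing in for the "essentially random" hash are assumptions.

```python
def select_array(logical_address: int, num_arrays: int):
    """Sketch of hash-based array selection: part of the logical
    address feeds a hash that picks the array, and the part not used
    by the hash becomes the physical address. The bit-field split and
    hash choice are illustrative assumptions."""
    hashed_part = logical_address & 0xFFFF   # low 16 bits feed the hash
    remainder = logical_address >> 16        # bits not used by the hash
    # A simple XOR-fold stands in for the essentially random hash.
    h = hashed_part ^ (hashed_part >> 8) ^ (hashed_part >> 4)
    array_index = h % num_arrays             # which memory array to use
    physical_address = remainder             # address within that array
    return array_index, physical_address
```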
  • Publication number: 20100287327
    Abstract: A computing system is provided. A flash memory device includes at least one mapping block, at least one modification block, and at least one cache block. A processor is configured to: receive a write command with a write logical address and predetermined data; in response to the page of the mapping block corresponding to the write logical address having already been used, load the content of a cache page from the cache block corresponding to the modification block into a random access memory device; read the content of the cache page stored in the random access memory device in order, to obtain location information of an empty page of the modification block; and write the predetermined data to the empty page according to the location information. Each cache page includes data fields that store, in order, location information for the data that has been written to the pages of the modification block.
    Type: Application
    Filed: February 15, 2010
    Publication date: November 11, 2010
    Applicant: VIA TELECOM, INC.
    Inventors: Rong Li, Huaqiao Wang, Yuefeng Jin
  • Publication number: 20100281274
    Abstract: The various embodiments of the invention provide a method for executing code securely in a general-purpose computer. According to one embodiment, code is downloaded into a cache memory of the computer in which it is to be executed and is encrypted in the cache memory. The encrypted code in the cache memory is then decrypted using a decryption algorithm, the decrypted code is executed in the cache to generate a result, and the decrypted code is destroyed in the cache memory after the result is forwarded to a user.
    Type: Application
    Filed: May 1, 2009
    Publication date: November 4, 2010
    Inventors: Bhaktha Ram Keshavachar, Navin Govind
  • Publication number: 20100281216
    Abstract: A method implements a cache-policy switching module in a storage system. The storage system includes a cache memory to cache storage data. The cache memory uses a first cache configuration. The cache-policy switching module emulates the caching of the storage data with a plurality of cache configurations. Upon a determination that one of the plurality of cache configurations performs better than the first cache configuration, the cache-policy switching module automatically applies the better performing cache configuration to the cache memory for caching the storage data.
    Type: Application
    Filed: April 30, 2009
    Publication date: November 4, 2010
    Applicant: NetApp, Inc.
    Inventors: Naresh Patel, Jeffrey S. Kimmel, Garth Goodson
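The cache-policy switching module of 20100281216 amounts to emulating candidate configurations against the same workload and adopting one only when it outperforms the live configuration. A hedged Python sketch, with simulate(config, trace) as an assumed emulation hook:

```python
def pick_better_policy(trace, live_config, candidate_configs, simulate):
    """Sketch of policy switching: emulate each candidate cache
    configuration against the same access trace and adopt one only if
    it beats the live configuration. simulate(config, trace) -> hit
    count is an assumed hook, not the patented emulator."""
    best_config = live_config
    best_hits = simulate(live_config, trace)
    for config in candidate_configs:
        hits = simulate(config, trace)   # shadow emulation, no real I/O
        if hits > best_hits:
            best_config, best_hits = config, hits
    return best_config                   # applied to the real cache
```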
  • Publication number: 20100281217
    Abstract: The present invention is directed towards a method and system in which a cache modifies responses from a server that do not identify a dynamically generated object as cacheable, so that the response identifies the object to the client as cacheable. In some embodiments, such as an embodiment handling HTTP requests and responses for objects, the techniques of the present invention insert an entity tag, or "etag", into the response to provide cache control for objects provided without entity tags and/or cache control information from an originating server. This technique increases cache hit rates by inserting information, such as entity tag and cache control information for an object, in a response to a client, enabling the cache to check for a hit on a subsequent request.
    Type: Application
    Filed: July 16, 2010
    Publication date: November 4, 2010
    Inventors: Prabakar Sundarrajan, Prakash Khemani, Kailash Kailash, Ajay Soni, Rajiv Sinha, Saravana Annamalaisami, Bharath Bhushan K R, Anil Kumar
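The etag-insertion technique in 20100281217 can be pictured as a response rewrite at the intermediary cache. The sketch below derives a validator from the generated object when the origin supplied none; the header values and hash choice are illustrative assumptions, not the patent's method.

```python
import hashlib

def add_cacheability(response_headers: dict, body: bytes) -> dict:
    """Sketch of the response rewrite: when the origin sends no
    validator, the cache derives an entity tag from the generated
    object so later requests can be answered with a conditional
    check. Hash and header values are assumptions."""
    if "ETag" not in response_headers:
        response_headers["ETag"] = '"%s"' % hashlib.md5(body).hexdigest()
    # Supply cache-control information only if the origin omitted it.
    response_headers.setdefault("Cache-Control", "private, max-age=60")
    return response_headers
```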
  • Publication number: 20100274969
    Abstract: Methods and apparatuses are provided for active-active support of virtual storage management in a storage area network ("SAN"). When a storage manager (that manages virtual storage volumes) of the SAN receives data to be written to a virtual storage volume from a computer server, the storage manager determines whether the write request may result in updating a mapping of the virtual storage volume to a storage system. When the write request does not involve updating the mapping, which is the common case, the storage manager simply writes the data to the storage system based on the existing mapping. Otherwise, the storage manager sends an updating request to another storage manager for updating the mapping of the virtual storage volume to a storage volume. Subsequently, the storage manager writes the data to the corresponding storage system based on the mapping that has been updated by the other storage manager.
    Type: Application
    Filed: April 23, 2009
    Publication date: October 28, 2010
    Applicant: LSI CORPORATION
    Inventors: Vladimir Popovski, Ishai Nadler, Nelson Nahum
  • Publication number: 20100275049
    Abstract: Embodiments that dynamically conserve power in non-uniform cache access (NUCA) caches are contemplated. Various embodiments comprise a computing device having one or more processors coupled with one or more NUCA cache elements. The NUCA cache elements may comprise one or more banks of cache memory, wherein ways of the cache are vertically distributed across multiple banks. To conserve power, the computing devices generally turn off groups of banks in a sequential manner according to different power states, based on the access latencies of the banks. The computing devices may first turn off the groups having the greatest access latencies, and may conserve additional power by turning off more groups of banks according to different power states, continuing to turn off groups with larger access latencies before groups with smaller access latencies.
    Type: Application
    Filed: April 24, 2009
    Publication date: October 28, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Ganesh Balakrishnan, Anil Krishna
  • Publication number: 20100274970
    Abstract: A recursive DNS nameserver system and related domain name resolution techniques are disclosed. The DNS nameservers utilize a local cache of previously retrieved domain name resolutions to avoid recursive resolution processes and the attendant DNS requests. If a matching record is found with a valid (not expired) TTL field, the nameserver returns the cached domain name information to the client. If the TTL for the record in the cache has expired and the nameserver is unable to resolve the domain name information using DNS requests to authoritative servers, the recursive DNS nameserver returns to the cache and accesses the resource record having the expired TTL. The nameserver then generates a DNS response to the client device that includes the domain name information from the cached resource record. In various embodiments, subscriber information is utilized to resolve the requested domain name information in accordance with user-defined preferences.
    Type: Application
    Filed: March 12, 2010
    Publication date: October 28, 2010
    Applicant: OPENDNS, INC.
    Inventors: Noah Treuhaft, David Ulevitch, Michael Damm
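The expired-TTL fallback in 20100274970 is a serve-stale resolution strategy: answer from a fresh record if one exists, recurse otherwise, and fall back to the expired record only when recursion fails. A minimal Python sketch with an assumed cache layout and lookup hook:

```python
import time

def resolve(name, cache, recursive_lookup):
    """Sketch of serve-stale resolution: a fresh cached record is
    returned directly; otherwise recursion is attempted, and on
    failure the expired record is served. The (data, expires_at)
    record shape and the lookup hook are assumptions."""
    record = cache.get(name)                 # (data, expires_at) or None
    if record and record[1] > time.time():
        return record[0]                     # fresh hit, no recursion
    try:
        data, ttl = recursive_lookup(name)   # ask authoritative servers
        cache[name] = (data, time.time() + ttl)
        return data
    except Exception:                        # recursion failed: fall back
        if record:
            return record[0]                 # answer from the expired record
        raise
```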
  • Publication number: 20100275044
    Abstract: Embodiments that distribute replacement-policy bits and operate those bits in cache memories, such as non-uniform cache access (NUCA) caches, are contemplated. An embodiment may comprise a computing device, such as a computer having multiple processors or multiple cores, which has cache memory elements coupled with the multiple processors or cores. The cache memory device may track usage of cache lines by using a number of bits. For example, a controller of the cache memory may manipulate bits as part of a pseudo least recently used (LRU) system. Some of the bits may be in a centralized area of the cache, while other bits of the pseudo-LRU system may be distributed across the cache. Distributing the bits across the cache may enable the system to conserve additional power by turning off the distributed bits.
    Type: Application
    Filed: April 24, 2009
    Publication date: October 28, 2010
    Applicant: International Business Machines Corporation
    Inventors: Ganesh Balakrishnan, Anil Krishna
  • Publication number: 20100272335
    Abstract: Methods, systems and apparatuses for processing data associated with nuclear medical imaging techniques are provided. Data are ordered in LUTs and memory structures, and articles of manufacture are provided for causing computers to carry out aspects of the invention. Data elements are ordered into a plurality of ordered data groups according to a spatial index order, and fetched and processed in that order. The data elements include sensitivity matrix elements, PET annihilation event data, and system and image matrix elements, with the data grouped in orders corresponding to their processing. In one aspect, geometric symmetry of a PET scanner FOV is used in ordering the data and processing. In one aspect, a system matrix LUT comprises a total number of system matrix elements equal to the total number of image matrix elements divided by the total number of possible third-index values.
    Type: Application
    Filed: February 5, 2007
    Publication date: October 28, 2010
    Applicant: KONINKLIJKE PHILIPS ELECTRONICS N. V.
    Inventors: Zhiqiang Hu, Wenli Wang, Daniel Gagnon
  • Publication number: 20100268887
    Abstract: An information handling system (IHS) includes a processor with a cache memory system. The processor includes a processor core with an L1 cache memory that couples to an L2 cache memory. The processor includes an arbitration mechanism that controls load and store requests to the L2 cache memory. The arbitration mechanism includes control logic that enables a load request to interrupt a store request that the L2 cache memory is currently servicing. The L2 cache memory includes dual data banks so that one bank may perform a load operation while the other bank performs a store operation. The cache system provides dual dispatch points into the data flow to the dual cache banks of the L2 cache memory.
    Type: Application
    Filed: April 15, 2009
    Publication date: October 21, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Sanjeev Ghai, Guy Lynn Guthrie, Hugh Shen, William John Starke
  • Publication number: 20100269102
    Abstract: Systems, methods, and apparatuses for decomposing a sequential program into multiple threads, executing these threads, and reconstructing the sequential execution of the threads are described. A plurality of data cache units (DCUs) store locally retired instructions of speculatively executed threads. A merging level cache (MLC) merges data from the lines of the DCUs. An inter-core memory coherency module (ICMC) globally retires instructions of the speculatively executed threads in the MLC.
    Type: Application
    Filed: November 24, 2009
    Publication date: October 21, 2010
    Inventors: Fernando Latorre, Josep M. Codina, Enric Gibert Codina, Pedro Lopez, Carlos Madriles, Alejandro Martinez Vincente, Raul Martinez, Antonio Gonzalez
  • Publication number: 20100268987
    Abstract: Embodiments of circuits for processors with multiple redundancy techniques for mitigating radiation errors are described herein. Other embodiments and related methods and examples are also described herein.
    Type: Application
    Filed: November 25, 2009
    Publication date: October 21, 2010
    Applicant: Arizona Board of Regents, for and on behalf of Arizona State University
    Inventors: Lawrence T. Clark, Dan W. Patterson
  • Publication number: 20100268888
    Abstract: Disclosed are a method, a system, and a program product for processing a data stream by accessing one or more hardware registers of a processor. In one or more embodiments, a first program instruction or subroutine can associate a hardware register of the processor with a data stream. With this association, the hardware register can be used as a stream head which can be used by multiple program instructions to access the data stream. In one or more embodiments, data from the data stream can be fetched automatically as needed and with one or more patterns which may include one or more start positions, one or more lengths, one or more strides, etc. to allow the cache to be populated with sufficient amounts of data to reduce memory latency and/or external memory bandwidth when executing an application which accesses the data stream through the one or more registers.
    Type: Application
    Filed: April 16, 2009
    Publication date: October 21, 2010
    Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Ahmed M. Gheith
  • Publication number: 20100262779
    Abstract: Technologies are generally described herein for supporting program and data annotation for hardware customization and energy optimization. A code block to be annotated may be examined and a hardware customization may be determined to support a specified quality of service level for executing the code block with reduced energy expenditure. Annotations may be determined as associated with the determined hardware customization. An annotation may be provided to indicate using the hardware customization while executing the code block. Examining the code block may include one or more of performing a symbolic analysis, performing an empirical observation of an execution of the code block, performing a statistical analysis, or any combination thereof. A data block to be annotated may also be examined. One or more additional annotations to be associated with the data block may be determined.
    Type: Application
    Filed: April 14, 2009
    Publication date: October 14, 2010
    Inventor: Miodrag Potkonjak
  • Publication number: 20100262780
    Abstract: Aspects relate to apparatus and methods for rendering a page, such as a web page, on a computing device. The apparatus and methods include receiving a request for a requested instance of a page and determining whether the requested instance corresponds to a document object model (DOM) for the page stored in a memory. Further, the apparatus and methods include retrieving a dynamic portion of the DOM corresponding to the requested instance if the requested instance of the page corresponds to the stored DOM. The dynamic portion may be unique to the requested instance of the page. Moreover, the apparatus and methods include storing the dynamic portion of the DOM corresponding to the requested instance of the page in a relationship with the static portion of the DOM.
    Type: Application
    Filed: March 24, 2010
    Publication date: October 14, 2010
    Inventors: Michael P. MAHAN, Chetan S. DHILLON, Wendell RUOTSI, Vikram MANDYAM
  • Publication number: 20100262778
    Abstract: In response to a data request, a victim cache line is selected for castout from a first lower level cache, and a target lower level cache of one of a plurality of processing units is selected. A determination is made whether the selected target lower level cache has provided more than a threshold number of retry responses to lateral castout (LCO) commands of the first lower level cache; if so, a different target lower level cache is selected. The first processing unit thereafter issues an LCO command on the interconnect fabric. The LCO command identifies the victim cache line to be cast out and indicates that the target lower level cache is its intended destination. In response to a successful coherence response to the LCO command, the victim cache line is removed from the first lower level cache and held in the second lower level cache.
    Type: Application
    Filed: April 9, 2009
    Publication date: October 14, 2010
    Applicant: International Business Machines Corporation
    Inventors: Robert A. Cargnoni, Guy L. Guthrie, Harmony L. Helterhoff, William J. Starke, Jeffrey A. Stuecheli, Phillip G. Williams
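The retry-threshold filter in 20100262778 simply steers a lateral castout away from a peer cache that has been rejecting such commands. A Python sketch under assumed data shapes:

```python
def choose_lco_target(candidates, retry_counts, threshold):
    """Sketch of the target filter: a peer cache that has retried more
    than the threshold number of our lateral castouts is passed over
    in favor of a different target. Data shapes are assumptions."""
    for cache_id in candidates:
        if retry_counts.get(cache_id, 0) <= threshold:
            return cache_id              # acceptable castout destination
    return None                          # no eligible peer; fall back
```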
  • Publication number: 20100262777
    Abstract: A storage apparatus 10 provides, in a dynamic provisioning system, a virtual logical device (DP-LDEV 203) configured from a real logical device (N-LDEV 201). In the storage apparatus 10, the storage area of a real logical device is managed by being divided into unit cache areas (SLCBs), which are predetermined management units, and the storage area of a virtual logical device is managed by being divided into virtual unit areas (PSCBs), also predetermined management units. Multiple virtual unit areas holding the same data are made to correspond to the same unit cache area, and thereby the data stored in a storage device 15 is managed. The correspondence is established at, for example, the timing of destaging data from a cache memory 13.
    Type: Application
    Filed: February 4, 2009
    Publication date: October 14, 2010
    Inventors: Takashi Kaga, Yoshihiro Asaka
  • Publication number: 20100262785
    Abstract: Systems and methods which provide an extensible caching framework are disclosed. These systems and methods may provide a caching framework which can evaluate individual parameters of a request for a particular piece of content. Modules capable of evaluating individual parameters of an incoming request may be added and removed from this framework. When a request for content is received, parameters of the request can be evaluated by the framework and a cache searched for responsive content based upon this evaluation. If responsive content is not found in the cache, responsive content can be generated and stored in the cache along with associated metadata and a signature formed by the caching framework. This signature may aid in locating this content when a request for similar content is next received.
    Type: Application
    Filed: June 21, 2010
    Publication date: October 14, 2010
    Inventors: N. Isaac Rajkumar, Puhong You, David Dean Caldwell, Brett J. Larsen, Jamshid Afshar, Conleth O'Connell
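The extensible evaluation step in 20100262785 can be read as a pipeline of pluggable modules, each reducing one request parameter, whose combined output forms the cache signature. A hedged Python sketch; the evaluator interface is an assumption.

```python
import hashlib

def cache_signature(request_params: dict, evaluators: list) -> str:
    """Sketch of the pluggable evaluation step: each registered module
    reduces one request parameter to a normalized value, and the
    combined values form the signature used to look content up in the
    cache. The evaluator interface is an assumption."""
    parts = []
    for evaluator in evaluators:         # modules can be added/removed
        parts.append(evaluator(request_params))
    return hashlib.sha1("|".join(parts).encode()).hexdigest()

# Example modules: one normalizes the path, one buckets the language.
evaluators = [
    lambda p: p.get("path", "/").lower(),
    lambda p: p.get("lang", "en").split("-")[0],
]
```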
  • Publication number: 20100257307
    Abstract: A data management method for a flash memory storage system having a cache memory is provided. The data management method includes writing data into a flash memory when a write command is executed, and determining the current state of all the write data temporarily stored in the cache memory. If the state indicates that the time needed to write all of the data temporarily stored in the cache memory into the flash memory may exceed an upper-limit processing time, a portion of that data is written into the flash memory first. Accordingly, the data management method may effectively avoid a delay caused by a flush command issued from the host for flushing the cache memory.
    Type: Application
    Filed: June 8, 2009
    Publication date: October 7, 2010
    Applicant: PHISON ELECTRONICS CORP
    Inventors: CHIEN-HUA CHU, Chih-Kang Yeh
  • Publication number: 20100257315
    Abstract: A system and method are disclosed for providing efficient data storage. A plurality of data segments is received in a data stream. The system determines whether a data segment has been stored previously in a low latency memory. In the event that the data segment is determined to have been stored previously, an identifier for the previously stored data segment is returned.
    Type: Application
    Filed: June 21, 2010
    Publication date: October 7, 2010
    Applicant: EMC CORPORATION
    Inventors: Ming Benjamin Zhu, Kai Li, R. Hugo Patterson
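The duplicate-segment check in 20100257315 is the core of inline deduplication: fingerprint the segment, consult a low-latency index, and return the existing identifier on a hit. A minimal Python sketch with an assumed index layout and persistence hook:

```python
import hashlib

def store_segment(segment: bytes, index: dict, append_fn):
    """Sketch of the duplicate check: a fingerprint lookup in
    low-latency memory decides whether the segment was stored
    previously; if so, only its identifier is returned. append_fn
    and the fingerprint choice are assumptions."""
    fingerprint = hashlib.sha1(segment).hexdigest()
    if fingerprint in index:             # seen before: no new write
        return index[fingerprint]        # identifier of the stored copy
    segment_id = append_fn(segment)      # persist the new segment
    index[fingerprint] = segment_id
    return segment_id
```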
  • Patent number: 7809883
    Abstract: Embodiments of the invention may improve read operations for fully cached workloads on storage systems with limited processing or CPU-cache resources. Some embodiments employ an indicator, such as a counter, to indicate when the use of readahead analysis steps, such as resource-intensive predictive processing, is undesirable. In these embodiments, the counter is incremented for each buffer cache read that is successfully performed without the need for a disk input/output operation. When the counter variable exceeds a threshold, such as a maximum readahead size, the system advantageously foregoes the predictive processing steps of the readahead analysis phase and further foregoes the readahead execution phase. The foregoing results in a net performance benefit for the system, based on a reduced likelihood of needing an input/output operation and, further, a reduced likelihood of needing predictive processing relating to readahead analysis and/or execution.
    Type: Grant
    Filed: October 16, 2007
    Date of Patent: October 5, 2010
    Assignee: NetApp, Inc.
    Inventors: Robert Fair, Grace Ho
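The counter gate in 7809883 skips readahead work while reads keep being satisfied from the buffer cache. A Python sketch; the state dictionary, hooks, reset-on-miss behavior, and threshold default are assumptions.

```python
def read_block(block_id, buffer_cache, disk_read, readahead, state,
               max_readahead=64):
    """Sketch of the counter gate: each buffer-cache read served
    without disk I/O increments a counter, and while the counter
    exceeds the threshold (e.g. the maximum readahead size) the
    resource-intensive readahead analysis and execution phases are
    skipped entirely. Hooks and state layout are assumptions."""
    if block_id in buffer_cache:
        state["cached_reads"] += 1       # satisfied without disk I/O
    else:
        state["cached_reads"] = 0        # workload no longer fully cached
        buffer_cache[block_id] = disk_read(block_id)
    if state["cached_reads"] <= max_readahead:
        readahead(block_id)              # predictive analysis worthwhile
    return buffer_cache[block_id]
```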
  • Publication number: 20100250827
    Abstract: An apparatus for interfacing between a CPU and Flash memory units, enabling optimized sequential access to the Flash memory units. The apparatus interfaces between the address, control and data buses of the CPU and the address, control and data lines of the Flash memory units. The apparatus anticipates the subsequent memory accesses, and interleaves them between the Flash memory units. An optimization of the read access is therefore provided, thereby improving Flash memory throughput and reducing the latency. Specifically, the apparatus enables improved Flash access in embedded CPUs incorporated in a System-On-Chip (SOC) device.
    Type: Application
    Filed: March 26, 2009
    Publication date: September 30, 2010
    Applicant: Scaleo Chip
    Inventors: Pascal Jullien, Cedric Chillie