With A Network Or Matrix Configuration (EPO) Patents (Class 711/E12.025)
-
Patent number: 11822821
Abstract: Techniques are provided for implementing write ordering for persistent memory. A set of actions is identified for commitment to persistent memory of a node for executing an operation upon the persistent memory. An episode is created to comprise a first subset of actions of the set of actions that can be committed to the persistent memory in any order with respect to one another, such that a consistent state of the persistent memory can be reconstructed in the event of a crash of the node during execution of the operation. The first subset of actions within the episode is committed to the persistent memory, and further execution of the operation is blocked until the episode completes.
Type: Grant
Filed: November 29, 2021
Date of Patent: November 21, 2023
Assignee: NetApp, Inc.
Inventors: Ram Kesavan, Matthew Fontaine Curtis-Maury, Abdul Basit, Vinay Devadas, Ananthan Subramanian
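The episode mechanism lends itself to a compact illustration. Below is a minimal Python sketch, assuming a callable-based model in which `fence` stands in for whatever persistence barrier the hardware provides; all names are illustrative, not NetApp's implementation:

```python
# Sketch of episode-based write ordering (hypothetical API, not NetApp's code).
# Actions inside one episode may persist in any order; a fence separates
# episodes, so a crash can only expose a prefix of whole episodes.

from typing import Callable, List

class Episode:
    def __init__(self, actions: List[Callable[[], None]]):
        self.actions = actions  # order-independent within the episode

def run_operation(episodes: List[Episode], fence: Callable[[], None]) -> None:
    for ep in episodes:
        for action in ep.actions:   # any order within the episode is safe
            action()
        fence()  # a real fence would block until every action above is durable

# Example: two writes that may land in any order, then a commit record.
log = []
episodes = [
    Episode([lambda: log.append("write A"), lambda: log.append("write B")]),
    Episode([lambda: log.append("commit record")]),
]
run_operation(episodes, fence=lambda: log.append("-- fence --"))
print(log)
```

A crash between fences can then only expose persistent memory with some whole prefix of episodes applied, which is what makes reconstruction of a consistent state possible.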
-
Patent number: 11740788
Abstract: Method and apparatus for performing an operation are described. A method includes choosing at least one primary logical hierarchical data space. The at least one primary logical hierarchical data space may have a plurality of subdivisions. The method may further include determining at least one subdivision of the at least one primary logical hierarchical data space. The method may further include choosing at least one secondary logical hierarchical data space. The at least one secondary logical hierarchical data space may have a plurality of subdivisions. The method may further include determining at least one subdivision of the at least one secondary logical hierarchical data space. The method may further include performing at least one operation corresponding to the at least one subdivision of the at least one primary logical hierarchical data space.
Type: Grant
Filed: June 24, 2022
Date of Patent: August 29, 2023
Assignee: Craxel, Inc.
Inventor: David Enga
-
Patent number: 9985751
Abstract: A node device (1A) receives a second ACK list in communication with an adjacent node (1B) and updates a first ACK list and a first summary vector on the basis of the second ACK list. The first summary vector, which indicates messages stored in a message buffer of the node device (1A), is transmitted to the adjacent node (1B) prior to transmission of the messages in the communication with the adjacent node (1B). The first ACK list indicates ACK messages recognized by the node device (1A). The second ACK list indicates ACK messages recognized by the adjacent node (1B). Each ACK message represents a message that has arrived at an ultimate destination node via a DTN (100). In this way, for example, a copy having the same content as a message that has already arrived at its ultimate destination node can be prevented from being scattered across the Disruption Tolerant Network (DTN).
Type: Grant
Filed: January 27, 2015
Date of Patent: May 29, 2018
Assignee: NEC CORPORATION
Inventors: Masato Kudou, Hisashi Mizumoto
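A small Python sketch of that update rule, assuming a set-based model of the ACK list and summary vector (the data model is mine, not NEC's):

```python
# Sketch of the ACK-list / summary-vector update (hypothetical data model).
# When a peer's ACK list arrives, merge it, drop acknowledged messages from
# the buffer, and rebuild the summary vector advertised before any transfer.

class DtnNode:
    def __init__(self):
        self.buffer = {}        # message_id -> payload
        self.ack_list = set()   # ids known to have reached their destination

    def summary_vector(self):
        return set(self.buffer)  # advertises what this node still carries

    def merge_peer_acks(self, peer_ack_list):
        self.ack_list |= set(peer_ack_list)
        for mid in self.ack_list & set(self.buffer):
            del self.buffer[mid]   # never re-spread delivered messages

node = DtnNode()
node.buffer = {"m1": b"...", "m2": b"..."}
node.merge_peer_acks({"m1"})
assert node.summary_vector() == {"m2"}
```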
-
Patent number: 9298847
Abstract: Systems and methods for configuring software systems help guarantee correctness while providing operational flexibility and minimal time to recover. A configuration system uses an untyped syntax tree with transactional semantics layered under a set of late-bound semantic constraints. Configuration settings are defined in property files, which are parsed into a syntax tree during startup and which can be safely reloaded. Semantic constraints are dynamically specified by software components. The system maintains a transactional history to enable rollback and inspection.
Type: Grant
Filed: December 20, 2013
Date of Patent: March 29, 2016
Assignee: EMC Corporation
Inventor: Henning K. Rohde
-
Patent number: 8949533
Abstract: The present invention provides a method and a caching node entity for ensuring that at least a predetermined number of copies of a content object are kept stored in a network comprising a plurality of cache nodes for storing copies of content objects. The present invention makes use of ranking state values, deletable or non-deletable, which, when assigned to copies of content objects, indicate whether a copy is either deletable or non-deletable. At least one copy of each content object is assigned the value non-deletable. The value for a copy of a content object changes from deletable to non-deletable in one cache node of the network, said copy being a candidate for the value non-deletable, if a certain condition is fulfilled.
Type: Grant
Filed: February 5, 2010
Date of Patent: February 3, 2015
Assignee: Telefonaktiebolaget L M Ericsson (publ)
Inventors: Hareesh Puthalath, Stefan Hellkvist, Lars-Örjan Kling
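The promotion rule can be sketched in a few lines of Python; the condition used here (fewer than a minimum number of non-deletable copies network-wide) is an assumed example, since the abstract leaves the condition open:

```python
# Sketch of the deletable / non-deletable ranking (hypothetical condition:
# promote a local deletable copy when fewer than `min_copies` copies of the
# object are marked non-deletable network-wide).

DELETABLE, NON_DELETABLE = "deletable", "non-deletable"

def maybe_promote(local_state: str, non_deletable_count: int,
                  min_copies: int) -> str:
    if local_state == DELETABLE and non_deletable_count < min_copies:
        return NON_DELETABLE   # this copy becomes a protected replica
    return local_state

print(maybe_promote(DELETABLE, non_deletable_count=0, min_copies=1))
```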
-
Patent number: 8868818
Abstract: A method for associating a physical address with a logical communication address for an Ethernet-connected media drive (22A1-22A4, 22B1-22B4) in a media library assembly (10) includes the steps of providing a first media drive (22A1) having a first physical address; sending a request for a first logical communication address from the first media drive (22A1) to a library controller (16) via an Ethernet switch (18), the first physical address being embedded in the request; and recording the first physical address with the Ethernet switch (18). The method can include associating the first physical address with one of a plurality of Ethernet switch ports (26A1-26A4, 26B1-26B4) using a mapping server (25) of the library controller (16). The method can include searching a routing table (228) of the Ethernet switch (18) with the library controller (16) to determine the first physical address.
Type: Grant
Filed: March 3, 2009
Date of Patent: October 21, 2014
Assignee: Quantum Corporation
Inventors: Daniel J. Byers, Don Doerner, Michael Jones, Sanam Mittal, Jeff Szmyd
-
Patent number: 8856445
Abstract: Methods and apparatus are provided for performing byte caching using a chunk size based on the object type of the object being cached. Byte caching is performed by receiving at least one data packet from at least one network node; extracting at least one data object from the at least one data packet; identifying an object type associated with the at least one data packet; determining a chunk size associated with the object type; and storing at least a portion of the at least one data packet in a byte cache based on the determined chunk size. The chunk size of the object type can be determined, for example, by evaluating one or more additional criteria, such as network conditions and object size. The object type may be, for example, an image object type; an audio object type; a video object type; and a text object type.
Type: Grant
Filed: May 24, 2012
Date of Patent: October 7, 2014
Assignee: International Business Machines Corporation
Inventors: Dakshi Agrawal, Franck Le, Vasileios Pappas, Mudhakar Srivatsa, Dinesh C. Verma
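A Python sketch of type-dependent chunking, with illustrative chunk sizes (the actual sizes, and criteria such as network conditions, are not specified in the abstract):

```python
# Sketch of object-type-aware byte caching (chunk sizes are illustrative,
# not taken from the patent). Chunks are stored content-addressed so that
# identical byte sequences are cached once.

import hashlib

CHUNK_SIZE = {"text": 256, "image": 4096, "audio": 8192, "video": 16384}

def byte_cache_store(cache: dict, payload: bytes, object_type: str) -> list:
    size = CHUNK_SIZE.get(object_type, 1024)
    digests = []
    for i in range(0, len(payload), size):
        chunk = payload[i:i + size]
        key = hashlib.sha256(chunk).hexdigest()
        cache.setdefault(key, chunk)   # dedup: identical chunks share a slot
        digests.append(key)
    return digests                      # enough to reassemble the object

cache = {}
refs = byte_cache_store(cache, b"x" * 10000, "image")
print(len(refs), "chunks,", len(cache), "unique")
```

Larger chunks suit media objects whose bytes rarely repeat at fine granularity, while small chunks give text a better chance of partial matches; that trade-off is the point of making the chunk size type-dependent.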
-
Patent number: 8832375
Abstract: One or more embodiments perform byte caching. At least one data packet is received from at least one network node. At least one data object is extracted from the at least one data packet. An object type associated with the at least one data object is identified. The at least one data object is divided into a plurality of byte sequences based on the object type that is associated with the at least one data object. At least one byte sequence in the plurality of byte sequences is stored into a byte cache.
Type: Grant
Filed: May 24, 2012
Date of Patent: September 9, 2014
Assignee: International Business Machines Corporation
Inventors: Dakshi Agrawal, Thai V. Le, Vasileios Pappas, Mudhakar Srivatsa, Dinesh Verma
-
Patent number: 8806134
Abstract: Methods of protecting cache data are provided. For example, various methods are described that assist in handling dirty write data cached in memory by duplication into other locations to protect against data loss. One method includes caching a data item from a data source in a first cache device. The data item cached in the first cache device is designated with a first designation. In response to the data item being modified by a data consumer, the designation of the data item in the first cache device is re-assigned from the first designation to a second designation, and the data item with the second designation is copied to a second cache device.
Type: Grant
Filed: April 12, 2011
Date of Patent: August 12, 2014
Assignee: PMC-Sierra US, Inc.
Inventors: Jonathan Flower, Nadesan Narenthiran
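A Python sketch of the two-designation scheme, using hypothetical names `CLEAN` and `DIRTY` for the first and second designations:

```python
# Sketch of designation-based cache protection (names are mine): clean items
# live in one cache; when modified they become "dirty" and are mirrored to a
# second cache device so a single device failure cannot lose the only copy.

CLEAN, DIRTY = "clean", "dirty"

class ProtectedCache:
    def __init__(self):
        self.primary = {}    # key -> (designation, value)
        self.secondary = {}  # key -> value, holds copies of dirty items

    def fill(self, key, value):
        self.primary[key] = (CLEAN, value)   # clean data can be re-fetched

    def modify(self, key, value):
        self.primary[key] = (DIRTY, value)   # re-assign the designation
        self.secondary[key] = value          # duplicate for protection

c = ProtectedCache()
c.fill("blk0", b"old")
c.modify("blk0", b"new")
assert c.secondary["blk0"] == b"new"
```

Clean data needs no duplicate because it can be re-read from the data source; only the dirty copy is irreplaceable, which is why the designation change triggers the copy.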
-
Patent number: 8745329
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for storing data on storage nodes. In one aspect, a method includes receiving a file to be stored across a plurality of storage nodes each including a cache. The file is stored by storing portions of the file, each on a different storage node. A first portion is written to a first storage node's cache until determining that the first storage node's cache is full. A different second storage node is selected in response to determining that the first storage node's cache is full. For each portion of the file, a location of the portion is recorded, the location indicating at least a storage node storing the portion.
Type: Grant
Filed: January 20, 2011
Date of Patent: June 3, 2014
Assignee: Google Inc.
Inventors: Andrew Kadatch, Lawrence E. Greenfield
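A Python sketch of the spillover policy, with a hypothetical `NodeCacheFull` signal standing in for the cache-full determination:

```python
# Sketch of cache-full spillover across storage nodes (hypothetical node API).

class NodeCacheFull(Exception):
    pass

class StorageNode:
    def __init__(self, name, capacity):
        self.name, self.capacity, self.cache = name, capacity, []

    def write(self, portion):
        if len(self.cache) >= self.capacity:
            raise NodeCacheFull()
        self.cache.append(portion)

def store_file(portions, nodes):
    locations, it = [], iter(nodes)
    node = next(it)
    for portion in portions:
        while True:
            try:
                node.write(portion)
                break
            except NodeCacheFull:
                node = next(it)   # move to a different node when this one fills
                # (a real system would handle running out of nodes)
        locations.append((portion, node.name))  # record where each part went
    return locations

nodes = [StorageNode("n1", 2), StorageNode("n2", 8)]
print(store_file(["p0", "p1", "p2"], nodes))  # p2 spills over to n2
```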
-
Patent number: 8521957
Abstract: Disclosed is a computer system that includes a first apparatus, which stores data and metadata in a storage, and multiple units of a second apparatus, each of which stores, in a cache, a copy of the data and metadata held in the first apparatus. The first apparatus acquires, as first access information, the throughput achieved when the units of the second apparatus access the data in the storage; acquires, as second access information, the throughput achieved when the units of the second apparatus access their own data; and selects either a first judgment mode or a second judgment mode in accordance with the first access information and the second access information. This reduces the amount of network traffic for metadata acquisition, thereby increasing the speed of data access.
Type: Grant
Filed: July 11, 2011
Date of Patent: August 27, 2013
Assignee: Hitachi, Ltd.
Inventors: Hitoshi Hayakawa, Daisuke Ito, Yuji Tsushima
-
Patent number: 8458440
Abstract: One embodiment of the present invention sets forth a technique for computing virtual addresses for accessing thread data. Components of the complete virtual address for a thread group are used to determine whether or not a cache line corresponding to the complete virtual address is allocated in the cache. Actual computation of the complete virtual address is deferred until after determining that a cache line corresponding to the complete virtual address is not allocated in the cache.
Type: Grant
Filed: August 17, 2010
Date of Patent: June 4, 2013
Assignee: NVIDIA Corporation
Inventor: Michael C. Shebanow
-
Patent number: 8438363
Abstract: A system, method and computer program product for virtualizing a processor include a virtualization system running on a computer system and controlling memory paging through hardware support for maintaining real paging structures. A Virtual Machine (VM) is running guest code and has at least one set of guest paging structures that correspond to guest physical pages in guest virtualized linear address space. At least some of the guest paging structures are mapped to the real paging structures. A cache of connection structures represents cached paths to the real paging structures. The mapped paging tables are protected using the RW bit. A paging cache is validated according to TLB resets. Non-active paging tree tables can also be protected at the time when they are activated. Tracking of access (A) bits and of dirty (D) bits is implemented, along with synchronization of A and D bits in guest physical pages.
Type: Grant
Filed: April 30, 2012
Date of Patent: May 7, 2013
Assignee: Parallels IP Holdings GmbH
Inventors: Alexey B. Koryakin, Alexander G. Tormasov, Nikolay N. Dobrovolskiy, Serguei M. Beloussov, Andrey A. Omelyanchuk
-
Publication number: 20130073808
Abstract: The present invention provides a method and a caching node entity for ensuring that at least a predetermined number of copies of a content object are kept stored in a network comprising a plurality of cache nodes for storing copies of content objects. The present invention makes use of ranking state values, deletable or non-deletable, which, when assigned to copies of content objects, indicate whether a copy is either deletable or non-deletable. At least one copy of each content object is assigned the value non-deletable. The value for a copy of a content object changes from deletable to non-deletable in one cache node of the network, said copy being a candidate for the value non-deletable, if a certain condition is fulfilled.
Type: Application
Filed: February 5, 2010
Publication date: March 21, 2013
Inventors: Hareesh Puthalath, Stefan Hellkvist, Lars-Örjan Kling
-
Publication number: 20130046935
Abstract: A copy cache feature that can be shared across networked devices is provided. Content added to the copy cache through a "copy", a "like", or similar command on one device may be forwarded to a server providing cloud-based services to a user and/or another device associated with the user, such that the content can be inserted into the same or other files on other computing devices by the user. In addition to seamless movement of copy cache content across devices, the content may be made available in a context-based and/or sortable manner.
Type: Application
Filed: August 18, 2011
Publication date: February 21, 2013
Applicant: MICROSOFT CORPORATION
Inventor: Rajesh Ramanathan
-
Publication number: 20130024622
Abstract: Systems and methods for invalidating and regenerating pages. In one embodiment, a method can include detecting a content modification in a content database including various objects. The method can include causing an invalidation generator to generate an invalidation based on the modification and communicating the invalidation to a dependency manager. A cache manager can be notified that pages in a cache might be invalidated based on the modification via a page invalidation notice. In one embodiment, a method can include receiving a page invalidation notice and sending a page regeneration request to a page generator. The method can include regenerating the cached page. The method can include forwarding the regenerated page to the cache manager, replacing the cached page with the regenerated page. In one embodiment, a method can include invalidating a cached page based on a content modification and regenerating pages which might depend on the modified content.
Type: Application
Filed: September 14, 2012
Publication date: January 24, 2013
Inventors: John H. Martin, Matthew Helgren, Kin-Chung Fung, Mark R. Scheevel
-
Publication number: 20120254543
Abstract: The invention relates to a method and entity that allow for saving of uplink bandwidth in connection with peer-to-peer sharing in a wireless communication system. A caching entity, called a reverse cache, intercepts a point-to-point connection between a mobile network user plane gateway and a wireless user equipment running a peer-to-peer application. The reverse cache caches content loaded to the peer-to-peer application and stores information indicative of the wireless user equipment to which the cached content is loaded. A request on the point-to-point connection for delivery of a first content from the wireless user equipment is intercepted by the reverse cache. When the requested first content is cached in the reverse cache along with information indicating that the requested first content has been loaded to the wireless user equipment, the reverse cache responds by delivering the requested first content, without involving the wireless user equipment.
Type: Application
Filed: April 4, 2011
Publication date: October 4, 2012
Applicant: Telefonaktiebolaget LM Ericsson (publ)
Inventors: Mathias Sintorn, Fredric Kronestedt, Jari Vikberg, Lars Westberg
-
Publication number: 20120226866
Abstract: A computer-implemented method comprises obtaining a cache hit ratio for each of a plurality of virtual machines, and identifying, from among the plurality of virtual machines, a first virtual machine having a cache hit ratio that is less than a threshold ratio. The identified first virtual machine is then migrated from the first physical server having a first cache size to a second physical server having a second cache size that is greater than the first cache size. Optionally, a virtual machine having a cache hit ratio that is less than a threshold ratio is identified on a class-specific basis, such as for L1 cache, L2 cache and L3 cache.
Type: Application
Filed: March 2, 2011
Publication date: September 6, 2012
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: James J. Bozek, Nils Peter Joachim Hansson, Edward S. Suffern, James L. Wooldridge
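The migration rule reduces to a simple scan; a Python sketch follows, with an illustrative 80% threshold and a toy inventory model (both are assumptions, not values from the application):

```python
# Sketch of hit-ratio-driven VM migration (threshold and data model are mine).

def plan_migrations(vms, servers, threshold=0.80):
    """vms: [(name, host, hit_ratio)]; servers: {host: cache_size_kb}."""
    plans = []
    for name, host, ratio in vms:
        if ratio >= threshold:
            continue                     # hit ratio is healthy; leave it alone
        bigger = [h for h, sz in servers.items() if sz > servers[host]]
        if bigger:
            target = max(bigger, key=servers.get)  # largest cache available
            plans.append((name, host, target))
    return plans

servers = {"s1": 8192, "s2": 32768}
vms = [("vm-a", "s1", 0.62), ("vm-b", "s1", 0.95)]
print(plan_migrations(vms, servers))   # only vm-a moves to s2
```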
-
Publication number: 20120191912
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for storing data on storage nodes. In one aspect, a method includes receiving a file to be stored across a plurality of storage nodes each including a cache. The file is stored by storing portions of the file, each on a different storage node. A first portion is written to a first storage node's cache until determining that the first storage node's cache is full. A different second storage node is selected in response to determining that the first storage node's cache is full. For each portion of the file, a location of the portion is recorded, the location indicating at least a storage node storing the portion.
Type: Application
Filed: January 20, 2011
Publication date: July 26, 2012
Applicant: GOOGLE INC.
Inventors: Andrew Kadatch, Lawrence E. Greenfield
-
Patent number: 8171255
Abstract: A system, method and computer program product for virtualizing a processor include a virtualization system running on a computer system and controlling memory paging through hardware support for maintaining real paging structures. A Virtual Machine (VM) is running guest code and has at least one set of guest paging structures that correspond to guest physical pages in guest virtualized linear address space. At least some of the guest paging structures are mapped to the real paging structures. A cache of connection structures represents cached paths to the real paging structures. The mapped paging tables are protected using the RW bit. A paging cache is validated according to TLB resets. Non-active paging tree tables can also be protected at the time when they are activated. Tracking of access (A) bits and of dirty (D) bits is implemented, along with synchronization of A and D bits in guest physical pages.
Type: Grant
Filed: April 20, 2010
Date of Patent: May 1, 2012
Assignee: Parallels IP Holdings GmbH
Inventors: Alexey B. Koryakin, Alexander G. Tormasov, Nikolay N. Dobrovolskiy, Serguei M. Beloussov, Andrey A. Omelyanchuk
-
Publication number: 20120102134
Abstract: A method, system and computer program product for cache sharing among branch proxy servers. A branch proxy server receives a request for accessing a resource at a data center. The branch proxy server creates a cache entry in its cache to store the requested resource if the branch proxy server does not store the requested resource. Upon creating the cache entry, the branch proxy server sends the cache entry to a master proxy server at the data center to transfer ownership of the cache entry if the master proxy server did not store the resource in its cache. When the resource becomes invalid or expires, the master proxy server informs the appropriate branch proxy servers storing the resource to purge the cache entry containing this resource. In this manner, the master proxy server ensures that the cached resource is synchronized across the branch proxy servers storing this resource.
Type: Application
Filed: October 21, 2010
Publication date: April 26, 2012
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Erik J. Burckart, John P. Cammarata, Andrew J. Ivory, Aaron K. Shook
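A Python sketch of the ownership hand-off and purge fan-out; the message names (`register`, `invalidate`, `purge`) are invented for illustration:

```python
# Sketch of branch/master proxy cache sharing (hypothetical interfaces).

class MasterProxy:
    def __init__(self):
        self.owned = {}      # url -> resource (ownership transferred here)
        self.holders = {}    # url -> set of branch proxies caching it

    def register(self, url, resource, branch):
        self.owned.setdefault(url, resource)  # first branch transfers ownership
        self.holders.setdefault(url, set()).add(branch)

    def invalidate(self, url):
        for branch in self.holders.pop(url, set()):
            branch.purge(url)                 # keep branches synchronized
        self.owned.pop(url, None)

class BranchProxy:
    def __init__(self, master):
        self.master, self.cache = master, {}

    def fetch(self, url, origin):
        if url not in self.cache:
            self.cache[url] = origin(url)     # miss: create the cache entry
            self.master.register(url, self.cache[url], self)
        return self.cache[url]

    def purge(self, url):
        self.cache.pop(url, None)

m = MasterProxy()
b = BranchProxy(m)
b.fetch("/report", origin=lambda u: b"v1")
m.invalidate("/report")
assert "/report" not in b.cache
```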
-
Publication number: 20120059994
Abstract: Provided are a method, system, and computer program product for using a migration cache to cache tracks during migration. Indication is made in an extent list of tracks in an extent in a source storage subject to Input/Output (I/O) requests. A migration operation is initiated to migrate the extent from the source storage to a destination storage. In response to initiating the migration operation, a determination is made of a first set of tracks in the extent in the source storage indicated in the extent list. A determination is also made of a second set of tracks in the extent. The tracks in the source storage in the first set are copied to a migration cache, wherein updates to the tracks in the migration cache during the migration operation are applied to the migration cache. The tracks in the second set are copied directly from the source storage to the destination storage without buffering in the migration cache. The tracks in the first set are copied from the migration cache to the destination storage.
Type: Application
Filed: September 8, 2010
Publication date: March 8, 2012
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: David Montgomery, Todd Charles Sorenson
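A Python sketch of the two-set split, modeling tracks as dict entries; the in-flight update handling is simplified to a single batch of updates (a real implementation would interleave them):

```python
# Sketch of migration with a migration cache (hypothetical track model):
# tracks with in-flight I/O go through a migration cache that absorbs
# updates; quiescent tracks are copied straight through.

def migrate_extent(source, destination, active_tracks, updates):
    """updates: {track: new_value} arriving while the migration runs."""
    migration_cache = {t: v for t, v in source.items() if t in active_tracks}
    for t, v in source.items():
        if t not in migration_cache:
            destination[t] = v          # second set: direct copy, no buffering
    for t, v in updates.items():
        if t in migration_cache:
            migration_cache[t] = v      # first set absorbs updates in the cache
    destination.update(migration_cache) # drain the migration cache last

src = {"trk0": b"a", "trk1": b"b"}
dst = {}
migrate_extent(src, dst, active_tracks={"trk1"}, updates={"trk1": b"new"})
assert dst == {"trk0": b"a", "trk1": b"new"}
```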
-
Publication number: 20110314228
Abstract: Maintaining cache coherence in a multi-node, symmetric multiprocessing computer, the computer composed of a plurality of compute nodes, including: broadcasting upon a cache miss by a first compute node a request for a cache line; transmitting from each of the other compute nodes to all other nodes the state of the cache line on that node, including transmitting from any compute node having a correct copy to the first node the correct copy of the cache line; and updating by each node the state of the cache line in each node, in dependence upon one or more of the states of the cache line in all the nodes.
Type: Application
Filed: June 16, 2010
Publication date: December 22, 2011
Applicant: INTERNATIONAL BUSINESS MACHINES CORPORATION
Inventors: Michael A. Blake, Garrett M. Drapala, Pak-Kin Mak, Vesselina K. Papazova, Craig R. Walters
-
Publication number: 20110289257
Abstract: A request for reading data from a memory location of a main memory is received, the memory location being identified by a physical memory address. In response to the request, a cache memory is accessed based on the physical memory address to determine whether the cache memory contains the data being requested. The data associated with the request is returned from the cache memory without accessing the memory location if there is a cache hit. The data associated with the request is returned from the main memory if there is a cache miss. In response to the cache miss, it is determined whether there have been a predetermined number of accesses within a predetermined period of time. A cache entry is allocated from the cache memory to cache the data if there have been a predetermined number of accesses within the predetermined period of time.
Type: Application
Filed: May 20, 2010
Publication date: November 24, 2011
Applicant: TELEFONAKTIEBOLAGET L M ERICSSON (PUBL)
Inventors: Robert Hathaway, Evan Gewirtz
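A Python sketch of the admission rule, with an assumed sliding-window implementation (the abstract does not specify how accesses are counted):

```python
# Sketch of frequency-based cache admission: allocate a cache entry only
# after `threshold` misses to the same address within `window` seconds.

import time
from collections import defaultdict, deque

class AdmitOnFrequency:
    def __init__(self, threshold=3, window=1.0):
        self.threshold, self.window = threshold, window
        self.misses = defaultdict(deque)   # address -> recent miss timestamps

    def should_allocate(self, address, now=None):
        now = time.monotonic() if now is None else now
        q = self.misses[address]
        q.append(now)
        while q and now - q[0] > self.window:
            q.popleft()                    # forget accesses outside the window
        return len(q) >= self.threshold    # hot enough to deserve a cache line

p = AdmitOnFrequency(threshold=2, window=1.0)
print(p.should_allocate(0x1000, now=0.0))  # False: first miss
print(p.should_allocate(0x1000, now=0.5))  # True: second miss inside window
```

The effect is that one-off accesses never displace cached data; only addresses that miss repeatedly in a short interval earn an entry.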
-
Publication number: 20110258424
Abstract: A distributive cache accessing device for accelerating the booting of remote diskless computers, mounted in a diskless computer equipped with WAN-bootable hardware such as an iSCSI host bus adapter (HBA), allows access to data required to boot the diskless computers or run application programs thereon from an iSCSI target or from other diskless computers having the distributive cache accessing device via a network. The retrieved iSCSI data blocks are temporarily stored in the local distributive cache accessing device. If any other diskless computer requests the iSCSI data blocks, the temporarily stored iSCSI data blocks can be made accessible to that diskless computer. Given the installation of a large number of diskless computers, the network traffic of the iSCSI target is alleviated and booting of remote diskless computers is accelerated.
Type: Application
Filed: April 14, 2010
Publication date: October 20, 2011
Inventors: Chia-Hsin Huang, Chao-Jui Hsu, Wun-Yuan Kuo, Wei-Yu Chen, Yu-Yen Chen, Sia-Mor Yeoh
-
Publication number: 20110258393
Abstract: Methods of protecting cache data are provided. For example, various methods are described that assist in handling dirty write data cached in memory by duplication into other locations to protect against data loss. One method includes caching a data item from a data source in a first cache device. The data item cached in the first cache device is designated with a first designation. In response to the data item being modified by a data consumer, the designation of the data item in the first cache device is re-assigned from the first designation to a second designation, and the data item with the second designation is copied to a second cache device.
Type: Application
Filed: April 12, 2011
Publication date: October 20, 2011
Inventors: Jonathan Flower, Nadesan Narenthiran
-
Publication number: 20110238916
Abstract: An apparatus and a method for accessing data at a server node of a data grid system with distributed cache are described. The server receives a request to access a logical tree structure of cache nodes at a tree structure interface module of the server. The tree structure interface operates on a flat map structure of the cache nodes corresponding to the logical tree structure, transparent to the request. Each cache node is defined and operated on using a two-dimensional coordinate including a fully qualified name and a type.
Type: Application
Filed: March 26, 2010
Publication date: September 29, 2011
Inventor: Manik Surtani
-
Publication number: 20110213931
Abstract: An apparatus and a method operating on data at a server node of a data grid system with distributed cache are described. A coordinator receives a request to change a topology of a cache cluster from a first group of cache nodes to a second group of cache nodes. The request includes a cache node joining or leaving the first group. A key for the second group is rehashed without blocking access to the first group.
Type: Application
Filed: February 26, 2010
Publication date: September 1, 2011
Inventor: Manik Surtani
-
Publication number: 20110197032
Abstract: A data recipient configured to access a data source may exhibit improved performance by caching data items received from the data source. However, the cache may become stale unless the data recipient is informed of data source updates. Many subscription mechanisms are specialized for the particular data recipient and/or data source, which may cause an affinity of the data recipient for the data source, thereby reducing scalability of the data sources and/or data recipients. A cache synchronization service may accept requests from data recipients to subscribe to the data source, and may promote cache freshness by notifying subscribers when particular data items are updated at the data source. Upon detecting an update of the data source involving one or more data items, the cache synchronization service may request each subscriber of the data source to remove the stale cached representation of the updated data item(s) from its cache.
Type: Application
Filed: February 8, 2010
Publication date: August 11, 2011
Applicant: Microsoft Corporation
Inventor: Eric M. Patey
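A Python sketch of the subscribe-and-notify flow, with invented interface names:

```python
# Sketch of a cache synchronization service (hypothetical interfaces):
# recipients subscribe per data source; on update, only the stale keys are
# evicted from each subscriber's cache.

from collections import defaultdict

class CacheSyncService:
    def __init__(self):
        self.subscribers = defaultdict(set)   # source -> recipients

    def subscribe(self, source, recipient):
        self.subscribers[source].add(recipient)

    def on_update(self, source, item_keys):
        for r in self.subscribers[source]:
            r.evict(item_keys)                # drop stale representations only

class DataRecipient:
    def __init__(self):
        self.cache = {}

    def evict(self, keys):
        for k in keys:
            self.cache.pop(k, None)

svc, client = CacheSyncService(), DataRecipient()
client.cache = {"user:7": {"name": "old"}}
svc.subscribe("users-db", client)
svc.on_update("users-db", ["user:7"])
assert client.cache == {}
```

Because the service only needs a source name and a key list, neither side carries source-specific logic, which is the decoupling the abstract argues improves scalability.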
-
Publication number: 20110035553
Abstract: Systems and methods for managing cached content are disclosed. More particularly, embodiments disclosed herein may allow cached content to be updated (e.g. regenerated or replaced) in response to a notification. Specifically, embodiments disclosed herein may process a notification pertaining to content stored in a cache. Processing the notification may include locating cached content associated with the notification. After the cached content which corresponds to the notification is found, an appropriate action may be taken. For example, the cached content may be flushed from the cache or a request may be regenerated. As a result of the action, new content is generated. This new content is then used to replace or update the cached content.
Type: Application
Filed: October 14, 2010
Publication date: February 10, 2011
Inventors: Lee Shepstone, Conleth S. O'Connell, JR., Mark R. Scheevel, Newton Isaac Rajkumar, Jamshid Afshar, JR., Puhong You, Brett J. Larsen, David Dean Caldwell
-
Publication number: 20110004729
Abstract: Methods, apparatuses, and systems directed to the caching of blocks of lines of memory in a cache-coherent, distributed shared memory system. Block caches used in conjunction with line caches can be used to store more data with less tag memory space compared to the use of line caches alone and can therefore reduce memory requirements. In one particular embodiment, the present invention manages this caching using a DSM-management chip, after the allocation of the blocks by software, such as a hypervisor. An example embodiment provides processing relating to block caches in cache-coherent distributed shared memory.
Type: Application
Filed: December 19, 2007
Publication date: January 6, 2011
Applicant: 3Leaf Systems, Inc.
Inventors: Isam Akkawi, Najeeb Imran Ansari, Bryan Chin, Chetana Nagendra Keltcher, Krishnan Subramani, Janakiramanan Vaidyanathan
-
Patent number: 7849260
Abstract: Proposed is a storage controller and its control method for shortening, in a simple manner, the processing time in response to a command while reducing the load on a controller that receives a command targeting a non-associated logical volume. This storage controller includes a plurality of controllers for controlling the input and output of data to and from a corresponding logical unit based on a command retained in a local memory, and the local memory stores association information representing the correspondence between the logical units and the controllers, as well as address information of the local memory in each of the controllers of its own system and the other system.
Type: Grant
Filed: January 26, 2007
Date of Patent: December 7, 2010
Assignee: Hitachi, Ltd.
Inventors: Takahide Okuno, Mitsuhide Sato, Toshiaki Minami, Hiroaki Yuasa, Kousuke Komikado, Koji Iwamitsu, Tetsuya Shirogane, Atsushi Ishikawa
-
Publication number: 20100281217
Abstract: The present invention is directed towards a method and system for modifying, by a cache, responses from a server that do not identify a dynamically generated object as cacheable, so as to identify the dynamically generated object to a client as cacheable in the response. In some embodiments, such as an embodiment handling HTTP requests and responses for objects, the techniques of the present invention insert an entity tag, or "etag," into the response to provide cache control for objects provided without entity tags and/or cache control information from an originating server. This technique of the present invention provides an increase in cache hit rates by inserting information, such as entity tag and cache control information for an object, in a response to a client to enable the cache to check for a hit in a subsequent request.
Type: Application
Filed: July 16, 2010
Publication date: November 4, 2010
Inventors: Prabakar Sundarrajan, Prakash Khemani, Kailash Kailash, Ajay Soni, Rajiv Sinha, Saravana Annamalaisami, Bharath Bhushan K R, Anil Kumar
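A Python sketch of validator insertion by an intermediary, modeling headers as a dict and deriving the etag from a content hash (one plausible choice; the abstract does not fix the derivation):

```python
# Sketch of etag insertion by an intermediary cache: give the response a
# validator the origin never supplied, so later conditional requests
# (If-None-Match) can be answered from the cache.

import hashlib

def add_cache_validators(response_headers: dict, body: bytes) -> dict:
    if "ETag" not in response_headers:
        # Derive a stable validator from the generated content itself.
        response_headers["ETag"] = '"%s"' % hashlib.sha1(body).hexdigest()[:16]
    return response_headers

hdrs = add_cache_validators({"Content-Type": "text/html"}, b"<p>generated</p>")
print(hdrs["ETag"])
```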
-
Publication number: 20100228835
Abstract: A cache apparatus for a network receives and responds to network file-services-protocol requests from client workstations coupled to the network. The cache apparatus includes a digital memory for storing data transmitted in responding to the network requests. A processing unit executes program instructions. A network interface couples the cache apparatus to the network. The interface includes program instructions, executed by the processing unit, for receiving the requests and transmitting responses thereto. A file-request-service module includes program instructions, executed by the processing unit, for interpreting the requests and generating responses thereto. The file-request-service module also checks the memory for the presence of an image of data specified by the request. When the data is present, the file-request-service module retrieves the data for inclusion in the response.
Type: Application
Filed: October 28, 2009
Publication date: September 9, 2010
Inventor: William Michael Pitts
-
Publication number: 20100185745
Abstract: A cache module (26) at a client computer (12) controls a cache portion (28) on a storage device (24). The cache module communicates with other cache modules at other clients to form a cache community (15). The cache modules store World Wide Web or other content in the cache portions for retrieval in response to requests (32) for content from browsers (30) in the cache community. When the requested content is not available in the cache community, the requested content may be retrieved from an origin server (19) using the Internet.
Type: Application
Filed: March 29, 2010
Publication date: July 22, 2010
Applicant: Parallel Networks, LLC
Inventors: Keith A. Lowery, Bryan S. Chin, David A. Consolver, Gregg A. DeMasters
-
Patent number: 7734869
Abstract: Interfaces for flexible storage management. An embodiment of a system includes a data storage, the data storage including one or more of a first storage system, the first storage system including a file structure that is coextensive with a set of memory devices, or a second storage system, the second storage system including a storage structure that is coextensive with a set of memory devices, the storage structure including zero or more file structures. The system further includes an interface system for the data storage, the interface system being used for both the first storage system and the second storage system.
Type: Grant
Filed: April 28, 2005
Date of Patent: June 8, 2010
Assignee: Netapp, Inc.
Inventor: Edward Ramon Zayas
-
Publication number: 20100110934
Abstract: A spanning tree is assigned to each processing node in a point-to-point network that connects a plurality of processing nodes. The spanning tree uses the processing nodes as vertices and links of the network as edges. Each processing node includes input snoop ports that can be configured as either terminating or forwarding. According to the assigned spanning trees and the configuration of the input snoop ports, the network routes snoop messages efficiently and without conflicts.
Type: Application
Filed: September 20, 2006
Publication date: May 6, 2010
Inventors: Yufu Li, Xiaohua Cai
-
Publication number: 20090150511
Abstract: A computer network with distributed shared memory, including a clustered memory cache aggregated from and comprised of physical memory locations on a plurality of physically distinct computing systems. The network also includes a plurality of local cache managers, each of which is associated with a different portion of the clustered memory cache, and a metadata service operatively coupled with the local cache managers. Also, a plurality of clients are operatively coupled with the metadata service and the local cache managers. In response to a request issuing from any of the clients for a data item present in the clustered memory cache, the metadata service is configured to respond with identification of the local cache manager associated with the portion of the clustered memory cache containing such data item.
Type: Application
Filed: November 6, 2008
Publication date: June 11, 2009
Applicant: RNA NETWORKS, INC.
Inventors: Jason P. Gross, Ranjit B. Pandit, Clive G. Cook, Thomas H. Matson
-
Publication number: 20090144388
Abstract: A computer network with distributed shared memory, including a clustered memory cache aggregated from and comprised of physical memory locations on a plurality of physically distinct computing systems. The clustered memory cache is accessible by a plurality of clients on the computer network and is configured to perform page caching of data items accessed by the clients. The network also includes a policy engine operatively coupled with the clustered memory cache, where the policy engine is configured to control where data items are cached in the clustered memory cache.
Type: Application
Filed: November 6, 2008
Publication date: June 4, 2009
Applicant: RNA NETWORKS, INC.
Inventors: Jason P. Gross, Ranjit B. Pandit, Clive G. Cook, Thomas H. Matson
-
Publication number: 20090089512
Abstract: In a network-based cache-coherent multiprocessor system, when a node receives a cache request, the node can perform an intra-node cache snoop operation and forward the cache request to a subsequent node in the network. A snoop-and-forward prediction mechanism can be used to predict whether lazy forwarding or eager forwarding is used in processing the incoming cache request. With lazy forwarding, the node cannot forward the cache request to the subsequent node until the corresponding intra-node cache snoop operation is completed. With eager forwarding, the node can forward the cache request to the subsequent node immediately, before the corresponding intra-node cache snoop operation is completed. Furthermore, the snoop-and-forward prediction mechanism can be enhanced seamlessly with an appropriate snoop filter to avoid unnecessary intra-node cache snoop operations.
Type: Application
Filed: July 21, 2008
Publication date: April 2, 2009
Inventors: Xiaowei Shen, Karin Strauss