Plural Shared Memories Patents (Class 709/214)
  • Patent number: 9176894
    Abstract: Resource management techniques, such as cache optimization, are employed to organize resources within caches such that the most requested content (e.g., the most popular content) is more readily available. A service provider utilizes content expiration data as indicative of resource popularity. As resources are requested, the resources propagate through a cache server hierarchy associated with the service provider. More frequently requested resources are maintained at edge cache servers based on shorter expiration data that is reset with each repeated request. Less frequently requested resources are maintained at higher levels of a cache server hierarchy based on longer expiration data associated with cache servers higher on the hierarchy.
    Type: Grant
    Filed: July 14, 2014
    Date of Patent: November 3, 2015
    Assignee: Amazon Technologies, Inc.
    Inventors: Bradley Eugene Marshall, Swaminathan Sivasubramanian, David R. Richardson
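    Illustrative sketch (not from the patent; names and TTL values are assumptions): a minimal Python model of the expiration-driven popularity idea above, where the edge tier holds a short expiration that is reset on every request and a parent tier holds a longer one.
```python
import time

class TtlCache:
    """Simple expiring key-value store."""
    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() > expires_at:
            del self._store[key]
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

class EdgeCache:
    """Edge tier with a short expiration that is reset on every hit."""
    def __init__(self, parent, origin, edge_ttl=5.0):
        self.local = TtlCache(edge_ttl)
        self.parent = parent          # longer-expiration tier higher in the hierarchy
        self.origin = origin          # callable fetching the resource from the origin

    def get(self, key):
        value = self.local.get(key)
        if value is None:
            value = self.parent.get(key)
            if value is None:
                value = self.origin(key)
                self.parent.put(key, value)
        # Re-put on every request: frequently requested keys keep a fresh edge expiration.
        self.local.put(key, value)
        return value

parent = TtlCache(300.0)
edge = EdgeCache(parent, origin=lambda k: f"content-for-{k}")
edge.get("video-1")   # fetched from the origin, cached at both tiers
edge.get("video-1")   # served from the edge, edge expiration reset
```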
  • Patent number: 9166939
    Abstract: Systems and methods for uploading media content in an instant messaging application are disclosed. In some implementations, a method includes, at a first electronic device having one or more processors and memory: obtaining a request by a first user to associate a media content item with a conversation between the first user and a second user. The conversation includes a message and one or more responses to the message. In some implementations, the method also includes, responsive to the request: (i) causing a thumbnail of the media content item to be displayed in-line in the conversation; and (ii) causing the media content item to be displayed to the second user, in response to a predefined user action by the second user. The thumbnail is displayed in a first resolution; and the media content item is displayed in a second resolution that is different than the first resolution.
    Type: Grant
    Filed: August 30, 2013
    Date of Patent: October 20, 2015
    Assignee: Google Inc.
    Inventors: Lars Eilstrup Rasmussen, Eric Christopher Seidel
  • Patent number: 9160803
    Abstract: Embodiments of the present invention provide a method, system and computer program product for Web storage optimization and cache management. In one embodiment, a method of client side cache management using Web storage can include first registering a client browser session in a content browser as a listener to events for Web storage for a particular domain. Subsequently, notification can be received from the content browser of an event of a different client browser session associated with the Web storage. For instance, the notification can result from the different client browser adding a new cache entry to the Web storage, or from the different client browser periodically at a specified time interval indicating a state of one or more cache entries in the Web storage. Finally, in response to the notification, a cache entry in the Web storage can be invalidated, such as through cache entry removal or compression.
    Type: Grant
    Filed: June 21, 2012
    Date of Patent: October 13, 2015
    Assignee: International Business Machines Corporation
    Inventors: Erik J. Burckart, Andrew J. Ivory, Todd E. Kaplinger, Aaron K. Shook
  • Patent number: 9160804
    Abstract: Embodiments of the present invention provide a method, system and computer program product for Web storage optimization and cache management. In one embodiment, a method of client side cache management using Web storage can include first registering a client browser session in a content browser as a listener to events for Web storage for a particular domain. Subsequently, notification can be received from the content browser of an event of a different client browser session associated with the Web storage. For instance, the notification can result from the different client browser adding a new cache entry to the Web storage, or from the different client browser periodically at a specified time interval indicating a state of one or more cache entries in the Web storage. Finally, in response to the notification, a cache entry in the Web storage can be invalidated, such as through cache entry removal or compression.
    Type: Grant
    Filed: June 11, 2013
    Date of Patent: October 13, 2015
    Assignee: International Business Machines Corporation
    Inventors: Erik J. Burckart, Andrew J. Ivory, Todd E. Kaplinger, Aaron K. Shook
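    Illustrative sketch for the two IBM abstracts above (plain Python stand-ins rather than the browser Web Storage API; every class, method, and event name here is invented): one session's cache event is broadcast to the other sessions registered for the same domain, which invalidate their stale entries.
```python
from collections import defaultdict

class SharedWebStorage:
    """Shared store that notifies registered sessions about cache events."""
    def __init__(self):
        self.entries = {}                   # key -> cached value
        self.listeners = defaultdict(list)  # domain -> [callback]

    def register_listener(self, domain, callback):
        self.listeners[domain].append(callback)

    def add_entry(self, domain, key, value, source_session):
        self.entries[key] = value
        self._notify(domain, {"type": "added", "key": key, "source": source_session})

    def _notify(self, domain, event):
        for callback in self.listeners[domain]:
            callback(event)

class BrowserSession:
    def __init__(self, name, storage, domain):
        self.name = name
        storage.register_listener(domain, self.on_storage_event)

    def on_storage_event(self, event):
        # Another session changed the shared storage: invalidate this session's stale copy.
        if event["source"] != self.name and event["type"] == "added":
            print(f"{self.name}: invalidating cached copy of {event['key']!r}")

storage = SharedWebStorage()
a = BrowserSession("session-A", storage, "example.com")
b = BrowserSession("session-B", storage, "example.com")
storage.add_entry("example.com", "/style.css", "<cached bytes>", source_session="session-A")
```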
  • Patent number: 9152585
    Abstract: Embodiments of a memory system that communicates bidirectional data between a memory controller and a memory IC via bidirectional links using half-duplex communication are described. Each of the bidirectional links conveys write data or read data, but not both. States of routing circuits in the memory controller and the memory IC are selected for a current command being processed so that data can be selectively routed from a queue in the memory controller to a corresponding bank set in the memory IC via one of the bidirectional links, or to another queue in the memory controller from a corresponding bank set in the memory IC via another of the bidirectional links. This communication technique reduces or eliminates the turnaround delay that occurs when the memory controller transitions from receiving the read data to providing the write data, thereby eliminating gaps in the data streams on the bidirectional links.
    Type: Grant
    Filed: February 2, 2010
    Date of Patent: October 6, 2015
    Assignee: Rambus Inc.
    Inventor: Frederick A. Ware
  • Patent number: 9135122
    Abstract: Performing data backup for a client includes receiving, at a host other than the client, volume information including data indicating a physical data storage location of at least a part of a volume comprising one or more stored objects associated with the client; and determining at the host, based at least in part on the volume information, stored object information for a stored object included in the volume, the stored object information including data associated with a physical data storage location of the stored object.
    Type: Grant
    Filed: March 25, 2011
    Date of Patent: September 15, 2015
    Assignee: EMC Corporation
    Inventors: Thomas L. Dings, Jacob M. Jacob, Subramanian Periyagaram, Pashupati Kumar, Robert W. Toop
  • Patent number: 9137036
    Abstract: Provided are a method and apparatus for notifying a specific device, which requests a service, of an event if the event relating to the service occurs in a home network. An event notification message that is multicast in the home network includes the ID of the device that requested the service relating to the event, and devices discard the event notification message when the device ID included in the event notification message is not identical to their own IDs, thereby preventing all devices that are not related to the service from being notified of the event relating to the service.
    Type: Grant
    Filed: January 23, 2009
    Date of Patent: September 15, 2015
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Ho Jin, In-chul Hwang, Mun-jo Kim
  • Patent number: 9128952
    Abstract: Disclosed are a method and a system for transmitting a file folder from a sending end to a receiving end. The system uses a file folder transmission unit at the sending end to generate a directory structure file of the file folder. The directory structure file may have properties such as the size of the file folder and the paths and path lengths of the files in the file folder. The sending end then sends the directory structure file to the receiving end through the file folder transmission unit to allow the system to determine which files in the file folder need to be transmitted. The needed files in the file folder are then transmitted to the receiving end according to the determination. The sending end and the receiving end may communicate using an instant messaging tool. The disclosed method and system allow faster and more convenient network transmission of file folders.
    Type: Grant
    Filed: June 13, 2013
    Date of Patent: September 8, 2015
    Assignee: Alibaba Group Holding Limited
    Inventor: Zhenguo Bai
  • Patent number: 9122763
    Abstract: A request for a web page is received from a client device at a web server. At least a source web application and a target web application relating to the requested web page are identified. An output from the source web application and an output from the target web application are requested. A source style of the source web application is requested. The source style is combined with the output of the source web application and the output of the target web application into the requested web page. The requested web page is sent to the client device.
    Type: Grant
    Filed: March 29, 2012
    Date of Patent: September 1, 2015
    Assignee: International Business Machines Corporation
    Inventors: Matthias Falkenberg, Richard Jacob, Stephan Laertz, Carsten Leue
  • Patent number: 9122614
    Abstract: A modular block allocator receives a cleaner message requesting dirty buffers associated with an inode be cleaned. The modular block allocator provides at least one bucket cache comprising a plurality of buckets, wherein each bucket represents a plurality of free data blocks. The dirty buffers are cleaned by allocating the data blocks of one of the buckets to the dirty buffers. The allocated data blocks are mapped to a stripe set and when the stripe set is full, the stripe set is sent to a storage system. In one embodiment of the invention, a modular block allocator includes a front end module and a back end module communicating with each other via an application programming interface (API). The front end module contains write allocation policies that define how blocks are laid out on disk. The back end module creates data structures for execution of the policies.
    Type: Grant
    Filed: December 22, 2011
    Date of Patent: September 1, 2015
    Assignee: NetApp, Inc.
    Inventors: Ram Kesavan, Mrinal K. Bhattacharjee, Sudhanshu Goswami
  • Patent number: 9049100
    Abstract: A method and apparatus are described for forwarding content delivery network interconnection (CDNI) signaling. A CDNI router content delivery network (CDN) may establish CDNIs with upstream and downstream CDNs. The CDNI router CDN may receive a CDNI route advertisement message from at least one of the upstream and downstream CDNs. The CDNI router CDN may update at least one end-user-based CDNI routing table based on Internet protocol (IP) address blocks in the CDNI route advertisement message. The CDNI router CDN may transmit an updated CDNI route advertisement message to at least one of the upstream and downstream CDNs. At least one of the upstream and downstream CDNs may update at least one end-user-based CDNI routing table based on the end user IP address blocks in the updated CDNI route advertisement message.
    Type: Grant
    Filed: October 12, 2012
    Date of Patent: June 2, 2015
    Assignee: INTERDIGITAL PATENT HOLDINGS, INC.
    Inventors: Xavier De Foy, Hang Liu, Serhad Doken, Osama Lotfallah, Shamim Akbar Rahman
  • Patent number: 9043430
    Abstract: The technique introduced here involves using a block address and a corresponding generation number as a “fingerprint” to uniquely identify a sequence of data within a given storage domain. Each block address has an associated generation number which indicates the number of times that data at that block address has been modified. This technique can be employed, for example, to determine whether a given storage server already has the data, and to avoid sending the data to that storage server over a network if it already has the data. It can also be employed to maintain cache coherency among multiple storage nodes.
    Type: Grant
    Filed: August 12, 2013
    Date of Patent: May 26, 2015
    Assignee: NetApp, Inc.
    Inventors: Michael N. Condict, Steven R. Kleiman
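    Illustrative sketch (assumed names, not NetApp's implementation): a (block address, generation number) pair acts as the fingerprint, so a sender can skip any block whose fingerprint the receiver already holds.
```python
class BlockStore:
    def __init__(self):
        self.data = {}         # block_addr -> bytes
        self.generation = {}   # block_addr -> int, incremented on every modification

    def write(self, block_addr, payload):
        self.data[block_addr] = payload
        self.generation[block_addr] = self.generation.get(block_addr, 0) + 1

    def fingerprint(self, block_addr):
        return (block_addr, self.generation.get(block_addr, 0))

def replicate(source, target, block_addrs):
    """Send only blocks whose fingerprint the target does not already have."""
    sent = 0
    for addr in block_addrs:
        if target.fingerprint(addr) != source.fingerprint(addr):
            target.data[addr] = source.data[addr]
            target.generation[addr] = source.generation[addr]
            sent += 1
    return sent

src, dst = BlockStore(), BlockStore()
src.write(0x10, b"hello")
print(replicate(src, dst, [0x10]))  # 1: block transferred
print(replicate(src, dst, [0x10]))  # 0: fingerprints match, nothing sent over the network
```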
  • Patent number: 9037671
    Abstract: Systems and associated methods for flexible scalability of storage systems. In one aspect, a storage controller may include an interface to a fabric adapted to permit each storage controller coupled to the fabric to directly access memory-mapped components of all other storage controllers coupled to the fabric. The CPU and other master device circuits within a storage controller may directly address memory and I/O devices directly coupled thereto within the same storage controller and may use RDMA features to directly address memory and I/O devices of other storage controllers through the fabric interface.
    Type: Grant
    Filed: October 24, 2013
    Date of Patent: May 19, 2015
    Assignee: NetApp, Inc.
    Inventors: Bret S. Weber, Mohamad El-Batal, William P. Delaney
  • Patent number: 9036493
    Abstract: Network traffic information from multiple sources, at multiple time scales, and at multiple levels of detail are integrated so that users may more easily identify relevant network information. The network monitoring system stores and manipulates low-level and higher-level network traffic data separately to enable efficient data collection and storage. Packet traffic data is collected, stored, and analyzed at multiple locations. The network monitoring locations communicate summary and aggregate data to central modules, which combine this data to provide an end-to-end description of network traffic at coarser time scales. The network monitoring system enables users to zoom in on high-level, coarse time scale network performance data to one or more lower levels of network performance data at finer time scales.
    Type: Grant
    Filed: March 8, 2012
    Date of Patent: May 19, 2015
    Assignee: Riverbed Technology, Inc.
    Inventors: Loris Degioanni, Steven McCanne, Christopher J. White, Dmitri S. Vlachos
  • Patent number: 9032155
    Abstract: A method and system for dynamic distributed data caching is presented. The system includes one or more peer members and a master member. The master member and the one or more peer members form a cache community for data storage. The master member is operable to select one of the one or more peer members to become a new master member. The master member is operable to update a peer list for the cache community by removing itself from the peer list. The master member is operable to send a nominate master message and an updated peer list to a peer member selected by the master member to become the new master member.
    Type: Grant
    Filed: October 29, 2013
    Date of Patent: May 12, 2015
    Assignee: Parallel Networks, LLC
    Inventors: Keith A. Lowery, Bryan S. Chin, David A. Consolver, Gregg A. DeMasters
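    Illustrative sketch of the master hand-off step described above (class and message names are invented for illustration only): the master removes itself from the peer list and sends the nominate-master message plus the updated list to its chosen successor.
```python
class CacheMember:
    def __init__(self, name):
        self.name = name
        self.inbox = []     # messages received from the cache community

class MasterMember(CacheMember):
    def __init__(self, name, peers):
        super().__init__(name)
        self.peers = {p.name: p for p in peers}
        self.peer_list = [self.name] + list(self.peers)

    def hand_off(self):
        # Update the peer list by removing self, pick a successor, and send it
        # the nominate-master message together with the updated peer list.
        updated = [n for n in self.peer_list if n != self.name]
        successor = self.peers[updated[0]]
        successor.inbox.append({"type": "nominate-master", "peer_list": updated})
        return successor.name

peers = [CacheMember("peer-1"), CacheMember("peer-2")]
master = MasterMember("master-0", peers)
print(master.hand_off())    # 'peer-1'
print(peers[0].inbox)       # [{'type': 'nominate-master', 'peer_list': ['peer-1', 'peer-2']}]
```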
  • Publication number: 20150127767
    Abstract: A method, system, and computer program product for resolving cache lookup of large pages with variable granularity are provided in the illustrative embodiments. A number of unused bits in an available number of bits is identified. The available number of bits is configured to address a page of data in memory, where the page exceeds a threshold size and comprises a set of parts. The unused bits are mapped to the set of parts such that a value of the unused bits corresponds to the existence of a subset of the set of parts in memory. A virtual address is translated to a physical address of a requested part in the set of parts. A determination is made, using the unused bits, whether the requested part exists in memory.
    Type: Application
    Filed: November 4, 2013
    Publication date: May 7, 2015
    Applicant: International Business Machines Corporation
    Inventors: Ahmed Gheith, Eric Van Hensbergen, James Xenidis
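    Illustrative sketch of the bit-mapping step in the abstract above (part count and part size are invented): unused address bits record which parts of a large page are resident, so the lookup can tell whether the requested part exists in memory.
```python
PARTS_PER_LARGE_PAGE = 8               # one presence bit per part (assumed)
PART_SIZE = 2 * 1024 * 1024            # assumed 2 MiB parts

def set_part_present(presence_bits, part_index):
    return presence_bits | (1 << part_index)

def part_is_present(presence_bits, part_index):
    return bool(presence_bits & (1 << part_index))

def part_index_for(page_offset):
    """Which part of the large page a translated offset falls into."""
    return (page_offset // PART_SIZE) % PARTS_PER_LARGE_PAGE

bits = 0
bits = set_part_present(bits, 0)                              # first part brought into memory
print(part_is_present(bits, part_index_for(0)))               # True: part 0 is resident
print(part_is_present(bits, part_index_for(3 * PART_SIZE)))   # False: part 3 must be fetched
```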
  • Patent number: 9021045
    Abstract: Embodiments of the present invention provide for a shared photo space that is synchronized among members of a social network or group. In some embodiments, users of a social group automatically pull photos directly from all registered hard drives of clients and online services and mirror them around the group, thus making the collection available to the social network of users.
    Type: Grant
    Filed: November 30, 2006
    Date of Patent: April 28, 2015
    Assignee: Red Hat, Inc.
    Inventor: Havoc Pennington
  • Patent number: 9021018
    Abstract: A method for supporting the selection of communication peers in an overlay network is disclosed, wherein a multitude of communication peers participate in the overlay network by providing certain pieces of information, and wherein at least one peer-to-peer server (the tracker) maintains a database of the participating communication peers and the information they possess. Upon receiving a query regarding a specific piece of information from a communication peer (the requesting client), the tracker answers the query by providing the requesting client a list that includes a subset of all communication peers possessing the requested piece of information. The method includes providing a network entity located such that it receives messages directed from the requesting client to the tracker, wherein the network entity stamps topological location information of the requesting client into any of the messages directed from the requesting client to the tracker. Furthermore, a corresponding system is disclosed.
    Type: Grant
    Filed: October 29, 2010
    Date of Patent: April 28, 2015
    Assignee: NEC Europe Ltd.
    Inventors: Sebastian Kiesel, Hans-Joerg Kolbe, Rolf Winter
  • Patent number: 9021536
    Abstract: A system and process are provided in which original video or dialog content is securely received from the content owner. Subtitle language data is derived, translated, stored and served on a separate database for synchronous playback with the content in video streaming, downloading or online TV, following the activation of an option by the end user through a media player.
    Type: Grant
    Filed: September 5, 2013
    Date of Patent: April 28, 2015
    Assignee: Stream Translations, Ltd.
    Inventors: Richard E. Greenberg, Bente Cecilie Ottersen, Randi Næs
  • Publication number: 20150113091
    Abstract: In an example of masterless cache replication, a processor of a server of a plurality of servers hosting a distributed application can receive a local cache event for a local data item stored in an application cache of the server. The processor can determine whether the local cache event is from another server. The processor can also determine whether a remote cache event of the other server is different from the local cache event and whether the local cache event is in conflict with at least one other cache event for the local data item. The processor can also determine whether the local cache event has a higher priority over the at least one other cache event and direct performance of the local cache event amongst the plurality of servers.
    Type: Application
    Filed: October 23, 2013
    Publication date: April 23, 2015
    Applicant: Yahoo! Inc.
    Inventors: Amarjit Luwang Thiyam, Saurabh Singla
  • Publication number: 20150113092
    Abstract: An apparatus for accessing data in an enterprise data storage system. The apparatus includes memory for storing data, a storage controller, a secure hypervisor, and an interface. The storage controller is coupled to the memory and is configured for managing data stored in the memory. The controller is also configured to receive a command from a client device to access specified data in the memory. The secure virtualized hypervisor within the memory is configured for deploying an operating system of the storage controller for purposes of secure operation by the storage controller. The interface is configured for communicating with the storage controller and initiates the storage controller to perform the command on the specified data that is fetched into the secure virtualized hypervisor, wherein results of the command are transmitted over a network to the client device.
    Type: Application
    Filed: October 23, 2013
    Publication date: April 23, 2015
    Applicant: FUTUREWEI TECHNOLOGIES, INC.
    Inventors: Vineet Chadha, Guangyu Shi
  • Publication number: 20150113090
    Abstract: In a method for determining a primary storage device and a secondary storage device for copies of data, one or more processors determine metrics data for at least two storage devices in a computing environment. The one or more processors adjust the metrics data. The one or more processors determine an I/O throughput value based on the adjusted metrics data for each of the at least two storage devices. The one or more processors compare the determined I/O throughput values for each of the at least two storage devices. The one or more processors select a storage device of the at least two storage devices with the lowest determined I/O throughput as a primary storage device.
    Type: Application
    Filed: October 23, 2013
    Publication date: April 23, 2015
    Applicant: International Business Machines Corporation
    Inventors: Steven F. Best, Janice M. Girouard, Robert E. Reiland, Yehuda Shiran
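    Hedged sketch of the selection step in the abstract above (the metric names and the adjustment rule are assumptions): rank the candidate devices by adjusted I/O throughput and take the lowest as the primary storage device.
```python
def select_primary_and_secondary(devices):
    """devices: list of dicts holding metrics data for each candidate storage device."""
    def adjusted_throughput(d):
        # Placeholder adjustment: weight read and write rates equally.
        return (d["read_mb_s"] + d["write_mb_s"]) * d.get("adjustment", 1.0)

    ranked = sorted(devices, key=adjusted_throughput)
    primary = ranked[0]                          # lowest adjusted I/O throughput
    secondary = ranked[1] if len(ranked) > 1 else None
    return primary, secondary

primary, secondary = select_primary_and_secondary([
    {"name": "dev-a", "read_mb_s": 120, "write_mb_s": 80},
    {"name": "dev-b", "read_mb_s": 90,  "write_mb_s": 60},
])
print(primary["name"], secondary["name"])        # dev-b dev-a
```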
  • Patent number: 9015269
    Abstract: The present invention relates to the notification of a server device with the availability of resources in cache memories of a client device and to the serving of digital resources in such a client-server communication system. The notifying method comprises: obtaining a first list of resources available in the cache memories of the client device; filtering the first list according to filtering criteria relating to a resource parameter, to obtain a filtered list of fewer resources available in the client device or splitting the first list according to splitting criteria relating to a resource parameter, to obtain a plurality of sub-lists of resources available in the client device; and notifying the server device with data structures representing the filtered list or sub-lists of resources.
    Type: Grant
    Filed: June 19, 2012
    Date of Patent: April 21, 2015
    Assignee: Canon Kabushiki Kaisha
    Inventors: Herve Ruellan, Youenn Fablet, Romain Bellessort
  • Patent number: 9015416
    Abstract: Some embodiments provide systems and methods for validating cached content based on changes in the content instead of an expiration interval. One method involves caching content and a first checksum in response to a first request for that content. The caching produces a cached instance of the content representative of a form of the content at the time of caching. The first checksum identifies the cached instance. In response to receiving a second request for the content, the method submits a request for a second checksum representing a current instance of the content and a request for the current instance. Upon receiving the second checksum, the method serves the cached instance of the content when the first checksum matches the second checksum and serves the current instance of the content upon completion of the transfer of the current instance when the first checksum does not match the second checksum.
    Type: Grant
    Filed: July 21, 2014
    Date of Patent: April 21, 2015
    Assignee: Edgecast Networks, Inc.
    Inventor: Andrew Lientz
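    Minimal sketch of the change-based validation described above (assumed names, not Edgecast's code): the cached instance is served only while its checksum still matches the checksum of the current instance at the origin.
```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ChecksumValidatedCache:
    def __init__(self, fetch_checksum, fetch_content):
        self.fetch_checksum = fetch_checksum   # url -> checksum of the current instance
        self.fetch_content = fetch_content     # url -> bytes of the current instance
        self.cached = {}                       # url -> (bytes, checksum) of the cached instance

    def get(self, url):
        hit = self.cached.get(url)
        if hit is not None and hit[1] == self.fetch_checksum(url):
            return hit[0]                      # checksums match: serve the cached instance
        body = self.fetch_content(url)         # changed or first request: transfer current instance
        self.cached[url] = (body, checksum(body))
        return body

origin = {"/logo.png": b"v1"}
cache = ChecksumValidatedCache(
    fetch_checksum=lambda u: checksum(origin[u]),
    fetch_content=lambda u: origin[u],
)
print(cache.get("/logo.png"))   # b'v1' cached on the first request
origin["/logo.png"] = b"v2"
print(cache.get("/logo.png"))   # checksum mismatch: b'v2' fetched and re-cached
```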
  • Patent number: 9008287
    Abstract: A method of establishing a communications session for communication of data with respect to at least two user devices in a data communications network. Call party details of a telephone call are received. The telephone call involves at least a first telephony user device and a second telephony user device. The call party details include a first identity associated with the first telephony user device and a second identity associated with the second telephony user device. At least one of the first and second identities comprises a telephone dialing number. A separate communications session is established on the basis of the first and second identities received in the call party details. The communications session is separate from the telephone call, for the communication of data to and/or from the at least two user devices.
    Type: Grant
    Filed: April 18, 2013
    Date of Patent: April 14, 2015
    Assignee: Metaswitch Networks Ltd
    Inventors: Chris Mairs, Liz Rice, Felix Palmer
  • Patent number: 9002970
    Abstract: Byte utilization is improved in Remote Direct Memory Access (RDMA) communications by detecting a plurality of concurrent messages on a plurality of application sockets which are destined for the same application, client or computer, intercepting those messages and consolidating their payloads into larger payloads, and then transmitting those consolidated messages to the destination, thereby increasing the payload-to-overhead byte utilization of the RDMA transmissions. At the receiving end, multiplexing information is used to unpack the consolidated messages, and to put the original payloads into a plurality of messages which are then fed into the receiving sockets to the destination application, client or computer, thereby making the consolidation process transparent between the initiator and the target.
    Type: Grant
    Filed: July 12, 2012
    Date of Patent: April 7, 2015
    Assignee: International Business Machines Corporation
    Inventors: Omar Cardona, Shaival Jagdishbhai Chokshi, Rakesh Sharma, Xiaohan Qin
  • Patent number: 9002939
    Abstract: A system for communicating information among a plurality of nodes of a network. The system comprises a plurality of disseminating modules installed in a plurality of nodes of a network which hosts a plurality of replicas of data having a plurality of objects, each disseminating module having access to a dataset defining a plurality of write request dissemination topologies. Each disseminating module is defined to receive a write request from a client, to select dynamically one of the write request dissemination topologies according to at least one parameter of the client, and to disseminate the write request according to the selected write request dissemination topology.
    Type: Grant
    Filed: June 3, 2012
    Date of Patent: April 7, 2015
    Assignee: International Business Machines Corporation
    Inventors: Guy Laden, Roie Melamed
  • Publication number: 20150095445
    Abstract: Particular embodiments change a current storage I/O path used by a host computer to access networked storage to an alternative storage I/O path by considering traffic load at a networked switch in the current storage I/O path. The host computer transmits a request to the networked switch in the current storage I/O path to provide network load information currently experienced by the networked switch. After receiving network load information from the networked switch, the host computer then evaluates whether the networked switch is overloaded based on the received network load information. Based on the evaluation, the host computer selects a new alternative storage I/O path to the networked storage that does not include the networked switch, and then forwards future storage I/O communications to the networked storage using the new alternative storage I/O path.
    Type: Application
    Filed: September 30, 2013
    Publication date: April 2, 2015
    Applicant: VMware, Inc.
    Inventors: Sudhish Panamthanath Thankappan, Jinto Antony
  • Publication number: 20150095446
    Abstract: System and method for increasing physical memory page sharing by workloads executing on different host computing systems are described. In one embodiment, workloads executing on different host computing systems that access physical memory pages having identical contents are identified. Further, migration to consolidate the identified workloads on a single host computing system such that the physical memory pages can be shared using a page sharing mechanism is recommended.
    Type: Application
    Filed: October 1, 2013
    Publication date: April 2, 2015
    Applicant: VMware, Inc.
    Inventor: Manikandan Ramasubramanian
  • Publication number: 20150095447
    Abstract: A serving method of a cache server relates to the field of communications, and can reduce bandwidth consumption of an upstream network and alleviate network pressure. The method includes: receiving first request information sent by multiple user equipments, where the first request information indicates data separately required by the multiple user equipments and request points for the data separately required; if it is determined that the same data is indicated in the first request information sent by at least two user equipments among the multiple user equipments and the same data has not been cached in the cache server, selecting one request point from request points falling within a preset window; and sending second request information to a source server, where the second request information indicates the uncached data and the selected request point.
    Type: Application
    Filed: December 9, 2014
    Publication date: April 2, 2015
    Applicant: HUAWEI TECHNOLOGIES CO., LTD.
    Inventors: Wenxiao Yu, Jinhui Zhang, Youqing Yang
  • Patent number: 8990402
    Abstract: A method of providing a fast path message transfer agent is provided. The method includes receiving bytes of a message over a network connection and determining whether the number of bytes exceeds a predetermined threshold. If the number of bytes is less than a predetermined threshold, then the message is written only to memory. However, if the number of bytes exceeds the predetermined threshold, then some of the bytes (e.g. up to the predetermined threshold) are written to memory, wherein the remainder of the bytes are stored onto the non-volatile storage. If the message was received successfully by each destination, then the message is removed from the memory/non-volatile storage. If not, all failed destinations are identified and the message (with associated failed destinations) is stored on the non-volatile storage for later sending.
    Type: Grant
    Filed: February 3, 2009
    Date of Patent: March 24, 2015
    Assignee: Critical Path, Inc.
    Inventor: Bradley Taylor
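    Hedged sketch of the fast-path split described above (the threshold value and all names are assumptions): only bytes beyond the threshold are spilled to non-volatile storage, and a message that fails for any destination is persisted with its failed destinations for a later retry.
```python
THRESHOLD = 64 * 1024   # assumed value; the patent only says "predetermined threshold"

class FastPathMta:
    def __init__(self):
        self.memory = {}   # msg_id -> leading bytes (up to THRESHOLD)
        self.disk = {}     # msg_id -> overflow bytes or retained failed messages
        self.failed = {}   # msg_id -> destinations still to retry

    def receive(self, msg_id, payload: bytes):
        self.memory[msg_id] = payload[:THRESHOLD]
        if len(payload) > THRESHOLD:
            self.disk[msg_id] = payload[THRESHOLD:]   # overflow goes to non-volatile storage

    def deliver(self, msg_id, destinations, send):
        failures = [d for d in destinations if not send(d, msg_id)]
        if not failures:
            self.memory.pop(msg_id, None)             # delivered everywhere: discard the message
            self.disk.pop(msg_id, None)
        else:
            # Persist the whole message plus its failed destinations for later sending.
            self.disk[msg_id] = self.memory.pop(msg_id, b"") + self.disk.get(msg_id, b"")
            self.failed[msg_id] = failures
        return failures

mta = FastPathMta()
mta.receive("m1", b"x" * (THRESHOLD + 10))
print(mta.deliver("m1", ["a@example.com"], send=lambda dest, mid: True))  # []: nothing retained
```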
  • Patent number: 8984229
    Abstract: A method and system for dynamic distributed data caching is presented. The system includes one or more peer members and a master member. The master member and the one or more peer members form a cache community for data storage. The master member is operable to select one of the one or more peer members to become a new master member. The master member is operable to update a peer list for the cache community by removing itself from the peer list. The master member is operable to send a nominate master message and an updated peer list to a peer member selected by the master member to become the new master member.
    Type: Grant
    Filed: October 29, 2013
    Date of Patent: March 17, 2015
    Assignee: Parallel Networks, LLC
    Inventors: Keith A. Lowery, Bryan S. Chin, David A. Consolver, Gregg A. DeMasters
  • Publication number: 20150074221
    Abstract: The present invention relates to a Domain Name System (DNS) server and a method for resolving DNS queries from a number of clients. The DNS server comprises multiple virtual DNS server instances servicing different clients. The DNS server further comprises a shared cache for caching records which indicate answers to resolved DNS queries. The shared cache is shared between a set of virtual DNS server instances. The virtual DNS server instances that share the shared cache are able to cache DNS query results in the shared cache as well as resolve a DNS query by retrieving a cached record corresponding to the DNS query from the shared cache. Thus it is possible for a virtual DNS server instance to make use of DNS query results obtained by other virtual DNS server instances.
    Type: Application
    Filed: November 17, 2010
    Publication date: March 12, 2015
    Applicant: Telefonaktiebolaget LM Ericsson (publ)
    Inventors: Jan Melen, Tero Kauppinen, Jari Arkko, Heikki Mahkonen, Fredrik Garneij, Christian Gotare
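    Sketch under invented names of the shared-cache arrangement above: several virtual DNS server instances resolve through one shared cache, so an answer obtained by one instance is reusable by the others.
```python
class SharedDnsCache:
    def __init__(self):
        self.records = {}                # query name -> cached answer

class VirtualDnsServer:
    def __init__(self, name, shared_cache, resolver):
        self.name = name
        self.cache = shared_cache
        self.resolver = resolver         # callable performing the real (slow) resolution

    def resolve(self, qname):
        if qname in self.cache.records:
            return self.cache.records[qname]      # record cached by any instance
        answer = self.resolver(qname)
        self.cache.records[qname] = answer        # cache the result for the other instances
        return answer

cache = SharedDnsCache()
a = VirtualDnsServer("vdns-a", cache, resolver=lambda q: "192.0.2.10")
b = VirtualDnsServer("vdns-b", cache, resolver=lambda q: "192.0.2.10")
a.resolve("www.example.com")
print(b.resolve("www.example.com"))   # answered from the shared cache, no second resolution
```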
  • Publication number: 20150074222
    Abstract: A method and apparatus is disclosed herein for load balancing and dynamic scaling for a storage system. In one embodiment, an apparatus comprises a load balancer to direct read requests for objects, received from one or more clients, to at least one of one or more cache nodes based on a global ranking of objects, where each cache node serves the object to a requesting client from its local storage in response to a cache hit or downloads the object from the persistent storage and serves the object to the requesting client in response to a cache miss, and a cache scaler communicably coupled to the load balancer to periodically adjust a number of cache nodes that are active in a cache tier based on performance statistics measured by one or more cache nodes in the cache tier.
    Type: Application
    Filed: August 29, 2014
    Publication date: March 12, 2015
    Inventors: Guanfeng Liang, Ulas C. Kozat, Chris Xiao Cai
  • Patent number: 8977717
    Abstract: An approach is provided for initiating sending a request message indicating a parameter for an application to an index of parameter values stored in a database for a plurality of related applications. A value for the parameter is received in response to sending the request. Performing a function of the application based on the value received for the parameter is initiated. The value for the parameter is used by a different mobile application of the plurality of related applications.
    Type: Grant
    Filed: June 17, 2009
    Date of Patent: March 10, 2015
    Assignee: Nokia Corporation
    Inventors: Ville Aarni, Miikka Sainio, Niklas Von Knorring, Dmitry Kolesnikov, Atte Lahtiranta
  • Patent number: 8972517
    Abstract: In general, methods and apparatus according to the invention mitigate redundant content downloads and related issues by implementing the caching techniques described herein. So when one device in a home network downloads and plays a particular content (e.g., a video, song) from a given site, the content is cached within the network such that the same content is available to be re-played on another device without re-downloading the same content from the Internet.
    Type: Grant
    Filed: April 25, 2012
    Date of Patent: March 3, 2015
    Assignee: Ikanos Communications, Inc.
    Inventors: Jonathan J. Black, Pramod B. Kaluskar
  • Patent number: 8971196
    Abstract: Network traffic information from multiple sources, at multiple time scales, and at multiple levels of detail are integrated so that users may more easily identify relevant network information. The network monitoring system stores and manipulates low-level and higher-level network traffic data separately to enable efficient data collection and storage. Packet traffic data is collected, stored, and analyzed at multiple locations. The network monitoring locations communicate summary and aggregate data to central modules, which combine this data to provide an end-to-end description of network traffic at coarser time scales. The network monitoring system enables users to zoom in on high-level, coarse time scale network performance data to one or more lower levels of network performance data at finer time scales.
    Type: Grant
    Filed: March 8, 2012
    Date of Patent: March 3, 2015
    Assignee: Riverbed Technology, Inc.
    Inventors: Loris Degioanni, Steven McCanne, Christopher J. White, Dimitri S. Vlachos
  • Patent number: 8959285
    Abstract: A method of servicing a command sent from a host device file system (HDFS) within a host device (HD) by a local storage device (LSD) in communication with the HD is described. The method includes receiving a first command at the LSD instructing the LSD to execute an operation on associated logical addresses. If the first command is associated with at least a first set of logical addresses, the method includes servicing the first command by the LSD at least by way of sending a second command to a device (RD) external to the LSD that instructs the RD to execute an operation on memory locations within the RD. If the first command is not associated with the first set of logical addresses, the method includes servicing the first command by the LSD only by way of operations executed by the LSD on memory locations within the LSD.
    Type: Grant
    Filed: April 10, 2008
    Date of Patent: February 17, 2015
    Assignee: SanDisk Technologies Inc.
    Inventors: Alain Nochimowski, Alon Marcu, Micha Rave, Itzhak Pomerantz
  • Patent number: 8959173
    Abstract: A method, system and program product for enabling migration of Virtual Machines with concurrent access to data across two geographically dispersed sites, to enable load balancing across the two sites, by presenting over a network a read writable logical volume at a first site and presenting over a network a read writable logical volume at a second geographically disparate site, wherein the first volume and the second volume are configured to contain the same information, and enabling read write access to the volume at the first site or the volume at the second site for a first virtual machine while keeping the data consistent between the two sites, to enable transparent migration of the virtual machine for load balancing across the two sites according to at least one load balancing metric.
    Type: Grant
    Filed: September 30, 2010
    Date of Patent: February 17, 2015
    Assignee: EMC Corporation
    Inventors: Gregory S. Robidoux, Balakrishnan Ganeshan, Yaron Dar, Kenneth J. Taylor, Txomin Barturen, Bradford B. Glade
  • Patent number: 8954575
    Abstract: Embodiments perform centralized input/output (I/O) path selection for hosts accessing storage devices in distributed resource sharing environments. The path selection accommodates loads along the paths through the fabric and at the storage devices. Topology changes may also be identified and automatically initiated. Some embodiments contemplate the hosts executing a plurality of virtual machines (VMs) accessing logical unit numbers (LUNs) in a storage area network (SAN).
    Type: Grant
    Filed: May 23, 2012
    Date of Patent: February 10, 2015
    Assignee: VMware, Inc.
    Inventors: Krishna Raj Raja, Ajay Gulati
  • Publication number: 20150039717
    Abstract: In accordance with one aspect of the present description, in response to detection by a storage controller of an operation by a host relating to migration of input/output operations from one host to another, a cache server of the storage controller transmits to a target cache client of the target host a cache map of the source cache of the source host, wherein the cache map identifies locations of a portion of the storage cached in the source cache. In response, the cache client of the target host may populate the target cache of the target host with data from the locations of the portion of the storage, as identified by the cache map transmitted by the cache server, which may reduce cache warming time. Other features or advantages may be realized in addition to or instead of those described herein, depending upon the particular application.
    Type: Application
    Filed: August 2, 2013
    Publication date: February 5, 2015
    Applicant: International Business Machines Corporation
    Inventors: Lawrence Y. Chiu, Hyojun Kim, Paul H. Muench, Sangeetha Seshadri
  • Publication number: 20150039716
    Abstract: Management of a networked storage system through a storage area network (SAN). The storage system includes a storage host, a server, and a management host. The storage host includes a plurality of storage devices. The server is configured to access the storage devices of the storage host via the SAN. The server is also configured to transmit attribute information via the SAN, where the attribute information describes at least one attribute of the server. The management host is configured to receive the attribute information and to determine a desired configuration change to the storage system based on the attribute information. The desired configuration change affects access by the server to the storage devices of the storage host via the SAN.
    Type: Application
    Filed: August 1, 2013
    Publication date: February 5, 2015
    Applicant: Coraid, Inc.
    Inventors: Robert J. Przykucki, JR., Samuel A. Hopkins
  • Patent number: 8949550
    Abstract: The present invention relates to a coarse-grained reconfigurable array, comprising: at least one processor; a processing element array including a plurality of processing elements, and a configuration cache where commands being executed by the processing elements are saved; and a plurality of memory units forming a one-to-one mapping with the processor and the processing element array. The coarse-grained reconfigurable array further comprises a central memory performing data communications between the processor and the processing element array by switching the one-to-one mapping such that when the processor transfers data from/to a main memory to/from a frame buffer, a significant bottleneck phenomenon that may occur due to the limited bandwidth and latency of a system bus can be improved.
    Type: Grant
    Filed: June 1, 2010
    Date of Patent: February 3, 2015
    Assignee: SNU R&DB Foundation
    Inventors: Ki Young Choi, Kyung Wook Chang, Jong Kyung Paek
  • Patent number: 8949312
    Abstract: An embodiment generally relates to a method of updating clients from a server. The method includes maintaining a master copy of a software on a server and capturing changes to the master copy of the software on an update disk image, where the changes are contained in at least one chunk. The method also includes merging the update disk image with one of two client disk images of the client copy of the software.
    Type: Grant
    Filed: May 25, 2006
    Date of Patent: February 3, 2015
    Assignee: Red Hat, Inc.
    Inventors: Mark McLoughlin, William Nottingham, Timothy Burke
  • Publication number: 20150032839
    Abstract: Systems and methods for managing storage entities in a storage network are provided. Embodiments may provide a group of management devices to manage a plurality of storage entities in the storage network. In some instances, a storage entity hierarchy for the plurality of storage entities may be identified. At least one of a load or a health associated with a management device of the group of management devices may, in embodiments, be determined. In some embodiments, the plurality of storage entities may be managed in accordance with the identified storage entity hierarchy and based, at least in part, on the determined at least one of a load or a health.
    Type: Application
    Filed: July 26, 2013
    Publication date: January 29, 2015
    Applicant: NetApp, Inc.
    Inventors: Sergey Serokurov, Stephanie He, Dennis Ramdass
  • Publication number: 20150026290
    Abstract: The present invention provides a method for managing cloud hard disks. A plurality of hard disk spaces are first logged in to a client unit. When the client unit accesses at least one personal datum, it can then allocate the accessed datum according to the hard disk space it requires, without checking the plurality of hard disk spaces one by one. Accordingly, the hard disk spaces become more convenient for users to use.
    Type: Application
    Filed: July 14, 2014
    Publication date: January 22, 2015
    Inventors: Jun-Hui Wu, Yan-Jiun Lin
  • Publication number: 20150019680
    Abstract: Systems and methods for consistent hashing using multiple hash rings. An example method may comprise: assigning two or more tokens to each node of a plurality of nodes, the two or more tokens belonging to two or more distinct cyclic sequences of tokens, wherein each node is assigned a token within each cyclic sequence; receiving a request comprising an attribute of an object; determining, based on the attribute, a sequence identifier and an object position, the sequence identifier identifying a sequence of the two or more cyclic sequences of tokens, the object position identifying a position of the object within the sequence; and identifying, based on the sequence identifier and the object position, a node for servicing the request.
    Type: Application
    Filed: July 15, 2013
    Publication date: January 15, 2015
    Inventor: Jeff Darcy
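    Hedged sketch of consistent hashing over multiple rings as the abstract describes (the hash function and ring count are assumptions): every node owns one token per ring, and a request's attribute picks both the ring (sequence identifier) and the object position on it.
```python
import bisect
import hashlib

def _hash(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class MultiRing:
    def __init__(self, nodes, num_rings=2):
        self.num_rings = num_rings
        # rings[i] is a sorted list of (token, node); each node owns one token per ring.
        self.rings = [
            sorted((_hash(f"{node}:{ring_id}"), node) for node in nodes)
            for ring_id in range(num_rings)
        ]

    def node_for(self, object_key: str):
        ring_id = _hash(object_key) % self.num_rings       # sequence identifier
        position = _hash(f"{ring_id}:{object_key}")        # object position within that sequence
        ring = self.rings[ring_id]
        tokens = [token for token, _ in ring]
        idx = bisect.bisect_right(tokens, position) % len(ring)  # next token clockwise
        return ring[idx][1]

ring = MultiRing(["node-a", "node-b", "node-c"])
print(ring.node_for("object-42"))   # deterministic node choice for servicing this request
```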
  • Publication number: 20150012609
    Abstract: An apparatus and method for improving effective system throughput for replication of data over a network in a storage computing environment by using software components to perform data compression are disclosed. Software compression support is determined between applications in a data storage computing environment. If supported, compression parameters are negotiated for a communication session between storage systems over a network. Effective system throughput is improved since the size of a compressed lost data packet is less than the size of an uncompressed data packet when a lost packet needs to be retransmitted in a transmission window.
    Type: Application
    Filed: May 27, 2014
    Publication date: January 8, 2015
    Inventor: Vijay Singh
  • Patent number: 8930364
    Abstract: A storage controller is implemented for controlling a storage system. The storage controller may be implemented using a distributed computer system and may include components for servicing client data requests based on the characteristics of the distributed computer system, the client, or the data requests. The storage controller is scalable independently of the storage system it controls. All components of the storage controller, as well as the client, may be virtual or hardware-based instances of a distributed computer system.
    Type: Grant
    Filed: March 29, 2012
    Date of Patent: January 6, 2015
    Assignee: Amazon Technologies, Inc.
    Inventors: Marc J. Brooker, Madhuvanesh Parthasarathy, Tate Andrew Certain, Kerry Q. Lee
  • Publication number: 20150006486
    Abstract: Examples of the present disclosure disclose a method and an apparatus for storing data. In the method, an extendable two-dimensional data buffer array is configured in a local shared memory according to a preset configuration policy, the two-dimensional data buffer array comprising multiple logic data blocks, and each of the multiple logic data blocks comprising multiple sub data blocks for storing data. Data on a network storage device is stored into the sub data block corresponding to the data according to the requirements of the service logic. According to the examples of the present disclosure, the data is stored locally, thereby improving the efficiency of data storage and data exchange. In addition, the extensibility of the data structure is improved since the two-dimensional data buffer array for storing data is extendable, and thus the requirements of the service logic are satisfied.
    Type: Application
    Filed: September 5, 2014
    Publication date: January 1, 2015
    Inventors: Aimin Lin, Jiong Tang, Junshuai Wang
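    Illustrative sketch of the extendable two-dimensional buffer described above (the block sizes and placement rule are assumptions): each logic data block holds a fixed number of sub data blocks, and the array grows by appending a new logic data block when the last one fills up.
```python
class TwoDimensionalBuffer:
    def __init__(self, sub_blocks_per_logic_block=4):
        self.sub_blocks_per_logic_block = sub_blocks_per_logic_block
        self.logic_blocks = [[]]          # list of logic data blocks, each a list of sub data blocks

    def store(self, item):
        last = self.logic_blocks[-1]
        if len(last) == self.sub_blocks_per_logic_block:
            self.logic_blocks.append([])  # extend the array with a new logic data block
            last = self.logic_blocks[-1]
        last.append(item)
        return len(self.logic_blocks) - 1, len(last) - 1   # (logic block, sub block) indices

buf = TwoDimensionalBuffer()
for n in range(6):
    print(buf.store(f"record-{n}"))       # rolls over to logic block 1 after 4 sub data blocks
```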