Multicomputer Data Transferring Via Shared Memory Patents (Class 709/213)
  • Patent number: 10318430
    Abstract: Embodiments relate to a system operation queue for a transaction. An aspect includes determining whether a system operation is part of an in-progress transaction of a central processing unit (CPU). Another aspect includes based on determining that the system operation is part of the in-progress transaction, storing the system operation in a system operation queue corresponding to the in-progress transaction. Yet another aspect includes, based on the in-progress transaction ending, processing the system operation in the system operation queue.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: June 11, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jonathan D. Bradbury, Michael K. Gschwind, Eric M. Schwarz
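    Illustrative sketch: a minimal, hypothetical Python model (an editorial addition, not text from the patent) of deferring system operations that arrive during an in-progress transaction and processing them when the transaction ends; the names SystemOpQueues, submit and end_transaction are invented for illustration.

      from collections import defaultdict, deque

      class SystemOpQueues:
          """Defers system operations that arrive while a transaction is in progress."""
          def __init__(self):
              self.queues = defaultdict(deque)    # transaction id -> queued operations

          def submit(self, op, txn_id=None):
              if txn_id is not None:              # operation is part of an in-progress transaction
                  self.queues[txn_id].append(op)  # store it in the per-transaction queue
              else:
                  op()                            # not transactional: execute immediately

          def end_transaction(self, txn_id):
              """Process everything queued for the transaction once it ends."""
              while self.queues[txn_id]:
                  self.queues[txn_id].popleft()()
              del self.queues[txn_id]

      q = SystemOpQueues()
      q.submit(lambda: print("flush TLB"), txn_id=7)  # deferred until the transaction ends
      q.end_transaction(7)                            # runs the queued operation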
  • Patent number: 10318343
    Abstract: The present application discloses a virtual machine migration method and apparatus. A specific implementation of the method includes: receiving a migration request for migrating a virtual machine, wherein to-be-migrated data of the virtual machine comprises local data locally stored and shared data accessible by the virtual machine at a plurality of locations; determining migration operations respectively corresponding to the local data and the shared data in response to the migration request; and executing the migration operations corresponding to the local data and the shared data, to complete migration of the virtual machine. This implementation achieves the migration of a virtual machine with a hybrid storage mode, that is, a storage mode in which the data to be migrated includes both local data and shared data.
    Type: Grant
    Filed: November 27, 2015
    Date of Patent: June 11, 2019
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Yu Zhang, Zhen Xu, Feifei Cao, Guangjun Xie
  • Patent number: 10320931
    Abstract: The present application discloses a method for caching data, which is capable of determining a caching location of content according to a popularity of the content and the impact of caching the content on network bandwidth, so as to save network bandwidth. The method includes: receiving a first data packet from a server, where a caching gain included in the first data packet is a maximum value of local caching gains of all forwarding devices on a first content delivery path; and caching first content included in the first data packet when determining that the caching gain in the first data packet matches a local caching gain corresponding to an identity of the first content, where the local caching gain corresponding to the identity of the first content is calculated from a first parameter and a popularity of the first content.

    Type: Grant
    Filed: October 21, 2016
    Date of Patent: June 11, 2019
    Assignee: Huawei Technologies Co., Ltd.
    Inventor: Shucheng Liu
  • Patent number: 10320680
    Abstract: Network devices, such as load balancers, may be configured to route requests to hosts that are responding in a shorter period of time than other hosts. Sometimes hosts respond in shorter periods of time due to errors (they short-circuit). Such behavior may cause a spike in failed requests and increase the impact of a host malfunction. Disclosed is an enhanced load balancing algorithm that reduces request loads to hosts that are responding to requests more quickly than expected or historically observed. A load balancer tracks the hosts' performance. Upon detecting response times shorter than expected from a host, the load balancer reduces the load on that host. Request routing returns to the normal distribution once the host again behaves according to its known performance profile.
    Type: Grant
    Filed: November 4, 2015
    Date of Patent: June 11, 2019
    Assignee: Amazon Technologies, Inc.
    Inventor: Nima Sharifi Mehr
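    Illustrative sketch: a hypothetical Python model (editorial addition, not the patented algorithm) of backing off a host whose responses arrive much faster than its history suggests; the names AdaptiveBalancer, alpha and fast_ratio, and the specific weight adjustments, are invented for illustration.

      import random

      class AdaptiveBalancer:
          """Shrinks a host's share of traffic when it answers far faster than its history."""
          def __init__(self, hosts, alpha=0.1, fast_ratio=0.3):
              self.baseline = {h: None for h in hosts}   # EWMA of observed response times
              self.weight = {h: 1.0 for h in hosts}      # relative routing weight
              self.alpha, self.fast_ratio = alpha, fast_ratio

          def record(self, host, response_time):
              base = self.baseline[host]
              if base is not None and response_time < self.fast_ratio * base:
                  self.weight[host] = max(0.1, self.weight[host] * 0.5)  # suspiciously fast: back off
              else:
                  self.weight[host] = min(1.0, self.weight[host] + 0.1)  # behaving normally: recover
              self.baseline[host] = response_time if base is None else \
                  (1 - self.alpha) * base + self.alpha * response_time

          def pick(self):
              hosts, weights = zip(*self.weight.items())
              return random.choices(hosts, weights=weights, k=1)[0]

      lb = AdaptiveBalancer(["host-a", "host-b"])
      lb.record("host-a", 120); lb.record("host-a", 10)   # sudden fast responses: weight drops
      print(lb.weight["host-a"], lb.pick())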
  • Patent number: 10311028
    Abstract: Implementations of the present disclosure involve a system and/or method for replication size estimation and progress monitoring for a file system residing on a computing system. The replication progress monitoring system obtains a first snapshot of a file system for a first point in time and a second snapshot of the file system for a second point in time. The system may then calculate the difference between the first snapshot size and the second snapshot size and add a released data size to the difference. The released data size includes the size of any blocks of data included in the first snapshot and released before the second snapshot was taken. The replication transfer size may then be estimated by adding the snapshot size difference to the released size estimate.
    Type: Grant
    Filed: May 16, 2013
    Date of Patent: June 4, 2019
    Assignee: Oracle International Corporation
    Inventors: Richard J. Morris, Waikwan Hui, Mark Maybee
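    Illustrative sketch: the estimate described in the abstract above as a one-line Python function (editorial addition; the function name and example numbers are invented).

      def estimate_replication_size(snap1_size, snap2_size, released_size):
          """Transfer estimate = (second snapshot size - first snapshot size)
          + size of blocks present in the first snapshot but released before the second."""
          return (snap2_size - snap1_size) + released_size

      # e.g. 120 GiB -> 150 GiB with 10 GiB of blocks released in between: ~40 GiB to transfer
      print(estimate_replication_size(120, 150, 10))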
  • Patent number: 10304506
    Abstract: Systems, apparatuses, and methods for implementing dynamic clock control to increase stutter efficiency in a memory subsystem are disclosed. A system includes at least a processor, a memory, and a communication fabric coupled to the processor and memory. The system implements a stutter mode for a first region of the fabric, with stutter mode including an idle state and an active state. Stutter efficiency is defined as the idle time divided by the sum of the active time and the idle time. Reducing the exit latency of going from the idle state to the active state increases the stutter efficiency which increases the power savings achieved by implementing the stutter mode. Since the phase-locked loop (PLL) is one of the main contributors to the exit latency, the PLL is powered down and one or more bypass clocks are provided during the stutter mode.
    Type: Grant
    Filed: November 10, 2017
    Date of Patent: May 28, 2019
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Alexander J. Branover, Benjamin Tsien, Bradley Kent, Joyce C. Wong
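    Illustrative sketch: the stutter-efficiency definition from the abstract above as a Python function (editorial addition; the example timings are invented and only show how lower exit latency raises the idle fraction).

      def stutter_efficiency(idle_time, active_time):
          """Stutter efficiency = idle time / (active time + idle time)."""
          return idle_time / (active_time + idle_time)

      # Cutting exit latency (e.g. by bypassing the PLL) converts latency into extra idle time:
      print(stutter_efficiency(idle_time=800, active_time=200))   # 0.8
      print(stutter_efficiency(idle_time=900, active_time=100))   # 0.9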
  • Patent number: 10305813
    Abstract: Generally, this disclosure provides systems, methods and computer readable media for management of sockets and device queues for reduced latency packet processing. The method may include maintaining a unique-list comprising entries identifying device queues and an associated unique socket for each of the device queues, the unique socket selected from a plurality of sockets configured to receive packets; busy-polling the device queues on the unique-list; receiving a packet from one of the plurality of sockets; and updating the unique-list in response to detecting that the received packet was provided by an interrupt processing module. The updating may include identifying a device queue associated with the received packet; identifying a socket associated with the received packet; and if the identified device queue is not on one of the entries on the unique-list, creating a new entry on the unique-list, the new entry comprising the identified device queue and the identified socket.
    Type: Grant
    Filed: January 6, 2017
    Date of Patent: May 28, 2019
    Assignee: Intel Corporation
    Inventors: Eliezer Tamir, Eliel Louzoun, Matthew R. Wilcox
  • Patent number: 10305982
    Abstract: A method begins by identifying, when a first migration of a plurality of sets of encoded data slices from an original source storage set to an intermediate destination storage set is active, a new destination storage set of a second migration for the plurality of sets of encoded data slices. The method continues by issuing migration requests to storage sets associated with current storage of the plurality of sets of encoded data slices in accordance with a first cursor identifying a particular DSN storage address of a corresponding set of encoded data slices and, when the second migration is active, facilitating processing of a data access request to produce a data access response utilizing the first cursor of the first migration and a second cursor identifying a particular DSN address of a corresponding set of encoded data slices that is next up for migration.
    Type: Grant
    Filed: January 16, 2017
    Date of Patent: May 28, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Andrew D. Baptist, Manish Motwani
  • Patent number: 10306007
    Abstract: Embodiments of the present invention include a cache content hit method to ensure that a charging and interception apparatus performs interception and charging on content accessed by a user. The method in the embodiments of the present invention includes: receiving a content request message that carries a content identifier; searching local cache content for content corresponding to the content identifier; when it is determined that the content corresponding to the content identifier is stored, sending the content to the access network device; and forwarding, by the edge cache apparatus to a core cache apparatus, the content request message to which a content hit identifier has been added, so that the cache control apparatus discards the content in the response message according to the content hit identifier.
    Type: Grant
    Filed: February 10, 2017
    Date of Patent: May 28, 2019
    Assignee: Huawei Software Technologies Co., Ltd.
    Inventors: Shaolin Zhang, Xiongjun Shi
  • Patent number: 10296599
    Abstract: A method, system and computer program product for sharing resources among remote repositories. In a shared file system, a resource identifier and metadata are created for a resource, where the resource identifier is stored in a lock file in a shared volume accessible by the remote repositories. The lock file is then released in response to distributing the associated resource to the remote repositories. Alternatively, in a peer-to-peer system, a request is received to create, read, update or delete a resource stored in a content repository. A resource name, a resource version and/or a resource fingerprint are received in connection with the request to create, read, update or delete the resource in the content repository. A determination is then made as to whether the received resource name, resource version and/or resource fingerprint matches the respective resource name, resource version and/or resource fingerprint stored in a node graph for the resource.
    Type: Grant
    Filed: April 11, 2015
    Date of Patent: May 21, 2019
    Assignee: International Business Machines Corporation
    Inventors: Barry P. Gower, Larry R. Hamann, Andrew S. Myers, Seth R. Peterson, Davanum M. Srinivas, Donald R. Woods
  • Patent number: 10298688
    Abstract: Disclosed are a cloud storage managing system, a cloud storage managing method, and an apparatus for same. To achieve the objective according to the present invention, the cloud storage managing apparatus according to the present invention comprises: a content alignment unit for aligning and shifting content recorded on a cloud storage, by transmitting a shift signal to the cloud storage; and a broker application programming interface (API) providing unit for abstracting, into a broker API, a storage API which corresponds to the type of the cloud storage by using an API mapping table, and providing the content to a terminal device by using the broker API.
    Type: Grant
    Filed: December 26, 2013
    Date of Patent: May 21, 2019
    Assignee: SK TECHX CO., LTD.
    Inventor: Seung-Won Na
  • Patent number: 10289306
    Abstract: A data storage system has multi-core processing circuitry and processes data movement requests using a multi-threaded library component having an initial operation of invoking an underlying driver to read data, and subsequent operations of copying data, invoking an underlying driver to write data, and initiating additional data movement operations as necessary to complete data movement for an entire range of the data movement request. Core-affined threads are used to execute library component operations for data movement requests of associated per-core queues. Data movement requests are distributed among the per-core queues for parallel processing of the data movement requests by the respective core-affined threads, and the execution of a core-affined thread includes initially starting the thread on the affined core to perform the initial operation, and subsequently re-starting the thread on the affined core to perform each of the subsequent operations.
    Type: Grant
    Filed: January 31, 2018
    Date of Patent: May 14, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Changyu Feng, Henry Austin Spang, IV, Jian Gao, Xinlei Xu, Lifeng Yang
  • Patent number: 10289438
    Abstract: One embodiment is a method and includes monitoring by a module associated with a first application component installed on a first virtual machine (“VM”) a state of at least one second application component installed on a second VM and on which a state of the first application component is at least partially dependent, in which the state of the at least one second application component is made available by a module associated with the at least one application component; determining that the state of the at least one second application component has changed from a first state to a second state; and updating the state of the first application component based on a current state of the first application component and the second state of the at least one second application component.
    Type: Grant
    Filed: June 16, 2016
    Date of Patent: May 14, 2019
    Assignee: CISCO TECHNOLOGY, INC.
    Inventors: Dwight Rodney Frye, Jr., Michael J. Lemen, Vinod Jagannath Damle
  • Patent number: 10282764
    Abstract: Organizing data in a cloud computing environment having a plurality of computing nodes is described. An authorization to service a request is received. The request may be from a user for launching an instance. In response to receiving the authorization and based on the request, an image list is determined. The image list includes information corresponding to a plurality of machine images. At least one machine image is identified from the image list associated with a functional requirement of the request. The instance is launched at the at least one computing node. The at least one machine image is updated after the instance has been launched.
    Type: Grant
    Filed: August 31, 2017
    Date of Patent: May 7, 2019
    Assignee: Oracle International Corporation
    Inventors: Willem Robert Van Biljon, Christopher Conway Pinkham, Russell Andrew Cloran, Michael Carl Gorven, Alexandre Hardy, Brynmor K. B. Divey, Quinton Robin Hoole, Girish Kalele
  • Patent number: 10284732
    Abstract: Methods and devices for masking latency may include detecting a pause in receiving an image stream from an imaging device and generating one or more virtual image frames, each including a status indicator to indicate a status of the imaging device when the pause in receiving the image stream is detected. The methods and devices may also include generating, at the operating system, a data stream with the one or more virtual image frames inserted after a last image frame of the received image stream. In addition, the methods and devices may include transmitting the data stream to an application.
    Type: Grant
    Filed: November 30, 2016
    Date of Patent: May 7, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Naveen Thumpudi, Louis-Philippe Bourret
  • Patent number: 10277547
    Abstract: Data communications may be carried out in a distributed computing environment that includes a plurality of computers coupled for data communications through communications adapters and an active messaging interface (‘AMI’). In the distributed computing environment, data communications may include: receiving in the AMI from an application an eager SEND instruction that describes the location and size of send data in an application SEND buffer; copying by the AMI the send data from the application SEND buffer to a temporary AMI buffer; advising the application of completion of the SEND instruction before sending the SEND data to the receiver; and after advising the application of completion of the SEND instruction, sending the SEND data by the sender to the receiver.
    Type: Grant
    Filed: August 27, 2013
    Date of Patent: April 30, 2019
    Assignee: International Business Machines Corporation
    Inventors: Charles J. Archer, Michael A. Blocksome, James E. Carey, Philip J. Sanders
  • Patent number: 10277707
    Abstract: Apparatuses, methods and storage medium associated with a memcached system are disclosed herein. In embodiments, a client device of the memcached system may include memory and one or more processors coupled with the memory. Further, the client device may include memcached logic configured to receive a request to Get or Set a value corresponding to a key in the memcached system, determine, in response to receiving the request, whether the key results in a hit in a local cache maintained in memory by the memcached logic, and service the Get or Set request based at least in part on whether a result of the determination indicates the key results in a hit in the local cache. In embodiments, a server of the memcached system may include complementary memcached logic to serve a Get, Set or Update request. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: June 26, 2014
    Date of Patent: April 30, 2019
    Assignee: Intel Corporation
    Inventors: Xiangbin Wu, Shunyu Zhu, Yingzhe Shen, Tin-Fook Ngai
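    Illustrative sketch: a hypothetical Python model (editorial addition, not the patented client) of Get/Set logic that consults a client-side local cache before contacting the server; MemcachedClient and the dict standing in for the server are invented names.

      class MemcachedClient:
          """Client-side Get/Set that checks a local cache before going to the server."""
          def __init__(self, server):
              self.server = server        # stand-in for the remote memcached server (a dict here)
              self.local = {}             # local cache maintained by the client-side logic

          def get(self, key):
              if key in self.local:       # hit in the local cache: no network round trip
                  return self.local[key]
              value = self.server.get(key)
              if value is not None:
                  self.local[key] = value
              return value

          def set(self, key, value):
              self.local[key] = value     # keep the local copy coherent with what was written
              self.server[key] = value

      server = {}
      c = MemcachedClient(server)
      c.set("user:1", "alice")
      print(c.get("user:1"))              # served from the local cache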
  • Patent number: 10275541
    Abstract: The present disclosure includes apparatuses and methods for proactive corrective actions in memory based on a probabilistic data structure. A number of embodiments include a memory, and circuitry configured to input information associated with a subset of data stored in the memory into a probabilistic data structure and proactively determine, at least partially using the probabilistic data structure, whether to take a corrective action on the subset of data stored in the memory.
    Type: Grant
    Filed: August 5, 2016
    Date of Patent: April 30, 2019
    Assignee: Micron Technology, Inc.
    Inventors: Saeed Sharifi Tehrani, Sivagnanam Parthasarathy
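    Illustrative sketch: the abstract above does not say which probabilistic data structure is used; a small Bloom filter is shown here purely for illustration (editorial addition, with invented names such as BloomFilter and suspect_blocks).

      import hashlib

      class BloomFilter:
          """Compact probabilistic set used to remember subsets of data flagged for attention."""
          def __init__(self, bits=1024, hashes=3):
              self.bits, self.hashes = bits, hashes
              self.array = bytearray(bits // 8)

          def _positions(self, item):
              for i in range(self.hashes):
                  digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                  yield int.from_bytes(digest[:4], "big") % self.bits

          def add(self, item):
              for pos in self._positions(item):
                  self.array[pos // 8] |= 1 << (pos % 8)

          def probably_contains(self, item):
              return all(self.array[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

      suspect_blocks = BloomFilter()
      suspect_blocks.add("block:0x1f00")                      # subset flagged by error telemetry
      if suspect_blocks.probably_contains("block:0x1f00"):
          print("proactively schedule corrective action for block 0x1f00")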
  • Patent number: 10275184
    Abstract: Techniques are described herein for executing queries on distinct portions of a database object that has been separated into chunks and distributed across the volatile memories of a plurality of nodes in a clustered database system. The techniques involve receiving a query that requires work to be performed on data that resides in a plurality of on-disk extents. A parallel query coordinator that is aware of the in-memory distribution divides the work into granules that align with the in-memory separation. The parallel query coordinator then sends each granule to the database server instance with local in-memory access to the data required by the granule and aggregates the results to respond to the query.
    Type: Grant
    Filed: July 22, 2015
    Date of Patent: April 30, 2019
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Niloy Mukherjee, Vineet Marwah, Hui Jin, Kartik Kulkarni
  • Patent number: 10270903
    Abstract: To handle a failover condition, a media server receives a request, from a first application server, to stream a first media message in a media channel of a communication session. The first media message is streamed in the media channel of the communication session by the media server. Once the first media message has ended, a status message can be sent to the first application server to determine if the first application server has failed. If a response to the status message is not received (i.e., because the first application server has failed), the media server can stream a second media message during a period where a second application server is failing over for the first application server. If a response to the status message is received, the second media message is not streamed.
    Type: Grant
    Filed: December 14, 2015
    Date of Patent: April 23, 2019
    Assignee: Avaya Inc.
    Inventor: Joel Ezell
  • Patent number: 10268419
    Abstract: A hierarchy of multiple levels of storage resources and associated QOS (quality of service) limits and buckets of tokens may be specified. A different QOS limit may be applied to each individual storage resource. The buckets may denote current amounts of tokens available for consumption in connection with servicing I/O operations. Each bucket may denote a current amount of available tokens for a corresponding storage resource included in the hierarchy. Processing may include receiving a first I/O operation directed to a first storage resource, and determining, in accordance with the buckets of available tokens, whether to service the first I/O operation.
    Type: Grant
    Filed: April 27, 2017
    Date of Patent: April 23, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Dmitry Nikolayevich Tylik, Kenneth Hu, Qi Jin, William Whitney, Karl M. Owen
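    Illustrative sketch: a hypothetical Python model (editorial addition, not the patented mechanism) of hierarchical token buckets where an I/O is serviced only if every level of the hierarchy has tokens; Bucket, try_consume and the example rates are invented.

      import time

      class Bucket:
          """Token bucket for one storage resource; parent buckets model higher hierarchy levels."""
          def __init__(self, rate, capacity, parent=None):
              self.rate, self.capacity = rate, capacity
              self.tokens, self.last = capacity, time.monotonic()
              self.parent = parent

          def _refill(self):
              now = time.monotonic()
              self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
              self.last = now

          def try_consume(self, n=1):
              """Service an I/O only if this level and every ancestor level have tokens."""
              self._refill()
              if self.tokens < n:
                  return False
              if self.parent is not None and not self.parent.try_consume(n):
                  return False
              self.tokens -= n
              return True

      pool = Bucket(rate=1000, capacity=1000)            # e.g. a storage-pool level limit
      lun = Bucket(rate=200, capacity=200, parent=pool)  # e.g. a LUN limit under that pool
      print(lun.try_consume())   # True while both the LUN and pool buckets have tokens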
  • Patent number: 10270875
    Abstract: A technology is described for managing dynamic groups of devices using device representations. An example method may include receiving a request for a dynamic group of device representations. In response to the request, a membership parameter used to identify member device representations included in the dynamic group of device representations may be obtained. Device representations may be queried using the membership parameter to identify member device representations that have a state that corresponds to the membership parameter, and the dynamic group of device representations may be generated to include identifiers for the member device representations.
    Type: Grant
    Filed: September 19, 2016
    Date of Patent: April 23, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Calvin Yue-Ren Kuo, Mark Edward Rafn, James Christopher Sorenson, III, Shyam Krishnamoorthy, Jonathan I. Turow, William Alexander Stevenson
  • Patent number: 10270738
    Abstract: A technology is described for operating a device shadowing service that calculates an aggregated group state for a group of device representations. An example method may include receiving device states for devices represented using a group of device representations, where the devices connect over a network to a device shadowing service configured to manage the device states. In response to an event, device representations included in the group of device representations may be identified. Device states indicated by the device representations may be obtained and an aggregated group state for the group of device representations may be calculated using the device states indicated by the device representations.
    Type: Grant
    Filed: September 19, 2016
    Date of Patent: April 23, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Calvin Yue-Ren Kuo, William Alexander Stevenson, James Christopher Sorenson, III, Shyam Krishnamoorthy, Jonathan I. Turow, Mark Edward Rafn
  • Patent number: 10271073
    Abstract: A system that incorporates teachings of the present disclosure may include, for example, a server comprising a memory to store executable instructions and a controller coupled to the memory. The controller, responsive to executing the instructions, performs operations comprising presenting a graphical user interface enabling selectable advertisements or a selectable channel distribution service for delivery to a set top box, presenting filters for targeted delivery of advertisements to subscriber equipment based on information descriptive of the subscriber, selecting advertisements from an advertising server based on detected selections of advertisements or a detected channel distribution service preference, transmitting a globally unique identifier of the set top box to a billing server, initiating storage of the selectable advertisements at a remote advertisement delivery server, and presenting the selectable advertisements to the set top box. Other embodiments are disclosed.
    Type: Grant
    Filed: August 19, 2016
    Date of Patent: April 23, 2019
    Assignee: AT&T Intellectual Property I, L.P.
    Inventors: Dean Debois, Frank Coppa, James D. Huffman
  • Patent number: 10261900
    Abstract: A transmission device (300) comprising a data cache system is provided with a data acquisition part (315) for acquiring volume data indicating the volume of transactionable products or services transmitted from a server. The transmission device (300) is provided with a saving part (320) for saving the acquired volume data in an information memory (390) as cache data, and a request acquisition part (330) for acquiring requests seeking volume output. The transmission device (300) is provided with a determination part (350) for determining whether or not the cache data is valid based on the elapsed time from when the cache data was received or saved and the volume indicated by the cache data, when a request is acquired. When the determination is that the cache data is invalid, the data acquisition part (315) receives new volume data from the server, and the saving part (320) saves the new volume data as new cache data in the information memory (390).
    Type: Grant
    Filed: March 29, 2013
    Date of Patent: April 16, 2019
    Assignee: Rakuten, Inc.
    Inventor: SeungHee Lee
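    Illustrative sketch: a hypothetical Python model (editorial addition) of cache validity that depends on both the elapsed time since saving and the volume indicated by the cached data; VolumeCache and the threshold parameters are invented, and the tighter age limit for low volumes is one possible reading of the determination rule.

      import time

      class VolumeCache:
          """Caches volume data; validity depends on both age and the cached volume value."""
          def __init__(self, fetch, max_age=60.0, low_volume=10, low_volume_max_age=5.0):
              self.fetch = fetch                  # callable that retrieves fresh data from the server
              self.max_age, self.low_volume = max_age, low_volume
              self.low_volume_max_age = low_volume_max_age
              self.volume, self.saved_at = None, 0.0

          def _valid(self):
              if self.volume is None:
                  return False
              age = time.monotonic() - self.saved_at
              # Scarce items go stale faster, so use a tighter age limit when the volume is low.
              limit = self.low_volume_max_age if self.volume <= self.low_volume else self.max_age
              return age <= limit

          def get_volume(self):
              if not self._valid():
                  self.volume = self.fetch()      # receive new volume data from the server
                  self.saved_at = time.monotonic()
              return self.volume

      cache = VolumeCache(fetch=lambda: 3)
      print(cache.get_volume())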
  • Patent number: 10262151
    Abstract: A data storage service is provided in the cloud agnostically to any user account or identity through the use of data storage containers, which are accessed using unique identifiers and independently of any user-based context. The data storage service runs in a backend system and creates a data storage container along with a unique ID that identifies the data storage container from among multiple such containers. Once a cloud-based application receives the unique ID, the cloud-based application may itself assign the data storage container to any user, or to no user, in accordance with the cloud-based application's own programming.
    Type: Grant
    Filed: March 11, 2015
    Date of Patent: April 16, 2019
    Assignee: Citrix Systems, Inc.
    Inventors: Steven Dale McFerrin, Gustavo Teixeira Pinto, Philip John Wiebe
  • Patent number: 10257873
    Abstract: A method and electronic device for providing a tethering service is provided. The electronic device of the present disclosure includes a communication interface comprising communication circuitry and a processor configured to establish a direct connection with at least one external electronic device located in operable proximity of the electronic device using the communication interface, to check a predetermined input, to establish a session for connecting the at least one external device to at least one communication network via the electronic device based on the predetermined input, and to connect the at least one external electronic device to the at least one communication network via the electronic device during at least part of the direct connection session.
    Type: Grant
    Filed: January 20, 2017
    Date of Patent: April 9, 2019
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Buseop Jung, Vimal Bastin Edwin Joseph, Hyuk Kang, Inji Jin
  • Patent number: 10248996
    Abstract: A mobile end-user device includes a secure mobile payment agent. The mobile device indicates to network transaction servers, operated by various third parties, that it has the mobile payment agent capability. A user operating a not-necessarily-secure device application can indicate to a network transaction server the desire to make a purchase. The network transaction server opens a secure connection to the mobile payment agent, which verifies the transaction server as authorized to use mobile payments. The transaction server can then request that the mobile payment agent complete the desired purchase by having the user perform a confirmation action, after which the agent indicates a completed purchase transaction to the server. The mobile payment agent can communicate with a billing server to provide various aspects of the mobile payment capability.
    Type: Grant
    Filed: April 5, 2017
    Date of Patent: April 2, 2019
    Assignee: Headwater Research LLC
    Inventor: Gregory G. Raleigh
  • Patent number: 10243925
    Abstract: A cloud-based firewall system and service is provided to protect customer sites from attacks, leakage of confidential information, and other security threats. In various embodiments, such a firewall system and service can be implemented in conjunction with a content delivery network (CDN) having a plurality of distributed content servers. The CDN servers receive requests for content identified by the customer for delivery via the CDN. The CDN servers include firewalls that examine those requests and take action against security threats, so as to prevent them from reaching the customer site. The CDN provider implements the firewall system as a managed firewall service, with the operation of the firewalls for given customer content being defined by that customer, independently of other customers. In some embodiments, a customer may define different firewall configurations for different categories of that customer's content identified for delivery via the CDN.
    Type: Grant
    Filed: December 24, 2015
    Date of Patent: March 26, 2019
    Assignee: Akamai Technologies, Inc.
    Inventors: John A. Dilley, Prasanna Laghate, John F. Summers, Thomas Devanneaux
  • Patent number: 10237127
    Abstract: A method includes (a) receiving from several different storage systems, several discovery packets, each different type of system having a different minimal amount of configuration information required to initialize, (b) displaying an identifier for each system from which a discovery packet was received, (c) receiving a selection of a particular system, (d) determining which type of system the selected system is, each type being associated with a distinct set of initialization parameter selection choices reflecting the minimal amount of configuration information required to initialize a system of that type, (e) displaying the initialization parameter selection choices associated with the determined type of system, (f) receiving initialization parameter selections responsive to the displayed initialization parameter selection choices, and (g) sending an initialization command to the selected particular system including the received initialization parameter selections to allow the selected particular system to be initialized.
    Type: Grant
    Filed: September 28, 2012
    Date of Patent: March 19, 2019
    Assignee: EMC IP Holding Company LLC
    Inventors: Seth B. Horan, Kendra Marchant, Domenic Joseph LaRosa, Michael Richard Turco, Nidhish Chandran Ramachandran
  • Patent number: 10235223
    Abstract: Disclosed are various embodiments for a high-performance computing framework for cloud computing environments. A parallel computing application executable by at least one computing device of the cloud computing environment can call a message passing interface (MPI) to cause a first one of a plurality of virtual machines (VMs) of a cloud computing environment to store a message in a queue storage of the cloud computing environment, wherein a second one of the plurality of virtual machines (VMs) is configured to poll the queue storage of the cloud computing environment to access the message and perform a processing of data associated with the message. The parallel computing application can call the message passing interface (MPI) to access a result of the processing of the data from the queue storage, the result of the processing being placed in the queue storage by the second one of the plurality of virtual machines (VMs).
    Type: Grant
    Filed: May 20, 2015
    Date of Patent: March 19, 2019
    Assignee: Georgia State University Research Foundation, Inc.
    Inventors: Sushil K. Prasad, Sara Karamati, Dinesh Agarwal
  • Patent number: 10223213
    Abstract: A method for execution by a dispersed storage network (DSN), the method begins by injecting generated data into a data segment to produce mixed data, partitioning the mixed data to produce first and second data partitions, performing a deterministic function on the first data partition to produce a first key, encrypting the second data partition using the first key to produce an encrypted second data partition, performing the deterministic function on the encrypted second data partition to produce a second key, encrypting the first data partition using the second key to produce an encrypted first data partition, performing the deterministic function on the encrypted first data partition to produce a third key, encrypting the encrypted second data partition to produce a re-encrypted second data partition, aggregating the encrypted first data partition and the re-encrypted second data partition to produce a secure package, and encoding the secure package and storing.
    Type: Grant
    Filed: November 6, 2017
    Date of Patent: March 5, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Jason K. Resch
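    Illustrative sketch: a hypothetical Python walk-through (editorial addition) of the chained key-derivation and encryption steps described in the abstract above; SHA-256 stands in for the deterministic function and a toy XOR keystream stands in for the cipher, neither of which is specified by the patent.

      import hashlib, os

      def h(data: bytes) -> bytes:
          """Deterministic function (SHA-256 here) used to derive keys from partitions."""
          return hashlib.sha256(data).digest()

      def xor_encrypt(data: bytes, key: bytes) -> bytes:
          """Toy stream cipher: XOR with a SHA-256-derived keystream (illustration only)."""
          stream, counter = b"", 0
          while len(stream) < len(data):
              stream += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
              counter += 1
          return bytes(a ^ b for a, b in zip(data, stream))

      def secure_package(segment: bytes) -> bytes:
          mixed = os.urandom(16) + segment                 # inject generated data into the segment
          half = len(mixed) // 2
          p1, p2 = mixed[:half], mixed[half:]              # partition the mixed data
          k1 = h(p1)
          e2 = xor_encrypt(p2, k1)                         # encrypt partition 2 with key from partition 1
          k2 = h(e2)
          e1 = xor_encrypt(p1, k2)                         # encrypt partition 1 with key from encrypted 2
          k3 = h(e1)
          re2 = xor_encrypt(e2, k3)                        # re-encrypt partition 2 with key from encrypted 1
          return e1 + re2                                  # aggregate into the secure package

      print(len(secure_package(b"example data segment")))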
  • Patent number: 10225361
    Abstract: A caching management method includes embedding a notification request tag in a dummy file, uploading the dummy file to a cache server, recording a timestamp indicating a first point in time that the dummy file is uploaded to the cache server, receiving an eviction notification indicating a second point in time that the dummy file is evicted from the cache server, and calculating an eviction time indicating an amount of time taken for the dummy file to be evicted from the cache server. Transmission of the eviction notification is triggered in response to processing the notification request tag, and the dummy file is not retrieved from the cache server between the first point in time and the second point in time. The eviction time is equal to a difference between the first point in time and the second point in time.
    Type: Grant
    Filed: June 29, 2016
    Date of Patent: March 5, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Michel Hack, Yufei Ren, Yandong Wang, Li Zhang
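    Illustrative sketch: a hypothetical Python model (editorial addition) of measuring eviction time as the difference between when a never-read dummy file is uploaded and when the eviction notification fires; the tiny LRU class NotifyingCache stands in for the real cache server and its notification-request tag.

      import time
      from collections import OrderedDict

      class NotifyingCache:
          """Tiny LRU cache that calls back when an entry is evicted (stands in for the cache server)."""
          def __init__(self, capacity):
              self.capacity, self.items, self.on_evict = capacity, OrderedDict(), {}

          def put(self, key, value, on_evict=None):
              self.items[key] = value
              if on_evict:
                  self.on_evict[key] = on_evict           # models the embedded notification-request tag
              if len(self.items) > self.capacity:
                  victim, _ = self.items.popitem(last=False)
                  cb = self.on_evict.pop(victim, None)
                  if cb:
                      cb(victim)                          # eviction notification

      uploaded_at = {}

      def evicted(name):
          # eviction time = second point in time (eviction) - first point in time (upload)
          print(f"eviction time: {time.monotonic() - uploaded_at[name]:.6f}s")

      cache = NotifyingCache(capacity=2)
      uploaded_at["dummy"] = time.monotonic()             # timestamp recorded at upload
      cache.put("dummy", b"", on_evict=evicted)           # dummy file is never retrieved again
      cache.put("a", b"1")
      cache.put("b", b"2")                                # pushes the dummy out -> notification fires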
  • Patent number: 10223030
    Abstract: A computer-implemented method includes writing, by a producer, data to one or more buffers. The one or more buffers include a plurality of cells and together form a circular buffer, and an input cursor indicates which cell of the plurality of cells the producer writes to. The method further includes reading, by a consumer, data from the one or more buffers, where an output cursor indicates which cell of the plurality of cells the consumer reads from. It is detected that the consumer is overrun by the producer. A throughput of the consumer is compared to a throughput of the producer, responsive to detecting that the consumer is overrun by the producer. The output cursor is synchronized to a new position, by a computer processor, where the new position is selected based on comparing the throughput of the consumer to the throughput of the producer.
    Type: Grant
    Filed: April 4, 2018
    Date of Patent: March 5, 2019
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Anthony T. Sofia
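    Illustrative sketch: a hypothetical Python ring buffer (editorial addition) showing detection of a consumer overrun and resynchronization of the output cursor based on a producer/consumer throughput comparison; the specific resync positions chosen here are an invented heuristic, not the patented policy.

      class RingBuffer:
          """Single-producer/single-consumer ring; on overrun, the output cursor is resynchronized."""
          def __init__(self, cells):
              self.cells = [None] * cells
              self.inp = 0          # input cursor: next cell the producer writes
              self.out = 0          # output cursor: next cell the consumer reads
              self.produced = 0
              self.consumed = 0

          def write(self, item):
              self.cells[self.inp % len(self.cells)] = item
              self.inp += 1
              self.produced += 1

          def read(self):
              if self.inp - self.out > len(self.cells):        # consumer has been overrun
                  if self.produced > self.consumed * 2:        # producer far faster: skip further ahead
                      self.out = self.inp - len(self.cells) // 2
                  else:                                        # modest gap: keep as much data as possible
                      self.out = self.inp - len(self.cells)
              if self.out == self.inp:
                  return None                                  # nothing new to read
              item = self.cells[self.out % len(self.cells)]
              self.out += 1
              self.consumed += 1
              return item

      rb = RingBuffer(4)
      for i in range(10):
          rb.write(i)             # producer overruns the consumer
      print(rb.read())            # output cursor was moved to a new position before reading (-> 8)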
  • Patent number: 10223431
    Abstract: Techniques for facilitating and accelerating log data processing by splitting data streams are disclosed herein. The front-end clusters generate large amount of log data in real time and transfer the log data to an aggregating cluster. The aggregating cluster is designed to aggregate incoming log data streams from different front-end servers and clusters. The aggregating cluster further splits the log data into a plurality of data streams so that the data streams are sent to a receiving application in parallel. In one embodiment, the log data are randomly split to ensure the log data are evenly distributed in the split data streams. In another embodiment, the application that receives the split data streams determines how to split the log data.
    Type: Grant
    Filed: January 31, 2013
    Date of Patent: March 5, 2019
    Assignee: Facebook, Inc.
    Inventors: Samuel Rash, Dhruba Borthakur, Zheng Shao, Eric Hwang
  • Patent number: 10225348
    Abstract: A first request is received from a first processing node to produce data blocks of a first data stream representing a first communication topic. The first processing node is one of the processing nodes handling a specific function of operating an autonomous vehicle. Each of the processing nodes is executed within a specific node container having a specific operating environment. A global memory segment is allocated from a global memory to store the data blocks of the first data stream. A first local memory segment is mapped to the global memory segment. The first local memory segment is allocated from a first local memory of a first node container containing the first processing node. The first processing node directly accesses the data blocks of the first data stream stored in the global memory segment by accessing the mapped first local memory segment within the first node container.
    Type: Grant
    Filed: July 21, 2016
    Date of Patent: March 5, 2019
    Assignee: BAIDU USA LLC
    Inventors: Quan Wang, Liming Xia, Jingchao Feng, Ning Qu, James Peng
  • Patent number: 10218585
    Abstract: Systems and methods for performing discovery of hosts to be employed for hosting containerized applications. An example method may comprise: transmitting, to a host management service employed to manage at least one of: a plurality of host computer systems or a plurality of virtual machines running on one or more host computer systems, a host discovery request comprising a host definition rule (e.g., defining an amount of available memory, a networking configuration parameter, a storage configuration parameter, or a processor type identifier); receiving, from the host management service, an identifier of a host that satisfies the host definition rule; and providing the identifier of the host to a container orchestration service employed to instantiate and run, on one or more hosts, a plurality of containerized applications.
    Type: Grant
    Filed: February 19, 2015
    Date of Patent: February 26, 2019
    Assignee: Red Hat, Inc.
    Inventor: Federico Simoncelli
  • Patent number: 10219118
    Abstract: The disclosure relates to a transmission node and a method for wirelessly providing a number of data packets to a plurality of receivers in a cell of a transmission node of a cellular telecommunications system. The method comprises the steps of storing a number of network coded data packets at the transmission node and cyclically transmitting the stored network coded data packets from the transmission node to the plurality of receivers. The number of transmitted network coded data packets in a cycle is at least equal to the number of data packets to be provided to each receiver of the plurality of receivers and each network coded data packet is a linear combination of two or more data packets to be provided to each receiver.
    Type: Grant
    Filed: December 14, 2016
    Date of Patent: February 26, 2019
    Assignees: KONINKLIJKE KPN N.V., NEDERLANDSE ORGANISATIE VOOR TOEGEPAST-NATUURWETENSCHAPPELIJK ONDERZOEK TNO, UNIVERSITY OF TWENTE
    Inventors: Jasper Goseling, Ljupco Jorguseski
  • Patent number: 10209891
    Abstract: Techniques for improving flash memory flushing are disclosed. In some embodiments, the techniques may be realized as a method for improving flash memory flushing including receiving a request to write to flash memory, writing data associated with the request to the flash memory, identifying a pointer to a region bitmap corresponding to a write region for the write request, marking a bit of the region bitmap corresponding to the request as dirty, and updating the pointer, using a pointer management component, to the region bitmap to contain a dirty block count.
    Type: Grant
    Filed: August 24, 2015
    Date of Patent: February 19, 2019
    Assignee: WESTERN DIGITAL TECHNOLOGIES, INC.
    Inventor: Daniel Peter Noé
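    Illustrative sketch: a hypothetical Python model (editorial addition) of marking region-bitmap bits dirty on each write and keeping a dirty-block count alongside the bitmap pointer; RegionTracker and its fields are invented names, and the pointer packing is only gestured at.

      class RegionTracker:
          """Per-region dirty bitmap; the pointer record also carries the dirty-block count."""
          def __init__(self, regions, blocks_per_region):
              self.blocks_per_region = blocks_per_region
              # region pointer -> [bitmap, dirty_count]; a real device would pack these together
              self.regions = {r: [0, 0] for r in range(regions)}

          def write(self, region, block, flash_write):
              flash_write(region, block)                   # write the data to flash first
              bitmap, count = self.regions[region]
              bit = 1 << block
              if not bitmap & bit:                         # mark the block dirty exactly once
                  self.regions[region] = [bitmap | bit, count + 1]

          def dirty_count(self, region):
              return self.regions[region][1]               # available without scanning the bitmap

      tracker = RegionTracker(regions=4, blocks_per_region=64)
      tracker.write(0, 5, flash_write=lambda r, b: None)
      tracker.write(0, 5, flash_write=lambda r, b: None)   # repeat write: count stays at 1
      print(tracker.dirty_count(0))                        # 1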
  • Patent number: 10209906
    Abstract: A technique includes receiving a command in a target port, where the command is provided by an initiator and is associated with a write operation. The technique includes, in response to the command, using the target to process a data transfer for the initiator associated with the write operation. The processing includes, based on a characteristic of the command, selectively using memory for the transfer pre-allocated by a storage array controller prior to receipt of the command by the target port or requesting an allocation of memory for the transfer from the storage array controller.
    Type: Grant
    Filed: October 31, 2013
    Date of Patent: February 19, 2019
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Roopesh Kumar Tamma, Siamak Nazari, Ajitkumar A. Natarajan
  • Patent number: 10210018
    Abstract: A method and apparatus for optimizing quiescence in a transactional memory system is herein described. Non-ordering transactions, such as read-only transactions, transactions that do not access non-transactional data, and write-buffering hardware transactions, are identified. Quiescence in weak atomicity software transactional memory (STM) systems is optimized through selective application of quiescence. As a result, transactions may be decoupled from dependency on quiescing/waiting on previous non-ordering transaction to increase parallelization and reduce inefficiency based on serialization of transactions.
    Type: Grant
    Filed: December 24, 2008
    Date of Patent: February 19, 2019
    Assignee: Intel Corporation
    Inventors: Tatiana Shpeisman, Ali-Reza Adl-Tabatabai, Vijay Menon
  • Patent number: 10209690
    Abstract: A computer system is communicably coupled to one or more sensor devices. The computer system obtains a database of stored acoustic signatures characterizing predefined acoustic signals generated by passive tags in response to physical motion of the passive tags. The passive tags are associated with non-provisioned devices, and the acoustic signatures are associated with sets of executable instructions for provisioning the non-provisioned devices. A first acoustic signal characterized by a respective acoustic signature and generated by a first passive tag is detected. In response, and based on the respective acoustic signature and information in the database, a first non-provisioned device associated with the respective acoustic signature is identified, and a first set of executable instructions for provisioning the first non-provisioned device is identified.
    Type: Grant
    Filed: January 15, 2016
    Date of Patent: February 19, 2019
    Assignee: GOOGLE LLC
    Inventors: Harry Tannenbaum, Benjamin Irvine, Shayan Sayadi, James Vanhook Singer
  • Patent number: 10212203
    Abstract: Stream-based data deduplication is provided in a multi-tenant shared infrastructure but without requiring “paired” endpoints having synchronized data dictionaries. Data objects processed by the dedupe functionality are treated as objects that can be fetched as needed. As such, a decoding peer does not need to maintain a symmetric library for the origin. Rather, if the peer does not have the chunks in cache that it needs, it follows a conventional content delivery network procedure to retrieve them. In this way, if dictionaries between pairs of sending and receiving peers are out-of-sync, relevant sections are then re-synchronized on-demand. The approach does not require that libraries maintained at a particular pair of sender and receiving peers are the same. Rather, the technique enables a peer, in effect, to “backfill” its dictionary on-the-fly. On-the-wire compression techniques are provided to reduce the amount of data transmitted between the peers.
    Type: Grant
    Filed: September 19, 2016
    Date of Patent: February 19, 2019
    Assignee: Akamai Technologies, Inc.
    Inventors: Charles E. Gero, Andrew F. Champagne, F. Thomson Leighton
  • Patent number: 10212045
    Abstract: The current document is directed to methods and systems for testing and analyzing the operational characteristics of management servers that manage multiple host systems in distributed computing systems on which virtual data centers and other types of virtual infrastructure are implemented. Management servers are generally designed to manage host systems that include particular types of virtualization layers, referred to as “native host systems.” In a described implementation, a management server is connected to a host-gateway appliance that includes host-gateway control logic implemented within a server computer. The host-gateway appliance allows a management server to interface to the management interfaces of non-native host systems that include virtualization layers to which the management server is not designed to interface.
    Type: Grant
    Filed: June 30, 2015
    Date of Patent: February 19, 2019
    Assignee: VMware, Inc.
    Inventors: Ivaylo Petkov Strandzhev, Danail Grigorov, Asen Alexandrov, Ilko Dragoev
  • Patent number: 10205989
    Abstract: The present technology is for optimizing storage on a computing device. A media application on the computing device can allocate a minimum amount of storage on the computing device. The media application can further be configured to automatically download and store media items added to a media library of an account associated with the computing device. The combination of these features can put strain on computing devices with limited amounts of storage. Accordingly, the present technology can automatically delete media items in cache to allow media items to be automatically downloaded, or allow other uses of storage by other applications on the computing device, while also preserving the minimum amount of storage of media items on the computing device.
    Type: Grant
    Filed: September 22, 2016
    Date of Patent: February 12, 2019
    Assignee: Apple Inc.
    Inventors: Thomas Alsina, Cody D. Jorgensen, Edward T. Schmidt, James H. Callender, Matthew J. Cielak, Taylor G. Carrigan
  • Patent number: 10204048
    Abstract: Replicating a primary application cache that serves a primary application on one network node into a secondary application cache that serves a secondary application on a second network node. Cache portions that are within the primary application cache are identified, and then identifiers (but not the cache portions) are transferred to the second network node. Once these identifiers are received, the cache portions that they identify may then be retrieved into the secondary application caches. This process may be repeatedly performed such that the secondary application cache moves towards the same state as the primary application cache though the state of the primary application cache also changes as the primary application operates by receiving read and write requests.
    Type: Grant
    Filed: May 24, 2017
    Date of Patent: February 12, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Nikhil Teletia, Jae Young Do, Kwanghyun Park, Jignesh M. Patel
  • Patent number: 10200889
    Abstract: Disclosed in the present application is a method for a terminal receiving data in a wireless communication system. Specifically, the method comprises the steps of determining at least one transmission subject for the data among one or more auxiliary nodes and a base station; receiving a distributed code from the determined at least one transmission subject; and obtaining the data from the distributed code, wherein the at least one transmission subject is determined based on the sum of the distributed codes stored in the auxiliary nodes and the number of auxiliary nodes existing within a predetermined distance from the terminal.
    Type: Grant
    Filed: March 18, 2016
    Date of Patent: February 5, 2019
    Assignees: LG ELECTRONICS INC., KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY, RESEARCH & BUSINESS FOUNDATION SUNGKYUNKWAN UNIVERSITY
    Inventors: Hanbyul Seo, Wan Choi, Dongin Kim, Bi Hong, Hojin Song, Tae Yeong Kim
  • Patent number: 10200410
    Abstract: A round-robin network security system implemented by a number of peer devices included in a plurality of networked peer devices. The round-robin security system permits the rotation of the system security controller among at least a portion of the peer devices. Each of the peer devices uses a defined trust assessment ruleset to determine whether the system security controller is trusted/trustworthy. An untrusted system security controller peer device is replaced by another of the peer devices selected by the peer devices. The current system security controller peer device transfers system threat information and security risk information collected from the peer devices to the new system security controller elected by the peer devices.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: February 5, 2019
    Assignee: Intel Corporation
    Inventors: Michael Hingston McLaughlin Bursell, Stephen T. Palermo, Chris MacNamara, Pierre Laurent, John J. Browne
  • Patent number: 10198397
    Abstract: Two computing devices utilizing remote direct memory access establish a send ring buffer on a sending computer and a receive ring buffer on a receiving computer that mirror one another. A message is copied into the ring buffer on the sending computer and a write edge pointer is updated to identify its end. The message is copied, by the sending computer, from its ring buffer into a ring buffer on the receiving computer. A process executing on the receiving computer periodically checks, at its write edge pointer, and, upon detecting the new message's header, it updates the location identified by the write edge pointer. Once the new message is copied out of the ring buffer at the receiving computer, a trailing edge pointer is updated and a process executing at the sending computer monitors the trailing edge pointer of the receiving computer and updates its own trailing edge pointer accordingly.
    Type: Grant
    Filed: November 18, 2016
    Date of Patent: February 5, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Chen Fu, John Grant Bennett
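    Illustrative sketch: a hypothetical, in-process Python model (editorial addition) of mirrored send/receive ring buffers with write-edge and trailing-edge pointers; an ordinary byte copy stands in for the one-sided RDMA write, and the message framing shown here is invented.

      class MirroredRing:
          """Ring buffer with a write-edge and a trailing-edge pointer, mirrored on both sides."""
          def __init__(self, size):
              self.buf = bytearray(size)
              self.write_edge = 0      # end of the newest message copied into the ring
              self.trailing_edge = 0   # everything before this offset has been consumed

      def rdma_copy(src, dst, start, end):
          dst.buf[start:end] = src.buf[start:end]   # stand-in for the one-sided RDMA write
          dst.write_edge = end                      # receiver later spots the new header here

      send_ring, recv_ring = MirroredRing(64), MirroredRing(64)
      msg = b"\x05hello"                            # 1-byte length header + payload
      start = send_ring.write_edge
      send_ring.buf[start:start + len(msg)] = msg   # copy the message into the send ring
      send_ring.write_edge += len(msg)
      rdma_copy(send_ring, recv_ring, start, send_ring.write_edge)
      recv_ring.trailing_edge = recv_ring.write_edge     # receiver copied the message out
      send_ring.trailing_edge = recv_ring.trailing_edge  # sender monitors receiver's trailing edge
      print(recv_ring.buf[start + 1:recv_ring.write_edge].decode())   # -> hello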
  • Patent number: 10198318
    Abstract: A nonvolatile memory device includes: a nonvolatile memory including a plurality of physical blocks; and a memory controller configured to execute an internal process of migrating data between physical blocks. The memory controller is configured to select, based on an update frequency level which is identified with respect to a logical address range from a higher-level apparatus, a physical block to be allocated to the logical address range from among the plurality of physical blocks. The memory controller is configured to determine, in the internal process, whether to set a migration destination level (an update frequency level of a migration destination physical block) to a same level as or a different level from a migration source level (an update frequency level of a migration source physical block) based on whether or not an attribute of the migration source physical block satisfies a prescribed condition.
    Type: Grant
    Filed: October 27, 2014
    Date of Patent: February 5, 2019
    Assignee: HITACHI, LTD.
    Inventors: Yoshihiro Oikawa, Hiroshi Hirayama, Kenta Ninose