Buffer Space Allocation Or Deallocation Patents (Class 710/56)
  • Patent number: 10445156
    Abstract: A data processing system arranged for receiving over a network, according to a data transfer protocol, data directed to any of a plurality of destination identities, the data processing system comprising: data storage for storing data received over the network; and a first processing arrangement for performing processing in accordance with the data transfer protocol on received data in the data storage, for making the received data available to respective destination identities; and a response former arranged for: receiving a message requesting a response indicating the availability of received data to each of a group of destination identities; and forming such a response; wherein the system is arranged to, in dependence on receiving the said message.
    Type: Grant
    Filed: May 4, 2016
    Date of Patent: October 15, 2019
    Assignee: Solarflare Communications, Inc.
    Inventors: Steven Leslie Pope, Derek Edward Roberts, David James Riddoch, Greg Law, Steve Grantham, Matthew Slattery
  • Patent number: 10437718
    Abstract: A method of prefetching data is provided including monitoring sequences of memory addresses of data being accessed by a system, whereby sequences of m+1 memory addresses each are continually identified; and for each identified sequence: converting, upon identifying said each sequence, memory addresses of said each sequence into m relative addresses, whereby each of the m relative addresses is relative to a previous memory address in said each sequence, so as to obtain an auxiliary sequence of m relative addresses; upon converting said memory addresses, feeding said auxiliary sequence of m relative addresses as input to a trained machine learning model for it to predict p relative addresses of next memory accesses by the system, where p≥1; and prefetching data at memory locations associated with one or more memory addresses that respectively correspond to one or more of the p relative addresses predicted.
    Type: Grant
    Filed: April 27, 2018
    Date of Patent: October 8, 2019
    Assignee: International Business Machines Corporation
    Inventors: Andreea Anghel, Peter Altevogt, Gero Dittmann, Cedric Lichtenau
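    A minimal Python sketch of the delta-based prefetch loop described in patent 10437718 above. The model interface, window length m, and prediction count p shown here are assumptions for illustration; the abstract does not prescribe a particular machine-learning model.

        from collections import deque

        M = 4  # m: length of the relative-address (delta) window fed to the model
        P = 2  # p: number of future deltas the model is asked to predict (p >= 1)

        class PrefetchMonitor:
            """Converts observed memory addresses into deltas and prefetches predicted ones."""

            def __init__(self, model, prefetch_fn):
                self.model = model              # trained model: list of m deltas -> list of deltas
                self.prefetch_fn = prefetch_fn  # issues a prefetch for an absolute address
                self.window = deque(maxlen=M + 1)  # last m+1 absolute addresses

            def observe(self, address):
                self.window.append(address)
                if len(self.window) < M + 1:
                    return
                seq = list(self.window)
                # m relative addresses, each relative to the previous address in the sequence
                deltas = [b - a for a, b in zip(seq, seq[1:])]
                base = address
                for d in self.model.predict(deltas)[:P]:
                    base += d
                    self.prefetch_fn(base)      # prefetch data at the predicted location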
  • Patent number: 10423462
    Abstract: Embodiments of the present invention provide systems and methods for dynamically allocating data to multiple nodes. The method includes determining the usage of multiple buffers and the capability factors of multiple servers. Data is then allocated to multiple buffers associated with multiple active servers, based on the determined usage and capability factors, in order to keep the processing load on the multiple servers balanced.
    Type: Grant
    Filed: February 9, 2016
    Date of Patent: September 24, 2019
    Assignee: International Business Machines Corporation
    Inventors: Mi W. Shum, DongJie Wei, Samuel H. K. Wong, Xin Ying Yang, Xiang Zhou
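    As an illustration of the balancing idea in patent 10423462 above, the rough Python sketch below weights each server by its capability factor discounted by current buffer usage; the specific weighting formula and field names are assumptions, not taken from the patent.

        def allocate(records, servers):
            """Distribute records across servers' buffers in proportion to spare capacity.

            `servers` is a list of dicts with hypothetical keys:
              'capability' - relative processing power of the server
              'usage'      - fraction of its buffer currently occupied (0.0 .. 1.0)
              'buffer'     - list acting as the server's input buffer
            """
            weights = [s['capability'] * (1.0 - s['usage']) for s in servers]
            total = sum(weights) or 1.0
            shares = [int(round(len(records) * w / total)) for w in weights]

            start = 0
            for server, share in zip(servers, shares):
                server['buffer'].extend(records[start:start + share])
                start += share
            # any rounding remainder goes to the least loaded server
            if start < len(records):
                least_loaded = min(servers, key=lambda s: s['usage'])
                least_loaded['buffer'].extend(records[start:])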
  • Patent number: 10394641
    Abstract: An apparatus and method are described for handling memory access operations, and in particular for handling faults occurring during the processing of such memory access operations. The apparatus has processing circuitry for executing program instructions that include memory access instructions, and a memory interface for coupling the processing circuitry to a memory system. The processing circuitry is switchable between a synchronous fault handling mode and an asynchronous fault handling mode. When in the synchronous fault handling mode the processing circuitry applies a constraint on execution of the program instructions such that a fault resulting from a memory access operation processed by the memory system will be received by the memory interface before the processing circuitry has allowed program execution to proceed beyond a recovery point for the memory access instruction associated with the memory access operation.
    Type: Grant
    Filed: April 10, 2017
    Date of Patent: August 27, 2019
    Assignee: ARM Limited
    Inventor: Simon John Craske
  • Patent number: 10390256
    Abstract: A communication method according to an embodiment comprises: transmitting, from a base station to a radio terminal, a list indicating association between an identification of a logical channel group and a priority; generating, by the radio terminal, a buffer status report for direct communication; and transmitting, from the radio terminal to the base station, the buffer status report. The buffer status report includes an index of a destination identifier, an identifier of the logical channel group, and a buffer amount associated with the identifier of the logical channel group. In generating the buffer status report, the radio terminal generates the buffer status report based on the list.
    Type: Grant
    Filed: November 14, 2017
    Date of Patent: August 20, 2019
    Assignee: KYOCERA Corporation
    Inventors: Takahiro Saiwai, Hiroyuki Adachi, Noriyoshi Fukuta, Masato Fujishiro
  • Patent number: 10298491
    Abstract: In response to a path monitoring task for a particular source/destination pair, a network controller determines whether stored information includes paths for the particular source/destination pair. When the stored information includes paths for the particular source/destination pair, a subset of source ports is selected that covers all the paths for the particular source/destination pair. A probe message is sent to cause an ingress switch to send probe packets using the subset of source ports. Paths for the particular source/destination pair are computed based on received probe packets. A determination is made whether a topology for the data center network has changed by comparing the paths computed based on the received probe packets for the particular source/destination pair with the paths included in the stored information for the particular source/destination pair.
    Type: Grant
    Filed: August 25, 2016
    Date of Patent: May 21, 2019
    Assignee: Cisco Technology, Inc.
    Inventors: Deepak Kumar, Yi Yang, Carlos M. Pignataro, Nagendra Kumar Nainar
  • Patent number: 10296437
    Abstract: A method is described that includes receiving an application and generating a representation of the application that describes specific states of the application and specific state transitions of the application. The method further includes identifying a region of interest of the application based on rules and observations of the application's execution. The method further includes determining specific stimuli that will cause one or more state transitions within the application to reach the region of interest. The method further includes enabling one or more monitors within the application's run time environment and applying the stimuli. The method further includes generating monitoring information from the one or more monitors. The method further includes applying rules to the monitoring information to determine a next set of stimuli to be applied to the application in pursuit of determining whether the region of interest corresponds to improperly behaving code.
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: May 21, 2019
    Assignee: FireEye, Inc.
    Inventors: Osman Abdoul Ismael, Dawn Song, Ashar Aziz, Noah Johnson, Prashanth Mohan, Hui Xue
  • Patent number: 10282293
    Abstract: A memory access method includes: receiving, by a switch, a data packet; matching the data packet against a flow table, where the flow table includes at least one flow entry, where the flow entry includes a matching field and an action field, and the at least one flow entry includes a first flow entry, where a matching field of the first flow entry is used to match source node information, destination node information, and a protocol type in the data packet, and an action field of the first flow entry is used to indicate an operation command for a storage device embedded in the switch; and when the data packet successfully matches the first flow entry, performing an operation on the storage device according to the operation command in the action field of the successfully matched first flow entry.
    Type: Grant
    Filed: May 26, 2017
    Date of Patent: May 7, 2019
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Yidong Tao, Rui He, Xiaowen Dong
  • Patent number: 10282236
    Abstract: Embodiments of the present invention provide systems and methods for dynamically allocating data to multiple nodes. The method includes determining the usage of multiple buffers and the capability factors of multiple servers. Data is then allocated to multiple buffers associated with multiple active servers, based on the determined usage and capability factors, in order to keep the processing load on the multiple servers balanced.
    Type: Grant
    Filed: April 21, 2015
    Date of Patent: May 7, 2019
    Assignee: International Business Machines Corporation
    Inventors: Mi W. Shum, DongJie Wei, Samuel H. K. Wong, Xin Ying Yang, Xiang Zhou
  • Patent number: 10284488
    Abstract: Aggregate socket resource management is presented herein. A system can comprise a processor; and a memory that stores executable instructions that, when executed by the processor, facilitate performance of operations, comprising: determining a present aggregate amount of data associated with processing requests from a socket; setting a defined aggregate data limit on the present aggregate amount of data; and in response to determining changes in a difference between the defined aggregate data limit and the present aggregate amount of data, modifying a defined data capacity limit on a data capacity of a receive buffer of the socket. In an example, the determining of the changes in the difference between the defined aggregate data limit and the present aggregate amount of data comprises reducing/increasing the defined data capacity limit in response to the difference being determined to be decreasing/increasing.
    Type: Grant
    Filed: March 30, 2016
    Date of Patent: May 7, 2019
    Assignee: EMC CORPORATION
    Inventor: John Gemignani, Jr.
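    The Python sketch below illustrates, under assumed names and step sizes, the feedback described in patent 10284488 above: as the headroom between the defined aggregate limit and the present aggregate amount of data shrinks or grows, the receive-buffer capacity limit is reduced or raised.

        class AggregateSocketManager:
            """Adjusts a socket's receive-buffer cap from aggregate headroom changes."""

            def __init__(self, aggregate_limit, initial_cap, min_cap=4096, max_cap=1 << 20):
                self.aggregate_limit = aggregate_limit  # defined aggregate data limit
                self.cap = initial_cap                  # defined data capacity limit (receive buffer)
                self.min_cap, self.max_cap = min_cap, max_cap
                self.prev_headroom = None

            def update(self, aggregate_in_flight):
                headroom = self.aggregate_limit - aggregate_in_flight
                if self.prev_headroom is not None:
                    if headroom < self.prev_headroom:      # difference decreasing -> shrink the cap
                        self.cap = max(self.min_cap, self.cap // 2)
                    elif headroom > self.prev_headroom:    # difference increasing -> grow the cap
                        self.cap = min(self.max_cap, self.cap * 2)
                self.prev_headroom = headroom
                return self.cap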
  • Patent number: 10255122
    Abstract: Generally, this disclosure provides systems, devices, methods and computer readable media for implementing function callback requests between a first processor (e.g., a GPU) and a second processor (e.g., a CPU). The system may include a shared virtual memory (SVM) coupled to the first and second processors, the SVM configured to store at least one double-ended queue (Deque). An execution unit (EU) of the first processor may be associated with a first of the Deques and configured to push the callback requests to that first Deque. A request handler thread executing on the second processor may be configured to: pop one of the callback requests from the first Deque; execute a function specified by the popped callback request; and generate a completion signal to the EU in response to completion of the function.
    Type: Grant
    Filed: November 24, 2015
    Date of Patent: April 9, 2019
    Assignee: Intel Corporation
    Inventors: Brian T. Lewis, Rajkishore Barik, Tatiana Shpeisman
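    A loose Python analogue of the callback flow in patent 10255122 above: a producer standing in for a GPU execution unit pushes callback requests onto a deque, and a handler thread pops them, runs the requested function, and signals completion. Shared virtual memory and multiple per-EU deques are not modeled, and all names are hypothetical.

        import threading
        from collections import deque

        class CallbackBroker:
            def __init__(self):
                self.deque = deque()          # stands in for the SVM-resident Deque
                self.available = threading.Event()

            def push_request(self, func, args, done_event):
                """Called by the producer (the 'execution unit') to request a callback."""
                self.deque.append((func, args, done_event))
                self.available.set()

            def handler_loop(self, stop_event):
                """Request-handler thread on the 'CPU' side."""
                while not stop_event.is_set():
                    self.available.wait(timeout=0.1)
                    try:
                        func, args, done_event = self.deque.popleft()
                    except IndexError:
                        self.available.clear()
                        continue
                    func(*args)          # execute the function named by the popped callback request
                    done_event.set()     # completion signal back to the requester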
  • Patent number: 10200485
    Abstract: A system transmits selected news feed stories to a client device in advance of receiving a request for news feed stories. As a result, stories are immediately available for viewing when a user interacts with the system. The system selects news feed stories to push based on criteria such as a likelihood that a user will interact with a story and the sizes of pushed stories. For example, the system selects news feed stories such that a total size of stories selected does not exceed a threshold value based on local memory at the client device. The system may determine a scheduled time at which the stories are selected and pushed. The scheduled time is based on factors including patterns of network connection speed or past user interactions, for example, a time range of the day during which the user most frequently viewed pushed stories.
    Type: Grant
    Filed: April 5, 2016
    Date of Patent: February 5, 2019
    Assignee: Facebook, Inc.
    Inventors: Christopher John Marra, Alexander A. Sourov, Alexandru Petrescu, Syed Shahbaz Ahmed, Lars Seren Backstrom
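    A toy sketch of the selection step in patent 10200485 above: rank candidate stories by predicted interaction likelihood and push them until the client's size budget is exhausted. The tuple layout and scoring inputs are assumptions.

        def select_stories_to_push(candidates, size_budget):
            """Pick stories so the total pushed size stays within the client's budget.

            `candidates` is a list of (story_id, interaction_likelihood, size_bytes).
            """
            ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
            selected, total = [], 0
            for story_id, likelihood, size in ranked:
                if total + size <= size_budget:
                    selected.append(story_id)
                    total += size
            return selected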
  • Patent number: 10169154
    Abstract: A system and method for data storage by shredding and deshredding of the data allows for various combinations of processing of the data to provide various resultant storage of the data. Data storage and retrieval functions include various combinations of data redundancy generation, data compression and decompression, data encryption and decryption, and data integrity by signature generation and verification. Data shredding is performed by shredders and data deshredding is performed by deshredders that have some implementations that allocate processing internally in the shredder and deshredder either in parallel to multiple processors or sequentially to a single processor. Other implementations use multiple processing through multi-level shredders and deshredders. Redundancy generation includes implementations using non-systematic encoding, systematic encoding, or a hybrid combination.
    Type: Grant
    Filed: December 1, 2014
    Date of Patent: January 1, 2019
    Assignee: International Business Machines Corporation
    Inventors: Douglas R. de la Torre, David W. Young
  • Patent number: 10089038
    Abstract: First in, first out (FIFO) memory queue architecture enabling a plurality of writers and a single reader to use the queue without mutual exclusive locking. The FIFO queue is implemented using an array. A write counter value associated with the array provides a reservation value to each writer that is mutually exclusive of the value provided to every other writer. A read counter value associated with the array prevents writers from writing over data messages stored in the array that are yet to be read by the single reader.
    Type: Grant
    Filed: June 27, 2016
    Date of Patent: October 2, 2018
    Assignee: Schneider Electric Software, LLC
    Inventors: Rade Ranković, Collin Miles Roth
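    A schematic Python rendering of the counter-based reservation in patent 10089038 above. CPython's global interpreter lock makes the ticket counter appear atomic here; a real implementation would rely on an atomic fetch-and-add, so treat this as a sketch of the bookkeeping only, not of memory ordering.

        import itertools

        class MPSCQueue:
            """Array-backed FIFO: many writers reserve slots via a write counter,
            a single reader advances a read counter; no mutual-exclusion lock."""

            def __init__(self, capacity):
                self.capacity = capacity
                self.slots = [None] * capacity
                self._ticket = itertools.count()  # stands in for an atomic fetch-and-add
                self.read_count = 0               # only the single reader updates this

            def try_put(self, item):
                ticket = next(self._ticket)       # mutually exclusive reservation per writer
                if ticket - self.read_count >= self.capacity:
                    return False                  # would overwrite an unread slot
                    # simplification: a rejected reservation is not recycled in this sketch
                self.slots[ticket % self.capacity] = item
                return True

            def try_get(self):
                # single reader: consume the next slot if a writer has filled it
                slot = self.read_count % self.capacity
                item = self.slots[slot]
                if item is None:
                    return None
                self.slots[slot] = None
                self.read_count += 1
                return item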
  • Patent number: 10082977
    Abstract: A computer-implemented method for storing data in a storage area includes: storing a first data unit in a first area of the storage area, in response to a request to store a first data unit having a first attribute in the storage area, when at least one data unit having the first attribute is stored in the first area; generating a second area by reducing the first area, in response to a request to store a second data unit having a second attribute in the storage area, when no data unit having the second attribute is stored in the storage area; and storing the second data unit in the second area.
    Type: Grant
    Filed: November 2, 2017
    Date of Patent: September 25, 2018
    Assignee: International Business Machines Corporation
    Inventors: Katsuyoshi Katori, Yutaka Oishi, Eiji Tosaka
  • Patent number: 10051292
    Abstract: An apparatus includes a circuit and a processor. The circuit may be configured to (i) generate a plurality of sets of coefficients by compressing a tile in a picture in a video signal at each of a plurality of different sizes of a plurality of coding units in a coding tree unit and (ii) reconstruct the tile based on a particular one of the sets of coefficients. The sets of coefficients may be generated at two or more of the different sizes of the coding units in parallel. Each of the sets of coefficients may be generated in a corresponding one of a plurality of pipelines that operate in parallel. Each of the sets of coefficients may have a same number of the coefficients. The processor may be configured to select the particular set of coefficients in response to the compression of the tile.
    Type: Grant
    Filed: January 30, 2017
    Date of Patent: August 14, 2018
    Assignee: Ambarella, Inc.
    Inventors: Leslie D. Kohn, Ellen M. Lee, Peter Verplaetse
  • Patent number: 10031808
    Abstract: A mechanism is provided in a data processing system. The mechanism determines a maximum queue depth of a queue for each solid state drive in a plurality of solid state drives. A given data block is mirrored between a group of solid state drives within the plurality of solid state drives. The mechanism tracks outstanding input/output operations in a queue for each of the plurality of solid state drives. For a given read operation to read the given data block, the mechanism identifies a solid state drive within the group of solid state drives based on a number of empty slots in the queue of each solid state drive within the group of solid state drives.
    Type: Grant
    Filed: June 13, 2016
    Date of Patent: July 24, 2018
    Assignee: International Business Machines Corporation
    Inventors: Michael T. Benhase, Andrew D. Walls
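    A compact sketch of the read-targeting rule in patent 10031808 above: among the drives mirroring a block, read from the one whose queue has the most empty slots. Field names are assumptions.

        def pick_mirror_for_read(mirror_group):
            """`mirror_group` is a list of dicts, one per SSD:
               'max_queue_depth' - maximum queue depth determined for that drive
               'outstanding'     - I/O operations currently tracked in its queue
            """
            def empty_slots(drive):
                return drive['max_queue_depth'] - drive['outstanding']
            return max(mirror_group, key=empty_slots)

        # usage sketch
        group = [
            {'id': 'ssd0', 'max_queue_depth': 32, 'outstanding': 30},
            {'id': 'ssd1', 'max_queue_depth': 64, 'outstanding': 12},
        ]
        target = pick_mirror_for_read(group)   # -> ssd1, which has 52 empty slots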
  • Patent number: 9998387
    Abstract: Some demonstrative embodiments include apparatuses, systems and/or methods of controlling data flow over a communication network. For example, an apparatus may include a communication unit to control the transfer of a stream of data from a first device to a second device over a communication link, the stream of data including data to be delivered to a plurality of endpoints. For example, the controlling may include communicating between the first and second devices at least one message including at least one endpoint-specific credit consumption unit (CCU) defined with respect to at least one endpoint of the plurality of endpoints.
    Type: Grant
    Filed: December 24, 2015
    Date of Patent: June 12, 2018
    Assignee: INTEL CORPORATION
    Inventors: Bahareh Sadeghi, Elad Levy, Rafal Wielicki, Marek Dabek, Oren Kedem
  • Patent number: 9946657
    Abstract: Systems for managing a multi-level cache in high-performance computing. A method is practiced over a multi-tier caching subsystem that comprises a first cache tier of random access memory and a second cache tier that comprises a block-oriented device, such as a solid-state drive, having a plurality of blocks of a minimum block size. Cache entries are initially stored in the first cache tier, including cache entries that are smaller than the minimum block size of the block-oriented device. During cache operations such as first-tier eviction, a plurality of smaller entries are packed into blocks of the minimum block size before being spilled into the second tier. If an entry in the packed block is accessed again, the entire packed block is brought into the first tier. A key structure is maintained to track individual invalidated entries in a packed block without invalidating other entries in the packed block.
    Type: Grant
    Filed: March 1, 2016
    Date of Patent: April 17, 2018
    Assignee: Nutanix, Inc.
    Inventors: Kannan Muthukkaruppan, Neil Le
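    A simplified sketch of the eviction-time packing in patent 9946657 above: small first-tier entries are packed into blocks of the second tier's minimum block size before being spilled. The first-fit packing order and the key-index shape are assumptions.

        MIN_BLOCK_SIZE = 4096  # minimum block size of the block-oriented second tier

        def pack_and_spill(evicted_entries, write_block, key_index):
            """Pack (key, bytes) entries into MIN_BLOCK_SIZE blocks and spill them.

            `write_block(block_id, payload)` writes one packed block to the second tier.
            `key_index` maps key -> (block_id, valid) so a single entry can later be
            invalidated without invalidating the rest of its packed block.
            """
            block, used, block_id = [], 0, 0
            for key, payload in evicted_entries:
                if used + len(payload) > MIN_BLOCK_SIZE and block:
                    write_block(block_id, b"".join(p for _, p in block))
                    for k, _ in block:
                        key_index[k] = (block_id, True)
                    block, used, block_id = [], 0, block_id + 1
                block.append((key, payload))
                used += len(payload)
            if block:
                write_block(block_id, b"".join(p for _, p in block))
                for k, _ in block:
                    key_index[k] = (block_id, True)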
  • Patent number: 9940273
    Abstract: Dynamic sharing of RAM in a software-defined communication system includes storing program code in a flash memory, categorizing parts of the code into groups of transmit categories according to when a part of the code needs to be copied into a section of a RAM and then executed during a first state of a TX state machine and according to how another part of the code can be later fit into the same section and then executed during a second state. Similarly, parts of the code are categorized into groups of receive categories according to when a part of the code needs to be copied into a section of RAM and then executed during a first state of a RX state machine and according to how another part of the code can be later fit into that section and then executed during a second state of the RX state machine, to reduce the amount of RAM without sacrificing speed performance.
    Type: Grant
    Filed: May 4, 2015
    Date of Patent: April 10, 2018
    Assignee: Texas Instruments Incorporated
    Inventors: Wenxun Qiu, Minghua Fu
  • Patent number: 9866539
    Abstract: Disclosed are systems and methods for protecting transmission of audio data from microphone to application process. An exemplary method includes receiving a request from a software process to obtain an audio stream from an audio endpoint device; allocating a data buffer for the software process; processing and encrypting audio data received from the audio endpoint device by audio processing objects; storing the encrypted audio data in the allocated data buffer; installing an interceptor of an API function call for the software process; and decrypting the encrypted audio data from the allocated data buffer by the software process using the interceptor of the API function call.
    Type: Grant
    Filed: July 5, 2016
    Date of Patent: January 9, 2018
    Assignee: AO Kaspersky Lab
    Inventors: Vyacheslav I. Levchenko, Alexander V. Kalinin
  • Patent number: 9836237
    Abstract: A computer-implemented method for storing data in a storage area includes: storing a first data unit in a first area of the storage area, in response to a request to store a first data unit having a first attribute in the storage area, when at least one data unit having the first attribute is stored in the first area; generating a second area by reducing the first area, in response to a request to store a second data unit having a second attribute in the storage area, when no data unit having the second attribute is stored in the storage area; and storing the second data unit in the second area.
    Type: Grant
    Filed: November 9, 2015
    Date of Patent: December 5, 2017
    Assignee: International Business Machines Corporation
    Inventors: Katsuyoshi Katori, Yutaka Oishi, Eiji Tosaka
  • Patent number: 9826321
    Abstract: Method for providing sound to at least one user, involves supplying audio signals from an audio signal source to a transmission unit; compressing the audio signals to generate compressed audio data; transmitting compressed audio data from the transmission unit to at least one receiver unit; decompressing the compressed audio data to generate decompressed audio signals; and stimulating the hearing of the user(s) according to decompressed audio signals supplied from the receiver unit. During certain time periods, transmission of compressed audio data is interrupted, and instead, at least one control data block is generated by the transmission unit in such a manner that audio data transmission is replaced by control data block transmission, thereby temporarily interrupting flow of received compressed audio data, each control data block includes a marker recognized by the at least one receiver unit as a control data block and a command for control of the receiver unit.
    Type: Grant
    Filed: May 8, 2017
    Date of Patent: November 21, 2017
    Assignee: Sonova AG
    Inventors: Amre El-Hoiydi, Marc Secall
  • Patent number: 9817700
    Abstract: A method, computer program product, and system for dynamically distributing data for parallel processing in a computing system, comprising allocating a data buffer to each of a plurality of data partitions, where each data buffer stores data to be processed by its corresponding data partition, distributing data in multiple rounds to the data buffers for processing by the data partitions, where in each round the data is distributed based on a determined data processing capacity for each data partition, and where a greater amount of data is distributed to the data partitions with higher determined processing capacities, and periodically monitoring usage of each data buffer and re-determining the determined data processing capacity of each data partition based on its corresponding data buffer usage.
    Type: Grant
    Filed: April 26, 2011
    Date of Patent: November 14, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Brian K. Caufield, Fan Ding, Mi Wan Shum, Dong Jie Wei, Samuel H K Wong
  • Patent number: 9811384
    Abstract: A method, computer program product, and system for dynamically distributing data for parallel processing in a computing system, comprising allocating a data buffer to each of a plurality of data partitions, where each data buffer stores data to be processed by its corresponding data partition, distributing data in multiple rounds to the data buffers for processing by the data partitions, where in each round the data is distributed based on a determined data processing capacity for each data partition, and where a greater amount of data is distributed to the data partitions with higher determined processing capacities, and periodically monitoring usage of each data buffer and re-determining the determined data processing capacity of each data partition based on its corresponding data buffer usage.
    Type: Grant
    Filed: June 27, 2012
    Date of Patent: November 7, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Brian K. Caufield, Fan Ding, Mi Wan Shum, Dong Jie Wei, Samuel H K Wong
  • Patent number: 9804995
    Abstract: This disclosure describes techniques for extending the architecture of a general purpose graphics processing unit (GPGPU) with parallel processing units to allow efficient processing of pipeline-based applications. The techniques include configuring local memory buffers connected to parallel processing units operating as stages of a processing pipeline to hold data for transfer between the parallel processing units. The local memory buffers allow on-chip, low-power, direct data transfer between the parallel processing units. The local memory buffers may include hardware-based data flow control mechanisms to enable transfer of data between the parallel processing units. In this way, data may be passed directly from one parallel processing unit to the next parallel processing unit in the processing pipeline via the local memory buffers, in effect transforming the parallel processing units into a series of pipeline stages.
    Type: Grant
    Filed: January 14, 2011
    Date of Patent: October 31, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Alexei V. Bourd, Andrew Gruber, Aleksandra L. Krstic, Robert J. Simpson, Colin Sharp, Chun Yu
  • Patent number: 9792196
    Abstract: A method is described that includes receiving an application and generating a representation of the application that describes specific states of the application and specific state transitions of the application. The method further includes identifying a region of interest of the application based on rules and observations of the application's execution. The method further includes determining specific stimuli that will cause one or more state transitions within the application to reach the region of interest. The method further includes enabling one or more monitors within the application's run time environment and applying the stimuli. The method further includes generating monitoring information from the one or more monitors. The method further includes applying rules to the monitoring information to determine a next set of stimuli to be applied to the application in pursuit of determining whether the region of interest corresponds to improperly behaving code.
    Type: Grant
    Filed: November 2, 2015
    Date of Patent: October 17, 2017
    Assignee: FireEye, Inc.
    Inventors: Osman Abdoul Ismael, Dawn Song, Ashar Aziz, Noah Johnson, Prashanth Mohan, Hui Xue
  • Patent number: 9742776
    Abstract: Systems and techniques are disclosed for receiving one or more recipient identifiers and a destination location from a user or an application. A uniform resource locator may be generated and may include a destination ID corresponding to the destination location. An entry containing the one or more recipient identifiers may be generated in an access control list for the destination location. A recipient may request access to the destination location by selecting the uniform resource locator. A recipient identifier may be determined for the recipient requesting the access and may be compared to entries in the access control list. If the recipient identifier matches an entry in the access control list, then the recipient may be granted access to the destination location.
    Type: Grant
    Filed: September 13, 2014
    Date of Patent: August 22, 2017
    Assignee: GOOGLE INC.
    Inventors: Justin Lewis, Ruxandra Georgiana Davies
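    A minimal sketch of the access flow in patent 9742776 above: generate a URL carrying a destination ID, record recipients in an access control list for that destination, and grant access only when the requester's identifier matches an ACL entry. The URL shape and data structures are assumptions.

        import uuid

        access_control_lists = {}   # destination_id -> set of recipient identifiers
        destinations = {}           # destination_id -> destination location

        def share(destination_location, recipient_ids):
            """Create a shareable URL and an ACL entry for the given recipients."""
            destination_id = uuid.uuid4().hex
            destinations[destination_id] = destination_location
            access_control_lists[destination_id] = set(recipient_ids)
            return f"https://example.invalid/d/{destination_id}"   # hypothetical URL format

        def resolve(url, requester_id):
            """Grant access to the destination only if the requester is on the ACL."""
            destination_id = url.rsplit("/", 1)[-1]
            if requester_id in access_control_lists.get(destination_id, set()):
                return destinations[destination_id]
            raise PermissionError("recipient identifier not in the access control list")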
  • Patent number: 9704583
    Abstract: A memory system includes a memory device including a plurality of memory chips, each of which includes a plurality of planes suitable for storing data and a plurality of page buffers respectively corresponding to the planes; and a controller suitable for transferring write data stored in a write buffer thereof to a first page buffer of a first chip, releasing the write buffer and a first plane corresponding to the first page buffer in the first chip after the transfer to the first page buffer, and programming the write data in the first plane after the release from the first plane.
    Type: Grant
    Filed: September 16, 2015
    Date of Patent: July 11, 2017
    Assignee: SK Hynix Inc.
    Inventor: Min-O Song
  • Patent number: 9654822
    Abstract: An electronic device displays a first video stream on a display. While displaying the first video stream on the display, the device allocates, in accordance with a historical pattern of video stream switching of a particular user, available bandwidth for receiving data at the device at least between receiving the first video stream and preloading a second, non-displayed video stream. The device receives the first video stream and preloads the second, non-displayed video stream in accordance with the allocated available bandwidth. The device receives a request to display the second video stream on the display. In response to receiving the request to display the second video stream on the display, the device displays the preloaded second video stream on the display.
    Type: Grant
    Filed: July 21, 2015
    Date of Patent: May 16, 2017
    Assignee: SPOTIFY AB
    Inventors: Eric Hoffert, Mike Berkley, Kevin Faaborg, Gustav Soderstrom
  • Patent number: 9619372
    Abstract: Embodiments of the present disclosure relate to methods and systems for hybrid testing, combining the optimization features of functional testing brought forth to security testing. One disclosed method may include receiving a list of input points associated with a software unit under test and assigning, by a processor, risk values to the input points based on one or more risk rating factors. The risk values may reflect security risk associated with the input points. The method may further include providing, to the software unit under test, input values indicative of a functional test for input points assigned values reflecting a low security risk and input values indicative of a security test for input points assigned values reflecting a high security risk. The method may further include executing a security test for the software unit under test using the input values.
    Type: Grant
    Filed: February 10, 2015
    Date of Patent: April 11, 2017
    Assignee: WIPRO LIMITED
    Inventor: Sourav Sam Bhattacharya
  • Patent number: 9596470
    Abstract: An apparatus having a circuit and a processor is disclosed. The circuit may be configured to (i) generate a plurality of sets of coefficients by compressing a block in a picture in a video signal at a plurality of different sizes of coding units in a coding tree unit and (ii) generate an output signal by entropy encoding a particular one of the sets of coefficients. Each set of coefficients may be generated in a corresponding one of a plurality of pipelines that operate in parallel. The processor may be configured to select the particular set of coefficients in response to the compressing.
    Type: Grant
    Filed: October 8, 2013
    Date of Patent: March 14, 2017
    Assignee: Ambarella, Inc.
    Inventors: Leslie D. Kohn, Ellen M. Lee, Peter Verplaetse
  • Patent number: 9591625
    Abstract: A technique for allocating an uplink data volume 504 to uplink data pending for transmission in a telecommunications device is provided. As to a method aspect of the technique, a grant of the uplink data volume is received. A portion 502 of the granted uplink data volume is reserved for transmission of a buffer status report, which is to be provided by a Data Link layer 300 of the telecommunications device. A size of the buffer status report depends on a number of channels for which uplink data is pending. If an unreserved portion 503 of the granted uplink data volume is not sufficiently sized for the pending uplink data, the unreserved portion and at least a part of the reserved portion are allocated to at least a portion of the pending uplink data when the allocation corresponds to a reduction of a number of channels for which uplink data is pending so that the buffer status report is at least reduced in size.
    Type: Grant
    Filed: July 11, 2012
    Date of Patent: March 7, 2017
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Hans Juergen Leicht, Joerg Armbruster, Gerhard Hierl, Christian Hofmann
  • Patent number: 9558122
    Abstract: In an example implementation, a method includes receiving an indication to reclaim memory from a cache, the cache including a plurality of data buckets each configured to store one or more records and corresponding access bits. The method also includes selecting a data bucket from the cache, and processing the selected data bucket. Processing the selected data bucket includes determining access bits of the selected data bucket that are clear, and expunging data records corresponding to those access bits from the cache. Processing the selected data bucket also includes determining access bits of the selected data bucket that are set and do not correspond to records relevant to outstanding requests by a process utilizing the cache, and clearing those access bits. The method also includes repeating selecting and processing data buckets until a stop criterion is satisfied.
    Type: Grant
    Filed: May 29, 2014
    Date of Patent: January 31, 2017
    Assignee: Apple Inc.
    Inventor: Kristen A. McIntyre
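    A small sketch of the reclamation pass in patent 9558122 above: walk buckets, expunge records whose access bits are clear, clear the bits of set-but-unpinned records, and stop once a target is met. The data-structure layout is an assumption.

        def reclaim(cache_buckets, is_pinned, target_records_to_free):
            """Each bucket is a dict: record_key -> {'accessed': bool, 'record': object}.
            `is_pinned(key)` reports whether a record is relevant to an outstanding request.
            """
            freed = 0
            for bucket in cache_buckets:
                for key in list(bucket):
                    entry = bucket[key]
                    if not entry['accessed']:
                        del bucket[key]            # access bit clear -> expunge from the cache
                        freed += 1
                    elif not is_pinned(key):
                        entry['accessed'] = False  # set but unpinned -> clear the access bit
                if freed >= target_records_to_free:   # stop criterion satisfied
                    break
            return freed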
  • Patent number: 9531647
    Abstract: A packet processor provides for rule matching of packets in a network architecture. The packet processor includes a lookup cluster complex having a number of lookup engines and respective on-chip memory units. The on-chip memory stores rules for matching against packet data. A lookup front-end receives lookup requests from multiple hosts, manages traffic among the hosts, and processes these lookup requests to generate key requests for forwarding to the lookup engines. As a result of the rule matching, the lookup engine returns a response message indicating whether a match is found. The lookup front-end further processes the response message and provides a corresponding response to the host.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: December 27, 2016
    Assignee: Cavium, Inc.
    Inventors: Rajan Goyal, Gregg A. Bouchard, Troy S. Dahlmann, Jeffrey Richard Hardesty, Karen A. Szypulski
  • Patent number: 9507723
    Abstract: A method for dynamically adjusting a cache buffer of a solid state drive includes receiving data, determining whether the data are continuous according to logical allocation addresses of the data, increasing a memory size of the cache buffer, searching the cache buffer for same data as at least one portion of the data, modifying and merging the at least one portion of the data with the same data already temporarily stored in the cache buffer, and temporarily storing the data in the cache buffer.
    Type: Grant
    Filed: March 26, 2015
    Date of Patent: November 29, 2016
    Assignee: QUANTA STORAGE INC.
    Inventors: Cheng-Yi Lin, Yi-Long Hsiao
  • Patent number: 9491212
    Abstract: Embodiments provide a method for streaming media and a media controller. The method includes: receiving, by a media controller, a media streaming request sent by a user equipment, and allocating an index to the user equipment, wherein the index is used to indicate an address of a corresponding buffer in the media controller in which data to be streamed is stored; binding the streaming request and the index of the user equipment, storing them in a table, and sending them to a media server so that the media server controls, according to the table, a storage device to send the data to be streamed to an address of a buffer corresponding to the index; and receiving the data to be streamed that is requested by the streaming request and streaming it to the corresponding user equipment by querying the table.
    Type: Grant
    Filed: May 16, 2013
    Date of Patent: November 8, 2016
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Long Jiang, Xiaodong Zheng, Hengzong Yang
  • Patent number: 9454551
    Abstract: A method of garbage collection in a storage device including a central processing unit (CPU), a self-organized fast release buffer (FRB), and a non-volatile memory, the method including receiving a command to perform garbage collection in a first block stripe of the non-volatile memory from the CPU, the command including a second block stripe to write to and valid logical block numbers (LBNs) corresponding to first codewords (CWs) stored in the first block stripe, allocating space in a buffer memory of the FRB for storage of the first CWs, storing the first CWs into the allocated space in the buffer memory, transferring second CWs to a plurality of physical addresses in the second block stripe of the non-volatile memory, and sending the valid LBNs and the plurality of physical addresses to the CPU to update a logical-to-physical table, wherein the second CWs are based on the first CWs.
    Type: Grant
    Filed: March 13, 2014
    Date of Patent: September 27, 2016
    Assignee: NXGN Data, Inc.
    Inventors: Joao Alcantara, Vladimir Alves
  • Patent number: 9424095
    Abstract: A system, method and computer program product for controlling the processing of requests for web page resources from a web server are provided. The method comprises monitoring a run level of the web server; receiving requests for one or more web page resources; determining a priority of received requests based on a run level value associated with a requested resource and the run level of the web server; and processing the requests by the web server according to the determined priority. In dependence on the current load on the web server, requests for low priority resources can be given a low processing priority, with processing capability focussed on requests for higher priority web resources.
    Type: Grant
    Filed: November 21, 2006
    Date of Patent: August 23, 2016
    Assignee: International Business Machines Corporation
    Inventors: Adam Coulthard, Daniel Edward Would
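    A small sketch of the prioritisation rule in patent 9424095 above: each resource carries a run-level value, the server reports its current run level, and requests whose resource value falls below the current level are queued at low priority. The numeric scheme and queueing mechanics are assumptions.

        import heapq
        import itertools

        class RunLevelScheduler:
            def __init__(self):
                self._queue = []                 # min-heap of (priority, seq, request)
                self._seq = itertools.count()    # tie-breaker to keep FIFO order per priority

            def submit(self, request, resource_run_level, server_run_level):
                # Under load (high server run level), only resources whose run-level value
                # meets or exceeds the server's level keep high (0) priority.
                priority = 0 if resource_run_level >= server_run_level else 1
                heapq.heappush(self._queue, (priority, next(self._seq), request))

            def next_request(self):
                return heapq.heappop(self._queue)[2] if self._queue else None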
  • Patent number: 9401875
    Abstract: A packet transfer processing device includes common processing units that perform processing common to inbound processing of a packet received from an access network for transfer to a core network and outbound processing of a packet received from the core network for transfer to the access network, an input destination switching unit that selects common processing units to which the received packets are to be input, an output destination switching unit that outputs packets processed by the common processing units to a destination network, an individual processing switching unit that selects a common processing unit to connect to an individual processing unit that performs individual processing not performed by the common processing units as part of inbound processing, and a control unit that controls the input destination switching unit, the individual processing switching unit, and switching supply/shutoff of power to the common processing units.
    Type: Grant
    Filed: May 31, 2013
    Date of Patent: July 26, 2016
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Sadayuki Yasuda, Masami Urano, Tsugumichi Shibata
  • Patent number: 9395918
    Abstract: In one embodiment, a computer program product for modifying a virtual storage access method (VSAM) data set during open time, the computer program product including a computer readable storage medium having computer readable program code embodied therewith, the embodied computer readable program code including computer readable program code configured to open a VSAM data set, and computer readable program code configured to modify a VSAM control block structure for the VSAM data set while the VSAM data set is open during an open time in which static data set characteristics and/or job parameters have been defined for the VSAM data set, wherein the computer readable program code configured to modify the VSAM control block structure includes computer readable program code configured to interact with the VSAM data set within a VSAM dynamic address space using at least one of: a VSAM console interface and a VSAM programming interface.
    Type: Grant
    Filed: December 18, 2014
    Date of Patent: July 19, 2016
    Assignee: International Business Machines Corporation
    Inventors: Kam H. Ho, Maya P. Pandya
  • Patent number: 9389794
    Abstract: A method and system for managing consistent data objects are included herein. The method includes detecting an operation to store a consistent data object. Additionally, the method includes detecting an attribute for the consistent data object. Furthermore, the method includes storing the consistent data object based on the attribute. In addition, the method includes determining an additional format of the consistent data object is to be stored. The method also includes generating a second consistent data object based on the additional format and storing the second consistent data object.
    Type: Grant
    Filed: August 3, 2012
    Date of Patent: July 12, 2016
    Assignee: Intel Corporation
    Inventors: Scott A. Krig, Stewart N. Taylor
  • Patent number: 9372815
    Abstract: Techniques for estimating processor load by using queue depth information of a peripheral adapter provides processor loading information that can be used to adapt interrupt latency to improve performance in a processing system. A mathematical function of the depth of one or more queues of the adapter is compared to its historical value in order to provide an estimate of processor load. The estimated processor load can then be used to set a parameter that controls the frequency of an interrupt generator. The mathematical function may be the ratio of the transmit queue depth to the receive queue depth and the historical value may be predetermined, user-settable, obtained during a calibration interval or obtained by taking a long-term average of the mathematical function of the queue depths.
    Type: Grant
    Filed: December 30, 2011
    Date of Patent: June 21, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Vaijayanthimala K. Anand, Janice Marie Girouard, Emily Jane Ratliff
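    A bare-bones sketch of the estimator in patent 9372815 above: compare the transmit-to-receive queue-depth ratio against its historical value and nudge an interrupt-frequency parameter. The adjustment step, the moving-average history, and the direction of the mapping from ratio to load are all assumptions.

        class InterruptTuner:
            def __init__(self, alpha=0.05, step=0.1):
                self.alpha = alpha        # smoothing factor for the long-term average
                self.step = step          # relative change applied to the coalescing parameter
                self.history = None       # historical value of the queue-depth ratio
                self.interrupt_interval = 1.0  # larger value = fewer, more coalesced interrupts

            def sample(self, tx_queue_depth, rx_queue_depth):
                ratio = tx_queue_depth / max(rx_queue_depth, 1)
                if self.history is None:
                    self.history = ratio
                    return self.interrupt_interval
                if ratio > self.history:      # ratio above its historical value (assumed: load rising)
                    self.interrupt_interval *= (1 + self.step)   # coalesce more, interrupt less often
                elif ratio < self.history:    # ratio below its historical value (assumed: load falling)
                    self.interrupt_interval *= (1 - self.step)   # interrupt sooner to cut latency
                self.history = (1 - self.alpha) * self.history + self.alpha * ratio
                return self.interrupt_interval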
  • Patent number: 9354822
    Abstract: A method of reading host data from a storage device including a central processing unit (CPU), a self-organized fast release buffer (FRB), and a non-volatile memory, the storage device being in communication with a host, the method including receiving, by the FRB, a command to read host data stored in the non-volatile memory from the CPU, the host data being stored in the non-volatile memory as one or more codewords (CWs), allocating space, by the FRB, in a buffer memory of the FRB for storage of the one or more CWs, storing, by the FRB, the one or more CWs into the allocated space in the buffer memory, extracting, by the FRB, the host data from the stored one or more CWs, and transferring, by the FRB, the host data to the host.
    Type: Grant
    Filed: March 13, 2014
    Date of Patent: May 31, 2016
    Assignee: NXGN Data, Inc.
    Inventors: Joao Alcantara, Vladimir Alves
  • Patent number: 9342422
    Abstract: Instead of disabling PCI communication between system resources in a host computing device and I/O devices when a PCI Host Bridge (PHB) is reset, the host computing device may include a PCI communication path for maintaining communication between the system resources and the I/O devices. In one embodiment, the redundant PCI communication path includes a second PHB that is maintained in a standby state. The host may monitor the errors generated by a plurality of master PHBs and select a master PHB that satisfies an error threshold. The second PHB (i.e., a servant PHB) and the selected master PHB are synchronized, and the second PHB is coupled to the PCI communication path between the master PHB and a PCI switch. The master PHB can then be reset while the second PHB maintains PCI communication between the host and the I/O devices.
    Type: Grant
    Filed: November 7, 2013
    Date of Patent: May 17, 2016
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Jesse P. Arroyo, Anjan Kumar Guttahalli Krishna
  • Patent number: 9323473
    Abstract: A virtual tape library includes a local, non-tape based storage, a store to store a data structure which associates emulated tape storage elements with the local storage or a remote storage, a first interface to provide access to the local storage, in response to tape library commands identifying emulated tape storage elements associated with the local storage, and a second interface to provide access to the remote storage, in response to tape library commands identifying emulated tape storage elements associated with the remote storage.
    Type: Grant
    Filed: January 9, 2009
    Date of Patent: April 26, 2016
    Assignee: Hewlett Packard Enterprise Development LP
    Inventors: Alastair Slater, Simon Pelly
  • Patent number: 9311044
    Abstract: A system and method can support input/output (I/O) virtualization in a computing environment. The system can comprise a free buffer pool in a memory. An I/O device operates to use the free buffer pool to store disk read data received from a physical host bus adaptor (HBA). The free buffer pool can contain a two-dimensional linked list and a one-dimensional linked list. Each entry of the two-dimensional linked list contains multiple packet buffers in consecutive memory locations, and each entry of the one-dimensional linked list contains a single packet buffer.
    Type: Grant
    Filed: December 4, 2013
    Date of Patent: April 12, 2016
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventor: Uttam Aggarwal
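    An illustrative sketch of the two-list free pool in patent 9311044 above: one list whose entries each hold several packet buffers at consecutive locations (the two-dimensional list) and one whose entries each hold a single buffer. Python lists stand in for linked lists, which is a simplification.

        class FreeBufferPool:
            def __init__(self, chunk_entries, single_entries):
                # two-dimensional list: each entry is a run of packet buffers that are
                # consecutive in memory, useful for large contiguous disk-read transfers
                self.multi = list(chunk_entries)    # e.g. [[buf0, buf1, buf2, buf3], ...]
                # one-dimensional list: each entry is a single packet buffer
                self.single = list(single_entries)  # e.g. [bufA, bufB, ...]

            def get_buffers(self, count):
                """Prefer a consecutive run that covers the request; fall back to singles."""
                for i, run in enumerate(self.multi):
                    if len(run) >= count:
                        entry = self.multi.pop(i)
                        taken, rest = entry[:count], entry[count:]
                        if rest:
                            self.multi.append(rest)
                        return taken
                if len(self.single) >= count:
                    taken, self.single = self.single[:count], self.single[count:]
                    return taken
                return None  # pool exhausted for this request size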
  • Patent number: 9311227
    Abstract: Methods and devices are provided for memory management. One embodiment includes creating a memory control block including a number of sub-blocks, where the number of sub-blocks are capable of storing at least one data structure in a memory device. The method also includes scanning the control block for a free-able data structure having a defined data structure property, marking the free-able data structure as free-able in a bit map, and de-allocating the free-able data structure.
    Type: Grant
    Filed: October 31, 2006
    Date of Patent: April 12, 2016
    Assignee: HEWLETT PACKARD ENTERPRISE DEVELOPMENT LP
    Inventors: Ballard C. Bare, Shaun K. Wakumoto
  • Patent number: 9304804
    Abstract: A first virtual machine executing in a first computer server is replicated to a second virtual machine executing in a second computer server, which is connected to the first computer server over a network. Virtual disks of the first virtual machine are transmitted to the second server, where each transmitted virtual disk corresponds to one of the virtual disks of the second virtual machine, the virtual disks of the first virtual machine having a format different from the format of the virtual disks of the second virtual machine. A plurality of updates to the virtual disks of the first virtual machine is captured, and contiguous data blocks from the virtual disks of the first virtual machine that are subject to the captured updates are identified. The identified contiguous data blocks are then transmitted to the second server for storage in the virtual disks of the second virtual machine.
    Type: Grant
    Filed: October 14, 2013
    Date of Patent: April 5, 2016
    Assignee: VMware, Inc.
    Inventors: Ivan Ivanov, Ivan Velevski
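    A short sketch of the coalescing step in patent 9304804 above: captured updates are reduced to runs of contiguous dirty blocks before transmission. The block-index representation and run format are assumptions.

        def contiguous_dirty_runs(dirty_block_indices):
            """Group updated virtual-disk block indices into (start, length) runs
            so each run can be sent to the second server as one transfer."""
            runs = []
            for block in sorted(set(dirty_block_indices)):
                if runs and block == runs[-1][0] + runs[-1][1]:
                    runs[-1] = (runs[-1][0], runs[-1][1] + 1)   # extend the current run
                else:
                    runs.append((block, 1))                     # start a new run
            return runs

        # usage sketch
        print(contiguous_dirty_runs([7, 3, 4, 5, 9, 8]))  # -> [(3, 3), (7, 3)]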
  • Patent number: 9298375
    Abstract: Techniques are disclosed for reducing perceived read latency. Upon receiving a read request with a scatter-gather array from a guest operating system running on a virtual machine (VM), an early read return virtualization (ERRV) component of a virtual machine monitor fills the scatter-gather array with data from a cache and data retrieved via input-output requests (IOs) to media. The ERRV component is configured to return the read request before all IOs have completed based on a predefined policy. Prior to returning the read, the ERRV component may unmap unfilled pages of the scatter-gather array until data for the unmapped pages becomes available when IOs to the external media complete. Later accesses to unmapped pages will generate page faults, which are handled by stunning the VMs from which the access requests originated until, e.g., all elements of the SG array are filled and all pages of the SG array are mapped.
    Type: Grant
    Filed: February 27, 2013
    Date of Patent: March 29, 2016
    Assignee: VMware, Inc.
    Inventors: Erik Cota-Robles, Thomas A. Phelan