User Data Cache Patents (Class 711/126)
  • Patent number: 11853319
    Abstract: Updates to an immutable log may be cached. An immutable log may be stored in a non-volatile storage and an end portion of the immutable log may be stored in a volatile storage as a cache. Records requested from the end portion of the log may be read from the cache instead of the non-volatile storage when they are present in the cache.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: December 26, 2023
    Assignee: Amazon Technologies, Inc.
    Inventors: Jaemyung Kim, Ashwin Venkatesh Raman, Dieu Quang La
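The arrangement claimed above can be sketched in a few lines. This is a minimal illustration, not the patented implementation; the class and field names are invented:

```python
class LogTailCache:
    """Immutable log whose end portion is mirrored in a volatile cache."""

    def __init__(self, tail_size):
        self.tail_size = tail_size      # how many trailing records to cache
        self.nonvolatile = []           # stand-in for durable log storage
        self.cache = {}                 # volatile cache: sequence number -> record

    def append(self, record):
        seq = len(self.nonvolatile)
        self.nonvolatile.append(record)  # durable write
        self.cache[seq] = record         # mirror the end portion in the cache
        # evict records that fell out of the end portion
        for old in [s for s in self.cache if s <= seq - self.tail_size]:
            del self.cache[old]
        return seq

    def read(self, seq):
        if seq in self.cache:            # serve the read from the volatile cache
            return self.cache[seq], "cache"
        return self.nonvolatile[seq], "storage"
```

With a tail size of 2, the newest records come back from the cache while older ones fall through to non-volatile storage.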
  • Patent number: 11849005
    Abstract: Disclosed herein are a method and apparatus for accelerating network transmission in a memory-disaggregated environment. The method includes copying transmission data to a transmission buffer of a computing node; identifying, when a page fault occurs during the copy, the locations at which the transmission data is stored; setting the node in which the largest amount of the transmission data is stored as the transmission node; and sending a transmission command to the transmission node.
    Type: Grant
    Filed: December 6, 2022
    Date of Patent: December 19, 2023
    Assignees: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, SYSGEAR CO., LTD.
    Inventors: Kwang-Won Koh, Kang-Ho Kim, Chang-Dae Kim, Tae-Hoon Kim, Sang-Ho Eom
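The node-selection step above reduces to a majority count over the pages' storage locations. A hedged sketch, with an invented page-to-node mapping as input:

```python
from collections import Counter

def pick_transmission_node(page_locations):
    """Choose the node holding the largest share of the transmission
    data's pages as the transmission node (illustrative helper; the
    page_locations mapping {page_id: node_id} is an assumption)."""
    counts = Counter(page_locations.values())
    node, _ = counts.most_common(1)[0]
    return node
```

For example, if two of three pages of the transmission data reside on a remote memory node, that node is chosen so the data need not be pulled back before sending.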
  • Patent number: 11586266
    Abstract: Data may be transferred from a volatile memory to a non-volatile memory using a persistent power enabled on-chip data processor upon detecting a power loss from a primary power source. The one or more emergency power supplies are attached to the volatile memory, the non-volatile memory, and the persistent power enabled on-chip data processor to assist with the transferring of data.
    Type: Grant
    Filed: July 28, 2021
    Date of Patent: February 21, 2023
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventors: Bulent Abali, Alper Buyuktosunoglu
  • Patent number: 11513914
    Abstract: Methods, systems and computer program products for high-availability computing. In a computing configuration comprising a primary node, a first backup node, and a second backup node, a particular data state is restored to the primary node from a backup snapshot at the second backup node. First, a snapshot coverage gap is identified between a primary node snapshot at the primary node and the backup snapshot at the second backup node. Next, intervening snapshots at the first backup node that fill the snapshot coverage gap are identified and located. Having both the backup snapshot from the second backup node and the intervening snapshots from the first backup node, the particular data state at the primary node is restored by performing differencing operations between the primary node snapshot, the backup snapshot from the second backup node, and the intervening snapshots of the first backup node.
    Type: Grant
    Filed: December 31, 2020
    Date of Patent: November 29, 2022
    Assignee: Nutanix, Inc.
    Inventors: Abhishek Gupta, Brajesh Kumar Shrivastava
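The gap-identification step can be illustrated with a simplified model in which snapshots are identified by monotonically increasing timestamps (an assumption; real snapshot identity is richer than this):

```python
def fill_coverage_gap(primary_snap, backup_snap, first_backup_snaps):
    """Return the intervening snapshots on the first backup node that
    fill the coverage gap between the primary node's snapshot and the
    second backup node's backup snapshot (timestamp model is assumed)."""
    lo, hi = sorted((primary_snap, backup_snap))
    return [s for s in sorted(first_backup_snaps) if lo < s < hi]
```

The restore itself then runs differencing operations across the primary snapshot, the returned intervening snapshots, and the backup snapshot, in timestamp order.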
  • Patent number: 11449570
    Abstract: A data caching method comprises: after receiving a data request sent by a client, determining a remaining valid cache duration of cache data corresponding to the data request; determining whether the remaining valid cache duration is greater than a preset update threshold value; and if the remaining valid cache duration is less than or equal to the update threshold value, updating the cache data through a database.
    Type: Grant
    Filed: September 28, 2018
    Date of Patent: September 20, 2022
    Assignee: ANT WEALTH (SHANGHAI) FINANCIAL INFORMATION SERVICES CO., LTD.
    Inventors: Lingyu Wang, Yamin Li
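The refresh rule above is a threshold comparison on the remaining valid cache duration. A minimal sketch, assuming a dict-backed cache and database and an invented 60-second validity period for refreshed entries:

```python
def serve(key, cache, db, update_threshold, now):
    """If the cached entry's remaining valid duration is at or below the
    update threshold, refresh it from the database before serving."""
    value, expires_at = cache[key]
    remaining = expires_at - now
    if remaining <= update_threshold:
        value = db[key]                 # update the cache data through the database
        cache[key] = (value, now + 60)  # assumed new validity period of 60s
    return value
```

Entries are thus refreshed shortly *before* they expire, so clients rarely observe a stale or missing entry.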
  • Patent number: 11372758
    Abstract: Embodiments of a system for dynamic reconfiguration of cache are disclosed. Accordingly, the system includes a plurality of processors and a plurality of memory modules executed by the plurality of processors. The system also includes a dynamic reconfigurable cache comprising a multi-level cache implementing a combination of an L1 cache, an L2 cache, and an L3 cache. One or more of the L1 cache, the L2 cache, and the L3 cache are dynamically reconfigurable to one or more sizes based at least in part on an application data size associated with an application being executed by the plurality of processors. In an embodiment, the system includes a reconfiguration control and distribution module configured to perform dynamic reconfiguration of the dynamic reconfigurable cache based on the application data size.
    Type: Grant
    Filed: May 12, 2020
    Date of Patent: June 28, 2022
    Assignee: Jackson State University
    Inventors: Khalid Abed, Tirumale Ramesh
  • Patent number: 11360858
    Abstract: An example method includes receiving, at a server, a request for a list of a first group of index records that correspond to stored data, creating a data streaming session with a client, reading an index file and obtaining a list of the first group of index records from the index file, populating a content cache with a signature that corresponds to the first group of index records, creating a sliding window and populating it with the signature and a pointer to a next group of index records, populating an attribute cache with data streaming session information and the sliding window, creating a continuation token which, when received by the server from the client, indicates to the server that the list of the first group of index records has been received by the client.
    Type: Grant
    Filed: November 18, 2020
    Date of Patent: June 14, 2022
    Assignee: EMC IP Holding Company LLC
    Inventors: Priyank Tiwari, Hari Sharma
  • Patent number: 11334489
    Abstract: A method for providing elastic columnar cache includes receiving cache configuration information indicating a maximum size and an incremental size for a cache associated with a user. The cache is configured to store a portion of a table in a row-major format. The method includes caching, in a column-major format, a subset of the plurality of columns of the table in the cache and receiving a plurality of data requests requesting access to the table and associated with a corresponding access pattern requiring access to one or more of the columns. While executing one or more workloads, the method includes, for each column of the table, determining an access frequency indicating a number of times the corresponding column is accessed over a predetermined time period and dynamically adjusting the subset of columns based on the access patterns, the maximum size, and the incremental size.
    Type: Grant
    Filed: July 20, 2020
    Date of Patent: May 17, 2022
    Assignee: Google LLC
    Inventors: Anjan Kumar Amirishetty, Xun Cheng, Viral Shah
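The adjustment step above ranks columns by how often the workload touched them and caches only the top subset. A hedged sketch (the function name and the access-log representation are assumptions; the incremental-growth policy is omitted for brevity):

```python
from collections import Counter

def choose_cached_columns(access_log, max_cols):
    """Rank columns by access frequency over the observation window and
    return the subset to keep in the column-major cache, bounded by the
    user's configured maximum size."""
    freq = Counter(access_log)          # column name -> access count
    return [col for col, _ in freq.most_common(max_cols)]
```

Re-running this at the end of each time period lets the cached subset track shifting access patterns without ever exceeding the configured maximum.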
  • Patent number: 11151054
    Abstract: A central processing unit (CPU) sets a cache lookup operation to a first mode in which the CPU searches a cache and only performs an address translation in response to a cache miss. The CPU performs the cache lookup operation while in the first mode using an address that results in a cache miss. Responsive to the CPU detecting the cache miss, the CPU sets the cache lookup operation from the first mode to a second mode in which the CPU concurrently searches the cache and performs an address translation. The CPU performs a cache lookup operation while in the second mode using a second address that results in a cache hit. Responsive to detecting the cache hit, the CPU sets the cache lookup operation from the second mode to the first mode. This process repeats in cycles upon detection of cache hits and misses.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: October 19, 2021
    Assignee: International Business Machines Corporation
    Inventors: Naga P. Gorti, Mohit Karve
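The mode-switching cycle above can be sketched as a small state machine. This is illustrative only — in hardware the "parallel" mode issues the cache search and the address translation concurrently, which a sequential sketch cannot capture:

```python
def lookup(addr, cache, translate):
    """Two-mode cache lookup: in SERIAL mode, translate only on a miss
    (a miss then flips to PARALLEL); in PARALLEL mode, translate on
    every lookup (a hit flips back to SERIAL)."""
    hit = addr in cache
    if lookup.mode == "SERIAL":
        if not hit:
            translate(addr)             # translation only in response to the miss
            lookup.mode = "PARALLEL"    # miss: expect more misses, go concurrent
    else:
        translate(addr)                 # translation runs alongside the search
        if hit:
            lookup.mode = "SERIAL"      # hit: skip translations again
    return hit

lookup.mode = "SERIAL"
```

The payoff is that translation work (and its power cost) is skipped while hits dominate, yet latency is hidden during miss streaks.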
  • Patent number: 11151057
    Abstract: A method for managing data includes generating, by an offload device, predicted active logical partition data using an active logical partition mapping obtained from a host computing device, generating logical partition correlation data using active memory track maps obtained from the host computing device, generating most probable tracks using the predicted active logical partition data and the logical partition correlation data, and sending the most probable tracks to the host computing device, wherein the host computing device evicts data from a memory device based on the most probable tracks.
    Type: Grant
    Filed: December 6, 2019
    Date of Patent: October 19, 2021
    Assignee: EMC IP Holding Company LLC
    Inventors: Jonathan I. Krasner, Jason Jerome Duquette
  • Patent number: 11093198
    Abstract: A mobile electronic device for forwarding a user input to an application is provided. The mobile electronic device includes a connector, a touch screen display, a memory including a first area and a second area, a wireless communication circuit, and at least one processor. The at least one processor is configured to execute a first application included in the first area of the memory, display a screen of the first application on the touch screen display, execute a second application included in the second area of the memory when being connected to an external interface device through the connector, transmit data related to a screen of the second application through the connector to the external interface device, and forward a first user input through the touch screen display to the first application and not to the second application.
    Type: Grant
    Filed: December 12, 2018
    Date of Patent: August 17, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventor: Jeong Pyo Lee
  • Patent number: 11068501
    Abstract: A distributed database system may perform a single phase commit for transactions involving updates to multiple databases of the distributed database system. A client request may be received that involves updates to multiple databases of the distributed database system. The updates may be performed at a front-end database and a back-end database. Log records indicating the updates to the front-end database may be sent to the back-end database. The log records and the updates performed at the back-end database may be committed together as a single phase commit at the back-end database. In the event of a system failure of the front-end database, log records may be requested and received from the back-end database. A restoration of the front-end database may be performed based, at least in part, on the received log records.
    Type: Grant
    Filed: March 20, 2017
    Date of Patent: July 20, 2021
    Assignee: Amazon Technologies, Inc.
    Inventors: Anurag Windlass Gupta, Jakub Kulesza, Don Johnson, Deepak Agarwal, Tushar Jain
  • Patent number: 11030101
    Abstract: A cache memory and method of operating a cache memory are provided. The cache memory comprises cache storage that stores cache lines for a plurality of requesters and cache control circuitry that controls insertion of a cache line into the cache storage when a memory access request from one of the plurality of requesters misses in the cache memory. The cache memory further has cache occupancy estimation circuitry that holds a count of insertions of cache lines into the cache storage for each of the plurality of requesters over a defined period. The count of cache line insertions for each requester thus provides an estimation of the cache occupancy associated with each requester.
    Type: Grant
    Filed: July 13, 2016
    Date of Patent: June 8, 2021
    Assignee: ARM Limited
    Inventors: Ali Saidi, Prakash S. Ramrakhyani
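The estimation mechanism above is simply a per-requester insertion counter reset each period. A minimal sketch with invented names:

```python
from collections import Counter

class OccupancyEstimator:
    """Estimate each requester's share of the cache by counting the
    cache-line insertions it caused over a defined period."""

    def __init__(self):
        self.insertions = Counter()

    def on_miss_fill(self, requester):
        self.insertions[requester] += 1  # a miss inserted a line for this requester

    def estimate(self, requester):
        return self.insertions[requester]

    def new_period(self):
        self.insertions.clear()          # restart the count each defined period
```

The appeal of the scheme is cost: a counter per requester is far cheaper than tagging every cache line with its owner and walking the cache to count occupancy.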
  • Patent number: 10983829
    Abstract: Apparatus and methods are disclosed, including using a memory controller to track a maximum logical saturation over the lifespan of the memory device, where logical saturation is the percentage of capacity of the memory device written with data. A portion of a pool of memory cells of the memory device is reallocated from single level cell (SLC) static cache to SLC dynamic cache storage based at least in part on a value of the maximum logical saturation, the reallocating including writing at least one electrical state to a register, in some examples.
    Type: Grant
    Filed: July 12, 2019
    Date of Patent: April 20, 2021
    Assignee: Micron Technology, Inc.
    Inventors: Xiangang Luo, Jianmin Huang
  • Patent number: 10904205
    Abstract: CDN traffic is optimized by a client-side system that maps the servers in the CDN system. Content requests from client devices for domain names are forwarded to servers in the CDN system that may be selected from the map to prevent a cache miss in a server for a particular request for content.
    Type: Grant
    Filed: January 31, 2020
    Date of Patent: January 26, 2021
    Assignee: salesforce.com, inc.
    Inventors: Shauli Gal, Satish Raghunath, Kartikeya Chandrayana
  • Patent number: 10901855
    Abstract: Methods, systems, and computer program products are provided. Tenant data of a multitenant relational database system is backed up by adding a value of a current version identifier for the tenant data to previous valid version identifiers for the tenant data, and changing the value of the current version identifier for the tenant data to a next previously-unused value. The tenant data is restored by changing the value of the current version identifier to a value of one of the previous valid version identifiers, and deleting, from the previous valid version identifiers, previous valid version identifiers that are not less recent than the changed value of the current version identifier. The tenant is provided with a view of the tenant data included in only a latest valid version of each respective record from among all valid versions of the each respective record.
    Type: Grant
    Filed: June 21, 2018
    Date of Patent: January 26, 2021
    Assignee: International Business Machines Corporation
    Inventors: Rachel L. Jarvie, Qingyan Wang
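The backup/restore bookkeeping above can be sketched directly from the claim language. All field names here are invented, and integer version identifiers stand in for whatever identifier scheme the system actually uses:

```python
class TenantVersions:
    """Backup: the current version id joins the previous valid ids and
    the current id advances to a never-before-used value.  Restore: the
    current id becomes one of the previous valid ids, and previous valid
    ids not less recent than it are deleted."""

    def __init__(self):
        self.current = 1
        self.highest_used = 1
        self.previous_valid = []

    def backup(self):
        self.previous_valid.append(self.current)
        self.highest_used += 1           # next previously-unused value
        self.current = self.highest_used

    def restore(self, version):
        assert version in self.previous_valid
        self.current = version
        self.previous_valid = [v for v in self.previous_valid if v < version]
```

Because restore never reuses an identifier, records stamped with versions written after the restore point can be filtered out of the tenant's view without being physically deleted.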
  • Patent number: 10567332
    Abstract: A CDN traffic is optimized by a client-side system that maps the servers in the CDN system. Content requests from client devices for domain names are forwarded to servers in the CDN system that may be selected from the map to prevent a cache miss in the a server for a particular request for content.
    Type: Grant
    Filed: March 24, 2017
    Date of Patent: February 18, 2020
    Assignee: salesforce.com, inc.
    Inventors: Shauli Gal, Satish Raghunath, Kartikeya Chandrayana
  • Patent number: 10565190
    Abstract: An index tree search method, by a computer, for searching an index tree included in a database provided by the computer which includes processors executing a plurality of threads and a memory, the index tree search method comprising: a first step of allocating, by the computer, search ranges in the index tree to the plurality of threads; a second step of receiving, by the computer, a search key; a third step of selecting, by the computer, a thread corresponding to the received search key; and a fourth step of searching, by the computer, the index tree with the selected thread using the received search key.
    Type: Grant
    Filed: October 29, 2014
    Date of Patent: February 18, 2020
    Assignee: Hitachi, Ltd.
    Inventors: Tomohiro Hanai, Kazutomo Ushijima, Tsuyoshi Tanaka, Hideo Aoki, Atsushi Tomoda
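The third step — selecting the thread whose search range contains the key — is a boundary lookup. A hedged sketch, assuming each thread's range is delimited by sorted split keys:

```python
import bisect

def select_thread(boundaries, key):
    """Each thread owns one search range of the index tree, delimited by
    the sorted split keys in `boundaries`; return the index of the
    thread whose range contains the search key (model is assumed)."""
    return bisect.bisect_right(boundaries, key)
```

With boundaries `[10, 20]`, thread 0 serves keys up to 10, thread 1 serves keys in (10, 20], and thread 2 serves the rest — so each search key is routed to exactly one thread's subtree.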
  • Patent number: 10469334
    Abstract: Embodiments of systems and methods for browsing offline and queried content are presented herein. Specifically, embodiments may receive a request for content from a mobile application. Embodiments may also determine whether the requested content is in a cache associated with the mobile application. If it is determined that the content is not in the cache, embodiments may deliver the requested content to the mobile application.
    Type: Grant
    Filed: May 31, 2017
    Date of Patent: November 5, 2019
    Assignee: Open Text SA ULC
    Inventors: Frederick Haigh Jowett, Mark Henstridge Williams, Kirwan Lyster, Kevin Laurence Benton
  • Patent number: 10462265
    Abstract: A method and apparatus for providing remote access to an unavailable server is provided. In an embodiment, a proxy server computer receives a request to access data over a network from a client computing device. The proxy server computer identifies a particular server computer that is separate from the client computing device to fulfill the request. The proxy server computer then determines that the particular server computer is unavailable to a client computing device. In response to determining the particular server is unavailable to the client computing device, the proxy server computer maintains a transport layer connection from the client computing device. While maintaining the transport layer connection, the proxy server computer initiates startup of the particular server computer.
    Type: Grant
    Filed: February 17, 2017
    Date of Patent: October 29, 2019
    Assignee: Plex, Inc.
    Inventors: Greg Edmiston, Schuyler Ullman, Elan Feingold, Scott Olechowski
  • Patent number: 10387161
    Abstract: Techniques are disclosed for implementing an extensible, light-weight, flexible (ELF) processing platform that can efficiently capture state information from multiple threads during execution of instructions (e.g., an instance of a game). The ELF processing platform supports execution of multiple threads in a single process for parallel execution of multiple instances of the same or different program code or games. Upon capturing the state information, one or more threads may be executed in the ELF platform to compute one or more actions to perform at any state of execution by each of those threads. The threads can easily access the state information from a shared memory space and use the state information to implement rule-based and/or learning-based techniques for determining subsequent actions for execution for the threads.
    Type: Grant
    Filed: September 1, 2017
    Date of Patent: August 20, 2019
    Assignee: Facebook, Inc.
    Inventors: Yuandong Tian, Qucheng Gong, Yuxin Wu
  • Patent number: 10375199
    Abstract: Systems, methods, and non-transitory computer-readable media can determine at least one survey to be presented to users of the social networking system, wherein the survey is targeted to a number of users at each time interval. A uniform distribution of users that may be surveyed is determined, wherein users in the uniform distribution are each assigned a numerical value. A sampling window that references a numerical range that is adjusted upon completion of each time interval is determined, wherein users that have been assigned a numerical value within the numerical range are eligible for the survey.
    Type: Grant
    Filed: December 30, 2015
    Date of Patent: August 6, 2019
    Assignee: Facebook, Inc.
    Inventors: Shiyu Zhao, Matthew K. Choi, Nicholas Scott LaGrow
  • Patent number: 10133675
    Abstract: A data processing apparatus and method are provided for performing address translation in response to a memory access request issued by processing circuitry of the data processing apparatus and specifying a virtual address for a data item. Address translation circuitry performs an address translation process with reference to at least one descriptor provided by at least one page table, in order to produce a modified memory access request specifying a physical address for the data item. The address translation circuitry includes page table walk circuitry configured to generate at least one page table walk request in order to retrieve the at least one descriptor required for the address translation process. In addition, walk ahead circuitry is located in a path between the address translation circuitry and a memory device containing the at least one page table.
    Type: Grant
    Filed: June 22, 2015
    Date of Patent: November 20, 2018
    Assignee: ARM Limited
    Inventors: Andreas Hansson, Ali Saidi, Aniruddha Nagendran Udipi, Stephan Diestelhorst
  • Patent number: 10084748
    Abstract: A method for processing a request for MO data using a cache validator (CV) allocated to an MO instance according to an embodiment of the present invention comprises the steps of: receiving, from a server, uniform resource identifier (URI) information identifying certain requested MO data of the MO instance; determining whether the URI information includes a first CV; transmitting the requested MO data to the server when the URI information does not include the first CV; and transmitting a second CV for the MO instance when the URI information indicates a root node of the MO instance, wherein the MO instance has a tree structure consisting of at least one node, the MO data comprises the name, value and structure of a node included in the MO instance, and the method is performed by a terminal.
    Type: Grant
    Filed: December 20, 2013
    Date of Patent: September 25, 2018
    Assignee: LG ELECTRONICS INC.
    Inventors: Seungkyu Park, Seongyun Kim
  • Patent number: 10079884
    Abstract: Streaming digital content synchronization techniques are described. A response is received to a request to stream the digital content. The response includes a time at which the digital content was last modified (e.g., a last-modified header) and a time at which the response was generated, e.g., a date header. An age is calculated by subtracting the time at which the digital content was last modified, e.g., the last-modified header, from the time at which the response was generated, e.g., the date header. An age describing an amount of time the response spent in one or more caches, if available, is added as part of this age. The time is determined by subtracting the age from a predefined setback time and the stream of the digital content is rendered based at least in part on the determined time.
    Type: Grant
    Filed: March 14, 2016
    Date of Patent: September 18, 2018
    Assignee: Adobe Systems Incorporated
    Inventor: Michael Christopher Thornburgh
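The arithmetic described above is short enough to write out. A sketch with all inputs expressed in seconds (the parameter names are ours, not the patent's):

```python
def playback_time(date_header, last_modified, cache_age, setback):
    """Age = time the response was generated minus the time the content
    was last modified, plus any time the response spent in caches; the
    render position is the predefined setback time minus that age."""
    age = (date_header - last_modified) + cache_age
    return setback - age
```

For instance, a response generated 10 s after the last modification that spent 2 s in caches has an age of 12 s; with a 30 s setback, rendering starts 18 s behind the live edge, keeping independent viewers of the same stream in sync.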
  • Patent number: 10039045
    Abstract: An apparatus and method for relaying by a mobile device, the apparatus including: a wireless access node connecting module configured to connect to an external wireless access point through a station node of Wi-Fi, which is inbuilt in the mobile device; a relay instruction sending module configured to send a relay instruction to a Wi-Fi module inbuilt in the mobile device through a P2P node of Wi-Fi, which is inbuilt in the mobile device so that logon information of the mobile device is broadcasted, and one or more external electronic devices connect with the mobile device through the P2P node; a packet forward enabling module configured to enable a packet forward function; and a packet forward configuring module configured to send configuration information of packet forwarding to the Wi-Fi module so that a data packet is forwarded between the station node and the P2P node.
    Type: Grant
    Filed: February 26, 2016
    Date of Patent: July 31, 2018
    Assignees: HISENSE MOBILE COMMUNICATIONS TECHNOLOGY CO., LTD., HISENSE USA CORPORATION, HISENSE INTERNATIONAL CO., LTD.
    Inventors: Zizhi Sun, Bin Zheng, Chuanqing Yang, Shijiao Chen, Linhu Zhao, Shidong Shang, Changsheng Zhou
  • Patent number: 10031847
    Abstract: A processor includes an associative memory including ways organized in an asymmetric tree structure, a replacement control unit including a decision node indicator whose value determines the side of the tree structure to which a next memory element replacement operation is directed, and circuitry to cause, responsive to a miss in the associative memory while the decision node indicator points to the minority side of the tree structure, the decision node indicator to point to the majority side of the tree structure, and to determine, responsive to a miss while the decision node indicator points to the majority side of the tree structure, whether or not to cause the decision node indicator to point to the minority side of the tree structure, the determination being dependent on a current replacement weight value. The replacement weight value may be counter-based or a probabilistic weight value.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: July 24, 2018
    Assignee: Intel Corporation
    Inventors: Chunhui Zhang, Robert S. Chappell, Yury N. Ilin
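The pointer-update rule can be sketched for the probabilistic-weight variant (the counter-based variant is analogous). This is an illustration of the decision rule only, not of the tree structure itself:

```python
import random

def points_majority_after_miss(pointing_majority, weight):
    """Decision node indicator update on a miss: from the minority side,
    always redirect to the majority side; from the majority side,
    redirect to the minority side only with probability `weight`
    (probabilistic replacement-weight variant; names invented)."""
    if not pointing_majority:
        return True                      # minority -> always back to majority
    return not (random.random() < weight)
```

Biasing replacements toward the majority side keeps the few ways on the minority side resident longer, which is the point of making the tree asymmetric.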
  • Patent number: 9934246
    Abstract: A system and method support a reference store in a distributed computing environment such as a distributed data grid. The system associates a ticket with the reference store, wherein the reference store contains a plurality of references. Furthermore, the system uses the ticket to expose the reference store to one or more consumers in the distributed computing environment. The reference store type is selected in response to the number of references required to be stored and access overhead. Each reference store can be inflated or deflated according to the number of references it contains. Selection of different reference store types allows for reduced memory overhead while still providing acceptable reference retrieval times. The reduction in memory overhead enhances performance and capabilities of a distributed computing environment such as a distributed data grid.
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: April 3, 2018
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Harvey Raja, Cameron Purdy, Gene Gleyzer
  • Patent number: 9928057
    Abstract: In one or more embodiments, a method of generating a code by a compiler includes: analyzing a program executed by a processor; analyzing data necessary to execute respective tasks included in the program; determining whether a boundary of the data used by divided tasks is consistent with a management unit of a cache memory based on results of the analyzing; and generating the code for providing a non-cacheable area from which the data to be stored in the management unit including the boundary is not temporarily stored into the cache memory and the code for storing an arithmetic processing result stored in the management unit including the boundary into a non-cacheable area in a case where it is determined that the boundary of the data used by the divided tasks is not consistent with the management unit of the cache memory.
    Type: Grant
    Filed: December 14, 2010
    Date of Patent: March 27, 2018
    Assignee: WASEDA UNIVERSITY
    Inventors: Hironori Kasahara, Keiji Kimura, Masayoshi Mase
  • Patent number: 9800658
    Abstract: According to various embodiments, a server may be provided. The server may include: a first memory configured to store data of a first kind; a second memory configured to store data of a second kind; a wireless transceiver configured to transmit data stored in the first memory and data stored in the second memory, and configured to receive data; and a memory selection circuit configured to determine whether to store the received data in the first memory or to store the received data in the second memory.
    Type: Grant
    Filed: July 30, 2013
    Date of Patent: October 24, 2017
    Assignee: Marvell International Ltd.
    Inventors: Chao Jin, Weiya Xi, Pantelis Sophoclis Alexopoulos, Chun Teck Lim
  • Patent number: 9781211
    Abstract: A storage device is operable to be coupled to a host electronic device. The storage device includes a memory operable to store an operating system, applications and to provide mass storage functionality, a processor operable to run the operating system and execute the applications on the storage device and an interface is operable to couple the storage device to the host electronic device. The interface provides a data communication path and a power communication path between the storage device and the host electronic device. The storage device has a master storage device mode in which the storage device is operable to control at least one slave function of the host electronic device and a slave storage device mode in which at least one slave function of the storage device is controlled by the host electronic device.
    Type: Grant
    Filed: October 24, 2016
    Date of Patent: October 3, 2017
    Assignee: Millennium Enterprise Corporation
    Inventors: Thomas Langas, Asbjorn Djupdal, Borgar Ljosland, Torstein Hernes Dybdahl
  • Patent number: 9753855
    Abstract: A method is provided for facilitating operation of a processor core coupled to a first memory containing executable instructions, a second memory faster than the first memory and a third memory faster than the second memory. The method includes examining instructions being filled from the second memory to the third memory, extracting instruction information containing at least branch information; creating a plurality of tracks based on the extracted instruction information; filling at least one or more instructions that may possibly be executed by the processor core, based on one or more tracks from the plurality of tracks, from the first memory to the second memory; filling at least one or more instructions based on one or more tracks from the plurality of tracks from the second memory to the third memory before the processor core executes the instructions, such that the processor core fetches the instructions from the third memory.
    Type: Grant
    Filed: June 25, 2013
    Date of Patent: September 5, 2017
    Assignee: Shanghai Xinhao Microelectronics Co., Ltd.
    Inventor: Chenghao Kenneth Lin
  • Patent number: 9720822
    Abstract: In one embodiment, a node coupled to solid state drives (SSDs) of a plurality of storage arrays executes a storage input/output (I/O) stack having a plurality of layers. The node includes a non-volatile random access memory (NVRAM). A first portion of the NVRAM is configured as a write-back cache to store write data associated with a write request and a second portion of the NVRAM is configured as one or more non-volatile logs (NVLogs) to record metadata associated with the write request. The write data is passed from the write-back cache over a first path of the storage I/O stack for storage on a first storage array and the metadata is passed from the one or more NVLogs over a second path of the storage I/O stack for storage on a second storage array, wherein the first path is different from the second path.
    Type: Grant
    Filed: September 16, 2015
    Date of Patent: August 1, 2017
    Assignee: NetApp, Inc.
    Inventor: Jeffrey S. Kimmel
  • Patent number: 9710147
    Abstract: A mobile terminal including a wireless communication unit configured to provide wireless communication; a touchscreen; a memory; and a controller configured to display a clipboard including copied content on the touchscreen, receive a paste selection signal indicating a pasting of the copied content from the clipboard to another location on the touchscreen, determine a property of the other location of the touchscreen, and modify a type of the copied content to correspond to the determined property of the other location when the content is pasted to the other location.
    Type: Grant
    Filed: May 19, 2014
    Date of Patent: July 18, 2017
    Assignee: LG ELECTRONICS INC.
    Inventors: Yunmi Kwon, Arim Kwon, Songyi Baek, Sehyun Jung, Eunsoo Jung, Hyemi Jung, Kyunghye Seo
  • Patent number: 9703885
    Abstract: Embodiments disclosed herein provide a high performance content delivery system in which versions of content are cached for servicing web site requests containing the same uniform resource locator (URL). When a page is cached, certain metadata is also stored along with the page. That metadata includes a description of what extra attributes, if any, must be consulted to determine what version of content to serve in response to a request. When a request is fielded, a cache reader consults this metadata at a primary cache address, then extracts the values of attributes, if any are specified, and uses them in conjunction with the URL to search for an appropriate response at a secondary cache address. These attributes may include HTTP request headers, cookies, query string, and session variables. If no entry exists at the secondary address, the request is forwarded to a page generator at the back-end.
    Type: Grant
    Filed: June 7, 2016
    Date of Patent: July 11, 2017
    Assignee: Open Text SA ULC
    Inventor: Mark R. Scheevel
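The two-level lookup described above (primary address holds metadata naming the varying attributes; secondary address is keyed by URL plus extracted attribute values) can be sketched in Python. The names and dict-based storage are assumptions for illustration, not the patented system:

```python
# Sketch of a cache that serves different versions of content for the
# same URL, varying on attributes named in the primary cache entry.

class VaryingCache:
    def __init__(self):
        self.primary = {}    # URL -> list of attribute names to vary on
        self.secondary = {}  # (URL, attribute values) -> cached page

    def store(self, url, vary_attrs, request_attrs, page):
        self.primary[url] = vary_attrs
        key = (url, tuple(request_attrs.get(a) for a in vary_attrs))
        self.secondary[key] = page

    def lookup(self, url, request_attrs):
        vary_attrs = self.primary.get(url)
        if vary_attrs is None:
            return None  # no metadata: forward to the page generator
        key = (url, tuple(request_attrs.get(a) for a in vary_attrs))
        return self.secondary.get(key)  # None -> forward to back-end

cache = VaryingCache()
cache.store("/home", ["lang"], {"lang": "en"}, "<html>EN</html>")
assert cache.lookup("/home", {"lang": "en"}) == "<html>EN</html>"
assert cache.lookup("/home", {"lang": "de"}) is None
```

The attributes consulted here play the same role as the HTTP headers, cookies, query-string values, and session variables named in the abstract.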
  • Patent number: 9645945
    Abstract: Fill partitioning of a shared cache is described. In an embodiment, all threads running in a processor are able to access any data stored in the shared cache; however, in the event of a cache miss, a thread may be restricted such that it can only store data in a portion of the shared cache. The restrictions to storing data may be implemented for all cache miss events or for only a subset of those events. For example, the restrictions may be implemented only when the shared cache is full and/or only for particular threads. The restrictions may also be applied dynamically, for example, based on conditions associated with the cache. Different portions may be defined for different threads (e.g. in a multi-threaded processor) and these different portions may, for example, be separate and non-overlapping. Fill partitioning may be applied to any on-chip cache, for example, a L1 cache.
    Type: Grant
    Filed: January 13, 2014
    Date of Patent: May 9, 2017
    Assignee: Imagination Technologies Limited
    Inventor: Jason Meredith
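Fill partitioning as described above (any thread may read any entry, but a restricted thread may only allocate into its own portion) can be sketched for a single cache set. The way assignments and the trivial eviction policy are assumptions:

```python
# Minimal sketch of fill partitioning: all threads may READ any way,
# but a restricted thread may only FILL (allocate) into its own ways.

class FillPartitionedSet:
    def __init__(self, num_ways, fill_ways_per_thread):
        self.ways = [None] * num_ways          # thread is ignored on read
        self.fill_ways = fill_ways_per_thread  # thread -> allowed way indices

    def read(self, tag):
        return any(entry == tag for entry in self.ways)  # hit on any way

    def fill(self, thread, tag):
        allowed = self.fill_ways[thread]
        for w in allowed:                      # use a free allowed way first
            if self.ways[w] is None:
                self.ways[w] = tag
                return
        self.ways[allowed[0]] = tag            # else evict within the partition

s = FillPartitionedSet(4, {0: [0, 1], 1: [2, 3]})
s.fill(0, "A"); s.fill(1, "B")
assert s.read("A") and s.read("B")
s.fill(0, "C"); s.fill(0, "D")   # thread 0 can only evict within ways 0-1
assert s.read("B")               # thread 1's data survives
```

With separate, non-overlapping portions as in the example, one thread's miss storm cannot evict another thread's working set.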
  • Patent number: 9646673
    Abstract: An address detection circuit includes an address storage unit suitable for receiving an address when an active command is activated, and storing the N most recently inputted addresses; and an address determination unit suitable for determining whether an address currently inputted to the address storage unit has already been inputted at least a threshold number of times in each period that the active command is activated M (1≤M≤N) number of times, based on the N addresses stored in the address storage unit.
    Type: Grant
    Filed: November 20, 2013
    Date of Patent: May 9, 2017
    Assignee: SK Hynix Inc.
    Inventors: Chang-Hyun Kim, Choung-Ki Song
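The detection scheme above amounts to keeping a window of the N most recent addresses and flagging one that recurs at least a threshold number of times. A minimal software model (the hardware circuit itself is not shown in the abstract) might look like:

```python
from collections import deque

# Illustrative model: keep the last N active-command addresses and flag
# an address that has already appeared `threshold` times among them.

class AddressDetector:
    def __init__(self, n, threshold):
        self.window = deque(maxlen=n)  # N most recent addresses
        self.threshold = threshold

    def on_active(self, addr):
        hits = sum(1 for a in self.window if a == addr)
        self.window.append(addr)
        return hits >= self.threshold  # True -> address is a repeat offender

det = AddressDetector(n=4, threshold=2)
assert det.on_active(0x10) is False
assert det.on_active(0x10) is False
assert det.on_active(0x10) is True   # 0x10 already seen twice in the window
```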
  • Patent number: 9558227
    Abstract: Limiting the number of concurrent requests in a database system. Arranging requests to be handled by the database system in at least one queue. Defining a maximum value (SS) of concurrent requests corresponding to the at least one queue. Monitoring at least one queue utilization parameter corresponding to the at least one queue and calculating a performance value (PF) based on the at least one queue utilization parameter. Adapting the maximum value (SS) of concurrent requests of the at least one queue dynamically based on the performance value (PF) in order to improve system performance. Limiting the number of concurrent requests of the at least one queue dynamically based on the dynamically adapted maximum value (SS).
    Type: Grant
    Filed: December 7, 2015
    Date of Patent: January 31, 2017
    Assignee: International Business Machines Corporation
    Inventors: Pawel Gocek, Grzegorz K. Lech, Bartlomiej T. Malecki, Jan Marszalek, Joanna Wawrzyczek
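One simple way to realize the adaptive step in the abstract above is a hill-climbing rule: raise SS while the performance value improves, lower it when performance degrades. The step size, bounds, and comparison against the previous PF are assumptions, not details from the patent:

```python
# Hedged sketch of adapting the maximum concurrent-request value SS
# from successive performance values PF.

def adapt_limit(ss, prev_pf, pf, step=1, lo=1, hi=64):
    """Return the new maximum concurrent-request value SS."""
    if pf > prev_pf:          # performance improved: allow more concurrency
        return min(hi, ss + step)
    if pf < prev_pf:          # performance degraded: throttle
        return max(lo, ss - step)
    return ss                 # unchanged performance: keep SS

assert adapt_limit(8, prev_pf=0.5, pf=0.7) == 9
assert adapt_limit(1, prev_pf=0.7, pf=0.4) == 1  # clamped at the lower bound
```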
  • Patent number: 9524164
    Abstract: A system and method for efficient predicting and processing of memory access dependencies. A computing system includes control logic that marks a detected load instruction as a first type responsive to predicting the load instruction has high locality and is a candidate for store-to-load (STL) data forwarding. The control logic marks the detected load instruction as a second type responsive to predicting the load instruction has low locality and is not a candidate for STL data forwarding. The control logic processes a load instruction marked as the first type as if the load instruction is dependent on an older store operation. The control logic processes a load instruction marked as the second type as if the load instruction is independent of any older store operation.
    Type: Grant
    Filed: August 30, 2013
    Date of Patent: December 20, 2016
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Lena E. Olson, Yasuko Eckert, Srilatha Manne
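The two-way marking above can be modeled with a small predictor table. The locality heuristic used here (a per-PC counter of observed forwardings) is an invented stand-in; the abstract does not specify how locality is predicted:

```python
# Illustrative dependence predictor: loads predicted to have high
# locality are marked as STL-forwarding candidates and treated as
# dependent on older stores; low-locality loads issue as independent.

class LoadMarker:
    FIRST, SECOND = "stl-candidate", "independent"

    def __init__(self, threshold=2):
        self.hits = {}               # load PC -> observed STL forwardings
        self.threshold = threshold

    def observe_forwarding(self, pc):
        self.hits[pc] = self.hits.get(pc, 0) + 1

    def mark(self, pc):
        if self.hits.get(pc, 0) >= self.threshold:
            return self.FIRST        # process as dependent on older stores
        return self.SECOND           # process as independent

m = LoadMarker()
assert m.mark(0x40) == LoadMarker.SECOND
m.observe_forwarding(0x40); m.observe_forwarding(0x40)
assert m.mark(0x40) == LoadMarker.FIRST
```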
  • Patent number: 9501280
    Abstract: A unified architecture for dynamic generation, execution, synchronization and parallelization of complex instruction formats includes a virtual register file, register cache and register file hierarchy. A self-generating and synchronizing dynamic and static threading architecture provides efficient context switching.
    Type: Grant
    Filed: February 28, 2014
    Date of Patent: November 22, 2016
    Assignee: Soft Machines, Inc.
    Inventor: Mohammad A. Abdallah
  • Patent number: 9280397
    Abstract: A method and apparatus for accelerating a Software Transactional Memory (STM) system is herein described. A data object and metadata for the data object may each be associated with a filter, such as a hardware monitor or ephemerally held filter information. The filter is in a first, default state when no access, such as a read, from the data object has occurred during a pendency of a transaction. Upon encountering a first access to the metadata, such as a first read, access barrier operations, such as logging of the metadata; setting a read monitor; or updating ephemeral filter information with an ephemeral/buffered store operation, are performed. Upon a subsequent/redundant access to the metadata, such as a second read, access barrier operations are elided to accelerate the subsequent access based on the filter being set to the second state to indicate a previous access occurred.
    Type: Grant
    Filed: December 15, 2009
    Date of Patent: March 8, 2016
    Assignee: Intel Corporation
    Inventors: Ali-Reza Adl-Tabatabai, Gad Sheaffer, Bratin Saha, Jan Gray, David Callahan, Burton Smith, Graefe Goetz
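The filter-based elision above reduces to: run the full read barrier only on the first access to a piece of metadata within a transaction, and skip it on redundant accesses. A hedged Python sketch (the filter is modeled as a per-transaction set rather than a hardware monitor):

```python
# Sketch of barrier elision: the first read of an object's metadata in a
# transaction runs the full read barrier (logging, monitor set-up);
# subsequent reads see the filter in its second state and skip it.

class Transaction:
    def __init__(self):
        self.filter = set()   # metadata already accessed this transaction
        self.read_log = []

    def read(self, obj, metadata):
        if metadata not in self.filter:      # first access: full barrier
            self.read_log.append(metadata)   # e.g. log + set read monitor
            self.filter.add(metadata)
        return obj["value"]                  # redundant access: barrier elided

tx = Transaction()
obj = {"value": 42}
assert tx.read(obj, "meta_obj") == 42
assert tx.read(obj, "meta_obj") == 42
assert tx.read_log == ["meta_obj"]           # barrier ran only once
```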
  • Patent number: 9146744
    Abstract: Embodiments of the present invention provide a system which executes a load instruction or a store instruction. During operation the system receives a load instruction. The system then determines if an unrestricted entry or a restricted entry in a store queue contains data that satisfies the load instruction. If not, the system retrieves data for the load instruction from a cache.
    Type: Grant
    Filed: May 6, 2008
    Date of Patent: September 29, 2015
    Assignee: ORACLE AMERICA, INC.
    Inventors: Paul Caprioli, Martin Karlsson, Shailender Chaudhry, Gideon N. Levinsky
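The load path above (search the store queue first, fall back to the cache) can be sketched as a single function. Modeling the unrestricted and restricted entries as two lists, and searching youngest-first, are assumptions for illustration:

```python
# Illustrative load execution: forward data from a matching store-queue
# entry if one exists; otherwise satisfy the load from the cache.

def load(addr, unrestricted, restricted, cache):
    for entry in reversed(unrestricted + restricted):  # youngest first (assumed)
        if entry["addr"] == addr:
            return entry["data"]         # store-to-load forwarding
    return cache[addr]                   # no match: read from the cache

cache = {0x100: "old"}
sq_u = [{"addr": 0x100, "data": "new"}]
assert load(0x100, sq_u, [], cache) == "new"   # forwarded from store queue
assert load(0x100, [], [], cache) == "old"     # satisfied from the cache
```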
  • Patent number: 9104684
    Abstract: Embodiments relate to cache handling in a database system. An aspect includes controlling operations of a set of caches in the database system and determining whether a value of a cache quality parameter of a first cache out of the set of caches meets a cache image creation criterion relating to the first cache. Moreover, an aspect includes selecting at least one cache entry from the first cache, if a value of a related cache entry parameter meets a cache entry criterion, and if the value of the cache quality parameter of the first cache exceeds the predefined value of the cache image creation criterion, and creating a cache image based on the selected at least one cache entry and storing the cache image for further use.
    Type: Grant
    Filed: March 22, 2013
    Date of Patent: August 11, 2015
    Assignee: International Business Machines Corporation
    Inventors: Ingo Hotz, Robert Kern, Martin Oberhofer, Mathias Rueck
  • Publication number: 20150149726
    Abstract: A data distribution device includes: a memory configured to store cache data of data to be distributed; and a processor coupled to the memory and configured to: read the cache data from the memory in accordance with a request message received from other devices to distribute the cache data to the other devices, update, when the request message is received, a counter value that gets closer to a given value with time, so as to make the counter value move away from the given value in accordance with a reference value that is a reciprocal of a threshold value of a reception rate of the request message, whether or not to store the cache data being determined based on the reception rate; and discard the cache data in the memory when the counter value becomes the given value.
    Type: Application
    Filed: November 20, 2014
    Publication date: May 28, 2015
    Applicant: FUJITSU LIMITED
    Inventor: Satoshi IMAI
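The counter mechanism above behaves like a leaky bucket: the counter decays toward the given value with time, each request pushes it away by the reference value (the reciprocal of the reception-rate threshold), and the cache data is discarded once the counter reaches the given value. A rough model, with zero as the given value and arbitrary time units:

```python
# Rough model of the discard policy: keep the cache data only while the
# request rate stays above the threshold implied by the reference value.

class CountedCacheEntry:
    def __init__(self, rate_threshold, start=1.0):
        self.counter = start
        self.step = 1.0 / rate_threshold   # reference value

    def tick(self, dt):
        self.counter = max(0.0, self.counter - dt)  # decay toward zero
        return self.counter > 0.0          # False -> discard the cache data

    def on_request(self):
        self.counter += self.step          # move away from the given value

e = CountedCacheEntry(rate_threshold=2.0)   # keep if >= 2 requests per unit
e.on_request()                              # counter = 1.5
assert e.tick(1.0) is True                  # 0.5 left: still cached
assert e.tick(1.0) is False                 # reached zero: discard
```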
  • Patent number: 9015272
    Abstract: Between a CPU and a communication module, a write buffer, a write control section, a read buffer and a read control section are provided. The CPU directly accesses the write buffer and the read buffer. By periodically outputting a communication request, the read control section reads data, which the communication module received from other nodes, and transfers the data to the read buffer. The write control section transfers the data written in the write buffer to the communication module as transmission data. In addition, a bypass access control section and an access sequence control section are provided. The bypass access control section controls direct data read and data write between the CPU and the communication module. The access sequence control section controls sequence of accesses of the control sections to the communication module.
    Type: Grant
    Filed: February 23, 2012
    Date of Patent: April 21, 2015
    Assignee: DENSO CORPORATION
    Inventors: Hirofumi Yamamoto, Yuki Horii, Takashi Abe, Shinichirou Taguchi
  • Patent number: 9015417
    Abstract: An access request that includes a combination of a file identifier and an offset value is received. If the page cache does not contain the page indexed by the combination, then the file system is accessed and the offset value is mapped to a disk location. The file system can access a block map to identify the location. A table (e.g., a shared location table) that includes entries (e.g., locations) for pages that are shared by multiple files is accessed. If the aforementioned disk location is in the table, then the requested page is in the page cache and it is not necessary to add the page to the page cache. Otherwise, the page is added to the page cache.
    Type: Grant
    Filed: December 15, 2010
    Date of Patent: April 21, 2015
    Assignee: Symantec Corporation
    Inventors: Mukund Agrawal, Shriram Wankhade
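The lookup path above (page cache by file and offset, then block map, then shared location table) can be sketched as follows. All names, the dict-based structures, and the `read_from_disk` stand-in are invented for illustration:

```python
# Sketch of page-cache lookup with a shared location table for pages
# shared by multiple files (e.g. after deduplication or snapshots).

def read_from_disk(loc):
    return f"page@{loc}"  # stand-in for an actual disk read

def get_page(file_id, offset, page_cache, block_map, shared_table):
    page = page_cache.get((file_id, offset))
    if page is not None:
        return page                              # direct page-cache hit
    disk_loc = block_map[(file_id, offset)]      # file system maps the offset
    page = shared_table.get(disk_loc)            # page shared by another file?
    if page is None:
        page = read_from_disk(disk_loc)
        shared_table[disk_loc] = page            # record the shared location
    page_cache[(file_id, offset)] = page         # index under this file too
    return page

pc, bm, st = {}, {("f1", 0): 100, ("f2", 0): 100}, {}
p1 = get_page("f1", 0, pc, bm, st)
p2 = get_page("f2", 0, pc, bm, st)               # same disk location: no re-read
assert p1 == p2 == "page@100"
```

The point of the shared table is visible in the example: the second file's request resolves to a disk location already present, so the page is reused instead of being read and cached a second time.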
  • Patent number: 9015722
    Abstract: A method of determining a thread from a plurality of threads to execute a task in a multi-processor computer system. The plurality of threads is grouped into at least one subset associated with a cache memory of the computer system. The task has a type determined by a set of instructions. The method obtains an execution history of the subset of the plurality of threads and determines a weighting for each of the set of instructions and the set of data, the weightings depending on the type of the task. A suitability of the subset of threads to execute the task is then determined based on the execution history and the determined weightings. Subject to the determined suitability of the subset of threads, the method determines a thread from the subset of threads to execute the task using content of the cache memory associated with the subset of threads.
    Type: Grant
    Filed: August 17, 2012
    Date of Patent: April 21, 2015
    Assignee: Canon Kabushiki Kaisha
    Inventors: Ekaterina Stefanov, David Robert James Monaghan, Paul William Morrison
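The suitability computation above can be approximated as a weighted cache-affinity score: how much of the task's instruction set and data set is already resident in the cache associated with each thread subset, weighted per task type. The scoring formula and set-based cache model are assumptions:

```python
# Illustrative cache-affinity scoring for picking a thread subset.

def suitability(cached, task_instrs, task_data, w_instr, w_data):
    i_hit = len(task_instrs & cached) / max(1, len(task_instrs))
    d_hit = len(task_data & cached) / max(1, len(task_data))
    return w_instr * i_hit + w_data * d_hit

def pick_subset(subsets, task_instrs, task_data, w_instr, w_data):
    # subsets: name -> set of items resident in that subset's cache
    return max(subsets, key=lambda s: suitability(
        subsets[s], task_instrs, task_data, w_instr, w_data))

subsets = {"A": {"i1", "i2", "d1"}, "B": {"i1", "d1", "d2"}}
# A data-heavy task type weights data residency more than instructions.
assert pick_subset(subsets, {"i1", "i2"}, {"d1", "d2"}, 0.3, 0.7) == "B"
```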
  • Patent number: 8990504
    Abstract: A cache page management method can include paging out a memory page to an input/output controller, paging the memory page from the input/output controller into a real memory, modifying the memory page in the real memory to an updated memory page and purging the memory page paged to the input/output controller.
    Type: Grant
    Filed: July 11, 2011
    Date of Patent: March 24, 2015
    Assignee: International Business Machines Corporation
    Inventors: Tara Astigarraga, Michael E. Browne, Joseph Demczar, Eric C. Wieder
  • Patent number: 8966169
    Abstract: A tape recording device, method, and computer program product are provided for performing operations of position movement, reading, and writing on a tape medium, and receiving a series of commands from an upper-layer device. The tape recording device includes a buffer for temporarily storing data related to the reading and an append write, a tape for recording the data stored in the buffer, a reading and writing head for reading data from the tape into the buffer and writing the data onto the tape, control means for reading data from a designated position of the tape and storing the data in the buffer, and for writing the data stored in the buffer onto the tape from a written data end position in response to an append write command, and a non-volatile memory for storing data stored in the buffer in response to an append write command.
    Type: Grant
    Filed: October 21, 2010
    Date of Patent: February 24, 2015
    Assignee: International Business Machines Corporation
    Inventors: Toshiyuki Shiratori, Kohei Taguchi
  • Patent number: 8966178
    Abstract: Provided are a computer program product, system, and method for managing data in a cache system comprising a first cache, a second cache, and a storage system. A determination is made of tracks stored in the storage system to demote from the first cache. A first stride is formed including the determined tracks to demote. A determination is made of a second stride in the second cache in which to include the tracks in the first stride. The tracks from the first stride are added to the second stride in the second cache. A determination is made of tracks in strides in the second cache to demote from the second cache. The determined tracks to demote from the second cache are demoted.
    Type: Grant
    Filed: January 17, 2012
    Date of Patent: February 24, 2015
    Assignee: International Business Machines Corporation
    Inventors: Kevin J. Ash, Michael T. Benhase, Lokesh M. Gupta, Matthew J. Kalos, Karl A. Nielsen
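The two-level demotion above (group tracks demoted from the first cache into a stride, merge that stride into a stride in the second cache, then demote full strides from the second cache) can be modeled simply. The stride size and the policy of filling the most recent stride are assumptions:

```python
# Simplified model of stride-based demotion between two cache levels.

STRIDE_SIZE = 4

def demote_from_first(first_cache, second_cache, tracks):
    stride = [first_cache.pop(t) for t in tracks]     # form the first stride
    if not second_cache or len(second_cache[-1]) + len(stride) > STRIDE_SIZE:
        second_cache.append([])                       # open a new second stride
    second_cache[-1].extend(stride)                   # merge tracks into it

def demote_from_second(second_cache):
    full = [s for s in second_cache if len(s) >= STRIDE_SIZE]
    for s in full:
        second_cache.remove(s)                        # demote to the storage system
    return full

first = {"t1": 1, "t2": 2, "t3": 3, "t4": 4}
second = []
demote_from_first(first, second, ["t1", "t2"])
demote_from_first(first, second, ["t3", "t4"])
assert second == [[1, 2, 3, 4]]                       # one full stride formed
assert demote_from_second(second) == [[1, 2, 3, 4]]
```

Grouping demotions into strides like this lets the second cache (and ultimately the storage system) absorb writes as full-stride operations rather than scattered single tracks.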