Patents Examined by Masud Khan
  • Patent number: 9921979
    Abstract: Methods, systems, and computer program products for executing a protected function are provided. A computer-implemented method may include storing a first virtual machine function instruction as the last instruction on a first trampoline page, the first virtual machine function instruction being executable to configure access privileges according to a trampoline view, storing a page table setup instruction on a second trampoline page, and storing a second virtual machine function instruction as the last instruction on the second trampoline page, the second virtual machine function instruction being executable to configure access privileges according to a protected view.
    Type: Grant
    Filed: January 14, 2015
    Date of Patent: March 20, 2018
    Assignee: Red Hat Israel, Ltd.
    Inventors: Michael Tsirkin, Paolo Bonzini
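    A minimal sketch of the view-switching flow described above, modeling the two trampoline pages as functions whose final step switches an access-privilege view; the names View, vmfunc_switch_view, and protected_fn are invented for illustration, not taken from the patent.
```cpp
#include <cstdio>

// Simplified access-privilege "views" (stand-ins for the views selected by a
// VM function instruction on real hardware).
enum class View { Default, Trampoline, Protected };

static View g_current_view = View::Default;

// Stand-in for a VM function instruction that switches the active view.
void vmfunc_switch_view(View v) { g_current_view = v; }

void protected_fn() {
    std::printf("protected function runs in view %d\n", (int)g_current_view);
}

// First trampoline page: its last "instruction" switches to the trampoline view.
void trampoline_page_1() {
    vmfunc_switch_view(View::Trampoline);   // last instruction on the page
}

// Second trampoline page: sets up page tables, then switches to the protected view.
void trampoline_page_2() {
    // page table setup instruction (modeled as a no-op here)
    vmfunc_switch_view(View::Protected);    // last instruction on the page
}

int main() {
    trampoline_page_1();
    trampoline_page_2();
    protected_fn();                          // executes under the protected view
    vmfunc_switch_view(View::Default);       // return path restores the default view
}
```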
  • Patent number: 9916189
    Abstract: In the described embodiments, entities in a computing device selectively write specified values to a lock variable in a local cache and one or more lower levels of a memory hierarchy, enabling multiple entities to concurrently execute corresponding critical sections of program code that are protected by a same lock.
    Type: Grant
    Filed: September 6, 2014
    Date of Patent: March 13, 2018
    Assignee: ADVANCED MICRO DEVICES, INC.
    Inventors: Martin T. Pohlack, Stephan Diestelhorst
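    A minimal sketch of the idea above, assuming a per-core cached copy of the lock variable (LockView) and a shared lower-level value (shared_lock); writing the "locked" value only to the local copy leaves peers free to enter their critical sections concurrently. Conflict detection between the concurrent sections is not modeled.
```cpp
#include <cstdio>
#include <vector>

// Per-core cached value of the lock variable.
struct LockView { int l1_copy; };

int shared_lock = 0;                      // value visible at the lower memory level (0 = free)

void enter_critical_section(LockView& core) {
    core.l1_copy = 1;                     // locally the lock appears taken...
    // ...but shared_lock is deliberately left at 0 so peers are not serialized.
}

void exit_critical_section(LockView& core) { core.l1_copy = 0; }

int main() {
    std::vector<LockView> cores(4, LockView{0});
    for (auto& c : cores) enter_critical_section(c);      // all enter concurrently
    std::printf("shared lock value seen by peers: %d\n", shared_lock);
    for (auto& c : cores) exit_critical_section(c);
}
```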
  • Patent number: 9892049
    Abstract: A semiconductor device includes a processor, a memory, a plurality of tags, a plurality of ways each of which can store a plurality of data of consecutive addresses of the memory in which a tag value stored in each tag of the plurality of tags is taken as a reference address, and a cache controller configured to determine whether a second way has an address change direction flag matching an address change direction flag of a first way and has a tag value continuous with a tag value of the first way in a direction opposite to the direction that the address change direction flag of the first way indicates, and to prefetch, to the second way, data indicated by a tag value continuous with the tag value of the first way in the direction that the address change direction flag indicates, based on the result of the determination.
    Type: Grant
    Filed: October 31, 2012
    Date of Patent: February 13, 2018
    Assignee: Renesas Electronics Corporation
    Inventor: Makoto Shindo
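    A minimal sketch of the prefetch decision above, assuming a two-way structure in which each way records a tag (block base address) and an address-change-direction flag; the names Way, kBlockSize, and maybe_prefetch are invented.
```cpp
#include <cstdio>

// Each way caches a block of consecutive addresses identified by a tag (the
// block's base address) plus a flag recording whether accesses have been
// moving toward ascending (+1) or descending (-1) addresses.
struct Way { int tag; int direction; bool valid; };

constexpr int kBlockSize = 64;

// If way B holds the block adjacent to way A on the *opposite* side of A's
// travel direction, the stream has already passed B, so B is repurposed to
// prefetch the next block in A's travel direction.
void maybe_prefetch(Way& a, Way& b) {
    if (!a.valid || !b.valid) return;
    bool same_direction = (a.direction == b.direction);
    bool b_is_behind_a  = (b.tag == a.tag - a.direction * kBlockSize);
    if (same_direction && b_is_behind_a) {
        b.tag = a.tag + a.direction * kBlockSize;    // "prefetch" the next block
        std::printf("prefetching block at tag 0x%x into the second way\n", b.tag);
    }
}

int main() {
    Way first  = {0x1040, +1, true};   // ascending access stream
    Way second = {0x1000, +1, true};   // holds the block just behind it
    maybe_prefetch(first, second);     // second way now holds 0x1080
}
```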
  • Patent number: 9880935
    Abstract: A processor writes input data to a cache line of a shared cache, wherein the input data is ready to be operated on by an accelerator. It then notifies an accelerator that the input data is ready to be processed. The processor then determines that output data of the accelerator is ready to be consumed, the output data being located at the cache line or an additional cache line of the shared cache, wherein the cache line or the additional cache line comprises a set first flag that indicates the cache line or the additional cache line was modified by the accelerator and that prevents the output data from being removed from the cache line or the additional cache line until the output data is read by the processor. The processor reads and processes the output data from the cache line or the additional cache line.
    Type: Grant
    Filed: March 24, 2014
    Date of Patent: January 30, 2018
    Assignee: Intel Corporation
    Inventors: Pinkesh Shah, Herbert Hum, Lingdan Zeng
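    A minimal sketch of the processor/accelerator hand-off above, assuming a SharedCacheLine structure whose modified-by-accelerator flag both signals ready output and pins the line against eviction until it is read; all names are invented.
```cpp
#include <array>
#include <cstdint>
#include <cstdio>

// Toy shared cache line used for producer/consumer hand-off.
struct SharedCacheLine {
    std::array<uint8_t, 64> data{};
    bool modified_by_accelerator = false;   // the "set first flag" of the abstract
};

void accelerator_process(SharedCacheLine& line) {
    for (auto& b : line.data) b += 1;        // stand-in for real accelerator work
    line.modified_by_accelerator = true;     // output ready; line must not be evicted
}

bool try_evict(const SharedCacheLine& line) {
    return !line.modified_by_accelerator;    // pinned while the flag is set
}

int main() {
    SharedCacheLine line;
    line.data[0] = 41;                       // processor writes input data
    accelerator_process(line);               // "notify" + accelerator runs
    std::printf("evictable while unread? %s\n", try_evict(line) ? "yes" : "no");
    std::printf("output byte: %u\n", (unsigned)line.data[0]);   // processor reads output (42)
    // after the read, a real implementation would clear the flag and unpin the line
}
```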
  • Patent number: 9880936
    Abstract: A system includes a database that stores data on one or more memory devices and a business object layer that receives a request for data associated with a user stored on the database. The system includes a first cache that reads and stores the requested data from the database in response to the request from the business object layer, where the first cache is partitioned into different segments and the different segments are stored across multiple different computing devices. The system includes a second cache that reads and stores the requested data from the first cache. The business object layer filters and applies business logic to the data before the second cache reads the requested data from the first cache. The second cache is stored on a single computing device that received the request. The business object layer delivers the requested data from the second cache.
    Type: Grant
    Filed: October 21, 2014
    Date of Patent: January 30, 2018
    Assignee: Sybase, Inc.
    Inventors: Pranav Athalye, Srinivas Sudhakaran
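    A minimal sketch of the two-level read path above, assuming in-memory maps standing in for the database, the distributed first cache, and the node-local second cache; apply_business_logic stands in for the filtering done by the business object layer.
```cpp
#include <cstdio>
#include <map>
#include <string>

// Stand-ins for the storage tiers; names are invented for the example.
std::map<std::string, std::string> database   = {{"user1", "raw-record"}};
std::map<std::string, std::string> first_cache;    // partitioned/distributed in the patent
std::map<std::string, std::string> second_cache;   // local to the node handling the request

std::string apply_business_logic(const std::string& raw) { return "filtered(" + raw + ")"; }

std::string get(const std::string& key) {
    if (auto it = second_cache.find(key); it != second_cache.end()) return it->second;
    if (first_cache.find(key) == first_cache.end())
        first_cache[key] = database.at(key);                     // first cache reads the DB
    second_cache[key] = apply_business_logic(first_cache[key]);  // filter before caching locally
    return second_cache[key];
}

int main() {
    std::printf("%s\n", get("user1").c_str());   // misses both caches, fills them
    std::printf("%s\n", get("user1").c_str());   // served from the local second cache
}
```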
  • Patent number: 9864690
    Abstract: A processor in a multi-processor configuration is configured to perform dynamic address translation from logical addresses to real addresses and to detect memory conflicts for shared logical memory in transactional memory based on logical (virtual) address comparisons.
    Type: Grant
    Filed: September 15, 2015
    Date of Patent: January 9, 2018
    Assignee: International Business Machines Corporation
    Inventors: Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum, Timothy J. Slegel
  • Patent number: 9864692
    Abstract: Managing cache evictions during transactional execution of a process. Based on initiating transactional execution of a memory data accessing instruction, memory data is fetched from a memory location, the memory data to be loaded as a new line into a cache entry of the cache. Based on determining that a threshold number of cache entries have been marked as read-set cache lines, it is determined whether a cache entry that is a read-set cache line can be replaced, by identifying a cache entry that is a read-set cache line for the transaction that contains memory data from a memory address within a predetermined non-conflict address range. The identified cache entry of the transaction is then invalidated, the fetched memory data is loaded into the identified cache entry, and the identified cache entry is marked as a read-set cache line of the transaction.
    Type: Grant
    Filed: August 12, 2015
    Date of Patent: January 9, 2018
    Assignee: International Business Machines Corporation
    Inventors: Dan F. Greiner, Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum, Timothy J. Slegel
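    A minimal sketch of the replacement decision above, assuming an invented non-conflict address range and read-set threshold; find_replaceable picks a read-set entry inside that range once the threshold is reached.
```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

struct CacheEntry { uint64_t addr; bool valid; bool read_set; };

constexpr uint64_t kNonConflictBase = 0x1000, kNonConflictEnd = 0x2000;
constexpr size_t   kReadSetThreshold = 2;

bool in_non_conflict_range(uint64_t a) { return a >= kNonConflictBase && a < kNonConflictEnd; }

// Returns the index of a replaceable read-set entry, or -1 if none exists.
int find_replaceable(const std::vector<CacheEntry>& cache) {
    size_t read_set_lines = 0;
    for (const auto& e : cache) read_set_lines += (e.valid && e.read_set);
    if (read_set_lines < kReadSetThreshold) return -1;        // threshold not reached yet
    for (size_t i = 0; i < cache.size(); ++i)
        if (cache[i].valid && cache[i].read_set && in_non_conflict_range(cache[i].addr))
            return (int)i;
    return -1;
}

int main() {
    std::vector<CacheEntry> cache = {{0x1040, true, true}, {0x9000, true, true}};
    int victim = find_replaceable(cache);
    if (victim >= 0) {
        cache[victim] = {0xA000, true, true};   // invalidate, load new data, re-mark as read-set
        std::printf("replaced entry %d with the newly fetched line\n", victim);
    }
}
```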
  • Patent number: 9785352
    Abstract: An application located in one or more first memory regions is executed. The application has a separate modified portion, which is located in one or more second memory regions. A request is obtained to access one of a first memory region or a second memory region, the request including an address of a first type. Based on obtaining the request, the address is translated to another address. The other address is of a second type and indicates the first memory region or the second memory region. The translating is based on an attribute associated with the address, in which the attribute is used to select information from a plurality of information concurrently available for selection. The plurality of information provide multiple addresses of the second type, one of which is the other address. The other address is used to access the first memory region or the second memory region.
    Type: Grant
    Filed: September 12, 2014
    Date of Patent: October 10, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Michael K. Gschwind
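    A minimal sketch of the attribute-driven selection above, assuming the attribute simply indexes one of two concurrently available region base addresses; the field widths and base addresses are invented.
```cpp
#include <cstdint>
#include <cstdio>

// Two second-type base addresses available for selection at the same time.
constexpr uint64_t kRegionBases[2] = {
    0x10000000,   // attribute 0: first memory region (unmodified application)
    0x20000000    // attribute 1: second memory region (separately located modified portion)
};

uint64_t translate(uint64_t first_type_addr, unsigned attribute) {
    uint64_t offset = first_type_addr & 0xFFFFF;      // keep the region offset
    return kRegionBases[attribute & 1] + offset;      // attribute selects the mapping
}

int main() {
    std::printf("0x%llx\n", (unsigned long long)translate(0x123, 0));  // original region
    std::printf("0x%llx\n", (unsigned long long)translate(0x123, 1));  // modified region
}
```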
  • Patent number: 9772944
    Abstract: A higher level shared cache of a hierarchical cache of a multi-processor system utilizes transaction identifiers to manage memory conflicts in corresponding transactions. The higher level cache is shared with two or more processors. Transaction indicators are set in the higher level cache corresponding to the cache lines being accessed. The transaction aborts if a memory conflict with the transaction's cache lines from another transaction is detected.
    Type: Grant
    Filed: June 27, 2014
    Date of Patent: September 26, 2017
    Assignee: International Business Machines Corporation
    Inventors: Fadi Y. Busaba, Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum
  • Patent number: 9753735
    Abstract: A data processing system includes a processing pipeline for the parallel execution of a plurality of threads. An issue controller issues threads to the processing pipeline. A stall manager controls the stalling and unstalling of threads when a cache miss occurs within a cache memory. The issue controller issues the threads to the processing pipeline in accordance with both a main sequence and a pilot sequence. The pilot sequence is followed such that threads within the pilot sequence are issued at least a given time ahead of their neighbors within the main sequence. The given time corresponds approximately to the latency associated with a cache miss. The threads may be arranged in groups corresponding to blocks of pixels for processing within a graphics processing unit.
    Type: Grant
    Filed: January 14, 2015
    Date of Patent: September 5, 2017
    Assignee: ARM Limited
    Inventors: Andreas Due Engh-Halstvedt, Ian Victor Devereux, David Bermingham, Jakob Axel Fries, Oskar Lars Flordal
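    A minimal sketch of the dual-sequence issue order above, assuming an invented lead time of three issue slots standing in for the cache-miss latency; each thread appears once in the pilot sequence and again, later, in the main sequence.
```cpp
#include <cstdio>
#include <vector>

int main() {
    const int num_threads = 8;
    const int lead = 3;                       // ~cache-miss latency, in issue slots
    std::vector<int> schedule;
    for (int slot = 0; slot < num_threads + lead; ++slot) {
        int pilot = slot;                     // pilot sequence runs `lead` threads ahead
        int main_seq = slot - lead;           // main sequence trails behind
        if (pilot < num_threads) schedule.push_back(pilot);      // pilot issue (warms caches)
        if (main_seq >= 0)       schedule.push_back(main_seq);   // main issue (hits warmed lines)
    }
    for (int t : schedule) std::printf("%d ", t);
    std::printf("\n");
}
```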
  • Patent number: 9740623
    Abstract: A processing device comprises a processing device cache and a cache controller. The cache controller initiates a cache line eviction process and determines an object liveness value associated with a cache line in the processing device cache. The cache controller applies the object liveness value to a cache line eviction policy and evicts the cache line from the processing device cache based on the object liveness value and the cache line eviction policy.
    Type: Grant
    Filed: March 15, 2013
    Date of Patent: August 22, 2017
    Assignee: Intel Corporation
    Inventors: Christopher J. Hughes, Daehyun Kim, Jong Soo Park, Richard M Yoo, Ganesh Bikshandi
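    A minimal sketch of an eviction pass driven by an object liveness value, as described above; the liveness numbers and the pick_victim policy are invented.
```cpp
#include <cstdio>
#include <vector>

// Each resident line carries an "object liveness" value; lower = object likely dead.
struct Line { int id; int liveness; };

// Evict the line whose object is least likely to be touched again.
int pick_victim(const std::vector<Line>& set) {
    int victim = 0;
    for (size_t i = 1; i < set.size(); ++i)
        if (set[i].liveness < set[victim].liveness) victim = (int)i;
    return victim;
}

int main() {
    std::vector<Line> set = {{0, 7}, {1, 1}, {2, 4}};   // line 1 holds a near-dead object
    std::printf("evicting line %d\n", set[pick_victim(set)].id);
}
```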
  • Patent number: 9720837
    Abstract: A computer allows non-cacheable loads or stores in a hardware transactional memory environment. Transactional loads or stores, by a processor, are monitored in a cache for TX conflicts. The processor accepts a request to execute a transactional execution (TX) transaction. Based on processor execution of a cacheable load or store instruction for loading or storing first memory data of the transaction, the computer can perform a cache miss operation on the cache. Based on processor execution of a non-cacheable load instruction for loading second memory data of the transaction, the computer can skip the cache miss operation when the cache line associated with the second memory data is not cached, and instead load an address of the second memory data into a non-cache monitor. The TX transaction can be aborted based on the non-cache monitor detecting a memory conflict from another processor.
    Type: Grant
    Filed: June 27, 2014
    Date of Patent: August 1, 2017
    Assignee: International Business Machines Corporation
    Inventors: Jonathan D. Bradbury, Michael Karl Gschwind, Valentina Salapura, Chung-Lung K. Shum
  • Patent number: 9703718
    Abstract: Managing cache evictions during transactional execution of a process. Based on initiating transactional execution of a memory data accessing instruction, memory data is fetched from a memory location, the memory data to be loaded as a new line into a cache entry of the cache. Based on determining that a threshold number of cache entries have been marked as read-set cache lines, it is determined whether a cache entry that is a read-set cache line can be replaced, by identifying a cache entry that is a read-set cache line for the transaction that contains memory data from a memory address within a predetermined non-conflict address range. The identified cache entry of the transaction is then invalidated, the fetched memory data is loaded into the identified cache entry, and the identified cache entry is marked as a read-set cache line of the transaction.
    Type: Grant
    Filed: June 27, 2014
    Date of Patent: July 11, 2017
    Assignee: International Business Machines Corporation
    Inventors: Dan F. Greiner, Michael Karl Gschwind, Eric M. Schwarz, Chung-Lung K. Shum, Timothy J. Slegel
  • Patent number: 9672113
    Abstract: A backup system comprises a tape backup storage storing a set of tape backup data, a snapshot backup storage storing a nearest snapshot, and a processor. The processor is configured to determine the nearest snapshot, wherein a snapshot time of the nearest snapshot is nearest in time to a backup time, and determine the set of tape backup data, wherein the set of tape backup data and the nearest snapshot enable recovery of the backup data.
    Type: Grant
    Filed: March 25, 2014
    Date of Patent: June 6, 2017
    Assignee: EMC IP Holding Company LLC
    Inventors: Manuel Rodriques, John Rokicki
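    A minimal sketch of the recovery planning above, assuming integer timestamps; nearest_snapshot picks the snapshot closest to the requested backup time, and the tape backup data covering the remaining gap would then be selected.
```cpp
#include <algorithm>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Pick the snapshot whose time is nearest the requested backup time.
int nearest_snapshot(const std::vector<int>& snapshot_times, int backup_time) {
    int best = 0;
    for (size_t i = 1; i < snapshot_times.size(); ++i)
        if (std::abs(snapshot_times[i] - backup_time) <
            std::abs(snapshot_times[best] - backup_time))
            best = (int)i;
    return best;
}

int main() {
    std::vector<int> snapshots = {100, 200, 300};          // snapshot times
    int backup_time = 240;
    int idx = nearest_snapshot(snapshots, backup_time);
    std::printf("use snapshot at t=%d plus tape data covering t=%d..%d\n",
                snapshots[idx], std::min(snapshots[idx], backup_time),
                std::max(snapshots[idx], backup_time));
}
```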
  • Patent number: 9652270
    Abstract: Embodiments of apparatus and methods for virtualized computing are described. In embodiments, an apparatus may include one of more processor cores and a cache coupled to the one or more processor cores. The apparatus may further include a hypervisor operated by the one or more processor cores to manage operation of virtual machines on the apparatus, including selecting a part of the cache to store selected data or code of the hypervisor or one of the virtual machines, and locking the part of the cache to prevent the selected data or code from being evicted from the cache. Other embodiments may be described and/or claimed.
    Type: Grant
    Filed: March 21, 2014
    Date of Patent: May 16, 2017
    Assignee: Intel Corporation
    Inventors: Alexander Komarov, Anton Langebner
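    A minimal sketch of the way-locking idea above, assuming a locked bit per way that the eviction walk honors; the way contents and the locking interface are invented.
```cpp
#include <cstdio>
#include <vector>

struct Way { const char* owner; bool locked; };

// Locked ways are never chosen as eviction victims.
int pick_evictable_way(const std::vector<Way>& set) {
    for (size_t i = 0; i < set.size(); ++i)
        if (!set[i].locked) return (int)i;
    return -1;
}

int main() {
    std::vector<Way> set = {
        {"hypervisor code", true},            // locked by the hypervisor
        {"vm1 hot data",    true},            // locked on behalf of a VM
        {"vm2 scratch",     false},
    };
    std::printf("evict way %d\n", pick_evictable_way(set));   // prints 2
}
```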
  • Patent number: 9652387
    Abstract: A cache system stores a number of different datasets. The cache system includes a number of cache units, each in a state associated with one of the datasets. In response to determining that a hit ratio of a cache unit drops below a threshold, the state of the cache unit is changed and the dataset is replaced with that associated with the new state.
    Type: Grant
    Filed: January 3, 2014
    Date of Patent: May 16, 2017
    Assignee: Red Hat, Inc.
    Inventors: Filip Eliáš, Filip Nguyen
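    A minimal sketch of the state-switching cache unit above, assuming invented dataset names, a lookup-count window, and a 50% hit-ratio threshold.
```cpp
#include <cstdio>

// A cache unit is in a state bound to one dataset; when its measured hit ratio
// drops below the threshold, it moves to the next state and reloads that dataset.
struct CacheUnit {
    int state = 0;                  // index of the dataset currently cached
    double hits = 0, lookups = 0;
};

const char* kDatasets[] = {"orders", "customers", "inventory"};
constexpr double kThreshold = 0.5;

void record_lookup(CacheUnit& u, bool hit) {
    u.lookups += 1; u.hits += hit;
    if (u.lookups >= 10 && u.hits / u.lookups < kThreshold) {
        u.state = (u.state + 1) % 3;          // change state...
        u.hits = u.lookups = 0;               // ...and start measuring the new dataset
        std::printf("switching unit to dataset '%s'\n", kDatasets[u.state]);
    }
}

int main() {
    CacheUnit unit;
    for (int i = 0; i < 12; ++i) record_lookup(unit, i % 4 == 0);   // ~25% hit ratio
}
```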
  • Patent number: 9575761
    Abstract: A semiconductor device includes a memory for storing a plurality of instructions therein, an instruction queue which temporarily stores the instructions fetched from the memory therein, a central processing unit which executes the instruction supplied from the instruction queue, an instruction cache which stores therein the instructions executed in the past by the central processing unit, and a control circuit which controls fetching of each instruction. When the central processing unit executes a branch instruction, and an instruction of a branch destination is being in the instruction cache and an instruction following the instruction of the branch destination is stored in the instruction queue, the control circuit causes the instruction queue to fetch the instruction of the branch destination from the instruction cache and causes the instruction queue not to fetch the instruction following the instruction of the branch destination.
    Type: Grant
    Filed: March 24, 2014
    Date of Patent: February 21, 2017
    Assignee: Renesas Electronics Corporation
    Inventor: Isao Kotera
  • Patent number: 9569115
    Abstract: An application located in one or more first memory regions is executed. The application has a separate modified portion, which is located in one or more second memory regions. A request is obtained to access one of a first memory region or a second memory region, the request including an address of a first type. Based on obtaining the request, the address is translated to another address. The other address is of a second type and indicates the first memory region or the second memory region. The translating is based on an attribute associated with the address, in which the attribute is used to select information from a plurality of information concurrently available for selection. The plurality of information provide multiple addresses of the second type, one of which is the other address. The other address is used to access the first memory region or the second memory region.
    Type: Grant
    Filed: March 31, 2014
    Date of Patent: February 14, 2017
    Assignee: INTERNATIONAL BUSINESS MACHINES CORPORATION
    Inventor: Michael K. Gschwind
  • Patent number: 9354979
    Abstract: A mechanism is provided in a data processing system for asynchronous replication. The mechanism creates a record in a write log in a host computing device for a write command and marks the record as uncommitted. The mechanism maintains a copy of data to be written by the write command at the host computing device. The mechanism issues the write command from the host computing device to a primary storage controller at the primary storage site. Responsive to receiving an acknowledgement from the primary storage controller that the data have been written to the primary storage site, the mechanism marks the record as unreplicated. Responsive to receiving an acknowledgement from the primary storage controller that the data have been replicated to a secondary storage site, the mechanism erases the record in the write log and deletes the copy of the data.
    Type: Grant
    Filed: February 7, 2014
    Date of Patent: May 31, 2016
    Assignee: International Business Machines Corporation
    Inventors: Rahul M. Fiske, Shrikant V. Karve, Sarvesh S. Patel, Subhojit Roy
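    A minimal sketch of the host-side write-log state machine above, assuming an invented write-command id as the record key; records move from uncommitted to unreplicated on the primary's write acknowledgement and are erased, along with the host's data copy, on the replication acknowledgement.
```cpp
#include <cstdio>
#include <map>
#include <string>

enum class RecordState { Uncommitted, Unreplicated };

std::map<int, RecordState> write_log;        // keyed by an invented write-command id
std::map<int, std::string> data_copies;      // host-side copy of the data being written

void issue_write(int id, const std::string& data) {
    write_log[id] = RecordState::Uncommitted;
    data_copies[id] = data;                   // keep a copy until replication completes
}

void on_primary_ack(int id)     { write_log[id] = RecordState::Unreplicated; }

void on_replication_ack(int id) { write_log.erase(id); data_copies.erase(id); }

int main() {
    issue_write(1, "block-42");
    on_primary_ack(1);                        // written to the primary site
    on_replication_ack(1);                    // replicated to the secondary site
    std::printf("records still pending: %zu\n", write_log.size());
}
```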