Patents by Inventor Arkaprava Basu

Arkaprava Basu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180219816
    Abstract: Various messaging systems and methods are disclosed for meeting invitation management. In one aspect, a method of messaging is provided that includes generating a message to invite one or more invitees to a meeting. The message includes an assertion to suppress an auto-responder of the one or more invitees. The message is sent to the one or more invitees. The assertion suppresses the auto-responder of the one or more invitees.
    Type: Application
    Filed: January 27, 2017
    Publication date: August 2, 2018
    Inventors: Andrew G. Kegel, Arkaprava Basu
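
    The mechanism described amounts to carrying a flag in the invitation that the recipient's mail handler checks before auto-replying. Below is a minimal sketch in C; the `invite_t` type, the `suppress_auto_responder` field name, and the handling function are illustrative assumptions, not details from the application.

    ```c
    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical meeting-invitation message carrying an assertion that
     * asks the recipient's mail agent not to fire its auto-responder. */
    typedef struct {
        const char *organizer;
        const char *subject;
        bool suppress_auto_responder;  /* the "assertion" from the abstract */
    } invite_t;

    /* Recipient-side handling: the auto-responder runs only when the
     * invitation does not assert suppression. */
    static void handle_invite(const invite_t *msg, bool out_of_office) {
        printf("Received invite: %s\n", msg->subject);
        if (out_of_office && !msg->suppress_auto_responder) {
            printf("Auto-reply sent to %s\n", msg->organizer);
        }
    }

    int main(void) {
        invite_t invite = { "organizer@example.com", "Design review", true };
        handle_invite(&invite, /*out_of_office=*/true);  /* no auto-reply is generated */
        return 0;
    }
    ```
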
  • Patent number: 10019283
    Abstract: A processing device includes a first memory that includes a context buffer. The processing device also includes a processor core to execute threads based on context information stored in registers of the processor core and a memory controller to selectively move a subset of the context information between the context buffer and the registers based on one or more latencies of the threads.
    Type: Grant
    Filed: June 22, 2015
    Date of Patent: July 10, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Dmitri Yudanov, Sergey Blagodurov, Arkaprava Basu, Sooraj Puthoor, Joseph L. Greathouse
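
    As a rough illustration of the idea, the sketch below spills a thread's register context to a context buffer in memory when the thread's expected latency is long, and restores it when the thread resumes. It is a simplified software model in C; the `thread_ctx_t` layout and the fixed latency threshold are assumptions, not values from the patent.

    ```c
    #include <stdint.h>
    #include <string.h>

    #define NUM_REGS 32
    #define LONG_LATENCY_CYCLES 1000  /* illustrative threshold, not from the patent */

    typedef struct {
        uint64_t regs[NUM_REGS];   /* architectural register contents */
        uint64_t expected_latency; /* e.g. cycles until an outstanding miss returns */
        int      spilled;          /* context currently lives in the buffer? */
    } thread_ctx_t;

    /* "First memory" region that holds evicted thread contexts. */
    static uint64_t context_buffer[64][NUM_REGS];

    /* Memory-controller policy: move a long-latency thread's context out of
     * the core's registers so another thread can use them. */
    void maybe_spill(thread_ctx_t *t, int slot) {
        if (!t->spilled && t->expected_latency >= LONG_LATENCY_CYCLES) {
            memcpy(context_buffer[slot], t->regs, sizeof(t->regs));
            t->spilled = 1;
        }
    }

    /* Bring the context back before the thread resumes execution. */
    void restore(thread_ctx_t *t, int slot) {
        if (t->spilled) {
            memcpy(t->regs, context_buffer[slot], sizeof(t->regs));
            t->spilled = 0;
        }
    }
    ```
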
  • Patent number: 10019377
    Abstract: The described embodiments include a computing device with two or more types of processors and a memory that is shared between the two or more types of processors. The computing device performs operations for handling cache coherency between the two or more types of processors. During operation, the computing device sets a cache coherency indicator in metadata in a page table entry in a page table, the page table entry including information about a page of data that is stored in the memory. The computing device then uses the cache coherency indicator to determine operations to be performed when accessing data in the page of data in the memory. For example, the computing device can use the coherency indicator to determine whether a coherency operation is to be performed when a processor of a given type accesses data in the page of data in the memory.
    Type: Grant
    Filed: May 23, 2016
    Date of Patent: July 10, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Arkaprava Basu, Bradford M. Beckmann, Shuai Che, Sooraj Puthoor
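
    In software terms, the decision comes down to consulting one metadata bit in the page table entry on each access. The sketch below is a simplified model in C; the bit position and the software coherence action are illustrative assumptions, not details from the patent.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    #define PTE_COHERENT (1u << 0)  /* illustrative bit position for the indicator */

    typedef struct {
        uint64_t phys_frame;
        uint32_t metadata;   /* holds the cache coherency indicator, among other flags */
    } pte_t;

    /* Stand-in for whatever coherence action the hardware would take. */
    static void software_coherence_action(void) {
        puts("page marked non-coherent: perform explicit flush/invalidate");
    }

    /* A processor of a given type (say, a GPU) consults the indicator to
     * decide whether extra coherence work is needed before touching the page. */
    void access_page(const pte_t *pte) {
        if (!(pte->metadata & PTE_COHERENT)) {
            software_coherence_action();
        }
        /* ... proceed with the actual load/store to pte->phys_frame ... */
    }
    ```
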
  • Patent number: 9983655
    Abstract: A method and apparatus for performing inter-lane power management includes de-energizing one or more execution lanes upon a determination that the one or more execution lanes are to be predicated. Energy from the predicated execution lanes is redistributed to one or more active execution lanes.
    Type: Grant
    Filed: December 9, 2015
    Date of Patent: May 29, 2018
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Mitesh R. Meswani, David A. Roberts, Dmitri Yudanov, Arkaprava Basu, Sergey Blagodurov
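
    Conceptually, a predicate mask tells the hardware which SIMD lanes will do no useful work, so those lanes can be de-energized while their power budget shifts to the lanes that remain active. The toy model below in C uses an arbitrary lane count and power accounting purely for illustration.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    #define LANES 16
    #define POWER_PER_LANE 10  /* arbitrary units, purely illustrative */

    /* Given a predicate mask, de-energize lanes whose predicate bit is 0 and
     * redistribute their power budget across the lanes that stay active. */
    void apply_predication(uint32_t predicate_mask) {
        int active = 0;
        for (int lane = 0; lane < LANES; lane++)
            if (predicate_mask & (1u << lane))
                active++;

        int freed = (LANES - active) * POWER_PER_LANE;
        int boost_per_lane = active ? freed / active : 0;

        printf("%d lanes gated, each active lane gains %d units of headroom\n",
               LANES - active, boost_per_lane);
    }
    ```
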
  • Publication number: 20180107598
    Abstract: Cluster manager functional blocks perform operations for migrating pages in portions in corresponding migration clusters. During operation, each cluster manager keeps an access record that includes information indicating accesses of pages in the portions in the corresponding migration cluster. Based on the access record and one or more migration policies, each cluster manager migrates pages between the portions in the corresponding migration cluster.
    Type: Application
    Filed: October 17, 2016
    Publication date: April 19, 2018
    Inventors: Andreas Prodromou, Mitesh R. Meswani, Arkaprava Basu, Nuwan S. Jayasena, Gabriel H. Loh
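
    The core loop is: record accesses per page within a migration cluster, then move hot pages into the faster portion once a policy threshold is crossed. The C sketch below is simplified; the threshold, the two-portion layout, and the epoch reset are assumptions for illustration.

    ```c
    #include <stdint.h>

    #define PAGES_PER_CLUSTER 256
    #define HOT_THRESHOLD 64   /* illustrative migration policy parameter */

    typedef struct {
        uint32_t access_count[PAGES_PER_CLUSTER];  /* the cluster's access record */
        uint8_t  in_fast_portion[PAGES_PER_CLUSTER];
    } cluster_manager_t;

    /* Called on every access the cluster manager observes. */
    void record_access(cluster_manager_t *cm, int page) {
        cm->access_count[page]++;
    }

    /* Periodic policy pass: promote hot pages, demote cold ones. The actual
     * data copy between memory portions is elided. */
    void run_migration_policy(cluster_manager_t *cm) {
        for (int p = 0; p < PAGES_PER_CLUSTER; p++) {
            if (!cm->in_fast_portion[p] && cm->access_count[p] >= HOT_THRESHOLD) {
                cm->in_fast_portion[p] = 1;   /* migrate into the fast portion */
            } else if (cm->in_fast_portion[p] && cm->access_count[p] < HOT_THRESHOLD / 4) {
                cm->in_fast_portion[p] = 0;   /* migrate back out */
            }
            cm->access_count[p] = 0;          /* start a new observation epoch */
        }
    }
    ```
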
  • Publication number: 20180088858
    Abstract: A processing apparatus is provided that includes NVRAM and one or more processors configured to process a first set and a second set of instructions according to a hierarchical processing scope and process a scoped persistence barrier residing in the program after the first instruction set and before the second instruction set. The barrier includes an instruction to cause first data to persist in the NVRAM before second data persists in the NVRAM. The first data results from execution of each of the first set of instructions processed according to the one hierarchical processing scope. The second data results from execution of each of the second set of instructions processed according to the one hierarchical processing scope. The processing apparatus also includes a controller configured to cause the first data to persist in the NVRAM before the second data persists in the NVRAM based on the scoped persistence barrier.
    Type: Application
    Filed: September 23, 2016
    Publication date: March 29, 2018
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Arkaprava Basu, Mitesh R. Meswani, Dibakar Gope, Sooraj Puthoor
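
    The ordering requirement is that every store from the first instruction set reaches NVRAM before any store from the second set does. The sketch below models that in C with a software write queue rather than real hardware; the queue, the buffered-write function, and the drain step are illustrative assumptions.

    ```c
    #include <stdio.h>

    #define QUEUE_LEN 64

    /* Toy model of pending NVRAM writes issued within one processing scope. */
    static int pending[QUEUE_LEN];
    static int n_pending = 0;

    static void nvram_write(int value) {
        pending[n_pending++] = value;      /* buffered, not yet durable */
    }

    /* Scoped persistence barrier: everything issued before the barrier must be
     * durable before anything issued after it can become durable. */
    static void persistence_barrier(void) {
        for (int i = 0; i < n_pending; i++)
            printf("persisted %d\n", pending[i]);  /* stand-in for the actual drain */
        n_pending = 0;
    }

    int main(void) {
        nvram_write(1);          /* first instruction set's data */
        nvram_write(2);
        persistence_barrier();   /* first data persists before... */
        nvram_write(3);          /* ...second instruction set's data */
        persistence_barrier();
        return 0;
    }
    ```
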
  • Publication number: 20180069767
    Abstract: Techniques described herein improve processor performance in situations where a large number of system service requests are being received from other devices. More specifically, upon detecting that certain operating conditions that indicate a processor slowdown are present, the processor performs one or more system service adjustment techniques. These techniques include throttling (reducing the rate of handling) of such requests, coalescing (grouping multiple requests into a single group) the requests, disabling microarchitectural structures (such as caches or branch prediction units) or updates to those structures, and prefetching data for or pre-performing these requests. Each of these adjustment techniques helps to reduce the number of and/or workload associated with servicing requests for system services.
    Type: Application
    Filed: September 6, 2016
    Publication date: March 8, 2018
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Arkaprava Basu, Joseph L. Greathouse, Guru Prasadh V. Venkataramani, Jan Vesely
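
    Two of the listed adjustments, throttling and coalescing, are easy to picture in software. The sketch below batches incoming requests and services them at a bounded rate; the batch size, rate limit, and interval handling are illustrative, not values from the application.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    #define BATCH_SIZE 8          /* coalescing: group requests before servicing */
    #define MAX_PER_INTERVAL 32   /* throttling: cap on serviced requests per interval */

    static uint64_t batch[BATCH_SIZE];
    static int batched = 0;
    static int serviced_this_interval = 0;  /* reset once per interval by a timer (not shown) */

    static void service_batch(void) {
        for (int i = 0; i < batched; i++)
            printf("servicing request %llu\n", (unsigned long long)batch[i]);
        serviced_this_interval += batched;
        batched = 0;
    }

    /* Called whenever another device asks for a system service. */
    void enqueue_request(uint64_t req) {
        if (serviced_this_interval >= MAX_PER_INTERVAL)
            return;                      /* throttled: not accepted this interval, caller retries */
        batch[batched++] = req;
        if (batched == BATCH_SIZE)
            service_batch();             /* coalesced: one handler pass per batch */
    }
    ```
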
  • Publication number: 20170371720
    Abstract: A method and processing apparatus for accelerating program processing is provided that includes a plurality of processors configured to process a plurality of tasks of a program and a controller. The controller is configured to determine, from the plurality of tasks being processed by the plurality of processors, a task being processed on a first processor to be a lagging task causing a delay in execution of one or more other tasks of the plurality of tasks. The controller is further configured to provide the determined lagging task to a second processor to be executed by the second processor to accelerate execution of the lagging task.
    Type: Application
    Filed: June 23, 2016
    Publication date: December 28, 2017
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Arkaprava Basu, Dmitri Yudanov, David A. Roberts, Mitesh R. Meswani, Sergey Blagodurov
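
    The controller's job reduces to spotting the task whose progress trails the others and handing it to a faster processor. A simplified C sketch follows; the progress counters and the notion of a designated fast processor are illustrative assumptions.

    ```c
    #include <stdint.h>

    #define NUM_TASKS 8

    typedef struct {
        uint64_t progress;   /* e.g. iterations or work items completed */
        int      processor;  /* which processor currently runs this task */
    } task_t;

    /* Find the task that lags furthest behind and move it to the designated
     * fast processor so the other tasks stop waiting on it. */
    void accelerate_lagging_task(task_t tasks[NUM_TASKS], int fast_processor) {
        uint64_t total = 0;
        for (int i = 0; i < NUM_TASKS; i++)
            total += tasks[i].progress;
        uint64_t average = total / NUM_TASKS;

        int lagger = 0;
        for (int i = 1; i < NUM_TASKS; i++)
            if (tasks[i].progress < tasks[lagger].progress)
                lagger = i;

        if (tasks[lagger].progress < average)          /* it really is holding others up */
            tasks[lagger].processor = fast_processor;  /* migrate it */
    }
    ```
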
  • Publication number: 20170337136
    Abstract: The described embodiments include a computing device with two or more types of processors and a memory that is shared between the two or more types of processors. The computing device performs operations for handling cache coherency between the two or more types of processors. During operation, the computing device sets a cache coherency indicator in metadata in a page table entry in a page table, the page table entry including information about a page of data that is stored in the memory. The computing device then uses the cache coherency indicator to determine operations to be performed when accessing data in the page of data in the memory. For example, the computing device can use the coherency indicator to determine whether a coherency operation is to be performed when a processor of a given type accesses data in the page of data in the memory.
    Type: Application
    Filed: May 23, 2016
    Publication date: November 23, 2017
    Inventors: Arkaprava Basu, Bradford M. Beckmann, Shuai Che, Sooraj Puthoor
  • Publication number: 20170277634
    Abstract: The described embodiments include a computing device with two or more translation lookaside buffers (TLBs) that performs operations for handling entries in the TLBs. During operation, the computing device maintains lease values for entries in the TLBs, the lease values representing times until leases for the entries expire, wherein a given entry in a TLB is invalid when the associated lease has expired. The computing device uses the lease values to control operations that are allowed to be performed using information from the entries in the TLBs. In addition, the computing device maintains, in a page table, longest lease values for page table entries indicating when corresponding longest leases for entries in TLBs expire. The longest lease values are used to determine when and if a TLB shootdown is to be performed.
    Type: Application
    Filed: March 25, 2016
    Publication date: September 28, 2017
    Inventors: Arkaprava Basu, Mark H. Oskin, Gabriel H. Loh, Andrew G. Kegel, David S. Christie, Kevin J. McGrath
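
    The key property is that a TLB entry self-invalidates when its lease runs out, so the OS only needs a shootdown if some TLB may still hold a live lease. A minimal model in C; the time source and structure layouts are illustrative.

    ```c
    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint64_t vpn, pfn;
        uint64_t lease_expiry;   /* absolute time at which this entry stops being valid */
    } tlb_entry_t;

    typedef struct {
        uint64_t pfn;
        uint64_t longest_lease_expiry;  /* latest expiry ever handed out for this page */
    } pte_t;

    /* An entry may only be used while its lease is still live. */
    bool tlb_entry_usable(const tlb_entry_t *e, uint64_t now) {
        return now < e->lease_expiry;
    }

    /* When unmapping a page, a shootdown is needed only if some TLB could
     * still hold an unexpired lease for it; otherwise any copies have
     * already lapsed on their own. */
    bool shootdown_needed(const pte_t *pte, uint64_t now) {
        return now < pte->longest_lease_expiry;
    }
    ```
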
  • Publication number: 20170277639
    Abstract: The described embodiments include a computing device with two or more translation lookaside buffers (TLBs). During operation, the computing device updates an entry in the TLB based on a virtual address to physical address translation and metadata from a page table entry that were acquired during a page table walk. The computing device then computes, based on a lease length expression, a lease length for the entry in the TLB. Next, the computing device sets, for the entry in the TLB, a lease value to the lease length, wherein the lease value represents a time until a lease for the entry in the TLB expires, wherein the entry in the TLB is invalid when the associated lease has expired. The computing device then uses the lease value to control operations that are allowed to be performed using information from the entry in the TLB.
    Type: Application
    Filed: November 25, 2016
    Publication date: September 28, 2017
    Inventors: Amro Awad, Sergey Blagodurov, Arkaprava Basu, Mark H. Oskin, Gabriel H. Loh, Andrew G. Kegel, David S. Christie, Kevin J. McGrath
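
    This companion application covers the fill path: when a page table walk returns, the TLB computes a lease length and stamps the new entry with it. A short C sketch follows; the lease-length expression shown (scaling a base lease by whether the page is writable) is purely an illustrative stand-in, since the abstract does not specify the expression.

    ```c
    #include <stdint.h>
    #include <stdbool.h>

    typedef struct { uint64_t vpn, pfn, lease_expiry; } tlb_entry_t;

    /* Illustrative lease-length expression: short leases for writable pages
     * (cheaper to revoke), longer leases for read-only ones. */
    static uint64_t compute_lease_length(bool writable) {
        return writable ? 1000 : 100000;   /* cycles, arbitrary numbers */
    }

    /* Fill path: install the translation and stamp it with its lease. */
    void tlb_fill(tlb_entry_t *e, uint64_t vpn, uint64_t pfn,
                  bool writable, uint64_t now) {
        e->vpn = vpn;
        e->pfn = pfn;
        e->lease_expiry = now + compute_lease_length(writable);
    }
    ```
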
  • Publication number: 20170168546
    Abstract: A method and apparatus for performing inter-lane power management includes de-energizing one or more execution lanes upon a determination that the one or more execution lanes are to be predicated. Energy from the predicated execution lanes is redistributed to one or more active execution lanes.
    Type: Application
    Filed: December 9, 2015
    Publication date: June 15, 2017
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Mitesh R. Meswani, David A. Roberts, Dmitri Yudanov, Arkaprava Basu, Sergey Blagodurov
  • Patent number: 9547603
    Abstract: A memory management unit for I/O devices uses page table entries to translate virtual addresses to physical addresses. The page table entries include removal rules allowing the I/O memory management unit to delete page table entries without CPU involvement, significantly reducing the CPU overhead involved in virtualized I/O data transactions.
    Type: Grant
    Filed: August 28, 2013
    Date of Patent: January 17, 2017
    Assignee: Wisconsin Alumni Research Foundation
    Inventors: Arkaprava Basu, Mark D. Hill, Michael M. Swift
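
    The idea is that each I/O page table entry carries its own removal rule, so the IOMMU can retire the mapping on its own instead of waiting for the CPU to unmap it. The sketch below is simplified C; the specific rule shown (invalidate after a fixed number of uses) is one illustrative possibility, not the patent's definition.

    ```c
    #include <stdint.h>
    #include <stdbool.h>

    typedef struct {
        uint64_t device_va;      /* I/O virtual address */
        uint64_t phys_addr;
        uint32_t remaining_uses; /* removal rule: entry expires after this many accesses */
        bool     valid;
    } io_pte_t;

    /* IOMMU translation path: translate, then apply the entry's removal rule
     * without involving the CPU. Valid entries are assumed to start with
     * remaining_uses >= 1. */
    bool iommu_translate(io_pte_t *pte, uint64_t *out_pa) {
        if (!pte->valid)
            return false;              /* fault: the CPU would be asked to map the page */
        *out_pa = pte->phys_addr;
        if (--pte->remaining_uses == 0)
            pte->valid = false;        /* self-removal, no CPU interrupt needed */
        return true;
    }
    ```
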
  • Publication number: 20160371082
    Abstract: A processing device includes a first memory that includes a context buffer. The processing device also includes a processor core to execute threads based on context information stored in registers of the processor core and a memory controller to selectively move a subset of the context information between the context buffer and the registers based on one or more latencies of the threads.
    Type: Application
    Filed: June 22, 2015
    Publication date: December 22, 2016
    Inventors: Dmitri Yudanov, Sergey Blagodurov, Arkaprava Basu, Sooraj Puthoor, Joseph L. Greathouse
  • Patent number: 9158704
    Abstract: A computer system using virtual memory provides hybrid memory access either through a conventional translation between virtual memory and physical memory using a page table, possibly with a translation lookaside buffer, or a high-speed translation using a fixed offset value between virtual memory and physical memory. Selection between these modes of access may be encoded into the address space of virtual memory, eliminating the need for a separate tagging operation of specific memory addresses.
    Type: Grant
    Filed: January 24, 2013
    Date of Patent: October 13, 2015
    Assignee: Wisconsin Alumni Research Foundation
    Inventors: Arkaprava Basu, Mark Donald Hill, Michael Mansfield Swift
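
    In this scheme a slice of the virtual address space is translated by a single fixed offset (so no page walk is needed there), while the rest falls back to ordinary paging. The sketch below encodes the mode in a high address bit, which is one way to realize "selection encoded into the address space"; the exact bit and the page-walk stub are illustrative assumptions.

    ```c
    #include <stdint.h>

    #define DIRECT_REGION_BIT (1ULL << 47)   /* illustrative: marks the fixed-offset region */

    static uint64_t direct_offset;           /* fixed VA-to-PA offset for the direct region */

    /* Stand-in for the conventional TLB/page-table path. */
    static uint64_t page_table_translate(uint64_t va) {
        return va;  /* placeholder: a real walker would look up the page table */
    }

    /* Hybrid translation: addresses in the direct region skip the page table
     * entirely and are translated by adding a fixed offset. */
    uint64_t translate(uint64_t va) {
        if (va & DIRECT_REGION_BIT)
            return va + direct_offset;       /* fast fixed-offset translation */
        return page_table_translate(va);     /* ordinary paged translation */
    }
    ```
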
  • Publication number: 20150067296
    Abstract: A memory management unit for I/O devices uses page table entries to translate virtual addresses to physical addresses. The page table entries include removal rules allowing the I/O memory management unit to delete page table entries without CPU involvement, significantly reducing the CPU overhead involved in virtualized I/O data transactions.
    Type: Application
    Filed: August 28, 2013
    Publication date: March 5, 2015
    Applicant: Wisconsin Alumni Research Foundation
    Inventors: Arkaprava Basu, Mark D. Hill, Michael M. Swift
  • Patent number: 8812786
    Abstract: A system and method of providing directory cache coherence are disclosed. The system and method may include tracking the coherence state of at least one cache block contained within a region using a global directory, providing at least one region level sharing information about the at least one cache block in the global directory, and providing at least one block level sharing information about the at least one cache block in the global directory. The tracking of the provided at least one region level sharing information and the provided at least one block level sharing information may organize the coherence state of the at least one cache block and the region.
    Type: Grant
    Filed: October 18, 2011
    Date of Patent: August 19, 2014
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Bradford M. Beckmann, Arkaprava Basu, Steven K. Reinhardt
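
    A dual-grain directory keeps one sharer vector per region plus per-block state inside it, so many lookups can be answered at region granularity. The C sketch below is a simplified model; the region size and bit-vector encoding are illustrative choices.

    ```c
    #include <stdint.h>
    #include <stdbool.h>

    #define BLOCKS_PER_REGION 16

    typedef struct {
        uint64_t region_tag;
        uint32_t region_sharers;                     /* region-level sharing info: one bit per node */
        uint32_t block_sharers[BLOCKS_PER_REGION];   /* block-level sharing info */
    } dir_region_entry_t;

    /* Record that a node now shares one block of the region, updating both
     * granularities of the global directory. */
    void add_sharer(dir_region_entry_t *e, int block, int node) {
        e->region_sharers       |= 1u << node;
        e->block_sharers[block] |= 1u << node;
    }

    /* Region-level filter: if the region has no sharers other than the
     * requester, no block inside it can need invalidations. */
    bool needs_invalidations(const dir_region_entry_t *e, int block, int requester) {
        if ((e->region_sharers & ~(1u << requester)) == 0)
            return false;                            /* answered at region granularity */
        return (e->block_sharers[block] & ~(1u << requester)) != 0;
    }
    ```
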
  • Publication number: 20140208064
    Abstract: A computer system using virtual memory provides hybrid memory access either through a conventional translation between virtual memory and physical memory using a page table, possibly with a translation lookaside buffer, or a high-speed translation using a fixed offset value between virtual memory and physical memory. Selection between these modes of access may be encoded into the address space of virtual memory, eliminating the need for a separate tagging operation of specific memory addresses.
    Type: Application
    Filed: January 24, 2013
    Publication date: July 24, 2014
    Applicant: Wisconsin Alumni Research Foundation
    Inventors: Arkaprava Basu, Mark Donald Hill, Michael Mansfield Swift
  • Publication number: 20130097385
    Abstract: A system and method of providing directory cache coherence are disclosed. The system and method may include tracking the coherence state of at least one cache block contained within a region using a global directory, providing at least one region level sharing information about the at least one cache block in the global directory, and providing at least one block level sharing information about the at least one cache block in the global directory. The tracking of the provided at least one region level sharing information and the provided at least one block level sharing information may organize the coherence state of the at least one cache block and the region.
    Type: Application
    Filed: October 18, 2011
    Publication date: April 18, 2013
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Bradford M. Beckmann, Arkaprava Basu, Steven K. Reinhardt
  • Publication number: 20130073811
    Abstract: A system and method for region privatization in a directory-based cache coherence system is disclosed. The system and method includes receiving a request from a requesting node for at least one block in a region, allocating a new entry for the region based on the request for the block, requesting from the memory controller that the data for the region be sent to the requesting node, receiving a subsequent request for a block within the region, determining that any blocks of the region that are cached are also cached at the requesting node, and privatizing the region at the requesting node.
    Type: Application
    Filed: September 16, 2011
    Publication date: March 21, 2013
    Applicant: Advanced Micro Devices, Inc.
    Inventors: Bradford M. Beckmann, Arkaprava Basu, Steven K. Reinhardt
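
    Privatization hinges on one check: every cached block of the region is cached only at the requesting node, so that node can be given the whole region without further directory traffic. The self-contained C sketch below models that check; the entry layout and sharer encoding are illustrative assumptions.

    ```c
    #include <stdint.h>
    #include <stdbool.h>

    #define BLOCKS_PER_REGION 16

    typedef struct {
        uint32_t block_sharers[BLOCKS_PER_REGION];  /* one bit per node, per block */
        int      private_owner;                     /* -1 if the region is not privatized */
    } region_entry_t;

    /* On a subsequent request from `node`, check whether every block of the
     * region that is cached anywhere is cached only at that node; if so,
     * privatize the region there so its future accesses skip the directory. */
    bool try_privatize(region_entry_t *e, int node) {
        for (int b = 0; b < BLOCKS_PER_REGION; b++) {
            uint32_t others = e->block_sharers[b] & ~(1u << node);
            if (others != 0)
                return false;        /* some block is cached elsewhere */
        }
        e->private_owner = node;
        return true;
    }
    ```
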