Patents by Inventor Vijay Balakrishnan

Vijay Balakrishnan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20170228157
    Abstract: A Solid State Drive (SSD) is disclosed. The SSD may include flash memory to store data and may support a plurality of device streams. An SSD controller may manage reading data from and writing data to the flash memory, and may store a submission queue and a chunk-to-stream mapper. A flash translation layer may include a receiver to receive a write command, an LBA mapper to map an LBA to a chunk identifier (ID), stream selection logic to select a stream ID based on the chunk ID, a stream ID adder to add the stream ID to the write command, a queuer to place the chunk ID in the submission queue, and background logic to update the chunk-to-stream mapper after the chunk ID is removed from the submission queue.
    Type: Application
    Filed: April 27, 2017
    Publication date: August 10, 2017
    Inventors: Jingpei YANG, Changho CHOI, Rajinikanth PANDURANGAN, Vijay BALAKRISHNAN, Ramaraj PANDIAN
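The flash translation layer flow in the abstract above can be sketched in Python. This is a minimal illustration under stated assumptions, not the patented implementation: CHUNK_SIZE, NUM_STREAMS, the modulo-based default stream choice, and all function names are hypothetical.

```python
from collections import deque

CHUNK_SIZE = 256          # assumed: LBAs per chunk
NUM_STREAMS = 4           # assumed: device streams supported

chunk_to_stream = {}      # the "chunk-to-stream mapper"
submission_queue = deque()

def lba_to_chunk(lba):
    """Map a logical block address to its chunk identifier."""
    return lba // CHUNK_SIZE

def select_stream(chunk_id):
    """Select a stream ID based on the chunk ID, with a simple
    modulo default when the chunk has no mapping yet (assumption)."""
    return chunk_to_stream.get(chunk_id, chunk_id % NUM_STREAMS)

def handle_write(lba, data):
    """Add a stream ID to the write command and queue its chunk ID."""
    chunk_id = lba_to_chunk(lba)
    stream_id = select_stream(chunk_id)
    submission_queue.append(chunk_id)
    return {"lba": lba, "data": data, "stream_id": stream_id}

def background_update():
    """After a chunk ID is removed from the submission queue,
    refresh the chunk-to-stream mapper in the background."""
    chunk_id = submission_queue.popleft()
    chunk_to_stream[chunk_id] = select_stream(chunk_id)

cmd = handle_write(1000, b"payload")   # chunk 3 -> stream 3
background_update()
```

Tagging writes with a per-chunk stream ID lets the drive group data with similar lifetimes, which is the motivation for multi-stream SSDs.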
  • Publication number: 20170046089
    Abstract: A method for allocating workloads based on a total cost of ownership (TCO) model includes receiving a workload; estimating a cost for allocating the workload to each disk of disks in a disk pool based on a TCO model; determining a disk among the disks in the disk pool that minimizes a TCO; and allocating the workload to the disk. The TCO model incorporates a plurality of cost factors for estimating costs for each disk in the disk pool for allocating the workload.
    Type: Application
    Filed: April 6, 2016
    Publication date: February 16, 2017
    Inventors: Zhengyu YANG, Mrinmoy GHOSH, Manu AWASTHI, Vijay BALAKRISHNAN
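The allocation step in the abstract above reduces to an argmin over the disk pool. The sketch below is a toy illustration: the specific cost factors (capex, power, wear) and their weights are assumptions, not the patent's TCO model.

```python
def estimate_tco(disk, workload):
    """Hypothetical TCO model combining a few cost factors."""
    return (disk["capex"]
            + disk["power_cost"] * workload["hours"]
            + disk["wear_cost"] * workload["writes"])

def allocate(workload, disk_pool):
    """Allocate the workload to the disk that minimizes the TCO."""
    return min(disk_pool, key=lambda d: estimate_tco(d, workload))

pool = [
    {"name": "hdd", "capex": 50, "power_cost": 1.0, "wear_cost": 0.001},
    {"name": "ssd", "capex": 200, "power_cost": 0.2, "wear_cost": 0.01},
]
wl = {"hours": 100, "writes": 1000}
best = allocate(wl, pool)   # the hdd's estimated TCO (151.0) is lower
```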
  • Publication number: 20170046098
    Abstract: A method for migrating disks includes: dividing a disk pool including a plurality of disks into a random zone and a sequential zone based on sequentiality and randomness of workloads running on the plurality of disks; monitoring a status of each disk in the disk pool based on a total cost of ownership (TCO); and migrating one or more workloads of an overheated disk to an idle disk based on the status of each disk. The overheated disk has a first TCO higher than a migration threshold, and the idle disk has a second TCO lower than an idling threshold.
    Type: Application
    Filed: April 8, 2016
    Publication date: February 16, 2017
    Inventors: Zhengyu YANG, Manu AWASTHI, Mrinmoy GHOSH, Vijay BALAKRISHNAN
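The threshold logic in the abstract above — overheated disks have TCO above a migration threshold, idle disks below an idling threshold — can be sketched as a simple pairing step. The function name, pairing strategy, and numbers are illustrative assumptions.

```python
def plan_migrations(disk_tco, migration_threshold, idling_threshold):
    """Pair each overheated disk (TCO above the migration threshold)
    with an idle disk (TCO below the idling threshold), if available."""
    overheated = [d for d, t in disk_tco.items() if t > migration_threshold]
    idle = [d for d, t in disk_tco.items() if t < idling_threshold]
    return list(zip(overheated, idle))

tco = {"disk0": 95.0, "disk1": 40.0, "disk2": 10.0}
moves = plan_migrations(tco, migration_threshold=80.0, idling_threshold=20.0)
# disk0 is overheated, disk2 is idle, disk1 is neither
```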
  • Publication number: 20120272011
    Abstract: A method for refining multithreaded software executed on a processor chip of a computer system. The envisaged processor chip has at least one processor core and a memory cache coupled to the processor core and configured to cache at least some data read from memory. The method includes, in logic distinct from the processor core and coupled to the memory cache, observing a sequence of operations of the memory cache and encoding a sequenced data stream that traces the sequence of operations observed.
    Type: Application
    Filed: April 19, 2011
    Publication date: October 25, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Susan Carrie, Vijay Balakrishnan
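The trace-encoding idea above — observe cache operations and emit a sequenced data stream recording them — can be illustrated with a toy encoder. The record layout (sequence number, op code, address) and the op codes are assumptions for the sketch; the patent's encoding is not specified in the abstract.

```python
OP_CODES = {"read": 0, "write": 1, "evict": 2}   # hypothetical encoding

def encode_trace(operations):
    """Encode observed cache operations as a sequenced data stream:
    one (sequence number, op code, address) record per operation."""
    stream = []
    for seq, (op, addr) in enumerate(operations):
        stream.append((seq, OP_CODES[op], addr))
    return stream

trace = encode_trace([("read", 0x10), ("write", 0x20), ("read", 0x10)])
```

The sequence numbers preserve the observed ordering, which is what makes such a trace useful for refining (e.g., debugging or tuning) multithreaded software after the fact.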
  • Publication number: 20080091780
    Abstract: A method, system, and computer readable medium comprising instructions to build a configurable electronic forms messenger (EFM) system which includes a configuration module and an EFM module. Each of these modules has its own client and server components and both share a common EFM data store.
    Type: Application
    Filed: August 6, 2007
    Publication date: April 17, 2008
    Inventors: Ramesh Balan, Vijay Balakrishnan, Kuzhali Srinivasan, Nawsheeo Haq
  • Patent number: 7010648
    Abstract: A cache pollution avoidance unit includes a dynamic memory dependency table for storing a dependency state condition between a first load instruction and a sequentially later second load instruction, which may depend on the completion of execution of the first load instruction for operand data. The cache pollution avoidance unit logically ANDs the dependency state condition stored in the dynamic memory dependency table with a cache memory “miss” state condition returned by the cache pollution avoidance unit for operand data produced by the first load instruction and required by the second load instruction. If the logical ANDing is true, memory access to the second load instruction is squashed and the execution of the second load instruction is re-scheduled.
    Type: Grant
    Filed: September 8, 2003
    Date of Patent: March 7, 2006
    Assignee: Sun Microsystems, Inc.
    Inventors: Sudarshan Kadambi, Vijay Balakrishnan
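The decision rule in the abstract above is a logical AND of two conditions: the dependency state from the dynamic memory dependency table, and the cache "miss" state for the first load's operand data. A minimal sketch, with hypothetical names and table layout:

```python
def should_squash(dependency_table, first_load, second_load, cache_miss):
    """Squash (and reschedule) the second load only when its dependency
    on the first load ANDs true with a cache miss for the operand data
    produced by the first load."""
    depends = dependency_table.get((first_load, second_load), False)
    return depends and cache_miss

# Dynamic memory dependency table: (first, second) -> dependency state
deps = {("load_A", "load_B"): True}

squash_on_miss = should_squash(deps, "load_A", "load_B", cache_miss=True)
squash_on_hit = should_squash(deps, "load_A", "load_B", cache_miss=False)
```

Squashing only when both conditions hold avoids polluting the cache with a speculative access whose operand is known not to be ready.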
  • Patent number: 6976125
    Abstract: One embodiment of the present invention provides a system for predicting hot spots in a cache memory. Upon receiving a memory operation at the cache, the system determines a target location within the cache for the memory operation. Once the target location is determined, the system increments a counter associated with the target location. If the counter reaches a pre-determined threshold value, the system generates a signal indicating that the target location is a hot spot in the cache memory.
    Type: Grant
    Filed: January 29, 2003
    Date of Patent: December 13, 2005
    Assignee: Sun Microsystems, Inc.
    Inventors: Sudarshan Kadambi, Vijay Balakrishnan, Wayne I. Yamamoto
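The counter-and-threshold mechanism above is simple enough to sketch directly. The set-index mapping, threshold value, and names are assumptions for illustration.

```python
from collections import defaultdict

THRESHOLD = 3                       # assumed hot-spot threshold
counters = defaultdict(int)         # one counter per target location

def access(cache_sets, address):
    """Determine the target location for the memory operation,
    increment its counter, and signal a hot spot when the counter
    reaches the threshold."""
    target = address % cache_sets   # assumed direct-mapped indexing
    counters[target] += 1
    return counters[target] >= THRESHOLD

signals = [access(8, 0x40) for _ in range(3)]   # third access trips it
```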
  • Patent number: 6948032
    Abstract: One embodiment of the present invention provides a system that uses a hot spot cache to alleviate the performance problems caused by hot spots in cache memories, wherein the hot spot cache stores lines that are evicted from hot spots in the cache. Upon receiving a memory operation at the cache, the system performs a lookup for the memory operation in both the cache and the hot spot cache in parallel. If the memory operation is a read operation that causes a miss in the cache and a hit in the hot spot cache, the system reads a data line for the read operation from the hot spot cache, writes the data line to the cache, performs the read operation on the data line in the cache, and then evicts the data line from the hot spot cache.
    Type: Grant
    Filed: January 29, 2003
    Date of Patent: September 20, 2005
    Assignee: Sun Microsystems, Inc.
    Inventors: Sudarshan Kadambi, Vijay Balakrishnan, Wayne I. Yamamoto
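The read path described above — miss in the cache, hit in the hot spot cache, then move the line back and evict it from the hot spot cache — can be sketched with plain dictionaries standing in for the two structures. The sequential lookups below only model the parallel hardware lookup; all names are hypothetical.

```python
def read(cache, hot_spot_cache, address):
    """Model the read path: on a main-cache miss that hits the hot spot
    cache, write the line back into the cache and evict it from the
    hot spot cache before returning it."""
    if address in cache:
        return cache[address]                # hit in the main cache
    if address in hot_spot_cache:
        line = hot_spot_cache.pop(address)   # evict from hot spot cache
        cache[address] = line                # write line into the cache
        return line
    return None                              # miss in both: fetch from memory

main = {}
hot = {0x40: "line-A"}   # a line previously evicted from a hot spot
value = read(main, hot, 0x40)
```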
  • Publication number: 20050055533
    Abstract: A cache pollution avoidance unit includes a dynamic memory dependency table for storing a dependency state condition between a first load instruction and a sequentially later second load instruction, which may depend on the completion of execution of the first load instruction for operand data. The cache pollution avoidance unit logically ANDs the dependency state condition stored in the dynamic memory dependency table with a cache memory “miss” state condition returned by the cache pollution avoidance unit for operand data produced by the first load instruction and required by the second load instruction. If the logical ANDing is true, memory access to the second load instruction is squashed and the execution of the second load instruction is re-scheduled.
    Type: Application
    Filed: September 8, 2003
    Publication date: March 10, 2005
    Inventors: Sudarshan Kadambi, Vijay Balakrishnan
  • Publication number: 20040148465
    Abstract: One embodiment of the present invention provides a system that uses a hot spot cache to alleviate the performance problems caused by hot spots in cache memories, wherein the hot spot cache stores lines that are evicted from hot spots in the cache. Upon receiving a memory operation at the cache, the system performs a lookup for the memory operation in both the cache and the hot spot cache in parallel. If the memory operation is a read operation that causes a miss in the cache and a hit in the hot spot cache, the system reads a data line for the read operation from the hot spot cache, writes the data line to the cache, performs the read operation on the data line in the cache, and then evicts the data line from the hot spot cache.
    Type: Application
    Filed: January 29, 2003
    Publication date: July 29, 2004
    Inventors: Sudarshan Kadambi, Vijay Balakrishnan, Wayne I. Yamamoto
  • Publication number: 20040148469
    Abstract: One embodiment of the present invention provides a system for predicting hot spots in a cache memory. Upon receiving a memory operation at the cache, the system determines a target location within the cache for the memory operation. Once the target location is determined, the system increments a counter associated with the target location. If the counter reaches a pre-determined threshold value, the system generates a signal indicating that the target location is a hot spot in the cache memory.
    Type: Application
    Filed: January 29, 2003
    Publication date: July 29, 2004
    Inventors: Sudarshan Kadambi, Vijay Balakrishnan, Wayne I. Yamamoto