Patents by Inventor Jitesh Krishnan

Jitesh Krishnan has filed for patents to protect the following inventions. The listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO). Brief, illustrative code sketches of the main techniques described in these filings follow the listing.

  • Publication number: 20240394083
    Abstract: The techniques disclosed herein enable a guest operating system (OS) to access and use a color space conversion component on a host OS. The guest OS provides, via an application programming interface, a request for the host OS to generate media data in a color space format that is used by the guest OS. To generate the media data, the host OS uses a color space conversion component on the host OS, which is more performant than a corresponding color space conversion component on the guest OS because the color space conversion component on the host OS has access to hardware-accelerated functionality. Accordingly, the color space conversion component on the host OS converts media data into the color space format that is used by the guest OS, and stores the media data in memory that is accessible to the guest OS.
    Type: Application
    Filed: May 26, 2023
    Publication date: November 28, 2024
    Inventors: Anton Victor Polinger, Marcin Stankiewicz, Isuru Chamara Pathirana, Glenn Frederick Evans, Matthew R. Wozniak, Sang Choe, Jitesh Krishnan, Naveen Thumpudi
  • Patent number: 10452581
    Abstract: Memory descriptor list caching and pipeline processing techniques are described. In one or more examples, a method is configured to increase efficiency of buffer usage within a pipeline of a computing device. The method includes creation of a buffer in memory of the computing device and caching of a memory descriptor list by the computing device that describes the buffer in a buffer information cache and has associated therewith a handle that acts as a lookup to the memory descriptor list. The method also includes passing the handle through the pipeline of the computing device for processing of data within the buffer by one or more stages of the pipeline such that access to the data is obtained by the one or more stages by using the handle as the lookup as part of a call to obtain the memory descriptor list for the buffer from the buffer information cache.
    Type: Grant
    Filed: October 25, 2017
    Date of Patent: October 22, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mei L. Wilson, Jitesh Krishnan, Sathyanarayanan Karivaradaswamy
  • Publication number: 20180173656
    Abstract: Memory descriptor list caching and pipeline processing techniques are described. In one or more examples, a method is configured to increase efficiency of buffer usage within a pipeline of a computing device. The method includes creation of a buffer in memory of the computing device and caching of a memory descriptor list by the computing device that describes the buffer in a buffer information cache and has associated therewith a handle that acts as a lookup to the memory descriptor list. The method also includes passing the handle through the pipeline of the computing device for processing of data within the buffer by one or more stages of the pipeline such that access to the data is obtained by the one or more stages by using the handle as the lookup as part of a call to obtain the memory descriptor list for the buffer from the buffer information cache.
    Type: Application
    Filed: October 25, 2017
    Publication date: June 21, 2018
    Inventors: Mei L. Wilson, Jitesh Krishnan, Sathyanarayanan Karivaradaswamy
  • Patent number: 9817776
    Abstract: Memory descriptor list caching and pipeline processing techniques are described. In one or more examples, a method is configured to increase efficiency of buffer usage within a pipeline of a computing device. The method includes creation of a buffer in memory of the computing device and caching of a memory descriptor list by the computing device that describes the buffer in a buffer information cache and has associated therewith a handle that acts as a lookup to the memory descriptor list. The method also includes passing the handle through the pipeline of the computing device for processing of data within the buffer by one or more stages of the pipeline such that access to the data is obtained by the one or more stages by using the handle as the lookup as part of a call to obtain the memory descriptor list for the buffer from the buffer information cache.
    Type: Grant
    Filed: February 19, 2015
    Date of Patent: November 14, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mei L. Wilson, Jitesh Krishnan, Sathyanarayanan Karivaradaswamy
  • Publication number: 20160210159
    Abstract: User mode driver extension techniques are described. In one or more implementations, a computing device implements techniques that promote stability of execution of drivers performed by a computing device including stream preprocessing. The computing device includes a processing system to execute an operating system using a kernel mode and a user mode and memory configured to maintain instructions stored thereon that are executable by the processing system. The instructions include the operating system and a device driver and a user mode driver extension that correspond to a camera to support a communicative coupling between the camera and the operating system. The device driver is executable within the kernel mode and the user mode driver extension executable within the user mode. The user mode driver extension is configured to preprocess streams originated by the camera before processing by a camera pipeline of the operating system.
    Type: Application
    Filed: February 19, 2015
    Publication date: July 21, 2016
    Inventors: Mei L. Wilson, Jitesh Krishnan, Sathyanarayanan Karivaradaswamy
  • Publication number: 20160210233
    Abstract: Memory descriptor list caching and pipeline processing techniques are described. In one or more examples, a method is configured to increase efficiency of buffer usage within a pipeline of a computing device. The method includes creation of a buffer in memory of the computing device and caching of a memory descriptor list by the computing device that describes the buffer in a buffer information cache and has associated therewith a handle that acts as a lookup to the memory descriptor list. The method also includes passing the handle through the pipeline of the computing device for processing of data within the buffer by one or more stages of the pipeline such that access to the data is obtained by the one or more stages by using the handle as the lookup as part of a call to obtain the memory descriptor list for the buffer from the buffer information cache.
    Type: Application
    Filed: February 19, 2015
    Publication date: July 21, 2016
    Inventors: Mei L. Wilson, Jitesh Krishnan, Sathyanarayanan Karivaradaswamy
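
To make the guest/host split in publication 20240394083 concrete, here is a minimal, self-contained C++ sketch of the idea only: a guest-side call asks a host-side converter to produce a frame in the color space the guest uses and place it in memory the guest can read. Every type and function name (ColorSpace, HostColorConverter, SharedFrame, RequestConvertedFrame) is invented for illustration, and the naive luma-to-grey conversion merely stands in for the hardware-accelerated conversion the host would actually perform.

```cpp
// Sketch only (not the patented implementation): a guest-side component asks a
// host-side service to convert a frame into the guest's color space and write
// the result into memory both sides can see. All names here are invented.
#include <cstdint>
#include <cstdio>
#include <vector>

enum class ColorSpace { NV12, BGRA32 };        // formats used in this sketch

struct SharedFrame {                           // stands in for guest-visible memory
    ColorSpace format;
    std::uint32_t width, height;
    std::vector<std::uint8_t> pixels;
};

// Host-side converter; in the patent filing this path is faster because it can
// reach hardware-accelerated conversion that the guest cannot use directly.
class HostColorConverter {
public:
    SharedFrame Convert(const SharedFrame& in, ColorSpace target) const {
        SharedFrame out{target, in.width, in.height, {}};
        out.pixels.resize(in.width * in.height * 4);   // BGRA: 4 bytes per pixel
        // Naive NV12 luma -> grey BGRA expansion, standing in for the real
        // (hardware-backed) color space conversion.
        for (std::uint32_t i = 0; i < in.width * in.height; ++i) {
            std::uint8_t y = in.pixels[i];
            out.pixels[4 * i + 0] = y;   // B
            out.pixels[4 * i + 1] = y;   // G
            out.pixels[4 * i + 2] = y;   // R
            out.pixels[4 * i + 3] = 255; // A
        }
        return out;
    }
};

// Guest-side call: request media data in the color space the guest uses.
SharedFrame RequestConvertedFrame(const HostColorConverter& host,
                                  const SharedFrame& captured,
                                  ColorSpace guestFormat) {
    return host.Convert(captured, guestFormat);  // result lands in guest-visible memory
}

int main() {
    SharedFrame captured{ColorSpace::NV12, 4, 2,
                         std::vector<std::uint8_t>(4 * 2 * 3 / 2, 128)};
    HostColorConverter host;
    SharedFrame frame = RequestConvertedFrame(host, captured, ColorSpace::BGRA32);
    std::printf("converted frame: %u bytes of BGRA\n",
                static_cast<unsigned>(frame.pixels.size()));
}
```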
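For the memory descriptor list caching entries (patents 10452581 and 9817776 and their related publications), the sketch below models the core idea under stated assumptions: a buffer is described once, that description is cached under a handle, and pipeline stages receive only the handle and resolve it against the cache instead of re-describing the buffer at each stage. BufferInfoCache, BufferDescriptor, and Handle are invented stand-ins, not the Windows kernel's MDL structures.

```cpp
// Sketch of the abstract's idea (not the Windows kernel MDL API): describe a
// buffer once, cache that description under a handle, and pass only the handle
// through the pipeline; each stage looks the description back up by handle.
#include <cstdint>
#include <cstdio>
#include <unordered_map>
#include <vector>

using Handle = std::uint64_t;

struct BufferDescriptor {          // stands in for a cached memory descriptor list
    std::uint8_t* base;
    std::size_t   length;
};

class BufferInfoCache {
public:
    Handle Add(std::uint8_t* base, std::size_t length) {
        Handle h = next_++;
        cache_[h] = BufferDescriptor{base, length};
        return h;                   // the handle acts as the lookup key
    }
    const BufferDescriptor& Lookup(Handle h) const { return cache_.at(h); }
private:
    Handle next_ = 1;
    std::unordered_map<Handle, BufferDescriptor> cache_;
};

// Two example pipeline stages: each receives only the handle and resolves it
// against the cache to reach the underlying buffer.
void StageFill(const BufferInfoCache& cache, Handle h, std::uint8_t value) {
    const BufferDescriptor& d = cache.Lookup(h);
    for (std::size_t i = 0; i < d.length; ++i) d.base[i] = value;
}

std::size_t StageSum(const BufferInfoCache& cache, Handle h) {
    const BufferDescriptor& d = cache.Lookup(h);
    std::size_t sum = 0;
    for (std::size_t i = 0; i < d.length; ++i) sum += d.base[i];
    return sum;
}

int main() {
    std::vector<std::uint8_t> buffer(16);                // create a buffer in memory
    BufferInfoCache cache;
    Handle h = cache.Add(buffer.data(), buffer.size());  // cache its descriptor
    StageFill(cache, h, 3);                              // pass only the handle downstream
    std::printf("sum = %zu\n", StageSum(cache, h));
}
```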
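For publication 20160210159, the following sketch illustrates the kernel-mode/user-mode split at a very high level: a stand-in kernel-mode driver only produces raw frames, a stand-in user-mode extension preprocesses each stream, and only then does a stand-in camera pipeline consume it, so a fault in the preprocessing code stays out of the kernel. All class names are hypothetical; this is not the actual Windows camera driver framework.

```cpp
// Sketch of the split described in the abstract: the kernel-mode driver only
// moves raw frames, while a user-mode extension preprocesses each stream
// before the operating system's camera pipeline sees it. All names invented.
#include <cstdint>
#include <cstdio>
#include <vector>

struct Frame { std::vector<std::uint8_t> data; };

// Stand-in for the kernel-mode device driver: produces raw frames only.
class KernelModeCameraDriver {
public:
    Frame CaptureRaw() { return Frame{std::vector<std::uint8_t>(64, 200)}; }
};

// Stand-in for the user-mode driver extension: runs preprocessing in user mode.
class UserModeDriverExtension {
public:
    // Example preprocessing step: simple gain reduction before the pipeline.
    Frame Preprocess(Frame f) const {
        for (auto& px : f.data) px = static_cast<std::uint8_t>(px / 2);
        return f;
    }
};

// Stand-in for the operating system's camera pipeline.
void CameraPipelineConsume(const Frame& f) {
    std::printf("pipeline received %zu preprocessed bytes\n", f.data.size());
}

int main() {
    KernelModeCameraDriver driver;                // kernel-mode side
    UserModeDriverExtension extension;            // user-mode side
    Frame raw = driver.CaptureRaw();
    Frame processed = extension.Preprocess(raw);  // preprocess before the pipeline
    CameraPipelineConsume(processed);
}
```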