Patents by Inventor Miguel Comparan

Miguel Comparan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10255891
    Abstract: Systems and methods are disclosed herein for providing improved cache structures and methods that are optimally sized to support a predetermined range of late stage adjustments and in which image data is intelligently read out of DRAM and cached in such a way as to eliminate re-fetching of input image data from DRAM and minimize DRAM bandwidth and power. The systems and methods can also be adapted to work with compressed image data and multiple LSR processing engines.
    Type: Grant
    Filed: April 12, 2017
    Date of Patent: April 9, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ryan Scott Haraden, Tolga Ozguner, Adam James Muff, Jeffrey Powers Bradford, Christopher Jon Johnson, Gene Leung, Miguel Comparan
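
The cache family above (this grant and the related filings below) hinges on sizing an on-chip line cache just large enough to cover the predetermined range of late stage reprojection (LSR) adjustments, so that each input row is read from DRAM exactly once. The sketch below illustrates that sizing idea only; the parameter names (maximum vertical shift, DRAM burst rows) and the formula are assumptions made for illustration, not taken from the patent.

```cpp
#include <cstdint>
#include <iostream>

// Hypothetical parameters: the maximum vertical displacement the late stage
// reprojection may apply, and the DRAM burst granularity in rows. These names
// and the sizing formula below are illustrative assumptions.
struct LsrCacheConfig {
    uint32_t image_width_px;
    uint32_t bytes_per_pixel;
    uint32_t max_vertical_shift_px;  // the "predetermined range of late stage adjustments"
    uint32_t dram_burst_rows;        // rows fetched per DRAM transaction
};

// Size a rolling line cache just large enough that any output row can be built
// from rows already resident, so no input row has to be re-fetched from DRAM.
uint32_t lsr_cache_bytes(const LsrCacheConfig& cfg) {
    // Worst case: an output row may sample rows up to max_vertical_shift_px
    // above or below its nominal position, plus one burst of look-ahead.
    uint32_t rows_needed = 2 * cfg.max_vertical_shift_px + cfg.dram_burst_rows;
    return rows_needed * cfg.image_width_px * cfg.bytes_per_pixel;
}

int main() {
    LsrCacheConfig cfg{2560, 4, 48, 8};  // 2560-wide RGBA image, +/-48 px of correction
    std::cout << "line cache size: " << lsr_cache_bytes(cfg) << " bytes\n";
}
```

Bounding the adjustment range up front is what lets the cache stay small while still supporting single-fetch streaming from DRAM.
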
  • Patent number: 10241470
    Abstract: Systems and methods are disclosed herein for providing improved cache structures and methods that are optimally sized to support a predetermined range of late stage adjustments and in which image data is intelligently read out of DRAM and cached in such a way as to eliminate re-fetching of input image data from DRAM and minimize DRAM bandwidth and power. The systems and methods can also be adapted to work with compressed image data.
    Type: Grant
    Filed: May 15, 2018
    Date of Patent: March 26, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Tolga Ozguner, Gene Leung, Jeffrey Powers Bradford, Adam James Muff, Miguel Comparan, Ryan Scott Haraden, Christopher Jon Johnson
  • Patent number: 10242654
    Abstract: Systems and methods are disclosed herein for providing improved cache structures and methods that are optimally sized to support a predetermined range of late stage adjustments and in which image data is intelligently read out of DRAM and cached in such a way as to eliminate re-fetching of input image data from DRAM and minimize DRAM bandwidth and power.
    Type: Grant
    Filed: January 25, 2017
    Date of Patent: March 26, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Tolga Ozguner, Jeffrey Powers Bradford, Miguel Comparan, Gene Leung, Adam James Muff, Ryan Scott Haraden, Christopher Jon Johnson
  • Publication number: 20190050149
    Abstract: Techniques for controlling access to a memory are provided. The techniques may include receiving and storing output pixel data in a buffer, providing the stored output pixel data from the buffer to a display controller, switching to a second operating mode based at least on an amount of available data in the buffer being less than or equal to a threshold, identifying a portion of the image data stored in a memory device for use in generating output pixel data for an updated image, and, in response to operating in the second operating mode, generating the output pixel data without issuing a memory read command via an interconnect to retrieve the identified portion of the image data, and providing the output pixel data to the buffer.
    Type: Application
    Filed: October 8, 2018
    Publication date: February 14, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Tolga Ozguner, Ishan Jitendra Bhatt, Miguel Comparan, Ryan Scott Haraden, Jeffrey Powers Bradford, Gene Leung
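
The operating-mode switch described in this application (and in the related grant 10095408 below) can be pictured as a display path that normally fetches source data over the interconnect but synthesizes output locally once the output buffer runs low. A minimal C++ sketch under assumed names; in particular, the fallback of repeating the last generated pixel is purely illustrative, since the abstract only states that output is generated without issuing a memory read.

```cpp
#include <cstddef>
#include <cstdint>
#include <deque>
#include <iostream>
#include <vector>

enum class Mode { Normal, NoFetch };  // the "first" and "second" operating modes

// Toy model: a buffer feeds a display controller, and the producer switches to
// a no-fetch mode whenever the amount of buffered data drops to the threshold.
class OutputPath {
public:
    explicit OutputPath(size_t threshold) : threshold_(threshold) {}

    // Called once per output pixel. Reading from `source` stands in for a
    // memory read command issued via the interconnect.
    void produce(const std::vector<uint32_t>& source, size_t index) {
        mode_ = (buffer_.size() <= threshold_) ? Mode::NoFetch : Mode::Normal;
        uint32_t px = (mode_ == Mode::Normal)
                          ? source[index]   // normal mode: fetch from memory
                          : last_pixel_;    // second mode: no memory read issued
        last_pixel_ = px;
        buffer_.push_back(px);
    }

    // Display-controller side: drains the buffer at its own fixed rate.
    bool consume(uint32_t* out) {
        if (buffer_.empty()) return false;  // a true underflow
        *out = buffer_.front();
        buffer_.pop_front();
        return true;
    }

    Mode mode() const { return mode_; }

private:
    std::deque<uint32_t> buffer_;
    size_t threshold_;
    Mode mode_ = Mode::Normal;
    uint32_t last_pixel_ = 0;
};

int main() {
    std::vector<uint32_t> frame(16, 0xABCDu);
    OutputPath path(/*threshold=*/2);
    uint32_t px;
    for (size_t i = 0; i < frame.size(); ++i) {
        path.produce(frame, i);
        if (i >= 4) {           // the display controller starts draining...
            path.consume(&px);
            path.consume(&px);  // ...and occasionally outruns the producer
        }
    }
    std::cout << "final mode: "
              << (path.mode() == Mode::NoFetch ? "NoFetch" : "Normal") << "\n";
}
```
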
  • Publication number: 20190051229
    Abstract: Techniques for post-rendering image transformation are provided, including outputting an image frame including a plurality of pixels by sequentially generating and outputting multiple color component fields, including a first color component field and a second color component field. First, second, and third image transformation pipelines apply one or more two-dimensional (2D) image transformations to at least one portion of the plurality of source pixels to generate transformed pixel color data for the first color component field and the second color component field.
    Type: Application
    Filed: August 14, 2017
    Publication date: February 14, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Tolga Ozguner, Miguel Comparan, Christopher Jon Johnson, Jeffrey Powers Bradford
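
For a field-sequential display, each color component field is produced by running source pixels through its own transformation pipeline. The sketch below shows one such pipeline as a simple backward warp; the affine transform and nearest-neighbour sampling are assumptions standing in for the unspecified 2D image transformations.

```cpp
#include <array>
#include <cstdint>
#include <vector>

// One color plane of the source image (R, G, or B); illustrative only.
struct Plane {
    int width = 0, height = 0;
    std::vector<uint8_t> data;
    uint8_t at(int x, int y) const {
        if (x < 0 || y < 0 || x >= width || y >= height) return 0;
        return data[static_cast<size_t>(y) * width + x];
    }
};

// A 2D transform that maps a destination position back to a source position.
// An affine mapping is an assumption; the publication only says "one or more
// two-dimensional (2D) image transformations".
struct Affine2D {
    float a, b, c, d, tx, ty;
    std::array<float, 2> apply(float x, float y) const {
        return {a * x + b * y + tx, c * x + d * y + ty};
    }
};

// One "image transformation pipeline": warps a single color plane into the
// corresponding color component field. Three such instances (one per color)
// would run back to back to feed a field-sequential display.
Plane transform_field(const Plane& src, const Affine2D& t) {
    Plane dst{src.width, src.height, std::vector<uint8_t>(src.data.size())};
    for (int y = 0; y < dst.height; ++y) {
        for (int x = 0; x < dst.width; ++x) {
            auto s = t.apply(static_cast<float>(x), static_cast<float>(y));
            dst.data[static_cast<size_t>(y) * dst.width + x] =
                src.at(static_cast<int>(s[0]), static_cast<int>(s[1]));  // nearest neighbour
        }
    }
    return dst;
}

int main() {
    Plane red{4, 4, std::vector<uint8_t>(16, 200)};
    Affine2D shift{1.0f, 0.0f, 0.0f, 1.0f, 0.5f, 0.0f};  // tiny horizontal correction
    Plane red_field = transform_field(red, shift);        // first color component field
    (void)red_field;
}
```
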
  • Publication number: 20190034208
    Abstract: A method and circuit arrangement provide support for a hybrid pipeline that dynamically switches between out-of-order and in-order modes. The hybrid pipeline may selectively execute instructions from at least one instruction stream that require the high performance capabilities provided by out-of-order processing in the out-of-order mode. The hybrid pipeline may also execute instructions that have strict power requirements in the in-order mode where the in-order mode conserves more power compared to the out-of-order mode. Each stage in the hybrid pipeline may be activated and fully functional when the hybrid pipeline is in the out-of-order mode. However, stages in the hybrid pipeline not used for the in-order mode may be deactivated and bypassed by the instructions when the hybrid pipeline dynamically switches from the out-of-order mode to the in-order mode. The deactivated stages may then be reactivated when the hybrid pipeline dynamically switches from the in-order mode to the out-of-order mode.
    Type: Application
    Filed: October 2, 2018
    Publication date: January 31, 2019
    Inventors: Miguel Comparan, Andrew D. Hilton, Hans M. Jacobson, Brian M. Rogers, Robert A. Shearer, Ken V. Vu, Alfred T. Watson, III
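
The mode switch in this hybrid pipeline amounts to powering down and bypassing the stages that only out-of-order execution needs when the pipeline runs in-order, and reactivating them on the way back. A rough sketch; the particular stages flagged as out-of-order-only (rename, issue queue, reorder buffer) are assumptions, not an enumeration from the application.

```cpp
#include <iostream>
#include <string>
#include <vector>

enum class PipelineMode { OutOfOrder, InOrder };

// A pipeline stage that can be powered down and bypassed. Which stages count
// as "out-of-order only" is an assumption made for this sketch.
struct Stage {
    std::string name;
    bool ooo_only;       // only needed for out-of-order execution
    bool active = true;  // powered / clocked
};

class HybridPipeline {
public:
    explicit HybridPipeline(std::vector<Stage> stages) : stages_(std::move(stages)) {}

    // Dynamically switch modes: deactivate the OoO-only stages when entering
    // the in-order mode, reactivate them when returning to out-of-order mode.
    void set_mode(PipelineMode mode) {
        mode_ = mode;
        for (auto& s : stages_)
            if (s.ooo_only) s.active = (mode == PipelineMode::OutOfOrder);
    }

    // An instruction flows through the active stages only; deactivated stages
    // are bypassed.
    void issue(const std::string& insn) const {
        std::cout << insn << ":";
        for (const auto& s : stages_)
            if (s.active) std::cout << " " << s.name;
        std::cout << "\n";
    }

private:
    std::vector<Stage> stages_;
    PipelineMode mode_ = PipelineMode::OutOfOrder;
};

int main() {
    HybridPipeline p({{"fetch", false}, {"decode", false}, {"rename", true},
                      {"issue-queue", true}, {"execute", false},
                      {"reorder-buffer", true}, {"writeback", false}});
    p.issue("high-performance stream");  // out-of-order mode: every stage active
    p.set_mode(PipelineMode::InOrder);   // power-sensitive stream
    p.issue("low-power stream");         // OoO-only stages deactivated and bypassed
}
```
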
  • Publication number: 20180322688
    Abstract: Systems and methods for multistage post-rendering image transformation are provided. The system may include a transform generation module arranged to dynamically generate an image transformation. The system may include a transform data generation module arranged to generate first and second transformation data by applying the generated image transformation for first and second sampling positions and storing the transformation data in a memory. The system may include a first image transformation stage that selects the first and second transformation data for a destination image position and calculates an estimated source image position based on the selected first and second transformation data.
    Type: Application
    Filed: May 3, 2017
    Publication date: November 8, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Tolga Ozguner, Miguel Comparan, Ryan Scott Haraden, Jeffrey Powers Bradford
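
The first image transformation stage described here interpolates between transformation data precomputed at sparse sampling positions to estimate a source image position for each destination position. A minimal sketch with one-dimensional sampling positions and linear interpolation as simplifying assumptions:

```cpp
#include <iostream>
#include <utility>
#include <vector>

// Transformation data produced for one sampling position: here simply the
// source x/y that the dynamically generated transform maps that position to.
// This layout is an assumption; the publication only says the data is stored
// in a memory.
struct TransformSample {
    float dest_x;  // sampling position in the destination image
    float src_x;
    float src_y;
};

// First image transformation stage: select the two samples that bracket a
// destination position and linearly interpolate an estimated source position.
std::pair<float, float> estimate_source(const std::vector<TransformSample>& table,
                                        float dest_x) {
    for (size_t i = 0; i + 1 < table.size(); ++i) {
        const TransformSample& a = table[i];
        const TransformSample& b = table[i + 1];
        if (dest_x >= a.dest_x && dest_x <= b.dest_x) {
            float t = (dest_x - a.dest_x) / (b.dest_x - a.dest_x);
            return {a.src_x + t * (b.src_x - a.src_x),
                    a.src_y + t * (b.src_y - a.src_y)};
        }
    }
    return {dest_x, 0.0f};  // outside the table: identity fallback (an assumption)
}

int main() {
    // Transformation data generated at destination x = 0, 64, 128 (spacing assumed).
    std::vector<TransformSample> table = {
        {0.0f, 2.5f, 1.0f}, {64.0f, 66.0f, 3.5f}, {128.0f, 131.0f, 6.0f}};
    std::pair<float, float> est = estimate_source(table, 80.0f);
    std::cout << "estimated source position: (" << est.first << ", " << est.second << ")\n";
}
```
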
  • Patent number: 10114652
    Abstract: A method and circuit arrangement provide support for a hybrid pipeline that dynamically switches between out-of-order and in-order modes. The hybrid pipeline may selectively execute instructions from at least one instruction stream that require the high performance capabilities provided by out-of-order processing in the out-of-order mode. The hybrid pipeline may also execute instructions that have strict power requirements in the in-order mode where the in-order mode conserves more power compared to the out-of-order mode. Each stage in the hybrid pipeline may be activated and fully functional when the hybrid pipeline is in the out-of-order mode. However, stages in the hybrid pipeline not used for the in-order mode may be deactivated and bypassed by the instructions when the hybrid pipeline dynamically switches from the out-of-order mode to the in-order mode. The deactivated stages may then be reactivated when the hybrid pipeline dynamically switches from the in-order mode to the out-of-order mode.
    Type: Grant
    Filed: April 12, 2016
    Date of Patent: October 30, 2018
    Assignee: International Business Machines Corporation
    Inventors: Miguel Comparan, Andrew D. Hilton, Hans M. Jacobson, Brian M. Rogers, Robert A. Shearer, Ken V. Vu, Alfred T. Watson, III
  • Publication number: 20180301125
    Abstract: Systems and methods are disclosed herein for providing improved cache structures and methods that are optimally sized to support a predetermined range of late stage adjustments and in which image data is intelligently read out of DRAM and cached in such a way as to eliminate re-fetching of input image data from DRAM and minimize DRAM bandwidth and power. The systems and methods can also be adapted to work with compressed image data and multiple LSR processing engines.
    Type: Application
    Filed: April 12, 2017
    Publication date: October 18, 2018
    Inventors: Ryan Scott Haraden, Tolga Ozguner, Adam James Muff, Jeffrey Powers Bradford, Christopher Jon Johnson, Gene Leung, Miguel Comparan
  • Patent number: 10095408
    Abstract: Systems and methods for controlling access to a memory are provided. The system may include a buffer to store output data generated by a processing module and provide the output data to a real-time module, and a buffer monitoring circuit to output an underflow approaching state indication in response to an amount of available data in the buffer being less than or equal to a threshold. The system may include a memory access module arranged to receive memory requests issued by the processing module, and configured to, while operating in a first mode, respond to memory requests with corresponding data retrieved from the memory, switch to operating in a second mode in response to receiving the underflow approaching state indication, and, in response to operating in the second mode, respond to memory requests with an indication that the memory access module did not attempt to retrieve the corresponding data from the memory.
    Type: Grant
    Filed: March 10, 2017
    Date of Patent: October 9, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Tolga Ozguner, Ishan Jitendra Bhatt, Miguel Comparan, Ryan Scott Haraden, Jeffrey Powers Bradford, Gene Leung
  • Publication number: 20180276824
    Abstract: Optimizations are provided for late stage reprojection processing for a multi-layered scene. A scene is generated, which is based on a predicted pose of a portion of a computer system. A sub-region is identified within one of the layers and is isolated from the other regions in the scene. Thereafter, late stage reprojection processing is applied to that sub-region selectively and differently from the other regions in the scene, which do not undergo the same late stage reprojection processing.
    Type: Application
    Filed: March 27, 2017
    Publication date: September 27, 2018
    Inventors: Ryan Scott Haraden, Jeffrey Powers Bradford, Miguel Comparan, Adam James Muff, Gene Leung, Tolga Ozguner
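
The sub-region optimization can be pictured as running full late stage reprojection only inside the identified rectangle of one layer while leaving the rest of the scene untouched (or on a cheaper path). In the sketch below, the rectangle type and the horizontal-shift reprojection are illustrative assumptions.

```cpp
#include <cstdint>
#include <vector>

// A layer of the scene and a rectangular sub-region within it. The rectangle
// representation and the shift-based reprojection are assumptions.
struct Rect { int x0, y0, x1, y1; };

struct Layer {
    int width, height;
    std::vector<uint32_t> pixels;
};

bool inside(const Rect& r, int x, int y) {
    return x >= r.x0 && x < r.x1 && y >= r.y0 && y < r.y1;
}

// Apply late stage reprojection only to the identified sub-region; pixels
// outside it are left untouched. A horizontal shift stands in for the
// pose-based correction that real LSR hardware would apply.
void lsr_sub_region(Layer& layer, const Rect& region, int shift_px) {
    Layer src = layer;  // keep the pre-LSR pixels to sample from
    for (int y = 0; y < layer.height; ++y) {
        for (int x = 0; x < layer.width; ++x) {
            if (!inside(region, x, y)) continue;  // other regions skip this LSR pass
            int sx = x - shift_px;
            layer.pixels[static_cast<size_t>(y) * layer.width + x] =
                (sx >= 0 && sx < src.width)
                    ? src.pixels[static_cast<size_t>(y) * src.width + sx]
                    : 0u;
        }
    }
}

int main() {
    Layer layer{8, 8, std::vector<uint32_t>(64, 0x11u)};
    lsr_sub_region(layer, Rect{2, 2, 6, 6}, /*shift_px=*/1);  // reproject only the sub-region
}
```
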
  • Publication number: 20180275748
    Abstract: Optimizations are provided for late stage reprojection processing for a multi-layered scene. A multi-layered scene is generated. Late stage reprojection processing is applied to a first layer, and different late stage reprojection processing is applied to a second layer. The late stage reprojection processing applied to the second layer includes one or more transformations. After the late stage reprojection processing on the various layers is complete, a unified layer is created by compositing the layers together. Then, the unified layer is rendered.
    Type: Application
    Filed: March 27, 2017
    Publication date: September 27, 2018
    Inventors: Ryan Scott Haraden, Jeffrey Powers Bradford, Miguel Comparan, Adam James Muff, Gene Leung, Tolga Ozguner
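
The per-layer flow here is: reproject each layer with its own correction, composite the reprojected layers into a unified layer, then render that unified layer. The sketch below uses a per-layer horizontal shift and a simple alpha-mask composite as stand-ins; neither detail comes from the publication.

```cpp
#include <cstdint>
#include <vector>

struct Layer {
    int width, height;
    std::vector<uint32_t> rgba;  // 0xAARRGGBB
};

// Per-layer late stage reprojection; a horizontal shift stands in for the
// layer-specific transformation(s) the publication refers to.
Layer reproject(const Layer& in, int shift_px) {
    Layer out{in.width, in.height, std::vector<uint32_t>(in.rgba.size(), 0u)};
    for (int y = 0; y < in.height; ++y)
        for (int x = 0; x < in.width; ++x) {
            int sx = x - shift_px;
            if (sx >= 0 && sx < in.width)
                out.rgba[static_cast<size_t>(y) * out.width + x] =
                    in.rgba[static_cast<size_t>(y) * in.width + sx];
        }
    return out;
}

// Composite the already-reprojected layers (back to front) into one unified
// layer; treating non-zero alpha as an opaque mask is purely illustrative.
Layer composite(const std::vector<Layer>& layers) {
    Layer unified = layers.front();
    for (size_t i = 1; i < layers.size(); ++i)
        for (size_t p = 0; p < unified.rgba.size(); ++p) {
            uint32_t src = layers[i].rgba[p];
            if ((src >> 24) != 0u) unified.rgba[p] = src;
        }
    return unified;
}

int main() {
    Layer world{4, 4, std::vector<uint32_t>(16, 0xFF0000FFu)};  // opaque background layer
    Layer hud{4, 4, std::vector<uint32_t>(16, 0x00000000u)};    // mostly transparent overlay
    hud.rgba[5] = 0xFFFFFFFFu;
    // Different late stage reprojection per layer, then composite, then the
    // unified layer is rendered (rendering omitted here).
    Layer unified = composite({reproject(world, 2), reproject(hud, 0)});
    (void)unified;
}
```
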
  • Publication number: 20180260931
    Abstract: Systems and methods are disclosed herein for providing improved cache structures and methods that are optimally sized to support a predetermined range of late stage adjustments and in which image data is intelligently read out of DRAM and cached in such a way as to eliminate re-fetching of input image data from DRAM and minimize DRAM bandwidth and power. The systems and methods can also be adapted to work with compressed image data.
    Type: Application
    Filed: May 15, 2018
    Publication date: September 13, 2018
    Inventors: Tolga Ozguner, Gene Leung, Jeffrey Powers Bradford, Adam James Muff, Miguel Comparan, Ryan Scott Haraden, Christopher Jon Johnson
  • Publication number: 20180260120
    Abstract: Systems and methods for controlling access to a memory are provided. The system may include a buffer to store output data generated by a processing module and provide the output data to a real-time module, and a buffer monitoring circuit to output an underflow approaching state indication in response to an amount of available data in the buffer being less than or equal to a threshold. The system may include a memory access module arranged to receive memory requests issued by the processing module, and configured to, while operating in a first mode, respond to memory requests with corresponding data retrieved from the memory, switch to operating in a second mode in response to receiving the underflow approaching state indication, and, in response to operating in the second mode, respond to memory requests with an indication that the memory access module did not attempt to retrieve the corresponding data from the memory.
    Type: Application
    Filed: March 10, 2017
    Publication date: September 13, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Tolga Ozguner, Ishan Jitendra Bhatt, Miguel Comparan, Ryan Scott Haraden, Jeffrey Powers Bradford, Gene Leung
  • Publication number: 20180211638
    Abstract: Systems and methods are disclosed herein for providing improved cache structures and methods that are optimally sized to support a predetermined range of late stage adjustments and in which image data is intelligently read out of DRAM and cached in such a way as to eliminate re-fetching of input image data from DRAM and minimize DRAM bandwidth and power.
    Type: Application
    Filed: January 25, 2017
    Publication date: July 26, 2018
    Inventors: Tolga Ozguner, Jeffrey Powers Bradford, Miguel Comparan, Gene Leung, Adam James Muff, Ryan Scott Haraden, Christopher Jon Johnson
  • Patent number: 9978118
    Abstract: Systems and methods are disclosed herein for providing improved cache structures and methods that are optimally sized to support a predetermined range of late stage adjustments and in which image data is intelligently read out of DRAM and cached in such a way as to eliminate re-fetching of input image data from DRAM and minimize DRAM bandwidth and power. The systems and methods can also be adapted to work with compressed image data.
    Type: Grant
    Filed: January 25, 2017
    Date of Patent: May 22, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Tolga Ozguner, Gene Leung, Jeffrey Powers Bradford, Adam James Muff, Miguel Comparan, Ryan Scott Haraden, Christopher Jon Johnson
  • Publication number: 20160224351
    Abstract: A method and circuit arrangement provide support for a hybrid pipeline that dynamically switches between out-of-order and in-order modes. The hybrid pipeline may selectively execute instructions from at least one instruction stream that require the high performance capabilities provided by out-of-order processing in the out-of-order mode. The hybrid pipeline may also execute instructions that have strict power requirements in the in-order mode where the in-order mode conserves more power compared to the out-of-order mode. Each stage in the hybrid pipeline may be activated and fully functional when the hybrid pipeline is in the out-of-order mode. However, stages in the hybrid pipeline not used for the in-order mode may be deactivated and bypassed by the instructions when the hybrid pipeline dynamically switches from the out-of-order mode to the in-order mode. The deactivated stages may then be reactivated when the hybrid pipeline dynamically switches from the in-order mode to the out-of-order mode.
    Type: Application
    Filed: April 12, 2016
    Publication date: August 4, 2016
    Inventors: Miguel Comparan, Andrew D. Hilton, Hans M. Jacobson, Brian M. Rogers, Robert A. Shearer, Ken V. Vu, Alfred T. Watson, III
  • Patent number: 9354884
    Abstract: A method and circuit arrangement provide support for a hybrid pipeline that dynamically switches between out-of-order and in-order modes. The hybrid pipeline may selectively execute instructions from at least one instruction stream that require the high performance capabilities provided by out-of-order processing in the out-of-order mode. The hybrid pipeline may also execute instructions that have strict power requirements in the in-order mode where the in-order mode conserves more power compared to the out-of-order mode. Each stage in the hybrid pipeline may be activated and fully functional when the hybrid pipeline is in the out-of-order mode. However, stages in the hybrid pipeline not used for the in-order mode may be deactivated and bypassed by the instructions when the hybrid pipeline dynamically switches from the out-of-order mode to the in-order mode. The deactivated stages may then be reactivated when the hybrid pipeline dynamically switches from the in-order mode to the out-of-order mode.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: May 31, 2016
    Assignee: International Business Machines Corporation
    Inventors: Miguel Comparan, Andrew D. Hilton, Hans M. Jacobson, Brian M. Rogers, Robert A. Shearer, Ken V. Vu, Alfred T. Watson, III
  • Patent number: 9092347
    Abstract: A method and apparatus dynamically allocates and deallocates a portion of a cache for use as a dedicated local storage. Cache lines may be dynamically allocated and deallocated for inclusion in the dedicated local storage. Cache entries that are included in the dedicated local storage may not be evicted or invalidated. Additionally, coherence is not maintained between the cache entries that are included in the dedicated local storage and the backing memory. A load instruction may be configured to allocate, e.g., lock, a portion of the data cache for inclusion in the dedicated local storage and load data into the dedicated local storage. A load instruction may be configured to read data from the dedicated local storage and to deallocate, e.g., unlock, a portion of the data cache that was included in the dedicated local storage.
    Type: Grant
    Filed: November 25, 2012
    Date of Patent: July 28, 2015
    Assignee: International Business Machines Corporation
    Inventors: Miguel Comparan, Russell D. Hoover, Robert A. Shearer, Alfred T. Watson, III
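
The lock/unlock behaviour in this pair of patents can be modeled as cache lines that carry a locked flag exempting them from eviction, invalidation, and coherence, with special load operations setting and clearing the flag. The class and method names below are assumptions that mirror the abstract, not IBM's actual microarchitecture.

```cpp
#include <cstdint>
#include <optional>
#include <unordered_map>

// A toy cache model: each line tracks whether it has been locked into the
// dedicated local storage. Locked lines are never evicted or invalidated and
// are not kept coherent with the backing memory.
struct CacheLine {
    uint64_t data = 0;
    bool locked = false;
};

class DataCache {
public:
    // "Locking load": allocate the line into the dedicated local storage and
    // fill it with data (the backing-memory read is simulated by the caller).
    uint64_t load_and_lock(uint64_t addr, uint64_t backing_value) {
        CacheLine& line = lines_[addr];
        line.data = backing_value;
        line.locked = true;
        return line.data;
    }

    // "Unlocking load": read the line, then release it back to normal cache use.
    std::optional<uint64_t> load_and_unlock(uint64_t addr) {
        auto it = lines_.find(addr);
        if (it == lines_.end()) return std::nullopt;
        it->second.locked = false;
        return it->second.data;
    }

    // Normal replacement path: locked lines are exempt from eviction.
    bool try_evict(uint64_t addr) {
        auto it = lines_.find(addr);
        if (it == lines_.end() || it->second.locked) return false;
        lines_.erase(it);
        return true;
    }

private:
    std::unordered_map<uint64_t, CacheLine> lines_;
};

int main() {
    DataCache dc;
    dc.load_and_lock(0x1000, 42);              // line becomes dedicated local storage
    bool evicted = dc.try_evict(0x1000);       // refused while locked
    std::optional<uint64_t> v = dc.load_and_unlock(0x1000);  // read and release
    (void)evicted; (void)v;
}
```
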
  • Patent number: 9053037
    Abstract: A method and apparatus dynamically allocates and deallocates a portion of a cache for use as a dedicated local storage. Cache lines may be dynamically allocated and deallocated for inclusion in the dedicated local storage. Cache entries that are included in the dedicated local storage may not be evicted or invalidated. Additionally, coherence is not maintained between the cache entries that are included in the dedicated local storage and the backing memory. A load instruction may be configured to allocate, e.g., lock, a portion of the data cache for inclusion in the dedicated local storage and load data into the dedicated local storage. A load instruction may be configured to read data from the dedicated local storage and to deallocate, e.g., unlock, a portion of the data cache that was included in the dedicated local storage.
    Type: Grant
    Filed: April 4, 2011
    Date of Patent: June 9, 2015
    Assignee: International Business Machines Corporation
    Inventors: Miguel Comparan, Russell D. Hoover, Robert A. Shearer, Alfred T. Watson, III