Patents by Inventor Miguel Comparan

Miguel Comparan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10095408
    Abstract: Systems and methods for controlling access to a memory are provided. The system may include a buffer to store output data generated by a processing module, and provide the output data to a real-time module, and a buffer monitoring circuit to output an underflow approaching state indication in response to an amount of available data in the buffer being less than or equal to a threshold. The system may include a memory access module arranged to receive memory requests issued by the processing module, and configured to, while operating in a first mode, respond to memory requests with corresponding data retrieved from the memory, switch to operating in a second mode in response to receiving the underflow approaching state indication, and, while operating in the second mode, respond to memory requests with an indication that the memory access module did not attempt to retrieve the corresponding data from the memory.
    Type: Grant
    Filed: March 10, 2017
    Date of Patent: October 9, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Tolga Ozguner, Ishan Jitendra Bhatt, Miguel Comparan, Ryan Scott Haraden, Jeffrey Powers Bradford, Gene Leung
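
The following is a minimal Python sketch of the control flow this abstract describes, not the patent's implementation: the buffer, threshold, and memory are toy objects, and names such as MemoryAccessModule and handle_request are invented here for illustration.

```python
# Illustrative model (not from the patent text): a memory access module that
# serves requests normally until the output buffer nears underflow, then
# switches to a mode where it answers requests without touching memory.

class MemoryAccessModule:
    def __init__(self, memory, buffer_capacity, threshold):
        self.memory = memory                 # backing store: address -> data
        self.buffer_level = buffer_capacity  # data currently available to the real-time module
        self.threshold = threshold
        self.mode = "normal"                 # "normal" (first mode) or "skip" (second mode)

    def _check_underflow_approaching(self):
        # Buffer monitoring circuit: signal when available data <= threshold.
        if self.buffer_level <= self.threshold:
            self.mode = "skip"

    def consume(self, amount):
        # The real-time module drains the buffer as it consumes output data.
        self.buffer_level = max(0, self.buffer_level - amount)
        self._check_underflow_approaching()

    def handle_request(self, address):
        if self.mode == "normal":
            # First mode: fetch the requested data from memory.
            return {"address": address, "data": self.memory.get(address)}
        # Second mode: respond immediately, indicating no fetch was attempted.
        return {"address": address, "data": None, "fetch_attempted": False}


if __name__ == "__main__":
    mam = MemoryAccessModule(memory={0x10: "pixel block"}, buffer_capacity=8, threshold=2)
    print(mam.handle_request(0x10))   # normal mode: data returned
    mam.consume(7)                    # buffer drops to 1, at/below threshold
    print(mam.handle_request(0x10))   # second mode: no memory access attempted
```
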
  • Publication number: 20180276824
    Abstract: Optimizations are provided for late stage reprojection processing for a multi-layered scene. A scene is generated, which is based on a predicted pose of a portion of a computer system. A sub-region is identified within one of the layers and is isolated from the other regions in the scene. Thereafter, late stage reprojection processing is applied to that sub-region selectively/differently than other regions in the scene that do not undergo the same late stage reprojection processing.
    Type: Application
    Filed: March 27, 2017
    Publication date: September 27, 2018
    Inventors: Ryan Scott Haraden, Jeffrey Powers Bradford, Miguel Comparan, Adam James Muff, Gene Leung, Tolga Ozguner
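
As a rough illustration of the idea above (selective late stage reprojection of an isolated sub-region), the sketch below reduces reprojection to a uniform coordinate shift; the function names and the rectangle-based sub-region test are assumptions made for the example, not details from the application.

```python
# Illustrative sketch only: late stage reprojection (LSR) is reduced here to a
# uniform pixel shift derived from an updated pose, and a "sub-region" is a
# rectangle within one layer that receives a different correction than the rest.

def reproject(pixels, shift):
    # Toy stand-in for LSR: translate pixel coordinates by the pose correction.
    return [(x + shift[0], y + shift[1]) for (x, y) in pixels]

def selective_lsr(layer_pixels, sub_region, full_shift, sub_region_shift):
    inside, outside = [], []
    (x0, y0, x1, y1) = sub_region
    for (x, y) in layer_pixels:
        (inside if x0 <= x <= x1 and y0 <= y <= y1 else outside).append((x, y))
    # Apply one correction to the isolated sub-region and a different one
    # (or none) to the rest of the layer.
    return reproject(inside, sub_region_shift) + reproject(outside, full_shift)

if __name__ == "__main__":
    layer = [(0, 0), (5, 5), (9, 9)]
    print(selective_lsr(layer, sub_region=(4, 4, 6, 6),
                        full_shift=(0, 0), sub_region_shift=(1, -1)))
```
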
  • Publication number: 20180275748
    Abstract: Optimizations are provided for late stage reprojection processing for a multi-layered scene. A multi-layered scene is generated. Late stage reprojection processing is applied to a first layer and different late stage reprojection processing is applied to a second layer. The late stage reprojection processing that is applied to the second layer includes one or more transformations that are applied to the second layer. After the late stage reprojection processing on the various layers is complete, a unified layer is created by compositing the layers together. Then, the unified layer is rendered.
    Type: Application
    Filed: March 27, 2017
    Publication date: September 27, 2018
    Inventors: Ryan Scott Haraden, Jeffrey Powers Bradford, Miguel Comparan, Adam James Muff, Gene Leung, Tolga Ozguner
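
A hedged sketch of the per-layer flow described above: each layer gets its own transform as a stand-in for its late stage reprojection, the layers are composited into a unified layer, and that unified layer is what would be rendered. The painter's-style compositing and all function names are illustrative assumptions, not details from the application.

```python
# Illustrative sketch: each layer gets its own late stage reprojection
# (modeled as a per-layer transform), then the layers are composited
# back-to-front into a single unified layer before rendering.

def composite(layers):
    # Back-to-front compositing over a small dict "framebuffer".
    unified = {}
    for layer in layers:
        unified.update(layer)          # later (nearer) layers overwrite earlier ones
    return unified

def render_scene(layers_with_transforms):
    reprojected = []
    for layer, transform in layers_with_transforms:
        # Different LSR per layer: apply that layer's own transformation.
        reprojected.append({transform(coord): color for coord, color in layer.items()})
    unified = composite(reprojected)
    return unified                     # the unified layer is what gets rendered

if __name__ == "__main__":
    background = {(0, 0): "sky", (1, 0): "sky"}
    foreground = {(1, 0): "hand"}
    scene = [
        (background, lambda c: c),                   # identity correction
        (foreground, lambda c: (c[0] + 1, c[1])),    # shifted by the updated pose
    ]
    print(render_scene(scene))
```
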
  • Publication number: 20180260931
    Abstract: Systems and methods are disclosed herein for providing improved cache structures and methods that are optimally sized to support a predetermined range of late stage adjustments and in which image data is intelligently read out of DRAM and cached in such a way as to eliminate re-fetching of input image data from DRAM and minimize DRAM bandwidth and power. The systems and methods can also be adapted to work with compressed image data.
    Type: Application
    Filed: May 15, 2018
    Publication date: September 13, 2018
    Inventors: Tolga Ozguner, Gene Leung, Jeffrey Powers Bradford, Adam James Muff, Miguel Comparan, Ryan Scott Haraden, Christopher Jon Johnson
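
The sketch below models the caching idea in this abstract (which also appears in publication 20180211638 and patent 9,978,118 below) under simplifying assumptions: "DRAM" is a dictionary of image rows, the late stage adjustment is a small row offset, and the cache capacity is tied to the maximum adjustment so each input row is fetched only once. The class name AdjustmentCache and the FIFO eviction are illustrative, not from the filing.

```python
# Illustrative model: image rows are fetched from "DRAM" once and kept in a
# small cache whose size covers the maximum late stage adjustment range, so a
# reprojection that shifts output rows never re-fetches input rows from DRAM.

from collections import OrderedDict

class AdjustmentCache:
    def __init__(self, dram_rows, max_adjustment):
        self.dram = dram_rows
        self.dram_fetches = 0
        # Cache sized to the predetermined adjustment range (+ current row).
        self.capacity = 2 * max_adjustment + 1
        self.cache = OrderedDict()

    def row(self, index):
        if index not in self.cache:
            self.dram_fetches += 1                  # one fetch per input row
            self.cache[index] = self.dram[index]
            if len(self.cache) > self.capacity:
                self.cache.popitem(last=False)      # evict the oldest row
        return self.cache[index]

if __name__ == "__main__":
    dram = {i: f"row{i}" for i in range(100)}
    cache = AdjustmentCache(dram, max_adjustment=4)
    # Output rows read input rows displaced by a small late stage adjustment.
    for out_row in range(10, 20):
        for adj in (-2, 0, 2):
            cache.row(out_row + adj)
    print("DRAM fetches:", cache.dram_fetches)      # far fewer than total accesses
```
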
  • Publication number: 20180260120
    Abstract: Systems and methods for controlling access to a memory are provided. The system may include a buffer to store output data generated by a processing module, and provide the output data to a real-time module, and a buffer monitoring circuit to output an underflow approaching state indication in response to an amount of available data in the buffer being less than or equal to a threshold. The system may include a memory access module arranged to receive memory requests issued by the processing module, and configured to, while operating in a first mode, respond to memory requests with corresponding data retrieved from the memory, switch to operating in a second mode in response to receiving the underflow approaching state indication, and, while operating in the second mode, respond to memory requests with an indication that the memory access module did not attempt to retrieve the corresponding data from the memory.
    Type: Application
    Filed: March 10, 2017
    Publication date: September 13, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Tolga Ozguner, Ishan Jitendra Bhatt, Miguel Comparan, Ryan Scott Haraden, Jeffrey Powers Bradford, Gene Leung
  • Publication number: 20180211638
    Abstract: Systems and methods are disclosed herein for providing improved cache structures and methods that are optimally sized to support a predetermined range of late stage adjustments and in which image data is intelligently read out of DRAM and cached in such a way as to eliminate re-fetching of input image data from DRAM and minimize DRAM bandwidth and power.
    Type: Application
    Filed: January 25, 2017
    Publication date: July 26, 2018
    Inventors: Tolga Ozguner, Jeffrey Powers Bradford, Miguel Comparan, Gene Leung, Adam James Muff, Ryan Scott Haraden, Christopher Jon Johnson
  • Patent number: 9978118
    Abstract: Systems and methods are disclosed herein for providing improved cache structures and methods that are optimally sized to support a predetermined range of late stage adjustments and in which image data is intelligently read out of DRAM and cached in such a way as to eliminate re-fetching of input image data from DRAM and minimize DRAM bandwidth and power. The systems and methods can also be adapted to work with compressed image data.
    Type: Grant
    Filed: January 25, 2017
    Date of Patent: May 22, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Tolga Ozguner, Gene Leung, Jeffrey Powers Bradford, Adam James Muff, Miguel Comparan, Ryan Scott Haraden, Christopher Jon Johnson
  • Publication number: 20160224351
    Abstract: A method and circuit arrangement provide support for a hybrid pipeline that dynamically switches between out-of-order and in-order modes. The hybrid pipeline may selectively execute instructions from at least one instruction stream that require the high performance capabilities provided by out-of-order processing in the out-of-order mode. The hybrid pipeline may also execute instructions that have strict power requirements in the in-order mode where the in-order mode conserves more power compared to the out-of-order mode. Each stage in the hybrid pipeline may be activated and fully functional when the hybrid pipeline is in the out-of-order mode. However, stages in the hybrid pipeline not used for the in-order mode may be deactivated and bypassed by the instructions when the hybrid pipeline dynamically switches from the out-of-order mode to the in-order mode. The deactivated stages may then be reactivated when the hybrid pipeline dynamically switches from the in-order mode to the out-of-order mode.
    Type: Application
    Filed: April 12, 2016
    Publication date: August 4, 2016
    Inventors: Miguel Comparan, Andrew D. Hilton, Hans M. Jacobson, Brian M. Rogers, Robert A. Shearer, Ken V. Vu, Alfred T. Watson
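
A minimal Python model of the mode switching described in this abstract (the same invention appears as patent 9,354,884 and publication 20140281402 below): stages used only by out-of-order execution are bypassed in in-order mode and restored when switching back. The stage names and switch API are invented for the example.

```python
# Illustrative sketch: a "hybrid pipeline" modeled as a list of stage names.
# In out-of-order mode every stage is active; switching to in-order mode
# deactivates (bypasses) the stages that only out-of-order execution needs,
# and switching back reactivates them.

OOO_ONLY_STAGES = {"rename", "issue_queue", "reorder_buffer"}
ALL_STAGES = ["fetch", "decode", "rename", "issue_queue", "execute",
              "reorder_buffer", "writeback"]

class HybridPipeline:
    def __init__(self):
        self.mode = "out_of_order"

    def active_stages(self):
        if self.mode == "out_of_order":
            return ALL_STAGES                      # every stage powered and used
        # In-order mode: out-of-order-only stages are deactivated and bypassed.
        return [s for s in ALL_STAGES if s not in OOO_ONLY_STAGES]

    def switch(self, mode):
        # Dynamic switch; e.g. a low-power instruction stream requests "in_order".
        self.mode = mode

if __name__ == "__main__":
    pipe = HybridPipeline()
    print(pipe.active_stages())        # full out-of-order pipeline
    pipe.switch("in_order")
    print(pipe.active_stages())        # shorter, lower-power in-order pipeline
```
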
  • Patent number: 9354884
    Abstract: A method and circuit arrangement provide support for a hybrid pipeline that dynamically switches between out-of-order and in-order modes. The hybrid pipeline may selectively execute instructions from at least one instruction stream that require the high performance capabilities provided by out-of-order processing in the out-of-order mode. The hybrid pipeline may also execute instructions that have strict power requirements in the in-order mode where the in-order mode conserves more power compared to the out-of-order mode. Each stage in the hybrid pipeline may be activated and fully functional when the hybrid pipeline is in the out-of-order mode. However, stages in the hybrid pipeline not used for the in-order mode may be deactivated and bypassed by the instructions when the hybrid pipeline dynamically switches from the out-of-order mode to the in-order mode. The deactivated stages may then be reactivated when the hybrid pipeline dynamically switches from the in-order mode to the out-of-order mode.
    Type: Grant
    Filed: March 13, 2013
    Date of Patent: May 31, 2016
    Assignee: International Business Machines Corporation
    Inventors: Miguel Comparan, Andrew D. Hilton, Hans M. Jacobson, Brian M. Rogers, Robert A. Shearer, Ken V. Vu, Alfred T. Watson, III
  • Patent number: 9092347
    Abstract: A method and apparatus dynamically allocates and deallocates a portion of a cache for use as a dedicated local storage. Cache lines may be dynamically allocated and deallocated for inclusion in the dedicated local storage. Cache entries that are included in the dedicated local storage may not be evicted or invalidated. Additionally, coherence is not maintained between the cache entries that are included in the dedicated local storage and the backing memory. A load instruction may be configured to allocate, e.g., lock, a portion of the data cache for inclusion in the dedicated local storage and load data into the dedicated local storage. A load instruction may be configured to read data from the dedicated local storage and to deallocate, e.g., unlock, a portion of the data cache that was included in the dedicated local storage.
    Type: Grant
    Filed: November 25, 2012
    Date of Patent: July 28, 2015
    Assignee: International Business Machines Corporation
    Inventors: Miguel Comparan, Russell D. Hoover, Robert A. Shearer, Alfred T. Watson, III
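
The following sketch (which also applies to the related patent 9,053,037 below) models cache-line locking as dedicated local storage: locked lines are excluded from eviction and are not kept coherent with backing memory, and load-style operations lock and unlock them. The class and method names are assumptions made for illustration.

```python
# Illustrative model: a data cache whose lines can be locked into a dedicated
# local storage. Locked lines are never evicted or invalidated and are not
# kept coherent with backing memory; load-style operations lock and unlock them.

class Cache:
    def __init__(self, backing_memory):
        self.memory = backing_memory
        self.lines = {}        # address -> cached data
        self.locked = set()    # addresses held as dedicated local storage

    def load_and_lock(self, address):
        # "Load" that allocates the line into the dedicated local storage.
        self.lines[address] = self.memory[address]
        self.locked.add(address)
        return self.lines[address]

    def load_and_unlock(self, address):
        # "Load" that reads from the dedicated local storage and releases the line.
        data = self.lines[address]
        self.locked.discard(address)
        return data

    def evict_candidates(self):
        # Normal replacement may only touch unlocked lines.
        return [a for a in self.lines if a not in self.locked]

if __name__ == "__main__":
    cache = Cache(backing_memory={0x40: "scratch data"})
    cache.load_and_lock(0x40)
    print(cache.evict_candidates())    # [] -> the locked line cannot be evicted
    print(cache.load_and_unlock(0x40)) # read back, then the line becomes evictable
    print(cache.evict_candidates())
```
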
  • Patent number: 9053037
    Abstract: A method and apparatus dynamically allocates and deallocates a portion of a cache for use as a dedicated local storage. Cache lines may be dynamically allocated and deallocated for inclusion in the dedicated local storage. Cache entries that are included in the dedicated local storage may not be evicted or invalidated. Additionally, coherence is not maintained between the cache entries that are included in the dedicated local storage and the backing memory. A load instruction may be configured to allocate, e.g., lock, a portion of the data cache for inclusion in the dedicated local storage and load data into the dedicated local storage. A load instruction may be configured to read data from the dedicated local storage and to deallocate, e.g., unlock, a portion of the data cache that was included in the dedicated local storage.
    Type: Grant
    Filed: April 4, 2011
    Date of Patent: June 9, 2015
    Assignee: International Business Machines Corporation
    Inventors: Miguel Comparan, Russell D. Hoover, Robert A. Shearer, Alfred T. Watson, III
  • Patent number: 9021237
    Abstract: A method and circuit arrangement utilize a low latency variable transfer network between the register files of multiple processing cores in a multi-core processor chip to support fine grained parallelism of virtual threads across multiple hardware threads. The communication of a variable over the variable transfer network may be initiated by a move from a local register in a register file of a source processing core to a variable register that is allocated to a destination hardware thread in a destination processing core, so that the destination hardware thread can then move the variable from the variable register to a local register in the destination processing core.
    Type: Grant
    Filed: December 20, 2011
    Date of Patent: April 28, 2015
    Assignee: International Business Machines Corporation
    Inventors: Miguel Comparan, Russell D. Hoover, Robert A. Shearer, Alfred T. Watson, III
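
A hedged model of the variable transfer described above: a move from a source core's local register places the value in a variable register allocated to a destination hardware thread, which then moves it into its own register file. The Core class, register names, and the way the network is modeled are illustrative only.

```python
# Illustrative sketch: a variable is sent from one core's local register to a
# "variable register" allocated to a destination hardware thread on another
# core, and the destination thread then moves it into its own local register.

class Core:
    def __init__(self, name):
        self.name = name
        self.local_regs = {}       # per-core register file
        self.variable_regs = {}    # thread id -> incoming variable

    def move_to_variable_register(self, src_reg, dest_core, dest_thread):
        # Source side: a register-to-register move initiates the transfer
        # over the (modeled) low-latency variable transfer network.
        dest_core.variable_regs[dest_thread] = self.local_regs[src_reg]

    def move_from_variable_register(self, thread, dest_reg):
        # Destination side: the hardware thread pulls the variable into
        # its local register file.
        self.local_regs[dest_reg] = self.variable_regs.pop(thread)

if __name__ == "__main__":
    core0, core1 = Core("core0"), Core("core1")
    core0.local_regs["r3"] = 42
    core0.move_to_variable_register("r3", dest_core=core1, dest_thread=0)
    core1.move_from_variable_register(thread=0, dest_reg="r7")
    print(core1.local_regs["r7"])      # 42, transferred without going through memory
```
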
  • Patent number: 8954973
    Abstract: A method and apparatus for transferring architected state bypasses system memory by directly transmitting architected state between processor cores over a dedicated interconnect. The transfer may be performed by state transfer interface circuitry with or without software interaction. The architected state for a thread may be transferred from a first processing core to a second processing core when the state transfer interface circuitry detects an error that prevents proper execution of the thread corresponding to the architected state. A program instruction may be used to initiate the transfer of the architected state for the thread to one or more other threads in order to parallelize execution of the thread or perform load balancing between multiple processor cores by distributing processing of multiple threads.
    Type: Grant
    Filed: December 10, 2012
    Date of Patent: February 10, 2015
    Assignee: International Business Machines Corporation
    Inventors: Miguel Comparan, Russell D. Hoover, Robert A. Shearer, Alfred T. Watson, III
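
The sketch below (which also covers the related patent 8,949,836 below) illustrates the error-triggered case: a state transfer interface copies a thread's architected state directly from a faulted core to a healthy one without touching system memory. The layout of the architected state and all names are assumptions for the example.

```python
# Illustrative model: architected state (registers, PC) is copied directly
# from one core to another when the source core reports an error that prevents
# the thread from continuing, bypassing system memory entirely.

class CoreState:
    def __init__(self, name):
        self.name = name
        self.architected_state = None      # e.g. {"pc": ..., "regs": [...]}
        self.faulted = False

class StateTransferInterface:
    def __init__(self, cores):
        self.cores = cores                 # cores joined by a dedicated interconnect

    def on_error(self, failing_core):
        failing_core.faulted = True
        # Pick a healthy core and hand the thread's architected state to it.
        target = next(c for c in self.cores if not c.faulted)
        target.architected_state = failing_core.architected_state
        return target

if __name__ == "__main__":
    core0, core1 = CoreState("core0"), CoreState("core1")
    core0.architected_state = {"pc": 0x1000, "regs": [1, 2, 3]}
    xfer = StateTransferInterface([core0, core1])
    resumed_on = xfer.on_error(core0)
    print(resumed_on.name, resumed_on.architected_state)   # thread resumes on core1
```
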
  • Patent number: 8949836
    Abstract: A method and apparatus for transferring architected state bypasses system memory by directly transmitting architected state between processor cores over a dedicated interconnect. The transfer may be performed by state transfer interface circuitry with or without software interaction. The architected state for a thread may be transferred from a first processing core to a second processing core when the state transfer interface circuitry detects an error that prevents proper execution of the thread corresponding to the architected state. A program instruction may be used to initiate the transfer of the architected state for the thread to one or more other threads in order to parallelize execution of the thread or perform load balancing between multiple processor cores by distributing processing of multiple threads.
    Type: Grant
    Filed: April 1, 2011
    Date of Patent: February 3, 2015
    Assignee: International Business Machines Corporation
    Inventors: Miguel Comparan, Russell D. Hoover, Robert A. Shearer, Alfred T. Watson, III
  • Patent number: 8856602
    Abstract: A method and circuit arrangement utilize scan logic disposed on a multi-core processor integrated circuit device or chip to perform internal voting-based built in self test (BIST) of the chip. Test patterns are generated internally on the chip and communicated to the scan chains within multiple processing cores on the chip. Test results output by the scan chains are compared with one another on the chip, and majority voting is used to identify outlier test results that are indicative of a faulty processing core. A bit position in a faulty test result may be used to identify a faulty latch in a scan chain and/or a faulty functional unit in the faulty processing core, and a faulty processing core and/or a faulty functional unit may be automatically disabled in response to the testing.
    Type: Grant
    Filed: December 20, 2011
    Date of Patent: October 7, 2014
    Assignee: International Business Machines Corporation
    Inventors: Jeffrey D. Brown, Miguel Comparan, Robert A. Shearer, Alfred T. Watson, III
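
A minimal sketch of the majority-voting comparison described above, with scan-chain outputs reduced to bit strings; identifying the first differing bit position stands in for locating a faulty latch or functional unit. The helper name and data representation are invented for the example.

```python
# Illustrative sketch: identical test patterns go to every core's scan chain,
# the result vectors are compared, and majority voting flags the outlier core;
# the differing bit position hints at which latch or functional unit failed.

from collections import Counter

def majority_vote_bist(scan_results):
    # scan_results: core name -> result bit-string from that core's scan chain.
    majority, _ = Counter(scan_results.values()).most_common(1)[0]
    faulty = {}
    for core, result in scan_results.items():
        if result != majority:
            # Record the first differing bit position for fault isolation.
            bit = next(i for i, (a, b) in enumerate(zip(result, majority)) if a != b)
            faulty[core] = bit
    return majority, faulty

if __name__ == "__main__":
    results = {"core0": "101101", "core1": "101101", "core2": "101001", "core3": "101101"}
    majority, faulty = majority_vote_bist(results)
    print("expected:", majority)
    print("outliers (core -> first bad bit):", faulty)   # core2 differs at bit 3
```
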
  • Publication number: 20140281402
    Abstract: A method and circuit arrangement provide support for a hybrid pipeline that dynamically switches between out-of-order and in-order modes. The hybrid pipeline may selectively execute instructions from at least one instruction stream that require the high performance capabilities provided by out-of-order processing in the out-of-order mode. The hybrid pipeline may also execute instructions that have strict power requirements in the in-order mode where the in-order mode conserves more power compared to the out-of-order mode. Each stage in the hybrid pipeline may be activated and fully functional when the hybrid pipeline is in the out-of-order mode. However, stages in the hybrid pipeline not used for the in-order mode may be deactivated and bypassed by the instructions when the hybrid pipeline dynamically switches from the out-of-order mode to the in-order mode. The deactivated stages may then be reactivated when the hybrid pipeline dynamically switches from the in-order mode to the out-of-order mode.
    Type: Application
    Filed: March 13, 2013
    Publication date: September 18, 2014
    Applicant: International Business Machines Corporation
    Inventors: Miguel Comparan, Andrew D. Hilton, Hans M. Jacobson, Brian M. Rogers, Robert A. Shearer, Ken V. Vu, Alfred T. Watson, III
  • Patent number: 8719507
    Abstract: Parallel computing environments, where threads executing in neighboring processors may access the same set of data, may be designed and configured to share one or more levels of cache memory. Before a processor forwards a request for data to a higher level of cache memory following a cache miss, the processor may determine whether a neighboring processor has the data stored in a local cache memory. If so, the processor may forward the request to the neighboring processor to retrieve the data. Because access to the cache memories for the two processors is shared, the effective size of the memory is increased. This may advantageously decrease cache misses for each level of shared cache memory without increasing the individual size of the caches on the processor chip.
    Type: Grant
    Filed: January 4, 2012
    Date of Patent: May 6, 2014
    Assignee: International Business Machines Corporation
    Inventors: Miguel Comparan, Robert A. Shearer
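
The following sketch (the same idea appears in patent 8,719,508 below) models the neighbor lookup: on a local miss the core checks the adjacent core's cache before forwarding the request to the next level. The two-core topology and class names are illustrative assumptions.

```python
# Illustrative sketch: on a local cache miss, a core first asks its neighbor's
# cache for the line before forwarding the request to the next cache level,
# effectively sharing the two private caches and enlarging the usable capacity.

class CoreCache:
    def __init__(self, name):
        self.name = name
        self.lines = {}          # address -> data
        self.neighbor = None     # set after construction

    def lookup(self, address, next_level):
        if address in self.lines:
            return self.lines[address], "local hit"
        if self.neighbor and address in self.neighbor.lines:
            # Neighbor hit: retrieve from the adjacent core's cache instead of
            # going to the (slower) higher level of the hierarchy.
            return self.neighbor.lines[address], "neighbor hit"
        data = next_level[address]
        self.lines[address] = data
        return data, "miss -> next level"

if __name__ == "__main__":
    l2 = {0x80: "shared data"}
    a, b = CoreCache("A"), CoreCache("B")
    a.neighbor, b.neighbor = b, a
    print(a.lookup(0x80, l2))    # miss -> filled from the next level into A's cache
    print(b.lookup(0x80, l2))    # neighbor hit: B finds the line in A's cache
```
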
  • Patent number: 8719508
    Abstract: Parallel computing environments, where threads executing in neighboring processors may access the same set of data, may be designed and configured to share one or more levels of cache memory. Before a processor forwards a request for data to a higher level of cache memory following a cache miss, the processor may determine whether a neighboring processor has the data stored in a local cache memory. If so, the processor may forward the request to the neighboring processor to retrieve the data. Because access to the cache memories for the two processors is shared, the effective size of the memory is increased. This may advantageously decrease cache misses for each level of shared cache memory without increasing the individual size of the caches on the processor chip.
    Type: Grant
    Filed: December 10, 2012
    Date of Patent: May 6, 2014
    Assignee: International Business Machines Corporation
    Inventors: Miguel Comparan, Robert A. Shearer
  • Patent number: 8560897
    Abstract: A technique for managing hard failures in a memory system employing locking is disclosed. An error count is maintained for units of memory within the memory system. When the error count indicates a hard failure, the unit of memory is locked out from further use. An arbitrary set of error counters are assigned to record errors resulting from access to the units of memory. Embodiments of the present invention advantageously enable a system to continue reliable operation even after one or more internal hard memory failures. Other embodiments advantageously enable manufacturers to salvage partially failed devices and deploy the devices as having a lower-performance specification rather than discarding the devices, as would otherwise be indicated by conventional practice.
    Type: Grant
    Filed: December 7, 2010
    Date of Patent: October 15, 2013
    Assignee: International Business Machines Corporation
    Inventors: Miguel Comparan, Mark G. Kupferschmidt, Robert A. Shearer
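
A hedged Python model of the lockout scheme described above: error counters are assigned to memory units on demand, and a unit whose count reaches a threshold is treated as a hard failure and excluded from further use. The threshold value and all names are assumptions for illustration.

```python
# Illustrative model: an error counter is assigned to a memory unit when it
# first reports an error; once the count crosses a hard-failure threshold,
# the unit is locked out and excluded from further allocation.

class HardFailureManager:
    def __init__(self, num_counters, hard_failure_threshold):
        self.counters = {}                 # unit id -> error count (assigned on demand)
        self.num_counters = num_counters
        self.threshold = hard_failure_threshold
        self.locked_out = set()

    def report_error(self, unit):
        if unit not in self.counters and len(self.counters) < self.num_counters:
            self.counters[unit] = 0        # assign a free counter to this unit
        if unit in self.counters:
            self.counters[unit] += 1
            if self.counters[unit] >= self.threshold:
                self.locked_out.add(unit)  # treat as a hard failure: stop using it

    def usable(self, units):
        return [u for u in units if u not in self.locked_out]

if __name__ == "__main__":
    mgr = HardFailureManager(num_counters=4, hard_failure_threshold=3)
    for _ in range(3):
        mgr.report_error("bank2")          # repeated errors on the same unit
    print(mgr.usable(["bank0", "bank1", "bank2", "bank3"]))  # bank2 excluded
```
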
  • Patent number: 8493398
    Abstract: A method and apparatus for processing vector data is provided. A processing core may have a data cache and a relatively smaller vector data cache. The vector data cache may be optimally sized to store vector data structures that are smaller than full data cache lines.
    Type: Grant
    Filed: January 14, 2008
    Date of Patent: July 23, 2013
    Assignee: International Business Machines Corporation
    Inventors: Miguel Comparan, Russell Dean Hoover, Eric Oliver Mejdrich
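
As a rough illustration of the split-cache idea in this abstract, the sketch below routes small vector structures to a separate, smaller vector data cache instead of the full-line data cache. The line and entry sizes chosen here are examples, not values from the patent.

```python
# Illustrative sketch: alongside the regular data cache (full-size lines), a
# smaller vector data cache holds short vector structures so they do not
# occupy, or get evicted from, full data-cache lines.

DATA_CACHE_LINE_BYTES = 128
VECTOR_ENTRY_BYTES = 16        # e.g. a 4-element single-precision vector

class SplitCache:
    def __init__(self):
        self.data_cache = {}       # address -> full cache line
        self.vector_cache = {}     # address -> small vector structure

    def load(self, address, payload):
        # Route small vector structures to the vector data cache; everything
        # else goes to the normal data cache.
        if len(payload) <= VECTOR_ENTRY_BYTES:
            self.vector_cache[address] = payload
        else:
            self.data_cache[address] = payload

if __name__ == "__main__":
    cache = SplitCache()
    cache.load(0x100, bytes(VECTOR_ENTRY_BYTES))      # vector -> vector data cache
    cache.load(0x200, bytes(DATA_CACHE_LINE_BYTES))   # full line -> data cache
    print(len(cache.vector_cache), len(cache.data_cache))   # 1 1
```
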