Patents by Inventor Michael Imbrogno

Michael Imbrogno has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11593175
    Abstract: In general, embodiments are disclosed herein for tracking and allocating graphics hardware resources. In one embodiment, a software and/or firmware process constructs a cross-application command queue utilization table based on one or more specified command queue quality of service (QoS) settings, in order to track the target and current utilization rates of each command queue on the graphics hardware over a given frame and to load work onto the graphics hardware in accordance with the utilization table. Based on the constructed utilization table for a given frame, any command queues that have exceeded their respective target utilization values may be moved to an “inactive” status for the duration of the current frame. For any command queues that remain in an “active” status for the current frame, work from those command queues may be loaded onto slots of the appropriate data masters of the graphics hardware in any desired order.
    Type: Grant
    Filed: May 2, 2022
    Date of Patent: February 28, 2023
    Assignee: Apple Inc.
    Inventors: Kutty Banerjee, Michael Imbrogno
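    A minimal Python sketch of the utilization-table idea described in the abstract above, not the patented implementation: the queue names, QoS shares, and frame budget are illustrative assumptions.

        class UtilizationTable:
            def __init__(self, qos_settings, frame_budget_us):
                # qos_settings: {queue_id: share of the frame budget, summing to <= 1.0}
                self.targets = {q: share * frame_budget_us for q, share in qos_settings.items()}
                self.current = {q: 0.0 for q in qos_settings}
                self.active = set(qos_settings)

            def record_work(self, queue_id, duration_us):
                """Account GPU time consumed by a queue during the current frame."""
                self.current[queue_id] += duration_us
                if self.current[queue_id] > self.targets[queue_id]:
                    # Queue exceeded its target utilization: mark it inactive
                    # for the remainder of the frame.
                    self.active.discard(queue_id)

            def schedulable_queues(self):
                """Queues whose work may still be loaded onto hardware slots."""
                return sorted(self.active)

            def next_frame(self):
                """Reset per-frame accounting; every queue becomes active again."""
                self.current = {q: 0.0 for q in self.current}
                self.active = set(self.current)

        # Example: two application queues sharing a 16.6 ms frame 70/30.
        table = UtilizationTable({"app_a": 0.7, "app_b": 0.3}, frame_budget_us=16_600)
        table.record_work("app_b", 6_000)   # app_b exceeds its ~5 ms target
        print(table.schedulable_queues())   # ['app_a']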
  • Patent number: 11436055
    Abstract: A first command is fetched for execution on a GPU. Dependency information for the first command, which indicates a number of parent commands that the first command depends on, is determined. The first command is inserted into an execution graph based on the dependency information. The execution graph defines an order of execution for plural commands including the first command. The parent commands are configured to be executed on the GPU before the first command is executed. A wait count for the first command, which indicates the number of parent commands of the first command, is determined based on the execution graph. The first command is inserted into cache memory in response to determining that the wait count for the first command is zero or that each of the parent commands that the first command depends on has already been inserted into the cache memory.
    Type: Grant
    Filed: November 19, 2019
    Date of Patent: September 6, 2022
    Assignee: Apple Inc.
    Inventors: Kutty Banerjee, Michael Imbrogno
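    A hedged sketch of the dependency tracking described above: each command carries a wait count equal to its number of parents and is handed to a stand-in for cache memory only once that count reaches zero. Command names are hypothetical.

        from collections import defaultdict

        class ExecutionGraph:
            def __init__(self):
                self.wait_count = {}              # command -> unexecuted parent count
                self.children = defaultdict(list)
                self.in_cache = []                # stands in for the cache memory queue

            def insert(self, command, parents=()):
                self.wait_count[command] = len(parents)
                for p in parents:
                    self.children[p].append(command)
                if self.wait_count[command] == 0:
                    self.in_cache.append(command) # no pending parents: insert now

            def complete(self, command):
                """Called when the GPU finishes a command; release its children."""
                for child in self.children[command]:
                    self.wait_count[child] -= 1
                    if self.wait_count[child] == 0:
                        self.in_cache.append(child)

        g = ExecutionGraph()
        g.insert("blit")                          # no dependencies
        g.insert("render", parents=["blit"])
        g.insert("present", parents=["render"])
        print(g.in_cache)                         # ['blit']
        g.complete("blit")
        print(g.in_cache)                         # ['blit', 'render']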
  • Patent number: 11430174
    Abstract: Techniques are disclosed relating to specifying memory consistency constraints. In some embodiments, an instruction may specify, for a memory operation, a type of memory consistency and a scope at which to enforce the type of consistency. For example, these fields may specify whether to sequence memory accesses relative to the operation at one or more of multiple different cache levels based on the type of memory consistency and the scope.
    Type: Grant
    Filed: January 15, 2021
    Date of Patent: August 30, 2022
    Assignee: Apple Inc.
    Inventors: Terence M. Potter, Richard W. Schreyer, James J. Ding, Alexander K. Kan, Michael Imbrogno
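    A brief sketch of the idea in the abstract above: a memory operation carries both a consistency type and a scope, and the pair determines the cache levels at which accesses are sequenced. The level names and the mapping are illustrative assumptions, not the patented instruction encoding.

        from enum import Enum

        class Consistency(Enum):
            RELAXED = "relaxed"
            ACQUIRE = "acquire"
            RELEASE = "release"

        class Scope(Enum):
            THREADGROUP = 1   # order within the local (L1-level) cache only
            DEVICE = 2        # order out to the device-level (L2) cache as well

        def cache_levels_to_order(consistency, scope):
            """Return the cache levels at which this operation must be sequenced."""
            if consistency is Consistency.RELAXED:
                return []                     # no ordering enforced
            levels = ["L1"]
            if scope is Scope.DEVICE:
                levels.append("L2")
            return levels

        print(cache_levels_to_order(Consistency.ACQUIRE, Scope.THREADGROUP))  # ['L1']
        print(cache_levels_to_order(Consistency.RELEASE, Scope.DEVICE))       # ['L1', 'L2']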
  • Publication number: 20220261290
    Abstract: In general, embodiments are disclosed herein for tracking and allocating graphics hardware resources. In one embodiment, a software and/or firmware process constructs a cross-application command queue utilization table based on one or more specified command queue quality of service (QoS) settings, in order to track the target and current utilization rates of each command queue on the graphics hardware over a given frame and to load work onto the graphics hardware in accordance with the utilization table. Based on the constructed utilization table for a given frame, any command queues that have exceeded their respective target utilization values may be moved to an “inactive” status for the duration of the current frame. For any command queues that remain in an “active” status for the current frame, work from those command queues may be loaded onto slots of the appropriate data masters of the graphics hardware in any desired order.
    Type: Application
    Filed: May 2, 2022
    Publication date: August 18, 2022
    Inventors: Kutty Banerjee, Michael Imbrogno
  • Patent number: 11403223
    Abstract: Systems, methods, and computer readable media to manage memory cache for graphics processing are described. A processor creates a resource group for a plurality of graphics application program interface (API) resources. The processor subsequently encodes a set command that references the resource group within a command buffer and assigns a data set identifier (DSID) to the resource group. The processor also encodes a write command within the command buffer that causes the graphics processor to write data within a cache line and mark the written cache line with the DSID, a read command that causes the graphics processor to read data written into the resource group, and a de-prioritize command that causes the graphics processor to notify the memory cache to later flush content from the cache line associated with the DSID and to later invalidate the cache line when higher priority content is received.
    Type: Grant
    Filed: February 6, 2020
    Date of Patent: August 2, 2022
    Assignee: Apple Inc.
    Inventors: Rohan Sehgal, Michael Imbrogno
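    An illustrative Python sketch of the command-buffer flow in the abstract above: a resource group is assigned a data set identifier (DSID), writes tag cache lines with that DSID, and a de-prioritize command marks those lines as the first candidates for flushing and invalidation. The command names are hypothetical, not an actual GPU command stream.

        class CommandBuffer:
            def __init__(self):
                self.commands = []
                self._next_dsid = 0

            def set_resource_group(self, resources):
                """Encode a set command and assign a DSID to the resource group."""
                dsid = self._next_dsid
                self._next_dsid += 1
                self.commands.append(("set_group", dsid, tuple(resources)))
                return dsid

            def write(self, dsid, resource, data):
                # The GPU would write the data and mark the cache line with the DSID.
                self.commands.append(("write", dsid, resource, data))

            def read(self, dsid, resource):
                self.commands.append(("read", dsid, resource))

            def deprioritize(self, dsid):
                # Hint to the memory cache: lines tagged with this DSID may be
                # flushed and invalidated when higher-priority content arrives.
                self.commands.append(("deprioritize", dsid))

        cb = CommandBuffer()
        group = cb.set_resource_group(["shadow_map", "gbuffer"])
        cb.write(group, "shadow_map", b"...")
        cb.read(group, "shadow_map")
        cb.deprioritize(group)
        print(cb.commands)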
  • Patent number: 11321134
    Abstract: In general, embodiments are disclosed herein for tracking and allocating graphics hardware resources. In one embodiment, a software and/or firmware process constructs a cross-application command queue utilization table based on one or more specified command queue quality of service (QoS) settings, in order to track the target and current utilization rates of each command queue on the graphics hardware over a given frame and to load work onto the graphics hardware in accordance with the utilization table. Based on the constructed utilization table for a given frame, any command queues that have exceeded their respective target utilization values may be moved to an “inactive” status for the duration of the current frame. For any command queues that remain in an “active” status for the current frame, work from those command queues may be loaded onto slots of the appropriate data masters of the graphics hardware in any desired order.
    Type: Grant
    Filed: February 20, 2020
    Date of Patent: May 3, 2022
    Assignee: Apple Inc.
    Inventors: Kutty Banerjee, Michael Imbrogno
  • Publication number: 20220118379
    Abstract: Method of separation of a radiometal ion from a target metal ion, comprising a first liquid-liquid extraction step in which an organic phase comprising an extractant and an interfacial tension modifier is mixed with an aqueous phase comprising the radiometal ion and the target metal ion in order that the radiometal ion is at least partially transferred to the organic phase, followed by a first phase separation step, wherein the phase separation is carried out in flow comprising the use of a microfiltration membrane to separate the phases based on the interfacial tension between the phases such that a permeate phase passes through the membrane and a retentate phase does not.
    Type: Application
    Filed: August 6, 2019
    Publication date: April 21, 2022
    Inventors: Fedor Zhuravlev, Kristina Søborg Pedersen, Jesper Fonslet, Joseph Michael Imbrogno, Andrea Adamo, Klavs F. Jensen
  • Patent number: 11237967
    Abstract: Systems, methods, and computer readable media to manage memory cache for graphics processing are described. A processor creates a resource group for a plurality of graphics application program interface (API) resources. The processor subsequently encodes a set command that references the resource group within a command buffer and assigns a data set identifier (DSID) to the resource group. The processor also encodes a write command within the command buffer that causes the graphics processor to write data within a cache line and mark the written cache line with the DSID, a read command that causes the graphics processor to read data written into the resource group, and a de-prioritize command that causes the graphics processor to notify the memory cache to later flush content from the cache line associated with the DSID and to later invalidate the cache line when higher priority content is received.
    Type: Grant
    Filed: February 6, 2020
    Date of Patent: February 1, 2022
    Assignee: Apple Inc.
    Inventors: Rohan Sehgal, Michael Imbrogno
  • Patent number: 11120591
    Abstract: One disclosed embodiment includes a method of graphics processing. The method includes receiving a first function, wherein the first function indicates a desired sampling rate for image content, wherein the desired sampling rate differs between a first location along a first axial direction and a second location along the first axial direction, and wherein the image content is divided into a plurality of tiles, determining a first rasterization rate for each tile of the plurality of tiles based, at least in part, on the desired sampling rate indicated by the first function corresponding to each respective tile, receiving one or more primitives associated with content for display, rasterizing at least a portion of a primitive associated with a respective tile based, at least in part, on the determined first rasterization rate for the respective tile, and displaying an image based on the rasterized portion of the primitive.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: September 14, 2021
    Assignee: Apple Inc.
    Inventors: Michal Valient, Michael Imbrogno, Rohan Sehgal, Kyle C. Piddington, Matthijs L. van der Meide
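    A minimal sketch of the per-tile rasterization-rate idea described above: a function gives the desired sampling rate along one axis, and each screen tile is assigned the rate sampled at its center. The quality function and tile size are made-up examples (a foveated falloff toward the screen edges).

        def quality(x_norm):
            """Desired sampling rate along the horizontal axis (1.0 = full rate)."""
            return max(0.25, 1.0 - abs(x_norm - 0.5) * 1.5)

        def tile_rates(screen_width, tile_width):
            rates = []
            for tile_x in range(0, screen_width, tile_width):
                center = (tile_x + tile_width / 2) / screen_width
                rates.append(round(quality(center), 2))
            return rates

        # A 1024-pixel-wide image divided into 128-pixel tiles: edge tiles are
        # rasterized at a fraction of the rate used for the central tiles.
        print(tile_rates(1024, 128))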
  • Patent number: 11094036
    Abstract: The disclosure pertains to techniques for operation of graphics systems and task execution on a graphics processor.
    Type: Grant
    Filed: April 16, 2020
    Date of Patent: August 17, 2021
    Assignee: Apple Inc.
    Inventors: Michal Valient, Sean P. James, Gokhan Avkarogullari, Alexander K. Kan, Michael Imbrogno
  • Patent number: 11010863
    Abstract: A computer-implemented technique for accessing textures by a graphics processing unit (GPU) includes determining a frequency with which a first texture is expected to be accessed by an application executing on a GPU, determining a frequency with which a second texture is expected to be accessed by an application executing on the GPU, determining to load memory address information associated with the first texture into a GPU register when the frequency is greater than or equal to a threshold frequency value, determining to load memory address information associated with the second texture into a buffer memory when the frequency is less than the threshold frequency value, receiving a draw call utilizing the first and second textures, and rendering the draw call using the first texture by accessing the memory address information in the GPU register and the second texture by accessing the memory address information in the buffer memory.
    Type: Grant
    Filed: February 10, 2020
    Date of Patent: May 18, 2021
    Assignee: Apple Inc.
    Inventors: Michael Imbrogno, Sivert Berg, Nicholas H. Smith
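    A hedged sketch of the placement decision in the abstract above: textures expected to be accessed at or above a threshold frequency have their address information placed in scarce GPU registers, while the rest are reached through a buffer in memory. The frequencies and threshold are invented numbers.

        def plan_texture_bindings(expected_accesses, threshold):
            """Decide, per texture, whether its address goes in a register or a buffer."""
            placement = {}
            for texture, freq in expected_accesses.items():
                placement[texture] = "gpu_register" if freq >= threshold else "buffer_memory"
            return placement

        bindings = plan_texture_bindings(
            {"albedo": 12.0, "normal_map": 8.0, "rarely_used_lut": 0.2},
            threshold=1.0,
        )
        print(bindings)
        # {'albedo': 'gpu_register', 'normal_map': 'gpu_register',
        #  'rarely_used_lut': 'buffer_memory'}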
  • Publication number: 20210134045
    Abstract: Techniques are disclosed relating to specifying memory consistency constraints. In some embodiments, an instruction may specify, for a memory operation, a type of memory consistency and a scope at which to enforce the type of consistency. For example, these fields may specify whether to sequence memory accesses relative to the operation at one or more of multiple different cache levels based on the type of memory consistency and the scope.
    Type: Application
    Filed: January 15, 2021
    Publication date: May 6, 2021
    Inventors: Terence M. Potter, Richard W. Schreyer, James J. Ding, Alexander K. Kan, Michael Imbrogno
  • Publication number: 20210096921
    Abstract: A first command is fetched for execution on a GPU. Dependency information for the first command, which indicates a number of parent commands that the first command depends on, is determined. The first command is inserted into an execution graph based on the dependency information. The execution graph defines an order of execution for plural commands including the first command. The parent commands are configured to be executed on the GPU before the first command is executed. A wait count for the first command, which indicates the number of parent commands of the first command, is determined based on the execution graph. The first command is inserted into cache memory in response to determining that the wait count for the first command is zero or that each of the parent commands that the first command depends on has already been inserted into the cache memory.
    Type: Application
    Filed: November 19, 2019
    Publication date: April 1, 2021
    Inventors: Kutty Banerjee, Michael Imbrogno
  • Publication number: 20210097643
    Abstract: A computer-implemented technique for accessing textures by a graphics processing unit (GPU) includes determining a frequency with which a first texture is expected to be accessed by an application executing on a GPU, determining a frequency with which a second texture is expected to be accessed by an application executing on the GPU, determining to load memory address information associated with the first texture into a GPU register when the frequency is greater than or equal to a threshold frequency value, determining to load memory address information associated with the second texture into a buffer memory when the frequency is less than the threshold frequency value, receiving a draw call utilizing the first and second textures, and rendering the draw call using the first texture by accessing the memory address information in the GPU register and the second texture by accessing the memory address information in the buffer memory.
    Type: Application
    Filed: February 10, 2020
    Publication date: April 1, 2021
    Inventors: Michael Imbrogno, Sivert Berg, Nicholas H. Smith
  • Publication number: 20210096994
    Abstract: Systems, methods, and computer readable media to manage memory cache for graphics processing are described. A processor creates a resource group for a plurality of graphics application program interface (API) resources. The processor subsequently encodes a set command that references the resource group within a command buffer and assigns a data set identifier (DSID) to the resource group. The processor also encodes a write command within the command buffer that causes the graphics processor to write data within a cache line and mark the written cache line with the DSID, a read command that causes the graphics processor to read data written into the resource group, and a de-prioritize command that causes the graphics processor to notify the memory cache to later flush content from the cache line associated with the DSID and to later invalidate the cache line when higher priority content is received.
    Type: Application
    Filed: February 6, 2020
    Publication date: April 1, 2021
    Inventors: Rohan Sehgal, Michael Imbrogno
  • Patent number: 10930047
    Abstract: Techniques are disclosed relating to synchronizing access to pixel resources. Examples of pixel resources include color attachments, a stencil buffer, and a depth buffer. In some embodiments, hardware registers are used to track the status of assigned pixel resources, and pixel wait and pixel release instructions are used to synchronize access to the pixel resources. In some embodiments, other accesses to the pixel resources may occur out of program order. Relative to tracking and ordering pass groups, this weak ordering and explicit synchronization may improve performance and reduce power consumption. Disclosed techniques may also facilitate coordination between fragment rendering threads and auxiliary mid-render compute tasks.
    Type: Grant
    Filed: December 9, 2019
    Date of Patent: February 23, 2021
    Assignee: Apple Inc.
    Inventors: Terence M. Potter, Richard W. Schreyer, James J. Ding, Alexander K. Kan, Michael Imbrogno
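    An illustrative sketch, not the hardware design, of the explicit synchronization described above: a pass acquires a pixel resource before writing it, a later consumer issues a pixel wait before reading it, and a pixel release unblocks that consumer, while all other accesses may proceed out of program order. A threading event stands in for the hardware register tracking.

        import threading

        class PixelResource:
            def __init__(self, name):
                self.name = name
                self._released = threading.Event()
                self._released.set()        # initially available

            def pixel_acquire(self):
                """Mark the resource as having a write outstanding."""
                self._released.clear()

            def pixel_wait(self):
                """Block until the outstanding write has been released."""
                self._released.wait()

            def pixel_release(self):
                """Signal that writes to this resource are complete."""
                self._released.set()

        depth = PixelResource("depth_buffer")

        def fragment_pass():
            # ... write depth values; unrelated work may run out of order ...
            depth.pixel_release()

        depth.pixel_acquire()               # a depth write is now outstanding
        threading.Thread(target=fragment_pass).start()
        depth.pixel_wait()                  # only this read is ordered after the write
        # ... mid-render compute task reads depth values here ...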
  • Patent number: 10896525
    Abstract: This disclosure includes example embodiments of graphics processor memory management systems that support the use of graphical textures that are not fully bound or “backed” in memory throughout their entire lifespans. Such graphical textures are referred to herein as “sparse textures.” According to some embodiments, sparse textures may be split into fixed-dimension pages in memory, wherein, during execution, a user may indicate a desire to map certain pages to physical memory locations and populate such pages with the underlying data. In other embodiments, statistical information obtained from the graphics processor is used to aid in the determination of whether or not a given texture (or portion of a texture) needs physical memory backing. In yet other embodiments, the graphics processor may also enforce ordering guarantees, e.g., in instances when fewer pages are available in memory than are needed for backing at a given moment in time.
    Type: Grant
    Filed: May 31, 2019
    Date of Patent: January 19, 2021
    Assignee: Apple Inc.
    Inventors: Michal Valient, Michael Imbrogno, Karol E. Czaradzki, Narayanan Swaminathan
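    A minimal sketch of the sparse-texture idea described above: the texture is split into fixed-size pages, only pages the application explicitly maps are backed by physical memory, and accesses to unmapped pages are counted as feedback rather than faulting. The page size and the counters are illustrative assumptions.

        class SparseTexture:
            PAGE = 64 * 1024                                # fixed page size in bytes

            def __init__(self, size_bytes):
                self.num_pages = -(-size_bytes // self.PAGE)        # ceiling division
                self.backing = {}                           # page index -> bytearray
                self.unmapped_accesses = [0] * self.num_pages       # feedback for the app

            def map_page(self, page_index):
                """Bind one page of the texture to physical memory."""
                self.backing.setdefault(page_index, bytearray(self.PAGE))

            def access(self, offset):
                page = offset // self.PAGE
                if page not in self.backing:
                    # No physical backing: record the miss so the application (or a
                    # statistics-driven policy) can decide whether to map this page.
                    self.unmapped_accesses[page] += 1
                    return None
                return self.backing[page][offset % self.PAGE]

        tex = SparseTexture(size_bytes=1_000_000)           # ~16 pages, none backed yet
        tex.access(200_000)                                 # touches an unmapped page
        tex.map_page(200_000 // SparseTexture.PAGE)
        print(tex.unmapped_accesses[:5])                    # [0, 0, 0, 1, 0]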
  • Publication number: 20200380734
    Abstract: This disclosure includes example embodiments of graphics processor memory management systems that support the use of graphical textures that are not fully bound or “backed” in memory throughout their entire lifespans. Such graphical textures are referred to herein as “sparse textures.” According to some embodiments, sparse textures may be split into fixed-dimension pages in memory, wherein, during execution, a user may indicate a desire to map certain pages to physical memory locations and populate such pages with the underlying data. In other embodiments, statistical information obtained from the graphics processor is used to aid in the determination of whether or not a given texture (or portion of a texture) needs physical memory backing. In yet other embodiments, the graphics processor may also enforce ordering guarantees, e.g., in instances when fewer pages are available in memory than are needed for backing at a given moment in time.
    Type: Application
    Filed: May 31, 2019
    Publication date: December 3, 2020
    Inventors: Michal Valient, Michael Imbrogno, Karol E. Czaradzki, Narayanan Swaminathan
  • Publication number: 20200380744
    Abstract: One disclosed embodiment includes a method of graphics processing. The method includes receiving a first function, wherein the first function indicates a desired sampling rate for image content, wherein the desired sampling rate differs between a first location along a first axial direction and a second location along the first axial direction, and wherein the image content is divided into a plurality of tiles, determining a first rasterization rate for each tile of the plurality of tiles based, at least in part, on the desired sampling rate indicated by the first function corresponding to each respective tile, receiving one or more primitives associated with content for display, rasterizing at least a portion of a primitive associated with a respective tile based, at least in part, on the determined first rasterization rate for the respective tile, and displaying an image based on the rasterized portion of the primitive.
    Type: Application
    Filed: May 31, 2019
    Publication date: December 3, 2020
    Inventors: Michal Valient, Michael Imbrogno, Rohan Sehgal, Kyle C. Piddington, Matthijs L. van der Meide
  • Publication number: 20200379815
    Abstract: In general, embodiments are disclosed herein for tracking and allocating graphics hardware resources. In one embodiment, a software and/or firmware process constructs a cross-application command queue utilization table based on one or more specified command queue quality of service (QoS) settings, in order to track the target and current utilization rates of each command queue on the graphics hardware over a given frame and to load work onto the graphics hardware in accordance with the utilization table. Based on the constructed utilization table for a given frame, any command queues that have exceeded their respective target utilization values may be moved to an “inactive” status for the duration of the current frame. For any command queues that remain in an “active” status for the current frame, work from those command queues may be loaded onto slots of the appropriate data masters of the graphics hardware in any desired order.
    Type: Application
    Filed: February 20, 2020
    Publication date: December 3, 2020
    Inventors: Kutty Banerjee, Michael Imbrogno