Patents by Inventor Jeremy Sandmel

Jeremy Sandmel has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9411550
    Abstract: A data processing system composites graphics content, generated by an application program running on the data processing system, to generate image data. The data processing system stores the image data in a first framebuffer and displays an image generated from the image data in the first framebuffer on an internal display device of the data processing system. A scaler in the data processing system performs scaling operations on the image data in the first framebuffer, stores the scaled image data in a second framebuffer and displays an image generated from the scaled image data in the second framebuffer on an external display device coupled to the data processing system. The scaler performs the scaling operations asynchronously with respect to the compositing of the graphics content. The data processing system automatically mirrors the image on the external display device unless the application program is publishing additional graphics content for display on the external display device.
    Type: Grant
    Filed: January 20, 2015
    Date of Patent: August 9, 2016
    Assignee: Apple Inc.
    Inventors: John S. Harper, Kenneth C. Dyke, Jeremy Sandmel
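    A minimal sketch of the asynchronous mirroring flow described in the abstract above, assuming a toy Framebuffer type and a software nearest-neighbour scale in place of the hardware scaler: the compositor writes the internal (first) framebuffer and returns, while a separate scaler thread mirrors it into the external (second) framebuffer unless the application publishes its own external content. All names are invented for illustration; this is not Apple's implementation.
    ```c
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>
    #include <unistd.h>

    typedef struct { uint32_t *pixels; int w, h; } Framebuffer;

    static Framebuffer internal_fb, external_fb;   /* first and second framebuffers  */
    static bool app_publishes_external = false;    /* app-supplied external content? */

    /* Nearest-neighbour scale from src into dst; stands in for the hardware scaler. */
    static void scale(const Framebuffer *src, Framebuffer *dst) {
        for (int y = 0; y < dst->h; y++)
            for (int x = 0; x < dst->w; x++)
                dst->pixels[y * dst->w + x] =
                    src->pixels[(y * src->h / dst->h) * src->w + (x * src->w / dst->w)];
    }

    /* The scaler runs on its own thread, asynchronously to compositing. */
    static void *scaler_thread(void *arg) {
        (void)arg;
        for (;;) {
            if (!app_publishes_external)            /* mirror only while the app is not  */
                scale(&internal_fb, &external_fb);  /* publishing external content       */
            usleep(16000);                          /* stand-in for waiting on the vsync */
        }
        return NULL;
    }

    /* Compositor: blends application content into the internal framebuffer and
     * returns immediately; it never waits on the scaler. */
    static void composite_frame(const uint32_t *app_layer, size_t bytes) {
        memcpy(internal_fb.pixels, app_layer, bytes);
    }

    int main(void) {
        static uint32_t in_px[640 * 480], ext_px[1280 * 720], app[640 * 480];
        internal_fb = (Framebuffer){ in_px, 640, 480 };
        external_fb = (Framebuffer){ ext_px, 1280, 720 };

        pthread_t scaler;
        pthread_create(&scaler, NULL, scaler_thread, NULL);
        composite_frame(app, sizeof(app));          /* compositing proceeds independently */
        sleep(1);                                   /* let the mirror run for the demo    */
        return 0;
    }
    ```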
  • Publication number: 20160217011
    Abstract: A method and an apparatus that execute a parallel computing program in a programming language for a parallel computing architecture are described. The parallel computing program is stored in memory in a system with parallel processors. The system includes a host processor, a graphics processing unit (GPU) coupled to the host processor and a memory coupled to at least one of the host processor and the GPU. The parallel computing program is stored in the memory to allocate threads between the host processor and the GPU. The programming language includes an API to allow an application to make calls using the API to allocate execution of the threads between the host processor and the GPU. The programming language includes host function data tokens for host functions performed in the host processor and kernel function data tokens for compute kernel functions performed in one or more compute processors, e.g. GPUs or CPUs, separate from the host processor.
    Type: Application
    Filed: January 27, 2016
    Publication date: July 28, 2016
    Inventors: Aaftab Munshi, Jeremy Sandmel
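    The abstract above describes a host API for dividing execution between a host processor and a GPU. As an illustration only, the sketch below uses the publicly documented OpenCL host API (clGetDeviceIDs, clEnqueueNDRangeKernel, and related calls), which follows this host-function/compute-kernel split; it is a hedged example of the pattern, not the patented implementation, and error checking is omitted for brevity.
    ```c
    #include <stdio.h>
    #ifdef __APPLE__
    #include <OpenCL/opencl.h>
    #else
    #include <CL/cl.h>
    #endif

    /* A compute kernel: runs on whichever compute device the host enqueues it to. */
    static const char *kernel_src =
        "__kernel void square(__global float *v) {"
        "  size_t i = get_global_id(0);"
        "  v[i] = v[i] * v[i];"
        "}";

    int main(void) {
        cl_platform_id platform;
        clGetPlatformIDs(1, &platform, NULL);

        /* Ask for a GPU device, falling back to the CPU: the host decides where
         * the kernel's work-items (threads) execute. */
        cl_device_id device;
        if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL) != CL_SUCCESS)
            clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);

        cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, device, 0, NULL);

        cl_program prog = clCreateProgramWithSource(ctx, 1, &kernel_src, NULL, NULL);
        clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "square", NULL);

        float data[64];
        for (int i = 0; i < 64; i++) data[i] = (float)i;
        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                    sizeof(data), data, NULL);
        clSetKernelArg(k, 0, sizeof(buf), &buf);

        /* Host function (this code) runs on the host processor; the kernel's 64
         * work-items run on the selected compute device. */
        size_t global = 64;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, NULL, 0, NULL, NULL);
        clEnqueueReadBuffer(q, buf, CL_TRUE, 0, sizeof(data), data, 0, NULL, NULL);
        printf("data[8] squared = %f\n", data[8]);
        return 0;
    }
    ```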
  • Patent number: 9304834
    Abstract: A method and an apparatus that schedule a plurality of executables in a schedule queue for execution in one or more physical compute devices such as CPUs or GPUs concurrently are described. One or more executables are compiled online from a source having an existing executable for a type of physical compute device different from the one or more physical compute devices. Dependency relations among elements corresponding to scheduled executables are determined to select an executable to be executed by a plurality of threads concurrently in more than one of the physical compute devices. A thread initialized for executing an executable in a GPU of the physical compute devices is initialized for execution in a CPU of the physical compute devices instead if the GPU is busy with graphics processing threads.
    Type: Grant
    Filed: September 13, 2012
    Date of Patent: April 5, 2016
    Assignee: Apple Inc.
    Inventors: Aaftab Munshi, Jeremy Sandmel
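    A hedged sketch of the scheduling idea in the abstract above: a toy schedule queue whose entries record a preferred device and a dependency, with the scheduler re-targeting GPU work to the CPU when the GPU is busy with graphics. The Executable type, select_device, and every other name here are invented for illustration and assume a CPU-compatible executable was compiled online from the same source.
    ```c
    #include <stdbool.h>
    #include <stdio.h>

    typedef enum { DEV_CPU, DEV_GPU } Device;

    typedef struct {
        const char *name;
        Device preferred;     /* device the executable was compiled for */
        int depends_on;       /* index of a prerequisite entry, or -1   */
        bool done;
    } Executable;

    static bool gpu_busy_with_graphics = true;   /* e.g. GPU owned by the compositor */

    /* Pick the device: honour the preference unless the GPU is busy, in which
     * case re-target the work to the CPU. */
    static Device select_device(const Executable *e) {
        if (e->preferred == DEV_GPU && gpu_busy_with_graphics)
            return DEV_CPU;
        return e->preferred;
    }

    int main(void) {
        Executable queue[] = {
            { "prepare-buffers", DEV_CPU, -1, false },
            { "compute-kernel",  DEV_GPU,  0, false },   /* depends on entry 0 */
        };
        const int n = sizeof(queue) / sizeof(queue[0]);

        /* Single pass for the demo; a real scheduler loops until the queue drains. */
        for (int i = 0; i < n; i++) {
            Executable *e = &queue[i];
            if (e->depends_on >= 0 && !queue[e->depends_on].done)
                continue;                          /* dependency not satisfied yet */
            Device d = select_device(e);
            printf("run %s on %s\n", e->name, d == DEV_GPU ? "GPU" : "CPU");
            e->done = true;
        }
        return 0;
    }
    ```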
  • Patent number: 9292340
    Abstract: A method and an apparatus that execute a parallel computing program in a programming language for a parallel computing architecture are described. The parallel computing program is stored in memory in a system with parallel processors. The parallel computing program is stored in a memory to allocate threads between a host processor and a GPU. The programming language includes an API to allow an application to make calls using the API to allocate execution of the threads between the host processor and the GPU. The programming language includes host function data tokens for host functions performed in the host processor and kernel function data tokens for compute kernel functions performed in one or more compute processors, e.g. GPUs or CPUs, separate from the host processor.
    Type: Grant
    Filed: December 20, 2012
    Date of Patent: March 22, 2016
    Assignee: Apple Inc.
    Inventors: Aaftab AbdulLatif Munshi, Jeremy Sandmel
  • Patent number: 9257101
    Abstract: A method of processing a frame of graphics for display, and an electronic device employing the method, are provided. The method includes developing a frame in a first software frame processing stage following a first vertical blanking (VBL) heartbeat, issuing a command indicating the first stage is complete, and performing a final software frame processing stage without waiting for a subsequent VBL heartbeat. The method may alternatively include performing the final software frame processing stage regardless of whether a target framebuffer is available, performing all but the final hardware frame processing stage regardless of whether the target framebuffer is in use, and performing the final hardware processing stage if the target framebuffer is not in use.
    Type: Grant
    Filed: September 14, 2012
    Date of Patent: February 9, 2016
    Assignee: Apple Inc.
    Inventors: Ian Hendry, Jeffry Gonion, Jeremy Sandmel
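    A hedged sketch of the pipelining described in the abstract above, using invented placeholder functions: after the first software stage completes, the final software stage runs immediately instead of waiting for the next vertical blanking (VBL) heartbeat, and only the final hardware step checks whether the target framebuffer is free.
    ```c
    #include <stdbool.h>
    #include <stdio.h>

    static bool target_framebuffer_in_use = false;

    static void first_software_stage(int frame)  { printf("frame %d: build scene\n", frame); }
    static void final_software_stage(int frame)  { printf("frame %d: encode commands\n", frame); }
    static void early_hardware_stages(int frame) { printf("frame %d: render to temp surface\n", frame); }

    static void final_hardware_stage(int frame) {
        if (!target_framebuffer_in_use)           /* present only when the target is free */
            printf("frame %d: present\n", frame);
    }

    int main(void) {
        for (int frame = 0; frame < 3; frame++) {
            first_software_stage(frame);          /* begins after a VBL heartbeat         */
            /* issue the "stage complete" command, then continue without waiting
             * for the next VBL: */
            final_software_stage(frame);
            early_hardware_stages(frame);         /* proceed even if the target is in use */
            final_hardware_stage(frame);
        }
        return 0;
    }
    ```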
  • Patent number: 9250956
    Abstract: A method and an apparatus that execute a parallel computing program in a programming language for a parallel computing architecture are described. The parallel computing program is stored in memory in a system with parallel processors. The system includes a host processor, a graphics processing unit (GPU) coupled to the host processor and a memory coupled to at least one of the host processor and the GPU. The parallel computing program is stored in the memory to allocate threads between the host processor and the GPU. The programming language includes an API to allow an application to make calls using the API to allocate execution of the threads between the host processor and the GPU. The programming language includes host function data tokens for host functions performed in the host processor and kernel function data tokens for compute kernel functions performed in one or more compute processors, e.g. GPUs or CPUs, separate from the host processor.
    Type: Grant
    Filed: January 24, 2014
    Date of Patent: February 2, 2016
    Assignee: Apple Inc.
    Inventors: Aaftab AbdulLatif Munshi, Jeremy Sandmel
  • Patent number: 9207971
    Abstract: A method and an apparatus that allocate one or more physical compute devices such as CPUs or GPUs attached to a host processing unit running an application for executing one or more threads of the application are described. The allocation may be based on data representing a processing capability requirement from the application for executing an executable in the one or more threads. A compute device identifier may be associated with the allocated physical compute devices to schedule and execute the executable in the one or more threads concurrently in one or more of the allocated physical compute devices.
    Type: Grant
    Filed: September 13, 2012
    Date of Patent: December 8, 2015
    Assignee: Apple Inc.
    Inventors: Aaftab Munshi, Jeremy Sandmel
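    The abstract above describes allocating compute devices from an application-supplied capability requirement and binding later scheduling to a compute device identifier. As an illustration, the sketch below expresses that pattern with the standard OpenCL host API, using a minimum compute-unit count as the (assumed) capability requirement; it is not a claim about the patented method.
    ```c
    #include <stdio.h>
    #ifdef __APPLE__
    #include <OpenCL/opencl.h>
    #else
    #include <CL/cl.h>
    #endif

    /* Return a device identifier satisfying the application's requirement, or NULL. */
    static cl_device_id allocate_device(cl_uint min_compute_units) {
        cl_platform_id platform;
        clGetPlatformIDs(1, &platform, NULL);

        cl_device_id devices[8];
        cl_uint count = 0;
        clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 8, devices, &count);

        for (cl_uint i = 0; i < count; i++) {
            cl_uint units = 0;
            clGetDeviceInfo(devices[i], CL_DEVICE_MAX_COMPUTE_UNITS,
                            sizeof(units), &units, NULL);
            if (units >= min_compute_units)
                return devices[i];             /* identifier used for later scheduling */
        }
        return NULL;
    }

    int main(void) {
        cl_device_id dev = allocate_device(4);
        if (dev == NULL) { puts("no device meets the requirement"); return 1; }

        /* Subsequent scheduling and execution are bound to the returned identifier. */
        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        cl_command_queue q = clCreateCommandQueue(ctx, dev, 0, NULL);
        (void)ctx; (void)q;
        puts("device allocated and command queue created");
        return 0;
    }
    ```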
  • Publication number: 20150317192
    Abstract: A method and an apparatus that schedule a plurality of executables in a schedule queue for execution in one or more physical compute devices such as CPUs or GPUs concurrently are described. One or more executables are compiled online from a source having an existing executable for a type of physical compute device different from the one or more physical compute devices. Dependency relations among elements corresponding to scheduled executables are determined to select an executable to be executed by a plurality of threads concurrently in more than one of the physical compute devices. A thread initialized for executing an executable in a GPU of the physical compute devices is initialized for execution in a CPU of the physical compute devices instead if the GPU is busy with graphics processing threads.
    Type: Application
    Filed: May 15, 2015
    Publication date: November 5, 2015
    Inventors: Aaftab Munshi, Jeremy Sandmel
  • Patent number: 9058224
    Abstract: A plurality of asynchronous command streams are established. A first command stream shares a common resource with a second command stream. A synchronization object is incorporated into the first command stream. A central server arbitrates serialization of the first and second command streams using the synchronization object. The central server arbitrates serialization without direct communication between the first and second command streams.
    Type: Grant
    Filed: June 3, 2011
    Date of Patent: June 16, 2015
    Assignee: Apple Inc.
    Inventors: Jeremy Sandmel, Kenneth Christian Dyke, Gokhan Avkarogullari, Richard Schreyer
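    A hedged sketch of the arbitration model in the abstract above: two command streams share a resource, each command carries an optional synchronization object, and a central server decides the interleaving so the streams never communicate directly. The SyncObject and Command types and the server loop are invented for illustration.
    ```c
    #include <stdbool.h>
    #include <stdio.h>

    typedef struct { int value; } SyncObject;          /* signalled when work completes */

    typedef struct {
        const char *stream;      /* which command stream submitted this            */
        const char *command;
        SyncObject *waits_on;    /* NULL, or a sync object that must be signalled  */
        SyncObject *signals;     /* NULL, or a sync object to signal afterwards    */
    } Command;

    /* Central server: executes commands in submission order, but holds back any
     * command whose synchronization object has not been signalled yet. */
    static void server_arbitrate(Command *cmds, int n) {
        bool progress = true;
        while (progress) {
            progress = false;
            for (int i = 0; i < n; i++) {
                if (cmds[i].command == NULL) continue;                 /* already run */
                if (cmds[i].waits_on && cmds[i].waits_on->value == 0) continue;
                printf("[%s] %s\n", cmds[i].stream, cmds[i].command);
                if (cmds[i].signals) cmds[i].signals->value = 1;
                cmds[i].command = NULL;
                progress = true;
            }
        }
    }

    int main(void) {
        SyncObject s = { 0 };
        Command cmds[] = {
            { "stream B", "read shared texture",  &s,   NULL },  /* must wait         */
            { "stream A", "write shared texture", NULL, &s   },  /* signals when done */
        };
        server_arbitrate(cmds, 2);   /* prints A's write before B's read */
        return 0;
    }
    ```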
  • Patent number: 9052948
    Abstract: A method and an apparatus that schedule a plurality of executables in a schedule queue for execution in one or more physical compute devices such as CPUs or GPUs concurrently are described. One or more executables are compiled online from a source having an existing executable for a type of physical compute device different from the one or more physical compute devices. Dependency relations among elements corresponding to scheduled executables are determined to select an executable to be executed by a plurality of threads concurrently in more than one of the physical compute devices. A thread initialized for executing an executable in a GPU of the physical compute devices is initialized for execution in a CPU of the physical compute devices instead if the GPU is busy with graphics processing threads.
    Type: Grant
    Filed: August 28, 2012
    Date of Patent: June 9, 2015
    Assignee: Apple Inc.
    Inventors: Aaftab Munshi, Jeremy Sandmel
  • Publication number: 20150130842
    Abstract: A data processing system composites graphics content, generated by an application program running on the data processing system, to generate image data. The data processing system stores the image data in a first framebuffer and displays an image generated from the image data in the first framebuffer on an internal display device of the data processing system. A scaler in the data processing system performs scaling operations on the image data in the first framebuffer, stores the scaled image data in a second framebuffer and displays an image generated from the scaled image data in the second framebuffer on an external display device coupled to the data processing system. The scaler performs the scaling operations asynchronously with respect to the compositing of the graphics content. The data processing system automatically mirrors the image on the external display device unless the application program is publishing additional graphics content for display on the external display device.
    Type: Application
    Filed: January 20, 2015
    Publication date: May 14, 2015
    Inventors: John S. Harper, Kenneth C. Dyke, Jeremy Sandmel
  • Patent number: 9013512
    Abstract: Systems, methods, and computer readable media for dynamically setting an executing application's display buffer size are described. To ameliorate display device overscan operations, the size of an executing application's display buffer may be set based on the display device's extent and a display mode. In addition, contents of the executing application's display buffer may be operated on as they are moved to a frame buffer based on the display mode. In one mode, for example, display buffer contents may be scaled before being placed into the frame buffer. In another mode, a black border may be placed around display buffer contents as they are placed into the frame buffer. In yet another mode, display buffer contents may be copied into the frame buffer without further processing.
    Type: Grant
    Filed: February 8, 2012
    Date of Patent: April 21, 2015
    Assignee: Apple Inc.
    Inventors: Jeremy Sandmel, Joshua H. Shaffer, Toby C. Paterson, Patrick Coffman, Geoffrey Stahl, John S. Harper
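    A hedged sketch of the three transfer modes described in the abstract above: an invented DisplayMode enum, a buffer-sizing step driven by the display extent and mode, and a copy-out step that scales, letterboxes inside a black border, or copies unmodified. The 90% border ratio and every name here are arbitrary placeholders, not values from the patent.
    ```c
    #include <stdio.h>

    typedef enum { MODE_SCALE, MODE_BORDER, MODE_COPY } DisplayMode;
    typedef struct { int w, h; } Extent;

    /* Size the application's display buffer from the display extent and mode. */
    static Extent display_buffer_size(Extent display, DisplayMode mode) {
        if (mode == MODE_BORDER) {
            /* leave room for a black border compensating for overscan */
            Extent e = { display.w * 9 / 10, display.h * 9 / 10 };
            return e;
        }
        return display;       /* scale and copy modes use the full extent */
    }

    /* Move display-buffer contents into the frame buffer according to the mode. */
    static void transfer(DisplayMode mode) {
        switch (mode) {
        case MODE_SCALE:  puts("scale contents into the frame buffer");           break;
        case MODE_BORDER: puts("copy contents centred inside a black border");    break;
        case MODE_COPY:   puts("copy contents unmodified into the frame buffer"); break;
        }
    }

    int main(void) {
        Extent tv = { 1920, 1080 };
        DisplayMode mode = MODE_BORDER;
        Extent buf = display_buffer_size(tv, mode);
        printf("display buffer: %dx%d\n", buf.w, buf.h);
        transfer(mode);
        return 0;
    }
    ```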
  • Patent number: 8963799
    Abstract: A data processing system composites graphics content, generated by an application program running on the data processing system, to generate image data. The data processing system stores the image data in a first framebuffer and displays an image generated from the image data in the first framebuffer on an internal display device of the data processing system. A scaler in the data processing system performs scaling operations on the image data in the first framebuffer, stores the scaled image data in a second framebuffer and displays an image generated from the scaled image data in the second framebuffer on an external display device coupled to the data processing system. The scaler performs the scaling operations asynchronously with respect to the compositing of the graphics content. The data processing system automatically mirrors the image on the external display device unless the application program is publishing additional graphics content for display on the external display device.
    Type: Grant
    Filed: June 6, 2011
    Date of Patent: February 24, 2015
    Assignee: Apple Inc.
    Inventors: John S. Harper, Kenneth C. Dyke, Jeremy Sandmel
  • Publication number: 20140201765
    Abstract: A method and an apparatus that execute a parallel computing program in a programming language for a parallel computing architecture are described. The parallel computing program is stored in memory in a system with parallel processors. The system includes a host processor, a graphics processing unit (GPU) coupled to the host processor and a memory coupled to at least one of the host processor and the GPU. The parallel computing program is stored in the memory to allocate threads between the host processor and the GPU. The programming language includes an API to allow an application to make calls using the API to allocate execution of the threads between the host processor and the GPU. The programming language includes host function data tokens for host functions performed in the host processor and kernel function data tokens for compute kernel functions performed in one or more compute processors, e.g. GPUs or CPUs, separate from the host processor.
    Type: Application
    Filed: January 24, 2014
    Publication date: July 17, 2014
    Applicant: Apple Inc.
    Inventors: Aaftab AbdulLatif Munshi, Jeremy Sandmel
  • Publication number: 20140201746
    Abstract: A method and an apparatus that schedule a plurality of executables in a schedule queue for execution in one or more physical compute devices such as CPUs or GPUs concurrently are described. One or more executables are compiled online from a source having an existing executable for a type of physical compute device different from the one or more physical compute devices. Dependency relations among elements corresponding to scheduled executables are determined to select an executable to be executed by a plurality of threads concurrently in more than one of the physical compute devices. A thread initialized for executing an executable in a GPU of the physical compute devices is initialized for execution in a CPU of the physical compute devices instead if the GPU is busy with graphics processing threads.
    Type: Application
    Filed: January 24, 2014
    Publication date: July 17, 2014
    Applicant: Apple Inc.
    Inventors: Aaftab Munshi, Jeremy Sandmel
  • Publication number: 20140201755
    Abstract: A method and an apparatus that allocate one or more physical compute devices such as CPUs or GPUs attached to a host processing unit running an application for executing one or more threads of the application are described. The allocation may be based on data representing a processing capability requirement from the application for executing an executable in the one or more threads. A compute device identifier may be associated with the allocated physical compute devices to schedule and execute the executable in the one or more threads concurrently in one or more of the allocated physical compute devices.
    Type: Application
    Filed: January 24, 2014
    Publication date: July 17, 2014
    Applicant: Apple Inc.
    Inventors: Aaftab Munshi, Jeremy Sandmel
  • Publication number: 20130201197
    Abstract: Systems, methods, and computer readable media for dynamically setting an executing application's display buffer size are described. To ameliorate display device overscan operations, the size of an executing application's display buffer may be set based on the display device's extent and a display mode. In addition, contents of the executing application's display buffer may be operated on as they are moved to a frame buffer based on the display mode. In one mode, for example, display buffer contents may be scaled before being placed into the frame buffer. In another mode, a black border may be placed around display buffer contents as they are placed into the frame buffer. In yet another mode, display buffer contents may be copied into the frame buffer without further processing.
    Type: Application
    Filed: February 8, 2012
    Publication date: August 8, 2013
    Applicant: Apple Inc.
    Inventors: Jeremy Sandmel, Joshua H. Shaffer, Toby C. Paterson, Patrick Coffman, Geoffrey Stahl, John S. Harper
  • Publication number: 20130063451
    Abstract: A method and an apparatus that schedule a plurality of executables in a schedule queue for execution in one or more physical compute devices such as CPUs or GPUs concurrently are described. One or more executables are compiled online from a source having an existing executable for a type of physical compute device different from the one or more physical compute devices. Dependency relations among elements corresponding to scheduled executables are determined to select an executable to be executed by a plurality of threads concurrently in more than one of the physical compute devices. A thread initialized for executing an executable in a GPU of the physical compute devices is initialized for execution in a CPU of the physical compute devices instead if the GPU is busy with graphics processing threads.
    Type: Application
    Filed: September 13, 2012
    Publication date: March 14, 2013
    Inventors: Aaftab Munshi, Jeremy Sandmel
  • Publication number: 20130055272
    Abstract: A method and an apparatus that schedule a plurality of executables in a schedule queue for execution in one or more physical compute devices such as CPUs or GPUs concurrently are described. One or more executables are compiled online from a source having an existing executable for a type of physical compute device different from the one or more physical compute devices. Dependency relations among elements corresponding to scheduled executables are determined to select an executable to be executed by a plurality of threads concurrently in more than one of the physical compute devices. A thread initialized for executing an executable in a GPU of the physical compute devices is initialized for execution in a CPU of the physical compute devices instead if the GPU is busy with graphics processing threads.
    Type: Application
    Filed: August 28, 2012
    Publication date: February 28, 2013
    Inventors: Aaftab Munshi, Jeremy Sandmel
  • Publication number: 20130009975
    Abstract: A method of processing a frame of graphics for display, and an electronic device employing the method, are provided. The method includes developing a frame in a first software frame processing stage following a first vertical blanking (VBL) heartbeat, issuing a command indicating the first stage is complete, and performing a final software frame processing stage without waiting for a subsequent VBL heartbeat. The method may alternatively include performing the final software frame processing stage regardless of whether a target framebuffer is available, performing all but the final hardware frame processing stage regardless of whether the target framebuffer is in use, and performing the final hardware processing stage if the target framebuffer is not in use.
    Type: Application
    Filed: September 14, 2012
    Publication date: January 10, 2013
    Applicant: Apple Inc.
    Inventors: Ian Hendry, Jeffry Gonion, Jeremy Sandmel