Patents by Inventor Ke Yin
Ke Yin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20200098084
Abstract: The described embodiments include systems, methods, and apparatuses for an increased-efficiency processing flow. One method includes a plurality of stages configured to process an execution graph that includes a plurality of logical nodes with defined properties and resources associated with each logical node; a recirculating ring buffer configured to hold only the control information, input, and, or out data necessary to stream temporary data between the logical nodes of the execution graph; and a data producer configured to stall from writing control information into a command buffer once the command buffer is full, preventing command-buffer over-writing.
Type: Application
Filed: November 28, 2019
Publication date: March 26, 2020
Applicant: ThinCI, Inc.
Inventors: Val G. Cook, Satyaki Koneru, Ke Yin, Dinakar C. Munagala
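The recirculating ring buffer with a stalling producer described in this family of filings amounts to a bounded command queue with backpressure: when the buffer is full, the producer must wait rather than over-write. A minimal Python sketch of that behaviour (class and method names are illustrative, not taken from the patent):

```python
from collections import deque

class CommandRingBuffer:
    """Bounded buffer between graph nodes. The producer stalls (here:
    try_write returns False) when full, so commands are never over-written."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = deque()

    def try_write(self, command):
        # Producer side: refuse (stall) instead of over-writing when full.
        if len(self.slots) >= self.capacity:
            return False          # caller must retry after the consumer drains
        self.slots.append(command)
        return True

    def read(self):
        # Consumer side: pop the oldest command, freeing one slot.
        return self.slots.popleft() if self.slots else None

buf = CommandRingBuffer(capacity=2)
assert buf.try_write("cmd-A")
assert buf.try_write("cmd-B")
assert not buf.try_write("cmd-C")   # buffer full: producer stalls
buf.read()                           # consumer frees a slot
assert buf.try_write("cmd-C")        # producer proceeds
```

Keeping the buffer this small is what lets it live in an on-chip cache, per the related claims below; the sketch models only the stall-on-full contract, not the cache residency.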
-
Patent number: 10540740
Abstract: The claimed invention discloses a system comprising a plurality of logical nodes comprised in a single or plurality of stages, with defined properties and resources associated with each node, for reducing compute resources, said system further comprising: at least a recirculating ring buffer holding only any one of control information, input, and, or out data necessary to stream temporary data between node and, or nodes in an execution graph, thereby reducing the size of said recirculating ring buffer; said recirculating ring buffer being sufficiently reduced in size to reside in an on-chip cache, such that any one of the control information, input, and, or out data between node and, or nodes need not be stored in memory; wherein the control information further comprises a command related to invalidating any one of the input and, or out data held in a recirculating ring data buffer, clearing the buffer of tasked data; and wherein a producer is stalled from writing any more control information into a recirculating…
Type: Grant
Filed: May 18, 2019
Date of Patent: January 21, 2020
Assignee: Blaize, Inc.
Inventors: Val G. Cook, Satyaki Koneru, Ke Yin, Dinakar C. Munagala
-
Publication number: 20190332429
Abstract: Systems and methods are disclosed for scheduling code in a multiprocessor system. Code is partitioned into code blocks by a compiler. The compiler schedules execution of the code blocks in nodes. The nodes are connected in a directed acyclic graph with a top node, a terminal node, and a plurality of intermediate nodes. Execution of the top node is initiated by the compiler. After executing at least one instance of the top node, an instruction in the code block indicates to the scheduler to initiate at least one intermediary node. The scheduler schedules a thread for execution of the intermediary node. The data for the nodes resides in a plurality of data buffers; the index to the data buffer is stored in a command buffer.
Type: Application
Filed: July 8, 2019
Publication date: October 31, 2019
Applicant: ThinCI, Inc.
Inventors: Satyaki Koneru, Val G. Cook, Ke Yin
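The scheduling model in this abstract — code blocks at DAG nodes, node data in separate data buffers, commands carrying only an index into those buffers — can be sketched as a toy interpreter. This is a hypothetical illustration of the dataflow shape, not the patented implementation; all names are invented:

```python
from collections import deque

graph = {                      # directed acyclic graph: node -> successors
    "top": ["mid1", "mid2"],
    "mid1": ["terminal"],
    "mid2": ["terminal"],
    "terminal": [],
}
code_blocks = {                # each node's code block transforms its input
    "top": lambda x: x + 1,
    "mid1": lambda x: x * 2,
    "mid2": lambda x: x * 3,
    "terminal": lambda x: x,
}
data_buffers = []              # the plurality of data buffers
command_buffer = deque()       # holds (node, buffer_index): only the index travels

def emit(node, value):
    data_buffers.append(value)             # data lands in a data buffer...
    idx = len(data_buffers) - 1
    for succ in graph[node]:               # ...successors get just its index
        command_buffer.append((succ, idx))

emit("top", code_blocks["top"](0))         # compiler initiates the top node
results = []
while command_buffer:
    node, idx = command_buffer.popleft()   # scheduler picks up a command
    out = code_blocks[node](data_buffers[idx])
    if graph[node]:
        emit(node, out)                    # intermediary node: schedule successors
    else:
        results.append(out)                # terminal node: final output
```

Here `results` ends as `[2, 3]`: the top node produces 1, the two intermediary nodes produce 2 and 3, and the terminal node passes each through. Separating bulky data (in buffers) from small commands (indices) keeps the command buffer compact.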
-
Publication number: 20190325551
Abstract: The claimed invention discloses a system comprising a plurality of logical nodes comprised in a single or plurality of stages, with defined properties and resources associated with each node, for reducing compute resources, said system further comprising: at least a recirculating ring buffer holding only any one of control information, input, and, or out data necessary to stream temporary data between node and, or nodes in an execution graph, thereby reducing the size of said recirculating ring buffer; said recirculating ring buffer being sufficiently reduced in size to reside in an on-chip cache, such that any one of the control information, input, and, or out data between node and, or nodes need not be stored in memory; wherein the control information further comprises a command related to invalidating any one of the input and, or out data held in a recirculating ring data buffer, clearing the buffer of tasked data; and wherein a producer is stalled from writing any more control information into a recirculating…
Type: Application
Filed: May 18, 2019
Publication date: October 24, 2019
Applicant: ThinCI, Inc.
Inventors: Val G. Cook, Satyaki Koneru, Ke Yin, Dinakar C. Munagala
-
Patent number: 10437637
Abstract: Systems and methods are disclosed for scheduling code in a multiprocessor system. Code is partitioned into code blocks by a compiler. The compiler schedules execution of the code blocks in nodes. The nodes are connected in a directed acyclic graph with a top node, a terminal node, and a plurality of intermediate nodes. Execution of the top node is initiated by the compiler. After executing at least one instance of the top node, an instruction in the code block indicates to the scheduler to initiate at least one intermediary node. The scheduler schedules a thread for execution of the intermediary node. The data for the nodes resides in a plurality of data buffers; the index to the data buffer is stored in a command buffer.
Type: Grant
Filed: May 25, 2016
Date of Patent: October 8, 2019
Assignee: ThinCI, Inc.
Inventors: Satyaki Koneru, Val G. Cook, Ke Yin
-
Publication number: 20190235917
Abstract: Systems, apparatuses, and methods are disclosed for scheduling threads comprising code blocks in a graph streaming processor (GSP) system. One system includes a scheduler for scheduling a plurality of threads, where the plurality of threads includes a set of instructions operating on the graph streaming processors of the GSP system. The scheduler comprises a plurality of stages, where each stage is coupled to an input command buffer and an output command buffer. A portion of the scheduler is implemented in hardware and comprises a command parser operative to interpret commands within a corresponding input command buffer, a thread generator coupled to the command parser and operative to generate the plurality of threads, and a thread scheduler coupled to the thread generator for dispatching the plurality of threads for operating on the plurality of graph streaming processors.
Type: Application
Filed: April 14, 2019
Publication date: August 1, 2019
Applicant: ThinCI, Inc.
Inventors: Satyaki Koneru, Val G. Cook, Ke Yin
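The per-stage pipeline above — parse a command from the input command buffer, generate a thread for it, and have that thread's output feed the next stage's input command buffer — can be modeled sequentially. A hypothetical sketch (the hardware runs threads concurrently; here each "thread" is just a function call):

```python
def run_stage(parse, thread_body, input_cmds):
    """One scheduler stage: command parser -> thread generator -> dispatch.
    Results become the next stage's input command buffer."""
    output_cmds = []
    for cmd in input_cmds:
        work = parse(cmd)            # command parser interprets the command
        result = thread_body(work)   # generated thread runs the code block
        output_cmds.append(result)   # result queued for the next stage
    return output_cmds

# Two chained stages: stage 1 doubles, stage 2 increments.
stage1_out = run_stage(lambda c: c, lambda w: w * 2, [1, 2, 3])
stage2_out = run_stage(lambda c: c, lambda w: w + 1, stage1_out)
```

With the inputs above, `stage1_out` is `[2, 4, 6]` and `stage2_out` is `[3, 5, 7]`; the command buffers between stages are the only coupling, which is what lets each stage stream independently.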
-
Patent number: 10311542
Abstract: The claimed invention discloses a system comprising a plurality of logical nodes comprised in a single or plurality of stages, with defined properties and resources associated with each node, for reducing compute resources, said system further comprising: at least a recirculating ring buffer holding only any one of control information, input, and, or out data necessary to stream temporary data between node and, or nodes in an execution graph, thereby reducing the size of said recirculating ring buffer; said recirculating ring buffer being sufficiently reduced in size to reside in an on-chip cache, such that any one of the control information, input, and, or out data between node and, or nodes need not be stored in memory; wherein the control information further comprises a command related to invalidating any one of the input and, or out data held in a recirculating ring data buffer, clearing the buffer of tasked data; and wherein a producer is stalled from writing any more control information into a recirculating…
Type: Grant
Filed: March 6, 2017
Date of Patent: June 4, 2019
Assignee: ThinCI, Inc.
Inventors: Val G. Cook, Satyaki Koneru, Ke Yin, Dinakar C. Munagala
-
Publication number: 20180253890
Abstract: The claimed invention discloses a system comprising a plurality of logical nodes comprised in a single or plurality of stages, with defined properties and resources associated with each node, for reducing compute resources, said system further comprising: at least a recirculating ring buffer holding only any one of control information, input, and, or out data necessary to stream temporary data between node and, or nodes in an execution graph, thereby reducing the size of said recirculating ring buffer; said recirculating ring buffer being sufficiently reduced in size to reside in an on-chip cache, such that any one of the control information, input, and, or out data between node and, or nodes need not be stored in memory; wherein the control information further comprises a command related to invalidating any one of the input and, or out data held in a recirculating ring data buffer, clearing the buffer of tasked data; and wherein a producer is stalled from writing any more control information into a recirculating…
Type: Application
Filed: March 6, 2017
Publication date: September 6, 2018
Inventors: Val G. Cook, Satyaki Koneru, Ke Yin, Dinakar C. Munagala
-
Publication number: 20170365237
Abstract: Methods, systems, and apparatuses for processing a plurality of threads of a single-instruction multiple-data (SIMD) group are disclosed. One method includes initializing a current instruction pointer of the SIMD group; initializing a thread instruction pointer for each of the plurality of threads of the SIMD group, including setting a flag for each of the plurality of threads; determining whether a current instruction of the processing includes a conditional branch; resetting the flag of each thread of the plurality of threads that fails a condition of the conditional branch, and setting the thread instruction pointer of each such thread to a jump instruction pointer; and incrementing the current instruction pointer and each thread instruction pointer of the threads that do not fail, if at least one of the threads does not fail the condition.
Type: Application
Filed: August 17, 2017
Publication date: December 21, 2017
Applicant: ThinCI, Inc.
Inventors: Satyaki Koneru, Ke Yin
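The divergence handling above — one group-wide instruction pointer, plus a per-thread instruction pointer and active flag that are updated at a conditional branch — can be sketched for a single branch step. A hypothetical illustration of the bookkeeping only (function and variable names are invented):

```python
def step_conditional_branch(current_ip, thread_ips, flags, passes, jump_ip):
    """Advance one conditional-branch instruction for a SIMD group.

    passes[i] is True when thread i satisfies the branch condition.
    Failing threads are deactivated and parked at the jump target;
    passing threads fall through to the next instruction."""
    for i in range(len(thread_ips)):
        if flags[i] and not passes[i]:
            flags[i] = False          # thread diverges: reset its flag...
            thread_ips[i] = jump_ip   # ...and point it at the jump target
        elif flags[i]:
            thread_ips[i] += 1        # active thread takes the fall-through
    return current_ip + 1             # group pointer advances with the active threads

ips = [0, 0, 0, 0]
flags = [True, True, True, True]
cur = step_conditional_branch(0, ips, flags,
                              passes=[True, False, True, False],
                              jump_ip=10)
```

After this step `cur` is 1, `ips` is `[1, 10, 1, 10]`, and `flags` is `[True, False, True, False]`: threads 1 and 3 sit at the jump target with cleared flags until the group's current pointer catches up to them.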
-
Publication number: 20170193630
Abstract: Methods, systems, and apparatuses for selecting graphics data of a server system for transmission are disclosed. One method includes reading data from memory of the server system; checking whether the data is being read for the first time; checking whether the data was written by a processor of the server system during processing, which comprises checking whether the data is available on a client system or present in a transmit buffer; and placing the data in the transmit buffer if the data is being read for the first time and was not written by the processor during processing. If the data is being read for the first time but was written by the processor of the server system during processing, the data is not placed in the transmit buffer.
Type: Application
Filed: March 22, 2017
Publication date: July 6, 2017
Applicant: ThinCI, Inc.
Inventors: Satyaki Koneru, Ke Yin, Dinakar C. Munagala
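The selection rule in this family of filings is a two-condition filter: transmit a datum only on its first read, and only if the server's processor did not itself write it during processing (processor-written data can be regenerated on the client, so it need not be sent). A minimal sketch of that filter under those assumptions; names are illustrative:

```python
def select_for_transmit(reads, written_by_processor):
    """Filter a stream of (address, data) reads down to what must be sent.

    A datum enters the transmit buffer only if (a) this is the first read
    of its address and (b) the address was not written by the server's
    processor during processing."""
    transmit_buffer = []
    seen = set()
    for addr, data in reads:
        first_time = addr not in seen
        seen.add(addr)
        if first_time and addr not in written_by_processor:
            transmit_buffer.append((addr, data))
    return transmit_buffer

reads = [(0x10, "texture"), (0x20, "intermediate"), (0x10, "texture")]
sent = select_for_transmit(reads, written_by_processor={0x20})
```

Here `sent` is `[(0x10, "texture")]`: address 0x20 is skipped because the processor wrote it, and the second read of 0x10 is skipped as a repeat. This is the bandwidth saving the abstract describes — only source data the client cannot reproduce crosses the wire, and only once.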
-
Patent number: 9640150
Abstract: Methods, systems, and apparatuses for selecting graphics data of a server system for transmission are disclosed. One method includes reading data from memory of the server system; checking whether the data is being read for the first time; checking whether the data was written by a processor of the server system during processing, which comprises checking whether the data is available on a client system or present in a transmit buffer; and placing the data in the transmit buffer if the data is being read for the first time and was not written by the processor during processing. If the data is being read for the first time but was written by the processor of the server system during processing, the data is not placed in the transmit buffer.
Type: Grant
Filed: May 19, 2016
Date of Patent: May 2, 2017
Assignee: ThinCI, Inc.
Inventors: Satyaki Koneru, Ke Yin, Dinakar C. Munagala
-
Patent number: 9589388
Abstract: Embodiments disclosed include a mechanism, in a system and method, for significantly reducing power consumption by reducing computation and bandwidth. This mechanism is particularly applicable to modern 3D synthetic images, which contain high pixel overdraw and dynamically generated intermediate images. Only blocks of computation that contribute to the final image are performed. This is accomplished by rendering in reverse order and by performing multiple visibility sorts in a streaming fashion through the pipeline. Rendering of dynamically generated intermediate images is performed sparsely by projecting texture coordinates from a current image back into one or more dependent images in a recursive manner. The newly computed pixel values are then filtered and control is returned to the sampling shader of the current image. When only visible pixels are projected, optimal computation is performed. Several implementations are presented with increasing efficiency.
Type: Grant
Filed: July 9, 2014
Date of Patent: March 7, 2017
Assignee: ThinCI, Inc.
Inventors: Val G. Cook, Satyaki Koneru, Ke Yin, Dinakar C. Munagala
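The core overdraw-elimination idea — render in reverse order so that front-most work lands first, and skip shading for pixels already resolved — can be shown with a toy frame of draw calls. This is a simplified illustration of the visibility gating only (the patent's streaming sorts and recursive back-projection are not modeled), with invented names:

```python
def reverse_render(draws, num_pixels):
    """Process draw calls from last-submitted (front-most) to first,
    shading each pixel at most once."""
    resolved = [False] * num_pixels   # pixels whose final value is known
    frame = [None] * num_pixels
    shaded = 0                        # count of per-pixel shader invocations
    for name, pixels in reversed(draws):
        for p in pixels:
            if not resolved[p]:       # only still-visible pixels are computed
                frame[p] = name
                resolved[p] = True
                shaded += 1
    return frame, shaded

# A background covering 4 pixels, then a sprite drawn on top of 2 of them.
draws = [("background", [0, 1, 2, 3]), ("sprite", [1, 2])]
frame, shaded = reverse_render(draws, num_pixels=4)
```

This produces `frame == ["background", "sprite", "sprite", "background"]` with `shaded == 4`; a forward renderer would have shaded 6 pixels for the same frame, with the two overdrawn background pixels wasted.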
-
Publication number: 20160267889
Abstract: Methods, systems, and apparatuses for selecting graphics data of a server system for transmission are disclosed. One method includes reading data from memory of the server system; checking whether the data is being read for the first time; checking whether the data was written by a processor of the server system during processing, which comprises checking whether the data is available on a client system or present in a transmit buffer; and placing the data in the transmit buffer if the data is being read for the first time and was not written by the processor during processing. If the data is being read for the first time but was written by the processor of the server system during processing, the data is not placed in the transmit buffer.
Type: Application
Filed: May 19, 2016
Publication date: September 15, 2016
Applicant: ThinCI, Inc.
Inventors: Satyaki Koneru, Ke Yin, Dinakar C. Munagala
-
Patent number: 9373152
Abstract: Methods, systems, and apparatuses for selecting graphics data of a server system for transmission are disclosed. One method includes a plurality of graphics render passes, wherein one or more of the graphics render passes includes reading data from graphics memory of the server system. The data read from the graphics memory is placed in a transmit buffer if the data is being read for the first time and was not written by a processor of the server system. One system includes a server system including graphics memory, a frame buffer, and a processor. The server system is operable to read data from the graphics memory and to place the data in a transmit buffer if the data is being read for the first time and was not written by the processor during rendering.
Type: Grant
Filed: May 25, 2014
Date of Patent: June 21, 2016
Assignee: ThinCI, Inc.
Inventors: Satyaki Koneru, Ke Yin, Dinakar C. Munagala
-
Publication number: 20140253563
Abstract: Methods, systems, and apparatuses for selecting graphics data of a server system for transmission are disclosed. One method includes a plurality of graphics render passes, wherein one or more of the graphics render passes includes reading data from graphics memory of the server system. The data read from the graphics memory is placed in a transmit buffer if the data is being read for the first time and was not written by a processor of the server system. One system includes a server system including graphics memory, a frame buffer, and a processor. The server system is operable to read data from the graphics memory and to place the data in a transmit buffer if the data is being read for the first time and was not written by the processor during rendering.
Type: Application
Filed: May 25, 2014
Publication date: September 11, 2014
Applicant: ThinCI, Inc.
Inventors: Satyaki Koneru, Ke Yin, Dinakar C. Munagala
-
Patent number: 8754900
Abstract: Methods, systems, and apparatuses for selecting graphics data of a server system for transmission are disclosed. One method includes reading data from graphics memory of the server system. The data read from the graphics memory is placed in a transmit buffer if the data is being read for the first time and was not written by a processor of the server system. One system includes a server system including graphics memory, a frame buffer, and a processor. The server system is operable to read data from the graphics memory and to place the data in a transmit buffer if the data is being read for the first time and was not written by the processor during rendering.
Type: Grant
Filed: June 16, 2011
Date of Patent: June 17, 2014
Inventors: Satyaki Koneru, Ke Yin, Dinakar Munagala
-
Publication number: 20110310105
Abstract: Methods, systems, and apparatuses for selecting graphics data of a server system for transmission are disclosed. One method includes reading data from graphics memory of the server system. The data read from the graphics memory is placed in a transmit buffer if the data is being read for the first time and was not written by a processor of the server system. One system includes a server system including graphics memory, a frame buffer, and a processor. The server system is operable to read data from the graphics memory and to place the data in a transmit buffer if the data is being read for the first time and was not written by the processor during rendering.
Type: Application
Filed: June 16, 2011
Publication date: December 22, 2011
Applicant: ThinCI, Inc.
Inventors: Satyaki Koneru, Ke Yin, Dinakar Munagala
-
Patent number: 7802146
Abstract: Provided are a method and system for loading test data into execution units in a graphics card in order to test the execution units. Test instructions are loaded into a cache in a graphics module comprising multiple execution units coupled to the cache on a bus during a design test mode. The cache instructions are concurrently transferred to an instruction queue of each execution unit, loading the cache instructions into the instruction queues of all execution units at once. The execution units then concurrently execute the cache instructions to fetch test instructions from the cache, load them into the execution units' memories, and execute them during the design test mode.
Type: Grant
Filed: June 7, 2007
Date of Patent: September 21, 2010
Assignee: Intel Corporation
Inventors: Allan Wong, Ke Yin, Naveen Matam, Anthony Babella, Wing Hang Wong
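The loading scheme above — a shared cache holds a small loader program that is broadcast into every execution unit's instruction queue, and each unit then executes it to pull its test program out of the cache — can be modeled with a toy simulation. A hypothetical sketch; class names, instruction mnemonics, and cache keys are all invented for illustration:

```python
class ExecutionUnit:
    """Toy execution unit with an instruction queue and local memory."""

    def __init__(self):
        self.instruction_queue = []
        self.memory = []           # where fetched test instructions land

    def run(self, cache):
        # Execute the queued loader instructions: each FETCH pulls one
        # test instruction from the shared cache into local memory.
        for op, key in self.instruction_queue:
            if op == "FETCH":
                self.memory.append(cache[key])

# Shared cache: one loader program plus the test instructions it fetches.
cache = {
    "loader": [("FETCH", "t0"), ("FETCH", "t1")],
    "t0": "test-instr-0",
    "t1": "test-instr-1",
}

units = [ExecutionUnit() for _ in range(4)]
for eu in units:                         # broadcast: every queue gets the
    eu.instruction_queue = list(cache["loader"])  # same loader at once
for eu in units:                         # the hardware runs these in parallel;
    eu.run(cache)                        # this sketch runs them in sequence
```

After the run, every unit holds the same test program (`["test-instr-0", "test-instr-1"]`) in its local memory, mirroring how the single cached copy seeds all execution units without per-unit loads.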
-
Publication number: 20080307202
Abstract: Provided are a method and system for loading test data into execution units in a graphics card in order to test the execution units. Test instructions are loaded into a cache in a graphics module comprising multiple execution units coupled to the cache on a bus during a design test mode. The cache instructions are concurrently transferred to an instruction queue of each execution unit, loading the cache instructions into the instruction queues of all execution units at once. The execution units then concurrently execute the cache instructions to fetch test instructions from the cache, load them into the execution units' memories, and execute them during the design test mode.
Type: Application
Filed: June 7, 2007
Publication date: December 11, 2008
Inventors: Allan Wong, Ke Yin, Naveen Matam, Anthony Babella, Wing Hang Wong