Patents by Inventor Boris Prokopenko
Boris Prokopenko has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10593111
Abstract: A method, a system, and a computer-readable storage medium directed to performing high-speed parallel tessellation of 3D surface patches are disclosed. The method includes generating a plurality of primitives in parallel. Each primitive in the plurality is generated by a sequence of functional blocks, in which each sequence acts independently of all the other sequences.
Type: Grant
Filed: August 7, 2018
Date of Patent: March 17, 2020
Assignee: Advanced Micro Devices, Inc.
Inventors: Timour T. Paltashev, Boris Prokopenko, Vladimir V. Kibardin
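The independent per-primitive pipelines this abstract describes can be sketched in miniature. The stage function, job layout, and thread-pool execution model below are illustrative assumptions, not the patented hardware design:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for one functional-block sequence: each primitive is produced by
# its own pipeline, so sequences never synchronize with one another.
def tessellate_patch(patch_id, u, v):
    # Hypothetical domain evaluation of a surface patch at (u, v).
    return {"patch": patch_id, "uv": (u, v)}

def generate_primitive(job):
    patch_id, u, v = job
    vertex = tessellate_patch(patch_id, u, v)
    return ("triangle", vertex)  # stand-in primitive

jobs = [(0, u / 4, v / 4) for u in range(4) for v in range(4)]
with ThreadPoolExecutor() as pool:
    primitives = list(pool.map(generate_primitive, jobs))

print(len(primitives))  # one primitive per tessellation job
```

Because no job reads another job's output, the map can fan out across as many workers as the hardware offers, which is the property the abstract emphasizes.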
-
Publication number: 20180342099
Abstract: A method, a system, and a computer-readable storage medium directed to performing high-speed parallel tessellation of 3D surface patches are disclosed. The method includes generating a plurality of primitives in parallel. Each primitive in the plurality is generated by a sequence of functional blocks, in which each sequence acts independently of all the other sequences.
Type: Application
Filed: August 7, 2018
Publication date: November 29, 2018
Applicant: Advanced Micro Devices, Inc.
Inventors: Timour T. Paltashev, Boris Prokopenko, Vladimir V. Kibardin
-
Patent number: 10068372
Abstract: A method, a system, and a computer-readable storage medium directed to performing high-speed parallel tessellation of 3D surface patches are disclosed. The method includes generating a plurality of primitives in parallel. Each primitive in the plurality is generated by a sequence of functional blocks, in which each sequence acts independently of all the other sequences.
Type: Grant
Filed: December 30, 2015
Date of Patent: September 4, 2018
Assignee: Advanced Micro Devices, Inc.
Inventors: Timour T. Paltashev, Boris Prokopenko, Vladimir V. Kibardin
-
Patent number: 10062206
Abstract: A parallel adaptable graphics rasterization system in which a primitive assembler includes a router to selectively route a primitive to a first rasterizer or one of a plurality of second rasterizers. The second rasterizers concurrently operate on different primitives and the primitive is selectively routed based on an area of the primitive. In some variations, a bounding box of the primitive is reduced to a predetermined number of pixels prior to providing the primitive to the one of the plurality of second rasterizers. Reducing the bounding box can include subtracting an origin of the bounding box from coordinates of points that represent the primitive.
Type: Grant
Filed: August 30, 2016
Date of Patent: August 28, 2018
Assignee: Advanced Micro Devices, Inc.
Inventors: Boris Prokopenko, Timour T. Paltashev, Vladimir V. Kibardin
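A rough illustration of the area-based routing and bounding-box reduction this abstract describes follows; the area threshold, function names, and the choice of which rasterizer handles large primitives are all assumptions for the sketch:

```python
SMALL_AREA_THRESHOLD = 16  # assumed pixel-area cutoff, not from the patent text

def bounding_box(points):
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys), max(xs), max(ys))

def reduce_to_origin(points):
    # "Reducing the bounding box" as described: subtract the box origin from
    # every vertex so coordinates fit inside a small fixed-size tile.
    ox, oy, _, _ = bounding_box(points)
    return [(x - ox, y - oy) for x, y in points]

def route(points):
    # Route by primitive area: big primitives go to the first rasterizer,
    # small ones are normalized and sent to one of the parallel rasterizers.
    x0, y0, x1, y1 = bounding_box(points)
    area = (x1 - x0) * (y1 - y0)
    if area > SMALL_AREA_THRESHOLD:
        return ("big_rasterizer", points)
    return ("small_rasterizer", reduce_to_origin(points))

print(route([(100, 100), (103, 100), (100, 102)]))
```

After the origin subtraction the small rasterizer only ever sees coordinates in a tiny range, which is what makes a fixed-size hardware tile sufficient.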
-
Publication number: 20180061124
Abstract: A parallel adaptable graphics rasterization system in which a primitive assembler includes a router to selectively route a primitive to a first rasterizer or one of a plurality of second rasterizers. The second rasterizers concurrently operate on different primitives and the primitive is selectively routed based on an area of the primitive. In some variations, a bounding box of the primitive is reduced to a predetermined number of pixels prior to providing the primitive to the one of the plurality of second rasterizers. Reducing the bounding box can include subtracting an origin of the bounding box from coordinates of points that represent the primitive.
Type: Application
Filed: August 30, 2016
Publication date: March 1, 2018
Inventors: Boris Prokopenko, Timour T. Paltashev, Vladimir V. Kibardin
-
Publication number: 20170193697
Abstract: A method, a system, and a computer-readable storage medium directed to performing high-speed parallel tessellation of 3D surface patches are disclosed. The method includes generating a plurality of primitives in parallel. Each primitive in the plurality is generated by a sequence of functional blocks, in which each sequence acts independently of all the other sequences.
Type: Application
Filed: December 30, 2015
Publication date: July 6, 2017
Applicant: Advanced Micro Devices, Inc.
Inventors: Timour T. Paltashev, Boris Prokopenko, Vladimir V. Kibardin
-
Patent number: 8368701
Abstract: Included are embodiments of systems and methods for processing metacommands. In at least one exemplary embodiment a Graphics Processing Unit (GPU) includes a metaprocessor configured to process at least one context register, the metaprocessor including context management logic and a metaprocessor control register block coupled to the metaprocessor, the metaprocessor control register block configured to receive metaprocessor configuration data, the metaprocessor control register block further configured to define metacommand execution logic block behavior. Some embodiments include a Bus Interface Unit (BIU) configured to provide access from a system processor to the metaprocessor and a GPU command stream processor configured to fetch a current context command stream and send commands for execution to a GPU pipeline and metaprocessor.
Type: Grant
Filed: November 6, 2008
Date of Patent: February 5, 2013
Assignee: Via Technologies, Inc.
Inventors: Timour Paltashev, Boris Prokopenko, John Brothers
-
Patent number: 8082426
Abstract: Included are systems and methods for supporting a plurality of Graphics Processing Units (GPUs). At least one embodiment of a system includes a context status register configured to send data related to a status of at least one context and a context switch configuration register configured to send instructions related to at least one event for the at least one context. At least one embodiment of a system includes a context status management component coupled to the context status register and the context switch configuration register.
Type: Grant
Filed: November 6, 2008
Date of Patent: December 20, 2011
Assignee: Via Technologies, Inc.
Inventors: Timour Paltashev, Boris Prokopenko, John Brothers
-
Patent number: 8024394
Abstract: Included are embodiments of a Multiply-Accumulate Unit to process multiple format floating point operands. For short format operands, embodiments of the Multiply-Accumulate Unit are configured to process data with twice the throughput of long and mixed format data. At least one embodiment can include a short exponent calculation component configured to receive short format data, a long exponent calculation component configured to receive long format data, and a mixed exponent calculation component configured to receive short exponent data, the mixed exponent calculation component further configured to receive long format data. Embodiments also include a mantissa datapath configured for implementation to accommodate processing of long, mixed, and short floating point operands.
Type: Grant
Filed: February 6, 2007
Date of Patent: September 20, 2011
Assignee: Via Technologies, Inc.
Inventors: Boris Prokopenko, Timour Paltashev, Derek Gladding
-
Publication number: 20110208946
Abstract: Disclosed are various embodiments of a stream processing unit for single instruction multiple data (SIMD) processing, wherein the stream processing unit executes a stage of a Multiply-Accumulate calculation. In one embodiment, the stream processing unit comprises a plurality of scalar arithmetic logic units (ALUs) configured to receive data having a plurality of data types. The number and type of scalar ALUs corresponds to an SIMD factor. In one embodiment, the scalar ALUs are executed sequentially with a delay being introduced in between execution of each of the scalar ALUs, wherein the delay corresponds to the SIMD factor.
Type: Application
Filed: May 4, 2011
Publication date: August 25, 2011
Applicant: VIA TECHNOLOGIES, INC.
Inventors: Boris Prokopenko, Timour Paltashev, Derek Gladding
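The sequential-with-delay scheduling this abstract describes can be modeled schematically. The cycle accounting, the one-multiply-accumulate-per-lane workload, and the linear relation between delay and SIMD factor are assumptions of the sketch, not the disclosed circuit:

```python
def run_simd_stage(operands, simd_factor, delay_units=1):
    # Execute the scalar ALU operations one after another, spacing each
    # lane's start time by a delay proportional to the SIMD factor.
    schedule = []
    tick = 0
    for lane, (a, b, c) in enumerate(operands):
        schedule.append((tick, lane, a * b + c))  # one multiply-accumulate
        tick += delay_units * simd_factor
    return schedule

# Three lanes, SIMD factor 4: lanes start at ticks 0, 4, 8.
print(run_simd_stage([(1, 2, 3), (4, 5, 6), (7, 8, 9)], simd_factor=4))
```

Staggering the lanes this way lets one physical datapath serve all SIMD lanes of a stage, trading latency for area.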
-
Patent number: 8004533
Abstract: A command parser in a GPU is configured to schedule execution of received commands and includes a first input coupled to a scheduler. The first command parser input is configured to communicate bus interface commands to the command parser for execution. A second command parser input is coupled to a controller that receives ring buffer commands from the scheduler in association with a new or previously-partially executed ring buffer, or context, which are executed by the command parser. A third command parser input is coupled to a command DMA component that receives DMA commands from the controller that are also contained in the new or previously-partially executed ring buffer, which are forwarded to the command parser for execution. The command parser forwards data corresponding to commands received on one or more of the first, second, and third inputs via one or more outputs.
Type: Grant
Filed: September 8, 2006
Date of Patent: August 23, 2011
Assignee: VIA Technologies, Inc.
Inventors: Hsilin Huang, Boris Prokopenko, John Brothers
-
Patent number: 7898550
Abstract: Various embodiments for reducing external bandwidth requirements for transferring graphics data are included. One embodiment includes a system for reducing the external bandwidth requirements for transferring graphics data comprising a prediction error calculator configured to generate a prediction error matrix for a pixel tile of z-coordinate data, a bit length calculator configured to calculate the number of bits needed to store the prediction error matrix, a data encoder configured to encode the prediction error matrix into a compressed block and a packer configured to shift the compressed block in a single operation to an external memory location.
Type: Grant
Filed: May 17, 2007
Date of Patent: March 1, 2011
Assignee: VIA Technologies, Inc.
Inventors: Boris Prokopenko, Timour Paltashev
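The prediction-error and bit-length stages this abstract names can be sketched with a toy predictor. The left-neighbor prediction rule and the sign-bit accounting below are assumptions for illustration; the patent does not disclose its predictor in this abstract:

```python
def prediction_errors(tile):
    # Predict each z value from its left neighbor; the first value of each
    # row is predicted from the first value of the row above, and the tile's
    # very first value is taken as the base (error 0, stored verbatim).
    errors = []
    for r, row in enumerate(tile):
        err_row = []
        for c, z in enumerate(row):
            if r == 0 and c == 0:
                err_row.append(0)
            elif c == 0:
                err_row.append(z - tile[r - 1][0])
            else:
                err_row.append(z - row[c - 1])
        errors.append(err_row)
    return errors

def bits_needed(errors):
    # The widest signed error decides the per-entry bit length.
    worst = max(abs(e) for row in errors for e in row)
    return max(1, worst.bit_length() + 1)  # +1 for the sign bit

tile = [[100, 101, 103], [102, 102, 104]]
errs = prediction_errors(tile)
print(errs, bits_needed(errs))
```

Because neighboring z values in a tile are usually close, the errors need far fewer bits than the raw values, which is where the bandwidth saving comes from.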
-
Patent number: 7755632
Abstract: A GPU pipeline is synchronized by sending a fence command from a first module to an addressed synchronization register pair. Fence command associated data may be stored in a fence register of the addressed register pair. A second module sends a wait command with associated data to the addressed register pair, which may be compared to the data in the fence register. If the fence register data is greater than or equal to the wait command associated data, the second module may be acknowledged for sending the wait command and released for processing other graphics operations. If the fence register data is less than the wait command associated data, the second module is stalled until subsequent receipt of a fence command having data that is greater than or equal to the wait command associated data, which may be written to a wait register associated with the addressed register pair.
Type: Grant
Filed: October 25, 2006
Date of Patent: July 13, 2010
Assignee: VIA Technologies, Inc.
Inventors: John Brothers, Hsilin Huang, Boris Prokopenko
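The fence/wait comparison rule in this abstract is simple enough to model directly. The class and method names, and the single-pending-waiter simplification, are assumptions of this sketch, not the patented register design:

```python
class SyncRegisterPair:
    # Illustrative model of one addressed fence/wait register pair.
    def __init__(self):
        self.fence = 0
        self.wait = None  # pending wait value, if a module is stalled

    def fence_cmd(self, value):
        # A producer writes its fence value; a stalled waiter is released
        # once the fence value catches up to the wait value.
        self.fence = value
        if self.wait is not None and self.fence >= self.wait:
            self.wait = None
            return "released"
        return "stored"

    def wait_cmd(self, value):
        # A consumer proceeds immediately if the fence already passed its
        # value; otherwise its value parks in the wait register and it stalls.
        if self.fence >= value:
            return "acknowledged"
        self.wait = value
        return "stalled"

pair = SyncRegisterPair()
print(pair.wait_cmd(5))   # fence is still 0, so the waiter stalls
print(pair.fence_cmd(5))  # fence reaches 5, the waiter is released
```

Monotonically increasing fence values are what make the greater-or-equal comparison safe even when commands arrive out of order relative to the waiter.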
-
Patent number: 7737983
Abstract: A method for high level synchronization between an application and a graphics pipeline comprises receiving an application instruction in an input stream at a predetermined component, such as a command stream processor (CSP), as sent by a central processing unit. The CSP may have a first portion coupled to a next component in the graphics pipeline and a second portion coupled to a plurality of components of the graphics pipeline. A command associated with the application instruction may be forwarded from the first portion to the next component in the graphics pipeline or some other component coupled thereto. The command may be received and thereafter executed. A response may be communicated on a feedback path to the second portion of the CSP. Nonlimiting exemplary application instructions that may be received and executed by the CSP include check surface fault, trap, wait, signal, stall, flip, and trigger.
Type: Grant
Filed: October 25, 2006
Date of Patent: June 15, 2010
Assignee: Via Technologies, Inc.
Inventors: John Brothers, Timour Paltashev, Hsilin Huang, Boris Prokopenko, Qunfeng (Fred) Liao
-
Publication number: 20100110089
Abstract: Included are systems and methods for Graphics Processing Unit (GPU) synchronization. At least one embodiment of a system includes at least one producer GPU configured to receive data related to at least one context, the at least one producer GPU further configured to process at least a portion of the received data. Some embodiments include at least one consumer GPU configured to receive data from the producer GPU, the consumer GPU further configured to stall execution of the received data until a fence value is received.
Type: Application
Filed: November 6, 2008
Publication date: May 6, 2010
Applicant: VIA Technologies, Inc.
Inventors: Timour Paltashev, Boris Prokopenko, John Brothers
-
Publication number: 20100115249
Abstract: Included are systems and methods for supporting a plurality of Graphics Processing Units (GPUs). At least one embodiment of a system includes a context status register configured to send data related to a status of at least one context and a context switch configuration register configured to send instructions related to at least one event for the at least one context. At least one embodiment of a system includes a context status management component coupled to the context status register and the context switch configuration register.
Type: Application
Filed: November 6, 2008
Publication date: May 6, 2010
Applicant: VIA TECHNOLOGIES, INC.
Inventors: Timour Paltashev, Boris Prokopenko, John Brothers
-
Publication number: 20100110083
Abstract: Included are embodiments of systems and methods for processing metacommands. In at least one exemplary embodiment a Graphics Processing Unit (GPU) includes a metaprocessor configured to process at least one context register, the metaprocessor including context management logic and a metaprocessor control register block coupled to the metaprocessor, the metaprocessor control register block configured to receive metaprocessor configuration data, the metaprocessor control register block further configured to define metacommand execution logic block behavior. Some embodiments include a Bus Interface Unit (BIU) configured to provide access from a system processor to the metaprocessor and a GPU command stream processor configured to fetch a current context command stream and send commands for execution to a GPU pipeline and metaprocessor.
Type: Application
Filed: November 6, 2008
Publication date: May 6, 2010
Applicant: VIA TECHNOLOGIES, INC.
Inventors: Timour Paltashev, Boris Prokopenko, John Brothers
-
Patent number: 7696993
Abstract: An input stream of graphics primitives may be converted into a predetermined output stream of graphics primitives by a processor in a graphics pipeline. The processor recognizes a predetermined sequence pattern in the input stream of graphics primitives to the processor. The processor determines whether the recognized sequence pattern can be converted into the one of the plurality of predetermined output streams of graphics primitives. If so, the processor identifies a number of vertices in the recognized sequence pattern and reorders the vertices into a predetermined output pattern. Thereafter, the processor outputs the predetermined output pattern to one or more graphics processing components.
Type: Grant
Filed: February 8, 2007
Date of Patent: April 13, 2010
Assignee: VIA Technologies, Inc.
Inventors: Boris Prokopenko, Hsilin (Stephen) Huang, Ping Chen
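One familiar instance of the recognize-then-reorder conversion this abstract describes is turning a triangle-fan vertex sequence into an independent triangle list. The abstract does not say which patterns the processor handles, so the fan-to-list choice here is purely an assumed example:

```python
def fan_to_list(indices):
    # Recognize a triangle-fan vertex sequence and reorder it into a
    # triangle list: every output triangle reuses the fan's first vertex.
    if len(indices) < 3:
        return None  # too short to form any triangle; not convertible
    center = indices[0]
    out = []
    for i in range(1, len(indices) - 1):
        out.extend([center, indices[i], indices[i + 1]])
    return out

print(fan_to_list([0, 1, 2, 3, 4]))  # → [0, 1, 2, 0, 2, 3, 0, 3, 4]
```

The reordered stream trades a little duplication for a layout that downstream components can consume without tracking fan state.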
-
Patent number: 7675521
Abstract: Systems for performing rasterization are described. At least one embodiment includes a span generator for performing rasterization. In accordance with such embodiments, the span generator comprises functionals representing a scissoring box, loaders configured to convert the functionals from a general form to a special case form, edge generators configured to read the special case form of the scissoring box, whereby the special case form simplifies calculations by the edge generators. The span generator further comprises sorters configured to compute the intersection of half-planes, wherein edges of the intersection are generated by the edge generators, and a span buffer configured to temporarily store spans before tiling.
Type: Grant
Filed: March 11, 2008
Date of Patent: March 9, 2010
Assignee: VIA Technologies, Inc.
Inventors: Konstantine Iourcha, Boris Prokopenko, Timour Paltashev, Derek Gladding
-
Patent number: 7659898
Abstract: A dynamically scheduled parallel graphics processor comprises a spreader that creates graphic objects for processing and assigns and distributes the created objects for processing to one or more execution blocks. Each execution block is coupled to the spreader and receives an assignment for processing a graphics object. The execution block pushes the object through each processing stage by scheduling the processing of the graphics object and executing instruction operations on the graphics object. The dynamically scheduled parallel graphics processor includes one or more fixed function units coupled to the spreader that are configured to execute one or more predetermined operations on a graphics object. An input/output unit is coupled to the spreader, the one or more fixed function units, and the plurality of execution blocks and is configured to provide access to memory external to the dynamically scheduled parallel graphics processor.
Type: Grant
Filed: August 8, 2005
Date of Patent: February 9, 2010
Assignee: VIA Technologies, Inc.
Inventors: Boris Prokopenko, Timour Paltashev
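The spreader's assign-and-distribute role can be sketched as a small scheduler. The least-loaded-queue policy below is an assumed load-balancing heuristic; the abstract does not state how the patented spreader picks an execution block:

```python
from collections import deque

class Spreader:
    # Toy model of the spreader: each created graphics object is assigned
    # to whichever execution block currently has the shortest queue.
    def __init__(self, num_blocks):
        self.queues = [deque() for _ in range(num_blocks)]

    def assign(self, obj):
        target = min(range(len(self.queues)), key=lambda i: len(self.queues[i]))
        self.queues[target].append(obj)
        return target

s = Spreader(3)
targets = [s.assign(f"obj{i}") for i in range(6)]
print(targets)  # objects spread evenly across the three blocks
```

Each execution block then drains its own queue independently, which matches the abstract's point that blocks push objects through their stages on their own schedules.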