Coprocessor (e.g., Graphic Accelerator) Patents (Class 345/503)
  • Patent number: 10346941
    Abstract: Systems, computer readable media, and methods for a unified programming interface and language are disclosed. In one embodiment, the unified programming interface and language help program developers write multi-threaded programs that can perform both graphics and data-parallel compute processing on GPUs. The same GPU programming language model can be used to describe both graphics shaders and compute kernels, and the same data structures and resources may be used for both graphics and compute operations. Developers can use multithreading efficiently to create and submit command buffers in parallel.
    Type: Grant
    Filed: September 30, 2014
    Date of Patent: July 9, 2019
    Assignee: Apple Inc.
    Inventors: Richard W. Schreyer, Kenneth C. Dyke, Alexander K. Kan
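    The entry above describes a single programming model in which multiple threads each build and submit their own GPU command buffer. The Python sketch below illustrates only that threading pattern, not Apple's actual API; the CommandBuffer and CommandQueue names are hypothetical.

```python
import threading
from queue import Queue

class CommandBuffer:
    """Hypothetical stand-in for a GPU command buffer."""
    def __init__(self, label):
        self.label = label
        self.commands = []

    def encode(self, command):
        self.commands.append(command)

class CommandQueue:
    """Collects command buffers that were encoded on separate threads."""
    def __init__(self):
        self._submitted = Queue()

    def submit(self, command_buffer):
        self._submitted.put(command_buffer)

    def drain(self):
        while not self._submitted.empty():
            yield self._submitted.get()

def encode_work(command_queue, thread_id):
    # Each thread encodes its own buffer independently; graphics-style and
    # compute-style commands share the same buffer and data structures,
    # echoing the unified model the abstract describes.
    cb = CommandBuffer(label=f"thread-{thread_id}")
    cb.encode(("draw", thread_id))
    cb.encode(("dispatch_compute", thread_id))
    command_queue.submit(cb)

cq = CommandQueue()
threads = [threading.Thread(target=encode_work, args=(cq, i)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
for cb in cq.drain():
    print(cb.label, cb.commands)
```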
  • Patent number: 10310568
    Abstract: A system is disclosed for the management of rack-mounted field replaceable units (FRUs) that affords the enhanced availability and serviceability of FRUs provided by blade-based systems, but in a manner that accommodates different types of FRUs (e.g., in relation to form factors, functionality, power and cooling requirements, and/or the like) installed within a rack or cabinet.
    Type: Grant
    Filed: March 31, 2016
    Date of Patent: June 4, 2019
    Assignee: ORACLE INTERNATIONAL CORPORATION
    Inventors: Thomas E. Stewart, Richard Rogers, Yefim Gelfond, Russell Brovald
  • Patent number: 10297068
    Abstract: The present disclosure describes a new global illumination ray tracing technique, concentrated on augmented objects in virtual or augmented reality and utilizing the graphics pipeline. Secondary rays are handled in large groups, originating at clusters of primary hit points, and intersecting with scene geometry.
    Type: Grant
    Filed: June 6, 2017
    Date of Patent: May 21, 2019
    Assignee: ADSHIR LTD.
    Inventors: Reuven Bakalash, Amit Porat, Elad Haviv
  • Patent number: 10275275
    Abstract: A copy subsystem within a processor includes a set of logical copy engines and a set of physical copy engines. Each logical copy engine corresponds to a different command stream implemented by a device driver, and each logical copy engine is configured to receive copy commands via the corresponding command stream. When a logical copy engine receives a copy command, the logical copy engine distributes the command, or one or more subcommands derived from the command, to one or more of the physical copy engines. The physical copy engines can perform multiple copy operations in parallel with one another, thereby allowing the bandwidth of the communication link(s) to be saturated.
    Type: Grant
    Filed: December 3, 2015
    Date of Patent: April 30, 2019
    Assignee: NVIDIA CORPORATION
    Inventors: M. Wasiur Rashid, Gary Ward, Wei-Je Robert Huang, Philip Browning Johnson
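    The copy-subsystem entry above splits one copy command into subcommands that several physical copy engines execute in parallel. As a rough software analogy of that fan-out (not NVIDIA's hardware design; all names are invented), the following Python sketch divides a copy across worker "engines" with a thread pool.

```python
from concurrent.futures import ThreadPoolExecutor

def physical_copy(engine_id, src, dst, offset, length):
    # Stand-in for a DMA transfer carried out by one physical copy engine.
    dst[offset:offset + length] = src[offset:offset + length]
    return engine_id, offset, length

class LogicalCopyEngine:
    """Receives one copy command and distributes subcommands to physical engines."""
    def __init__(self, physical_engine_ids):
        self.engines = physical_engine_ids
        self.pool = ThreadPoolExecutor(max_workers=len(physical_engine_ids))

    def copy(self, src, dst):
        chunk = (len(src) + len(self.engines) - 1) // len(self.engines)
        futures = []
        for i, engine_id in enumerate(self.engines):
            offset = i * chunk
            length = min(chunk, len(src) - offset)
            if length > 0:
                futures.append(self.pool.submit(
                    physical_copy, engine_id, src, dst, offset, length))
        return [f.result() for f in futures]

src = bytearray(range(256)) * 4
dst = bytearray(len(src))
engine = LogicalCopyEngine(physical_engine_ids=[0, 1, 2, 3])
print(engine.copy(src, dst))
assert dst == src
```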
  • Patent number: 10255652
    Abstract: Methods, systems, and computer-readable media for dynamic and application-specific virtualized graphics processing are disclosed. Execution of an application is initiated on a virtual compute instance. The virtual compute instance is implemented using a server. One or more graphics processing unit (GPU) requirements associated with the execution of the application are determined. A physical GPU resource is selected from a pool of available physical GPU resources based at least in part on the one or more GPU requirements. A virtual GPU is attached to the virtual compute instance based at least in part on initiation of the execution of the application. The virtual GPU is implemented using the physical GPU resource selected from the pool and accessible to the server over a network.
    Type: Grant
    Filed: January 18, 2017
    Date of Patent: April 9, 2019
    Assignee: Amazon Technologies, Inc.
    Inventors: Malcolm Featonby, Yuxuan Liu, Umesh Chandani, John Merrill Phillips, Jr., Nicholas Patrick Wilt, Adithya Bhat, Douglas Cotton Kurtz, Mihir Sadruddin Surani
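    The entry above attaches a virtual GPU to a compute instance by picking a physical GPU from a pool according to the application's requirements. Below is a minimal sketch of that selection step, assuming a single memory-size requirement; the field names and selection policy are hypothetical, not Amazon's implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PhysicalGpu:
    gpu_id: str
    memory_gb: int
    in_use: bool = False

def select_gpu(pool: List[PhysicalGpu], required_memory_gb: int) -> Optional[PhysicalGpu]:
    """Pick the smallest free GPU that still satisfies the requirement."""
    candidates = [g for g in pool if not g.in_use and g.memory_gb >= required_memory_gb]
    return min(candidates, key=lambda g: g.memory_gb) if candidates else None

def attach_virtual_gpu(instance: str, pool: List[PhysicalGpu], required_memory_gb: int):
    gpu = select_gpu(pool, required_memory_gb)
    if gpu is None:
        raise RuntimeError("no physical GPU in the pool satisfies the requirement")
    gpu.in_use = True
    # The "virtual GPU" here is just a record tying the instance to a
    # network-reachable physical resource, mirroring the attach step above.
    return {"instance": instance, "physical_gpu": gpu.gpu_id}

pool = [PhysicalGpu("gpu-a", 8), PhysicalGpu("gpu-b", 16), PhysicalGpu("gpu-c", 4)]
print(attach_virtual_gpu("vm-42", pool, required_memory_gb=8))
```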
  • Patent number: 10242423
    Abstract: One embodiment provides an accelerator module comprising a memory stack including multiple memory dies, and a graphics processing unit (GPU) coupled with the memory stack via one or more memory controllers, the GPU including a plurality of multiprocessors having a single instruction, multiple thread (SIMT) architecture, the multiprocessors to execute at least one single instruction, the at least one single instruction to cause at least a portion of the GPU to perform a floating-point operation on inputs having differing precisions, wherein the floating-point operation is a two-dimensional matrix multiply and accumulate operation.
    Type: Grant
    Filed: October 20, 2017
    Date of Patent: March 26, 2019
    Assignee: Intel Corporation
    Inventors: Elmoustapha Ould-Ahmed-Vall, Sara S. Baghsorkhi, Anbang Yao, Kevin Nealis, Xiaoming Chen, Altug Koker, Abhishek R. Appu, John C. Weast, Mike B. Macpherson, Dukhwan Kim, Linda L. Hurd, Ben J. Ashbaugh, Barath Lakshmanan, Liwei Ma, Joydeep Ray, Ping T. Tang, Michael S. Strickland
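    The accelerator-module entry above centers on a matrix multiply-and-accumulate instruction whose inputs have differing precisions. The NumPy sketch below shows the arithmetic only (half-precision operands, single-precision accumulation); it is not the SIMT hardware instruction itself.

```python
import numpy as np

def mma_mixed_precision(a_fp16, b_fp16, c_fp32):
    """D = A x B + C with fp16 inputs and fp32 accumulation."""
    # Promote the fp16 operands before multiplying so products are summed
    # at the higher precision, as in a typical mixed-precision MMA.
    return a_fp16.astype(np.float32) @ b_fp16.astype(np.float32) + c_fp32

a = np.random.rand(4, 4).astype(np.float16)
b = np.random.rand(4, 4).astype(np.float16)
c = np.zeros((4, 4), dtype=np.float32)
d = mma_mixed_precision(a, b, c)
print(d.dtype, d.shape)   # float32 (4, 4)
```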
  • Patent number: 10235792
    Abstract: A tile-based graphics processing pipeline comprises a plurality of processing stages, including at least a rasterizer that rasterizes input primitives to generate graphics fragments to be processed, a renderer that processes fragments generated by the rasterizer to generate rendered fragment data, and a processing stage operable to receive rendered fragment data and to perform a processing operation using the rendered fragment data to generate per-tile metadata.
    Type: Grant
    Filed: April 29, 2015
    Date of Patent: March 19, 2019
    Assignee: Arm Limited
    Inventors: Alexis Mather, Sean Ellis
  • Patent number: 10218645
    Abstract: A method in a network node that includes a host and an accelerator, includes holding a work queue that stores work elements, a notifications queue that stores notifications of the work elements, and control indices for adding and removing the work elements and the notifications to and from the work queue and the notifications queue, respectively. The notifications queue resides on the accelerator, and at least some of the control indices reside on the host. Messages are exchanged between a network and the network node using the work queue, the notifications queue and the control indices.
    Type: Grant
    Filed: April 8, 2014
    Date of Patent: February 26, 2019
    Assignee: Mellanox Technologies, Ltd.
    Inventors: Shachar Raindel, Yaniv Saar, Haggai Eran, Yishai Israel Hadas, Ari Zigler
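    The Mellanox entry above pairs a work queue with a notifications queue and control indices, placing the notifications queue on the accelerator and at least some indices on the host. The sketch below models only the index bookkeeping of such a split queue pair in plain Python; the memory-placement comments echo the abstract, and the class name is invented.

```python
class SplitQueuePair:
    """Work queue plus notifications queue with producer/consumer indices."""
    def __init__(self, depth):
        self.depth = depth
        self.work_queue = [None] * depth      # work elements
        self.notifications = [None] * depth   # would reside in accelerator memory
        self.producer_index = 0               # control indices (host side)
        self.consumer_index = 0

    def post(self, work_element):
        if self.producer_index - self.consumer_index >= self.depth:
            raise RuntimeError("queue full")
        slot = self.producer_index % self.depth
        self.work_queue[slot] = work_element
        self.notifications[slot] = ("doorbell", slot)   # notify the accelerator
        self.producer_index += 1

    def poll(self):
        if self.consumer_index == self.producer_index:
            return None
        slot = self.consumer_index % self.depth
        self.consumer_index += 1
        return self.work_queue[slot]

qp = SplitQueuePair(depth=4)
qp.post({"op": "send", "payload": b"hello"})
print(qp.poll())
```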
  • Patent number: 10204442
    Abstract: The present disclosure describes a new global illumination ray tracing technique, concentrated on augmented objects in virtual or augmented reality and utilizing the graphics pipeline. Secondary rays are handled in large groups, originating at clusters of primary hit points, and intersecting with scene geometry.
    Type: Grant
    Filed: July 1, 2017
    Date of Patent: February 12, 2019
    Assignee: ADSHIR LTD.
    Inventors: Reuven Bakalash, Amit Porat, Elad Haviv
  • Patent number: 10191747
    Abstract: A method is provided that includes fetching a group of instructions, including a group header for the group of instructions, where the group of instructions is configured to be executed by a processor, and where the group header includes a field containing locking information for at least one operand. The method further includes storing a value of the at least one operand in at least one operand buffer of the processor and, based on the locking information, locking the value of the at least one operand in the at least one operand buffer such that the at least one operand is not cleared from the at least one operand buffer of the processor in response to completing the execution of the group of instructions.
    Type: Grant
    Filed: June 26, 2015
    Date of Patent: January 29, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Doug Burger
  • Patent number: 10180916
    Abstract: A copy subsystem within a processor includes a set of logical copy engines and a set of physical copy engines. Each logical copy engine corresponds to a different command stream implemented by a device driver, and each logical copy engine is configured to receive copy commands via the corresponding command stream. When a logical copy engine receives a copy command, the logical copy engine distributes the command, or one or more subcommands derived from the command, to one or more of the physical copy engines. The physical copy engines can perform multiple copy operations in parallel with one another, thereby allowing the bandwidth of the communication link(s) to be saturated.
    Type: Grant
    Filed: December 3, 2015
    Date of Patent: January 15, 2019
    Assignee: NVIDIA CORPORATION
    Inventors: M. Wasiur Rashid, Gary Ward, Wei-Je Robert Huang, Philip Browning Johnson
  • Patent number: 10127164
    Abstract: A processing device includes: a plurality of processing units that perform processes in accordance with data items read from a memory; a bus that connects the memory to the plurality of processing units; and a traffic monitor that monitors traffic on the bus with respect to the plurality of processing units, and when the traffic for one of the processing units that has been assigned access rights to the memory exceeds or reaches a prescribed upper limit, outputs a signal to the one of the processing units so as to reduce or suspend the traffic for the one of the processing units.
    Type: Grant
    Filed: October 1, 2015
    Date of Patent: November 13, 2018
    Assignee: CASIO COMPUTER CO., LTD.
    Inventor: Hiroaki Nagasaka
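    The Casio entry above monitors per-unit bus traffic and signals a unit to throttle once a prescribed upper limit is reached. Below is a tiny software analogy of that counter-and-threshold behaviour; the names and units are illustrative only.

```python
class TrafficMonitor:
    """Tracks per-unit bus traffic and asserts a throttle signal at a limit."""
    def __init__(self, limit_bytes):
        self.limit_bytes = limit_bytes
        self.counters = {}

    def record_transfer(self, unit_id, nbytes):
        self.counters[unit_id] = self.counters.get(unit_id, 0) + nbytes
        # Returning True stands in for the hardware signal telling the unit
        # to reduce or suspend its traffic once the upper limit is reached.
        return self.counters[unit_id] >= self.limit_bytes

monitor = TrafficMonitor(limit_bytes=4096)
for _ in range(5):
    throttled = monitor.record_transfer("unit0", 1024)
    print("throttle unit0" if throttled else "ok")
```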
  • Patent number: 10067690
    Abstract: A memory system for a network device is described. The memory system includes a packing data buffer including a plurality of memory banks arranged in a plurality of rows and a plurality of columns. The packing data buffer is configured to store incoming data elements of a plurality of widths in the plurality of memory banks. The memory system also includes a free address manager configured to generate an available bank set based on one or more free memory banks in the plurality of memory banks. And, the memory system includes distributed link memory configured to maintain one or more pointers to interconnect a set of one or more memory locations of the one or more memory banks in the packing data buffer to generate at least one list.
    Type: Grant
    Filed: September 30, 2016
    Date of Patent: September 4, 2018
    Assignee: Innovium, Inc.
    Inventors: William Brad Matthews, Bruce H. Kwan, Mohammad K. Issa, Neil Barrett, Avinash Gyanendra Mani
  • Patent number: 10037591
    Abstract: An information processing apparatus that processes a job comprises: a programmable circuit unit configured to configure a logic circuit; and a processing unit configured to process the job in accordance with a job processing request, wherein the processing unit selects, in accordance with a state of the programmable circuit unit, whether to process the job by using the programmable circuit unit configured with a logic circuit corresponding to the job, or to process the job without using the programmable circuit unit.
    Type: Grant
    Filed: January 19, 2016
    Date of Patent: July 31, 2018
    Assignee: Canon Kabushiki Kaisha
    Inventors: Masanori Ichikawa, Tomohiro Tachikawa, Shigeki Hasui, Noboru Yokoyama
  • Patent number: 10013735
    Abstract: A method and manufacture for graphics processing are provided in which a first line of raw Bayer data and a second line of raw Bayer data are received. Each two-by-two array of a plurality of non-overlapping two-by-two arrays of the first line of raw Bayer data and the second line of raw Bayer data is mapped as a separate corresponding texel to provide a plurality of texels. At least one operation is performed on at least one of the plurality of texels.
    Type: Grant
    Filed: August 26, 2015
    Date of Patent: July 3, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Jay Chunsup Yun, Liang Li, Vijay Ganugapati, Xujie Zhang
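    The Qualcomm entry above maps each non-overlapping 2x2 block of two raw Bayer lines to one texel. The NumPy sketch below assumes an RGGB pattern and a particular channel order, both of which are assumptions for illustration rather than details taken from the patent.

```python
import numpy as np

def bayer_lines_to_texels(line0, line1):
    """Map each non-overlapping 2x2 Bayer block from two raw lines to one texel."""
    assert line0.shape == line1.shape and line0.shape[0] % 2 == 0
    texels = np.empty((line0.shape[0] // 2, 4), dtype=line0.dtype)
    texels[:, 0] = line0[0::2]   # R        (assumed RGGB layout)
    texels[:, 1] = line0[1::2]   # G, row 0
    texels[:, 2] = line1[0::2]   # G, row 1
    texels[:, 3] = line1[1::2]   # B
    return texels

line0 = np.arange(8, dtype=np.uint16)
line1 = np.arange(8, 16, dtype=np.uint16)
print(bayer_lines_to_texels(line0, line1))
```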
  • Patent number: 9964993
    Abstract: An information handling system includes a primary integrated display platform and a secondary integrated display platform attached via a hinge, and including a passive cooling system, a dynamic thermal management system, and a processor. The information handling system further includes an application window locator system for determining a location of a software application display window running on the information handling system on the primary integrated display platform or the secondary integrated display platform.
    Type: Grant
    Filed: February 28, 2017
    Date of Patent: May 8, 2018
    Assignee: Dell Products, LP
    Inventors: Travis C. North, Charles D. Hood, III, Lawrence E. Knepper, Deeder M. Aurongzeb, Jorge A. Abullarade
  • Patent number: 9940858
    Abstract: A head-mounted display device including first and second display surfaces associated with first and second eyes of the user, a graphics processing unit (GPU), one or more hardware processors, and an adaptive rendering module. The adaptive rendering module is configured to identify a threshold frame time, the threshold frame time representing an upper threshold of time to render frame data by the GPU, receive a first frame time associated with rendering a first frame to the first eye and second eye of the user, the first frame being rendered at a target resolution, determine that the first frame time exceeds the threshold frame time, and lower the resolution below the target resolution for parts of a second frame associated with the first eye of the user while maintaining the resolution at the target resolution for parts of the second frame associated with the second eye of the user.
    Type: Grant
    Filed: May 16, 2017
    Date of Patent: April 10, 2018
    Assignee: Unity IPR ApS
    Inventor: Juho Henri Rikhard Oravainen
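    The Unity entry above lowers the render resolution for one eye when a frame overruns a threshold frame time, while keeping the other eye at the target resolution. Here is a minimal sketch of that decision; the threshold, step size, and scale floor are chosen arbitrarily for illustration.

```python
def adapt_resolution(frame_time_ms, threshold_ms, target_resolution,
                     current_scale=1.0, step=0.1, min_scale=0.5):
    """Reduce the render scale for one eye after a frame misses its budget."""
    if frame_time_ms > threshold_ms:
        current_scale = max(min_scale, current_scale - step)
    scaled = (int(target_resolution[0] * current_scale),
              int(target_resolution[1] * current_scale))
    return current_scale, scaled

scale, first_eye_res = adapt_resolution(13.5, threshold_ms=11.1,
                                         target_resolution=(1440, 1600))
second_eye_res = (1440, 1600)   # the other eye stays at the target resolution
print(scale, first_eye_res, second_eye_res)
```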
  • Patent number: 9864638
    Abstract: Various embodiments are presented herein that may allow an application direct access to graphical processing unit memory. An apparatus and a computer-implemented method may include accessing allocated graphical processing unit memory of a second resource via a link from a first resource. The allocated graphical processing unit memory may be mapped into one or more page tables of a central processing unit. A virtual address of the graphical processing unit memory from the one or more page tables of the central processing unit may be sent to the application.
    Type: Grant
    Filed: June 22, 2012
    Date of Patent: January 9, 2018
    Assignee: INTEL CORPORATION
    Inventor: Michael Apodaca
  • Patent number: 9837027
    Abstract: On receipt of a test enable direction through a test enable terminal, the semiconductor device cyclically outputs given pieces of status information one by one from a test output terminal according to a predetermined procedure, and on receipt of a test disable direction it continues to output, without interruption, the same piece of status information that was being output at that time. By operating the test enable terminal, the semiconductor device cyclically outputs pieces of status information without the need for initial setting and, further, outputs only the desired status information without interruption.
    Type: Grant
    Filed: December 17, 2015
    Date of Patent: December 5, 2017
    Assignee: Synaptics Japan GK
    Inventors: Akihito Kumamoto, Kazuo Nishimae
  • Patent number: 9800409
    Abstract: Embodiments of an invention for cryptographic key generation using a stored input value and a stored count value have been described. In one embodiment, a processor includes non-volatile storage storing an input value and a count value, and logic to generate a cryptographic key based on the stored input value and the stored count value.
    Type: Grant
    Filed: March 3, 2015
    Date of Patent: October 24, 2017
    Assignee: Intel Corporation
    Inventor: Daniel Nemiroff
  • Patent number: 9779526
    Abstract: A method of determining a coverage area of a pixel covered by a scalable path definition for a character is disclosed. An edge direction for each edge of the scalable path definition intersecting the pixel is received. A fragment area is determined for each of the intersecting edges, each of the fragment areas representing an area of the pixel located to a side of a corresponding edge. The side of the corresponding edge is selected according to a direction of the corresponding edge. The coverage area of the pixel is determined based on a sum of the fragment areas, the sum of the fragment areas having a value greater than a total area of the pixel.
    Type: Grant
    Filed: November 25, 2013
    Date of Patent: October 3, 2017
    Assignee: Canon Kabushiki Kaisha
    Inventors: Albert Chang, Michael Gerard McCosker
  • Patent number: 9766918
    Abstract: A hypervisor identifies a physical GPU device accessible by the hypervisor to be assigned to a virtual machine and retrieves a GPU device identifier from the physical GPU device. The hypervisor then determines a host bridge device identifier that corresponds to the retrieved GPU device identifier using a mapping table that maps a plurality of GPU device identifiers to a corresponding plurality of host bridge device identifiers.
    Type: Grant
    Filed: February 23, 2015
    Date of Patent: September 19, 2017
    Assignee: Red Hat Israel, Ltd.
    Inventor: Michael S. Tsirkin
  • Patent number: 9761038
    Abstract: Information to be sent over a network, such as the Ethernet, is packetized by using a graphics processing unit (GPU). The GPU performs packetization of data with much higher throughput than a typical central processing unit (CPU). The packetized data may be output through an Ethernet port, video port, or other port of an electronic system.
    Type: Grant
    Filed: November 4, 2014
    Date of Patent: September 12, 2017
    Assignee: Barco, Inc.
    Inventors: Ian Baxter, Chris S. Byrne
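    The Barco entry above offloads packetization to the GPU. The sequential Python sketch below shows only the chunking and header construction that would be parallelized across GPU threads; the header layout and sizes are placeholders, not the patented format.

```python
def packetize(payload: bytes, mtu: int = 1500, header_size: int = 42):
    """Split a payload into packets whose bodies fit within the MTU."""
    chunk = mtu - header_size
    packets = []
    for seq, offset in enumerate(range(0, len(payload), chunk)):
        body = payload[offset:offset + chunk]
        # Placeholder header: 4-byte sequence number plus 2-byte body length.
        header = seq.to_bytes(4, "big") + len(body).to_bytes(2, "big")
        packets.append(header + body)
    return packets

frame = bytes(5000)
pkts = packetize(frame)
print(len(pkts), [len(p) for p in pkts])
```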
  • Patent number: 9747032
    Abstract: A system and method for uniquely identifying a storage device among an array of storage devices of a storage system is provided. In some embodiments, a storage device of the storage system is identified. The storage device may currently lack a name or may have an invalid name. A shelf identifier of a storage device shelf in which the storage device is installed is determined. A stack identifier associated with a connection of the storage device is also determined. The storage system constructs a device name for the storage device based on the shelf identifier and the stack identifier. In some such embodiments, a bay in which the storage device is installed is determined, and the device name is further based on an identifier of the bay. The device name may include the stack identifier, the shelf identifier, and/or the identifier of the bay.
    Type: Grant
    Filed: May 13, 2014
    Date of Patent: August 29, 2017
    Assignee: NetApp, Inc.
    Inventors: Edward Barron, James Silva
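    The NetApp entry above builds a storage device name from where the drive sits (stack, shelf, bay) rather than from the drive itself. A one-function sketch of that construction; the separator and field order are assumptions for illustration.

```python
def build_device_name(stack_id: int, shelf_id: int, bay: int) -> str:
    """Compose a device name from stack, shelf, and bay identifiers."""
    # The name stays stable even if the physical drive is swapped, because it
    # is derived from the slot's position in the topology.
    return f"{stack_id}.{shelf_id}.{bay}"

print(build_device_name(stack_id=2, shelf_id=7, bay=14))   # "2.7.14"
```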
  • Patent number: 9715454
    Abstract: The disclosed invention enables the operation of an MIMD type, an SIMD type, or coexistence thereof in a multiprocessor system including a plurality of CPUs and reduces power consumption for instruction fetch by CPUs operating in the SIMD type. A plurality of CPUs and a plurality of memories corresponding thereto are provided. When the CPUs fetch instruction codes of different addresses from the corresponding memories, the CPUs operate independently (operation of the MIMD type). On the other hand, when the CPUs issue requests for fetching an instruction code of a same address from the corresponding memories, that is, operate in the SIMD type, the instruction code read from one of the memories by one access is parallelly supplied to the CPUs.
    Type: Grant
    Filed: July 20, 2015
    Date of Patent: July 25, 2017
    Assignee: RENESAS ELECTRONICS CORPORATION
    Inventor: Masami Nakajima
  • Patent number: 9684364
    Abstract: Technologies for data center power management include a number of computing nodes in communication over a network. Each computing node establishes a firmware environment that monitors power consumption of the computing node and, if the power consumption exceeds an optimal level, broadcasts a request to offload tasks to the other nodes. The firmware environment of a receiving computing node traps the request and determines power requirements and/or compute requirements for the tasks based on the request. The firmware environment determines whether to accept the offloaded task based on the requirements and available resources of the computing node. If accepted, the requesting computing node offloads one or more tasks to the receiving nodes. The firmware environment may be established by a manageability engine of the computing node. Power consumption may be monitored on a per-component basis. Compute requirements may include processor requirements or other requirements. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 9, 2014
    Date of Patent: June 20, 2017
    Assignee: Intel Corporation
    Inventors: Igor Ljubuncic, Raphael Sack
  • Patent number: 9680647
    Abstract: Disclosed herein are techniques related to predetermining a token for use in a cryptographic system. A method includes providing a memento, mapping the memento to a candidate token according to a rule that updates a parameter, predetermining the token to be the candidate token if the candidate token meets a test condition according to the rule, identifying a parameter value of the parameter, and providing the memento and the parameter value for future use as an input to re-generate the token. Another method disclosed herein is to re-generate the predetermined token for use in a cryptographic system. The method includes providing a memento associated with the predetermined token, providing a parameter value associated with the predetermined token, and providing a precept for mapping the memento to a candidate token. Further disclosed is instruction code for performing the techniques disclosed herein.
    Type: Grant
    Filed: March 24, 2014
    Date of Patent: June 13, 2017
    Assignee: Infineon Technologies AG
    Inventor: Wieland Fischer
  • Patent number: 9681161
    Abstract: Methods and apparatus for delivering data over extant infrastructure within a content-based network. In one embodiment, the network comprises a cable network, and the infrastructure comprises that nominally used for on-demand (OD) services such as VOD. The method includes the allocation of dedicated end-to-end network resources via a "session" request, as well as data flow control and packet size adaptation, by a data server based on feedback from the requesting/receiving client device (e.g., DSTB) within the network. Mechanisms for retransmission requests for error recovery are also provided.
    Type: Grant
    Filed: April 3, 2015
    Date of Patent: June 13, 2017
    Assignee: TIME WARNER CABLE ENTERPRISES LLC
    Inventors: Tom Gonder, Craig Mahonchak, John Carlucci, Vipul Patel, John Callahan, Jay Thomas
  • Patent number: 9665969
    Abstract: One embodiment of the present invention discloses a method for processing video data within a video data processing path of a processing unit. The video data processing path includes three stages. In the first stage, source operands are extracted from a local register file and are ordered to map efficiently onto the downstream data path. In the second stage, arithmetic operations are performed on the source operands based on video processing instructions to generate intermediate results. In the third stage, additional operations are performed on the intermediate results based on the video processing instructions. In some embodiments, the intermediate results are combined with additional operands retrieved from the local register file.
    Type: Grant
    Filed: May 24, 2010
    Date of Patent: May 30, 2017
    Assignee: NVIDIA Corporation
    Inventors: Shirish Gadre, Robert Jan Schutten, David Conrad Tannenbaum
  • Patent number: 9658815
    Abstract: A display processing device includes: a first display processing unit that divides a display image into a first area and a second area and outputs a first output image obtained by performing display processing on display image data of the first area; a second display processing unit that outputs a second output image obtained by performing the display processing on display image data of the second area; a storage unit that temporarily stores the first and second output images; a memory writing control unit that controls writing of the first and second output images to the storage unit; an output selection unit that reads the first and second output images stored in the storage unit and outputs the read first and second output images to a first display device that displays a display image; and a clock control unit that supplies an operation clock to each element.
    Type: Grant
    Filed: April 20, 2015
    Date of Patent: May 23, 2017
    Assignee: OLYMPUS CORPORATION
    Inventors: Ryusuke Tsuchida, Akira Ueno
  • Patent number: 9639273
    Abstract: An approach is provided for representing content data. The cleanup manager determines one or more data types of content associated with a device. Next, the cleanup manager determines effect information regarding one or more effects on one or more resources of the device with respect to the one or more data types. Then, the cleanup manager presents one or more representations of the one or more data types, wherein the one or more representations are based, at least in part, on the effect information.
    Type: Grant
    Filed: March 17, 2011
    Date of Patent: May 2, 2017
    Assignee: Nokia Technologies Oy
    Inventors: Ari-Pekka Hirvonen, Lauri Rauhanen, Aapo Matias Hasu, Jari Tapio Ijäs, Rit Mishra, Jonatan Hedberg
  • Patent number: 9619008
    Abstract: An information handling system includes a primary integrated display platform and a secondary integrated display platform attached via a hinge, and including a passive cooling system, a dynamic thermal management system, and a processor. The information handling system further includes an application window locator system for determining a location of a software application display window running on the information handling system on the primary integrated display platform or the secondary integrated display platform.
    Type: Grant
    Filed: August 15, 2014
    Date of Patent: April 11, 2017
    Assignee: Dell Products, LP
    Inventors: Travis C. North, Charles D. Hood, III, Lawrence E. Knepper, Deeder M. Aurongzeb, Jorge A. Abullarade
  • Patent number: 9569160
    Abstract: A display processing device includes: a first display processing unit that outputs image data of a first output image obtained by performing display processing on display image data of an odd column of a display image; a second display processing unit that outputs image data of a second output image obtained by performing the display processing on display image data of an even column of the display image; an output selection unit that selects the image data of the first output image or the image data of the second output image and outputs the selected image data to a first display device that displays a display image; and a clock control unit that supplies an operation clock required when the respective elements operate.
    Type: Grant
    Filed: April 10, 2015
    Date of Patent: February 14, 2017
    Assignee: OLYMPUS CORPORATION
    Inventors: Ryusuke Tsuchida, Akira Ueno
  • Patent number: 9563778
    Abstract: A method is provided for managing public and private data input by a device such as a mobile handset, a personal digital assistant, a personal computer, or an electronic tablet. The method provides for separating public and private data such that public data can be operated on by the open operating system, while private data is encrypted while in the open operating environment but can be operated on and used when received by the secure operating environment.
    Type: Grant
    Filed: October 26, 2012
    Date of Patent: February 7, 2017
    Assignee: ST-Ericsson SA
    Inventors: Herve Sibert, Nicolas Anquet
  • Patent number: 9564108
    Abstract: A method for rendering video frames by a computing device having a software stack with an application layer and a kernel layer comprises various steps. First, a system reference time is initialized. A triggering of an interrupt signal in the kernel layer is waited for. Next, it is determined whether to update the system reference time as a function of a render function from the application layer. A next video frame in the kernel layer is rendered by the computing device as a function of the determined system reference time and the next video frame. The steps after the initializing step and starting at the waiting step are recursively performed.
    Type: Grant
    Filed: October 20, 2014
    Date of Patent: February 7, 2017
    Assignee: Amlogic Co., Limited
    Inventor: Ting Yao
  • Patent number: 9405550
    Abstract: An apparatus and method of submitting hardware accelerator engine commands over an interconnect link such as a PCI Express (PCIe) link. In one embodiment, the mechanism is implemented inside a PCIe Host Bridge which is integrated into a host IC or chipset. The mechanism provides an interface compatible with other integrated accelerators thereby eliminating the overhead of maintaining different programming models for local and remote accelerators. Co-processor requests issued by threads requesting a service (client threads) targeting a remote accelerator are queued and sent to a PCIe adapter and remote accelerator engine over a PCIe link. The remote accelerator engine performs the requested processing task, delivers results back to host memory and the PCIe Host Bridge performs a co-processor request completion sequence (status update, write to flag, interrupt) included in the co-processor command.
    Type: Grant
    Filed: March 31, 2011
    Date of Patent: August 2, 2016
    Assignee: International Business Machines Corporation
    Inventors: Giora Biran, Ilya Granovsky
  • Patent number: 9256915
    Abstract: The techniques are generally related to management of buffers with a management unit that resides within an integrated circuit that includes a graphics processing unit (GPU). The management unit may ensure proper access to the buffers by the programmable compute units of the GPU to allow the GPU to execute kernels on the programmable compute units in a pipeline fashion.
    Type: Grant
    Filed: January 23, 2013
    Date of Patent: February 9, 2016
    Assignee: QUALCOMM Incorporated
    Inventors: Alexei V. Bourd, Vineet Goel
  • Patent number: 9229526
    Abstract: A dedicated application-specific integrated circuit (ASIC) is described that can be integrated into a mobile device (e.g., a mobile phone, tablet computer). The dedicated ASIC can provide an embedded low-power micro-controller to offload machine vision processing and other image processing from an application processor (AP) of the mobile device. Effectively, the offloading of image processing can enable the mobile device to save battery life and improve performance by utilizing lower speed buses and lower power consumption than would otherwise be consumed if the AP were to be utilized. In various embodiments, the ASIC can be used to either connect a single camera or multiple synchronized cameras depending on the application.
    Type: Grant
    Filed: September 10, 2012
    Date of Patent: January 5, 2016
    Assignee: AMAZON TECHNOLOGIES, INC.
    Inventors: Smita Neglur, Leo B. Baldwin, Aleksandar Pance
  • Patent number: 9177412
    Abstract: Techniques for multiple pass rendering include receiving vertex data for one or more objects to be enhanced. Parameters in a display list may be determined using the vertex data. Multiple pixel rendering passes may be run using the parameters in the display list. An enhanced depiction of the one or more objects may be rendered based on the multiple pixel rendering passes. Other embodiments are described and claimed.
    Type: Grant
    Filed: December 14, 2011
    Date of Patent: November 3, 2015
    Assignee: INTEL CORPORATION
    Inventors: Xianchao Xu, Lili Gong
  • Patent number: 9131021
    Abstract: Illustrative embodiments disclose sharing an area of a computer system screen. A first computer system configures a sharing session for sharing a region of the screen with a second computer system. The first computer system assesses information on the performance of the sharing session, determines from the information a minimum size of the region based on the assessment, and then selects the region to share based on the assessment and a designation by a user.
    Type: Grant
    Filed: November 16, 2013
    Date of Patent: September 8, 2015
    Assignee: International Business Machines Corporation
    Inventors: Kulvir S. Bhogal, Gregory J. Boss, Rick A. Hamilton, II, Anne R. Sand
  • Patent number: 9128866
    Abstract: Systems and methods may provide for using audio output device driver logic to maintain one or more states of an audio accelerator in a memory store, detect a suspend event, and deactivate the audio accelerator in response to the suspend event. In addition, firmware logic of the audio accelerator may be used to detect a resume event with respect to the audio output accelerator, and retrieve one or more states of the audio accelerator directly from the memory store in response to the resume. Thus, the retrieval of the one or more states can bypass the driver logic.
    Type: Grant
    Filed: December 30, 2011
    Date of Patent: September 8, 2015
    Assignee: Intel Corporation
    Inventors: Shoumeng Yan, Xiaocheng Zhou, Lomesh Agarwal
  • Patent number: 9124657
    Abstract: Illustrative embodiments disclose sharing an area of a computer system screen. A first computer system configures a sharing session for sharing a region of the screen with a second computer system. The first computer system assesses information on the performance of the sharing session, determines from the information a minimum size of the region based on the assessment, and then selects the region to share based on the assessment and a designation by a user.
    Type: Grant
    Filed: December 14, 2011
    Date of Patent: September 1, 2015
    Assignee: International Business Machines Corporation
    Inventors: Kulvir S. Bhogal, Gregory J. Boss, Rick A. Hamilton, II, Anne R. Sand
  • Patent number: 9105208
    Abstract: A method and apparatus for graphic processing using multi-threading includes at least one context task, mediation task, and control task executed by a processor. The at least one context task sequentially generates graphic commands. The mediation task mediates processing of the graphic commands. The mediation task may process a particular graphic command on behalf of the at least one context task, and change a processing order of the graphic commands. The control task transmits the graphic commands to graphic hardware.
    Type: Grant
    Filed: November 6, 2012
    Date of Patent: August 11, 2015
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Sung Jin Son, Sang Oak Woo, Seok Yoon Jung, Vladislav Y. Aranov, Sergey Y. Belyaev, Pavel O. Smirnov
  • Patent number: 9087450
    Abstract: A preexisting FMS system may be upgraded to increase its functionality while still taking advantage of certain components of the legacy system previously provided on the aircraft and replacing other preexisting components with different components for enhancing the functionality of the FMS system. The preexisting IRU, CADC, DME receiver and DFGC in the upgraded FMS system are in communication with the legacy AFMC but, instead of employing the legacy EFIS which existed in the preexisting FMS system, the EFIS is replaced by a data concentrator unit as well as the display control panel and integrated flat panel display, and a GPS receiver. The upgraded FMS system is capable of such increased functionality as increased navigation database storage capacity, RNP, VNAV and RNAV capability utilizing a GPS based navigation solution, and RTA capability, while still enabling the legacy AFMC to exploit its aircraft performance capabilities throughout the flight.
    Type: Grant
    Filed: May 17, 2011
    Date of Patent: July 21, 2015
    Assignee: Innovative Solutions and Support, Inc.
    Inventors: Geoffrey S. M. Hedrick, Shahram Askarpour, Markus Knopf
  • Patent number: 9063713
    Abstract: Methods and apparatuses are disclosed that may provide graphics controllers with increased thermal granularity. The graphics controller may comprise a display memory, at least one display engine coupled to the display memory, and at least one execution unit coupled to the display memory, where the at least one execution unit may begin an idle period that varies based upon a thermal event.
    Type: Grant
    Filed: October 28, 2008
    Date of Patent: June 23, 2015
    Assignee: Apple Inc.
    Inventor: Anthony Graham Sumpter
  • Patent number: 9058676
    Abstract: In an embodiment, a display pipe is configured to composite one or more frames of images and/or video sequences to generate output frames for display. Additionally, the display pipe may be configured to compress an output frame and write the compressed frame to memory responsive to detecting static content in the output frames. The display pipe may also be configured to read the compressed frame from memory for display instead of reading the frames for compositing and display. In some embodiments, the display pipe may include an idle screen detect circuit configured to monitor the operation of the display pipe and/or the output frames to detect the static content.
    Type: Grant
    Filed: March 26, 2013
    Date of Patent: June 16, 2015
    Assignee: Apple Inc.
    Inventors: Brijesh Tripathi, Peter F. Holland, Albert C. Kuo
  • Patent number: 9035956
    Abstract: In an embodiment, a processor that includes multiple cores may implement a power/performance-efficient stop mechanism for power gating. One or more first cores of the multiple cores may have a higher latency stop than one or more second cores of the multiple cores. The power control mechanism may permit continued dispatching of work to the second cores until the first cores have stopped. The power control mechanism may prevent dispatch of additional work once the first cores have stopped, and may power gate the processor in response to the stopping of the second cores. Stopping a core may include one or more of: requesting a context switch from the core or preventing additional work from being dispatched to the core and permitting current work to complete normally. In an embodiment, the processor may be a graphics processing unit (GPU).
    Type: Grant
    Filed: May 8, 2012
    Date of Patent: May 19, 2015
    Assignee: Apple Inc.
    Inventors: Richard W. Schreyer, Jason P. Jane, Michael J. E. Swift, Gokhan Avkarogullari, Luc R. Semeria
  • Patent number: 9037654
    Abstract: The present invention discloses a method and system for transmitting a document over a network. A document sender converts a sharing document to be transmitted into a GDI (Graph Device Interface) document by performing virtual printing. The document receiver receives the graph device interface document sent from the document sender through the network. The document receiver restores the received GDI document; the contents of the restored GDI document are the same as those of the sharing document. The present invention also provides a system, a virtual printer apparatus, and a restoration apparatus. Using the method, system, and apparatus of the present invention, the transmission of the document is not restricted by the application.
    Type: Grant
    Filed: December 15, 2005
    Date of Patent: May 19, 2015
    Assignee: HUAWEI TECHNOLOGIES CO., LTD.
    Inventor: Haijun Wu
  • Publication number: 20150116334
    Abstract: A method for the selective utilization of graphics processing unit (GPU) acceleration of database queries in database management is provided. The method includes receiving a database query in a database management system executing in memory of a host computing system. The method also includes estimating a time to complete processing of one or more operations of the database query using GPU accelerated computing in a GPU and also a time to complete processing of the operations using central processor unit (CPU) sequential computing of a CPU. Finally, the method includes routing the operations for processing using GPU accelerated computing if the estimated time to complete processing of the operations using GPU accelerated computing is less than an estimated time to complete processing of the operations using CPU sequential computing, but otherwise routing the operations for processing using CPU sequential computing.
    Type: Application
    Filed: March 28, 2014
    Publication date: April 30, 2015
    Applicant: International Business Machines Corporation
    Inventor: Norio Nagai
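    The IBM publication above routes a database query's operations to the GPU only when the estimated GPU completion time beats the estimated CPU time. Below is a minimal sketch of that routing decision with toy cost models; the fixed transfer cost and per-row costs are invented for illustration.

```python
def route_query(operations, estimate_gpu_ms, estimate_cpu_ms):
    """Send the operations to the GPU only when it is estimated to be faster."""
    return "gpu" if estimate_gpu_ms(operations) < estimate_cpu_ms(operations) else "cpu"

# Toy cost models: the GPU pays a fixed transfer overhead but processes rows faster.
estimate_gpu_ms = lambda ops: 5.0 + 0.001 * ops["rows"]
estimate_cpu_ms = lambda ops: 0.01 * ops["rows"]

print(route_query({"rows": 100}, estimate_gpu_ms, estimate_cpu_ms))      # cpu
print(route_query({"rows": 10_000}, estimate_gpu_ms, estimate_cpu_ms))   # gpu
```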
  • Patent number: 9019283
    Abstract: A software engine for decomposing work to be done into tasks, and distributing the tasks to multiple, independent CPUs for execution is described. The engine utilizes dynamic code generation, with run-time specialization of variables, to achieve high performance. Problems are decomposed according to methods that enhance parallel CPU operation, and provide better opportunities for specialization and optimization of dynamically generated code. A specific application of this engine, a software three dimensional (3D) graphical image renderer, is described.
    Type: Grant
    Filed: August 29, 2012
    Date of Patent: April 28, 2015
    Assignee: Transgaming Inc.
    Inventors: Gavriel State, Nicolas Capens, Luther Johnson