Address Translation (e.g., Between Virtual And Physical Addresses) Patents (Class 345/568)
  • Patent number: 12058094
    Abstract: A method is described that enables communication between two disjoined networks with overlapping IP address ranges. The method features receiving a first address mapping query message from a first intermediary device and returning a first private IP address map. The first private IP address map includes at least a first plurality of private IP addresses each uniquely assigned to a computing device residing in the first network. In response to a triggering event, recovering a second private IP address map by a second intermediary device. Herein, the second private IP address map includes at least a second plurality of private IP addresses each uniquely assigned to a computing device residing in the second network.
    Type: Grant
    Filed: October 18, 2021
    Date of Patent: August 6, 2024
    Assignee: Aviatrix Systems, Inc.
    Inventors: Xiaobo Sherry Wei, Pankaj Manglik, Sunil Kishen
  • Patent number: 12045617
    Abstract: Software instructions are executed on a processor within a computer system to configure a streaming engine with stream parameters to define a multidimensional array. The stream parameters define a size for each dimension of the multidimensional array and a specified width for two selected dimensions of the array. Data is fetched from a memory coupled to the streaming engine responsive to the stream parameters. A stream of vectors is formed for the multidimensional array responsive to the stream parameters from the data fetched from memory. When either selected dimension in the stream of vectors exceeds a respective specified width, the streaming engine inserts null elements into each portion of a respective vector for the selected dimension that exceeds the specified width in the stream of vectors. Stream vectors that are completely null are formed by the streaming engine without accessing the system memory for respective data.
    Type: Grant
    Filed: February 14, 2022
    Date of Patent: July 23, 2024
    Assignee: Texas Instruments Incorporated
    Inventors: William Franklin Leven, Asheesh Bhardwaj, Son Hung Tran, Timothy David Anderson
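    A minimal Python sketch of the padding behaviour this abstract describes, not Texas Instruments' streaming engine: lanes of a stream vector that fall beyond the specified width of a dimension become null elements instead of being fetched, and fully-null vectors are formed without touching the data at all. The vector length, the width, and the use of None as the null element are illustrative assumptions.

      def form_stream_vectors(row, vector_len, specified_width):
          """Split one dimension into vectors of vector_len lanes, nulling every
          lane whose index is at or beyond specified_width."""
          vectors = []
          for start in range(0, len(row), vector_len):
              vector = []
              for lane in range(vector_len):
                  idx = start + lane
                  if idx < specified_width and idx < len(row):
                      vector.append(row[idx])   # real data fetched from "memory"
                  else:
                      vector.append(None)       # null element, no memory access
              vectors.append(vector)
          return vectors

      row = list(range(10))                     # one dimension of the array
      print(form_stream_vectors(row, vector_len=4, specified_width=6))
      # [[0, 1, 2, 3], [4, 5, None, None], [None, None, None, None]]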
  • Patent number: 11989133
    Abstract: Methods, systems, and devices for logical-to-physical (L2P) mapping compression techniques are described. A memory system may use an L2P mapping to map logical addresses to physical addresses of the memory system. The L2P mapping may be a hierarchical L2P mapping divided into multiple levels or subsets that are used to identify a physical address corresponding to a logical address. The memory system may write data to a set of physical addresses that are consecutively indexed and may set a flag in an entry of a second-level of the L2P mapping (e.g., of a three-level L2P mapping) to indicate that the entry is associated with a starting physical address of the consecutively indexed physical addresses. The memory system may subsequently read the data starting at the starting physical address based on the flag (e.g., bypassing reading an entry of a lowest-level of the L2P mapping to determine the physical address).
    Type: Grant
    Filed: March 16, 2021
    Date of Patent: May 21, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Xing Wang, Liping Xu, Xu Zhang, Zhen Gu
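    A minimal Python sketch of the flagged second-level lookup this abstract describes, not Micron's on-device layout: when a second-level entry is marked as covering consecutively indexed physical addresses, the lowest-level table is bypassed. The entry fields, table sizes, and addresses are illustrative assumptions.

      ENTRIES_PER_L3 = 16   # logical addresses covered by one lowest-level table

      def translate(level2, level3, logical_addr):
          l2_index, offset = divmod(logical_addr, ENTRIES_PER_L3)
          entry = level2[l2_index]
          if entry["consecutive"]:
              # Flagged entry: the run starts at start_pa, so no lowest-level read is needed.
              return entry["start_pa"] + offset
          return level3[entry["l3_table"]][offset]   # fall back to the lowest level

      level2 = [
          {"consecutive": True,  "start_pa": 0x4000, "l3_table": None},
          {"consecutive": False, "start_pa": None,   "l3_table": 0},
      ]
      level3 = [[0x9000 + 3 * i for i in range(ENTRIES_PER_L3)]]   # scattered PAs

      print(hex(translate(level2, level3, 5)))    # 0x4005, lowest level bypassed
      print(hex(translate(level2, level3, 18)))   # 0x9006, via the lowest level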
  • Patent number: 11977499
    Abstract: Generally disclosed herein is a hardware/software interface for asynchronous data movement between an off-core memory and a core-local memory, referred to as “stream transfers”, and a stream ordering model. The stream transfers allow software to more efficiently express common data-movement patterns, specifically ones seen in sparse workloads. Direct stream instructions that belong to a stream are processed in-order. For indirect stream instructions, offset elements in an offset list are processed in order. A sync flag is updated to indicate monotonic incremental progress for the stream.
    Type: Grant
    Filed: April 18, 2022
    Date of Patent: May 7, 2024
    Assignee: Google LLC
    Inventors: Rahul Nagarajan, Arpith Chacko Jacob, Suvinay Subramanian, Hema Hariharan
  • Patent number: 11768781
    Abstract: An apparatus and method are described for implementing memory management in a graphics processing system. For example, one embodiment of an apparatus comprises: a first plurality of graphics processing resources to execute graphics commands and process graphics data; a first memory management unit (MMU) to communicatively couple the first plurality of graphics processing resources to a system-level MMU to access a system memory; a second plurality of graphics processing resources to execute graphics commands and process graphics data; a second MMU to communicatively couple the second plurality of graphics processing resources to the first MMU; wherein the first MMU is configured as a master MMU having a direct connection to the system-level MMU and the second MMU comprises a slave MMU configured to send memory transactions to the first MMU, the first MMU either servicing a memory transaction or sending the memory transaction to the system-level MMU on behalf of the second MMU.
    Type: Grant
    Filed: May 27, 2022
    Date of Patent: September 26, 2023
    Assignee: Intel Corporation
    Inventors: Niranjan L. Cooray, Abhishek R. Appu, Altug Koker, Joydeep Ray, Balaji Vembu, Pattabhiraman K, David Puffer, David J. Cowperthwaite, Rajesh M. Sankaran, Satyeshwar Singh, Sameer Kp, Ankur N. Shah, Kun Tian
  • Patent number: 11748130
    Abstract: Graphics processing systems and methods are described. A graphics processing apparatus may comprise one or more graphics processing engines, a memory, a memory management unit (MMU) including a GPU second level page table and GPU dirty bit tracking, and a provisioning agent to receive a request from a virtual machine monitor (VMM) to provision a subcluster of graphics processing apparatuses, the subcluster including a plurality of graphics processing engines from a plurality of graphics processing apparatuses connected using a scale-up fabric, provision the scale-up fabric to route data within the subcluster of graphics processing apparatuses, and provision a plurality of resources on the graphics processing apparatus for the subcluster based on the request from the VMM.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: September 5, 2023
    Assignee: Intel Corporation
    Inventors: Rajesh Sankaran, Bret Toll, William Rash, Subramaniam Maiyuran, Gang Chen, Varghese George
  • Patent number: 11681622
    Abstract: Described herein is a memory architecture that is configured to dynamically determine an address encoding to use to encode multi-dimensional data such as multi-coordinate data in a manner that provides a coordinate bias corresponding to a current memory access pattern. The address encoding may be dynamically generated in response to receiving a memory access request or may be selected from a set of preconfigured address encodings. The dynamically generated or selected address encoding may apply an interleaving technique to bit representations of coordinate values to obtain an encoded memory address. The interleaving technique may interleave a greater number of bits from the bit representation corresponding to the coordinate direction in which a coordinate bias is desired than from bit representations corresponding to other coordinate directions.
    Type: Grant
    Filed: December 14, 2021
    Date of Patent: June 20, 2023
    Assignee: Pony AI Inc.
    Inventors: Yubo Zhang, Pingfan Meng
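    A minimal Python sketch of biased bit interleaving in the spirit of this abstract, not Pony AI's encoder: more bits are taken from the coordinate that should receive the locality bias than from the other coordinate. The 2:1 ratio, 8-bit coordinates, and LSB-first order are illustrative assumptions.

      def encode(x, y, bits=8, x_bias=2):
          """Interleave x_bias bits of x with one bit of y, least significant first."""
          addr, out_pos, xi, yi = 0, 0, 0, 0
          while xi < bits or yi < bits:
              for _ in range(x_bias):               # take x_bias bits from x...
                  if xi < bits:
                      addr |= ((x >> xi) & 1) << out_pos
                      xi += 1
                      out_pos += 1
              if yi < bits:                         # ...then one bit from y
                  addr |= ((y >> yi) & 1) << out_pos
                  yi += 1
                  out_pos += 1
          return addr

      # Neighbouring x values land on consecutive addresses; neighbouring y values
      # are spread apart, so accesses that sweep along x stay local in memory.
      print([encode(x, 3) for x in range(4)])   # [36, 37, 38, 39]
      print([encode(3, y) for y in range(4)])   # [3, 7, 35, 39]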
  • Patent number: 11673469
    Abstract: A method that simulates effects of displaying assets using a graphics processing unit (GPU) is provided. The method includes extracting preprocessed assets, the assets having been preprocessed offline to provide simulated GPU graphical effects, isolating dynamic assets from static assets from the preprocessed assets, calculating a bounding-box for each of the dynamic assets, alpha-blending the static assets, alpha-blending the dynamic assets, and rendering the static assets and the dynamic assets to separate display layers at different frequencies.
    Type: Grant
    Filed: April 20, 2021
    Date of Patent: June 13, 2023
    Assignee: Texas Instruments Incorporated
    Inventors: Subhajit Paul, Nikhil Nandkishor Devshatwar, Santhana Bharathi N, Shravan Karthik
  • Patent number: 11659270
    Abstract: An imaging device and a horizontal direction detection method capable of detecting a horizontal angle of a camera with high accuracy in a simple configuration are provided. The imaging device includes an imaging unit configured to obtain image data by photographing a predetermined subject, an image rotation unit configured to cause a display image based on the image data to be rotated on a display plane step by step, a count unit configured to count the number of pixels of a specific color included in the display image in a scanning line direction within the display plane and obtain a count value for each of rotated display images, and a determination unit configured to determine a horizontal direction of a photographing angle of the imaging unit based on the count value for each of the rotated display images.
    Type: Grant
    Filed: February 21, 2022
    Date of Patent: May 23, 2023
    Assignee: LAPIS Semiconductor Co., Ltd.
    Inventor: Yuki Imatoh
  • Patent number: 11574381
    Abstract: Embodiments are generally directed to methods and apparatuses for buffer sharing. An embodiment of a method comprises: receiving a plurality of graphics data comprising a first graphics data, each of the plurality of graphics data mapped to a corresponding buffer in a Graphics Processing Unit (GPU) memory, wherein the first graphics data is mapped to a first buffer in the GPU memory; receiving a second graphics data mapped to a second buffer in the GPU memory; comparing the first buffer mapped by the first graphics data with the second buffer mapped by the second graphics data; and remapping the second graphics data to the first buffer if the first buffer is identical with the second buffer.
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: February 7, 2023
    Assignee: Intel Corporation
    Inventors: Zhifang Long, Yejun Guo, Jiang Ji, Yu Wang, Wenju He
  • Patent number: 11475973
    Abstract: A system and method for virtually addressing an array of accelerator tiles of a mixed-signal integrated circuit includes testing each of a plurality of distinct matrix multiply accelerator (MMA) tiles of a grid of MMA tiles, the grid of MMA tiles being defined by the plurality of distinct MMA tiles arranged in a plurality of rows and a plurality of columns along an integrated circuit, each of the plurality of distinct MMA tiles within the grid of MMA tiles having a distinct physical address on the integrated circuit; identifying one or more defective MMA tiles within the grid of MMA tiles based on the testing; and configuring the grid of MMA tiles with a plurality of virtual addresses for routing data to or routing data from one or more non-defective MMA tiles of the grid of MMA tiles based on identifying the one or more defective MMA tiles.
    Type: Grant
    Filed: May 26, 2021
    Date of Patent: October 18, 2022
    Assignee: Mythic, Inc.
    Inventors: Malav Parikh, Zainab Nasreen Zaidi, Sergio Schuler, Natarajan Seshan, Raul A. Garibay, Jr., David Fick
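    A minimal Python sketch of the virtual-addressing step this abstract describes, not Mythic's routing fabric: after testing, defective tiles are skipped when virtual addresses are assigned, so data routed by virtual address only reaches working tiles. The 3x3 grid and the defect list are illustrative assumptions.

      ROWS, COLS = 3, 3
      defective = {(0, 2), (1, 1)}     # physical (row, col) addresses that failed test

      def build_virtual_map(rows, cols, defective_tiles):
          virtual_to_physical = {}
          next_virtual = 0
          for r in range(rows):
              for c in range(cols):
                  if (r, c) in defective_tiles:
                      continue                       # defective tile gets no virtual address
                  virtual_to_physical[next_virtual] = (r, c)
                  next_virtual += 1
          return virtual_to_physical

      vmap = build_virtual_map(ROWS, COLS, defective)
      print(vmap[2])   # (1, 0): the third working tile; (0, 2) is never addressed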
  • Patent number: 11425559
    Abstract: Embodiments of a data transmission network device and methods of operating the same are disclosed. In one embodiment, the data transmission network device includes an encryption module and an RF transceiver. The encryption module is configured to receive data and encrypt the data so as to generate first encrypted data. The encryption module then encrypts the first encrypted data so as to generate second encrypted data. The RF transceiver is configured to generate an RF signal such that the second encrypted data is modulated onto the RF signal. By providing the double encryption in a single device, the data transmission network device has greater spectral efficiency and is a much more cost-effective solution than what is currently provided in the market. Furthermore, the encryption module can be disabled so that unsecure data can also be transmitted via the data transmission network device.
    Type: Grant
    Filed: May 15, 2019
    Date of Patent: August 23, 2022
    Inventors: Claude Church, Patrick L. Geddes
  • Patent number: 11417073
    Abstract: Systems, methods, devices, and non-transitory media of the various embodiments enable generating at least one hierarchical-level-of-detail (LOD) data structure in order to visualize and traverse measurement data associated with a three-dimensional (3D) model. In various embodiments, generating at least one hierarchical LOD data structure may include establishing a background grid comprising a mathematical grid structure defined in a common coordinate system, building a layout comprising an intermediary data structure, computing measurement data for each tile based at least in part on the height data samples, and storing at least a portion of the computed measurement data for each tile in a metadata file.
    Type: Grant
    Filed: July 15, 2021
    Date of Patent: August 16, 2022
    Assignee: CESIUM GS, INC.
    Inventors: Peter Gagliardi, Joshua Lawrence, Sean Lilley, Eli Bogomolny, Ian Lilley, Zakiuddin Shehzan Ayub Mohammed, Patrick Cozzi
  • Patent number: 11042961
    Abstract: A computer system and related computer-implemented methods are disclosed. The system is programmed to simplify one or more digital maps for a geographical region by reducing their sizes while maintaining their physical appearance to the human eye.
    Type: Grant
    Filed: June 10, 2020
    Date of Patent: June 22, 2021
    Assignee: RISK MANAGEMENT SOLUTIONS, INC.
    Inventors: Julien Brown, Valli Gadiyaram Venkata, Shruthi Bhat
  • Patent number: 10884829
    Abstract: An improved buffer for networking devices and other computing devices comprises multiple memory instances, each having a distinct set of entries. Transport data units (“TDUs”) are divided into storage data units (“SDUs”), and each SDU is stored within a separate entry of a separate memory instance in a logical bank. A grid of the memory instances is organized into overlapping horizontal logical banks and vertical logical banks. A memory instance may be shared between horizontal and vertical logical banks. When overlapping logical banks are accessed concurrently, the memory instance that they share may be inaccessible to one of the logical banks. Accordingly, when writing a TDU, a parity SDU may be generated for the TDU and also stored within its logical bank. The TDU's content within the shared memory instance may then be reconstructed from the parity SDU without having to read the shared memory instance.
    Type: Grant
    Filed: May 5, 2020
    Date of Patent: January 5, 2021
    Assignee: Innovium, Inc.
    Inventor: Mohammad Kamel Issa
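    A minimal Python sketch of the parity idea this abstract describes, not Innovium's buffer format: the parity SDU is the XOR of a logical bank's SDUs, so the SDU held in a memory instance that is temporarily unreadable can be rebuilt from the remaining SDUs and the parity. Fixed 4-byte SDUs are an illustrative assumption.

      def xor_bytes(a, b):
          return bytes(x ^ y for x, y in zip(a, b))

      def make_parity(sdus):
          parity = bytes(len(sdus[0]))
          for sdu in sdus:
              parity = xor_bytes(parity, sdu)
          return parity

      sdus = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]
      parity = make_parity(sdus)               # written alongside the TDU

      # The instance holding sdus[1] is busy serving the overlapping logical bank:
      recovered = make_parity([sdus[0], sdus[2], parity])
      assert recovered == sdus[1]
      print(recovered.hex())                   # 10203040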
  • Patent number: 10607374
    Abstract: The present disclosure describes one or more embodiments of a selective raster image transformation system that quickly and efficiently generates enhanced digital images by selectively transforming edges in raster images to vector drawing segments. In particular, the selective raster image transformation system efficiently utilizes a content-aware, selective approach to identify, display, and transform selected edges of a raster image to a vector drawing segment based on sparse user interactions. In addition, the selective raster image transformation system employs a prioritized pixel line stepping algorithm to generate and provide pixel lines for selective edges of a raster image in real time, even on portable client devices.
    Type: Grant
    Filed: June 1, 2018
    Date of Patent: March 31, 2020
    Assignee: Adobe Inc.
    Inventor: John Peterson
  • Patent number: 10380030
    Abstract: A data processing apparatus comprising: at least one initiator device for issuing transactions, a hierarchical memory system comprising a plurality of caches and a memory, and memory access control circuitry. The initiator device identifies storage locations using virtual addresses and the memory system stores data using physical addresses; the memory access control circuitry is configured to control virtual address to physical address translations. The plurality of caches comprises a first cache and a second cache. The first cache is configured to store a plurality of address translations of virtual to physical addresses that the initiator device has requested. The second cache is configured to store a plurality of address translations of virtual to physical addresses that it is predicted that the initiator device will subsequently request. The first and second caches are arranged in parallel with each other such that the first and second caches can be accessed during a same access cycle.
    Type: Grant
    Filed: December 5, 2012
    Date of Patent: August 13, 2019
    Assignee: ARM Limited
    Inventor: Nitin Isloorkar
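    A minimal Python sketch of probing the two parallel translation caches this abstract describes, not ARM's microarchitecture: one cache holds translations the initiator has already requested, the other holds translations prefetched on a prediction (here, simply the next sequential page). The page size, dictionary caches, and prediction rule are illustrative assumptions.

      PAGE = 0x1000

      class DualTranslationCache:
          def __init__(self, page_table):
              self.page_table = page_table   # authoritative VA -> PA page translations
              self.requested = {}            # first cache: translations already requested
              self.predicted = {}            # second cache: predicted translations

          def translate(self, vaddr):
              vpage = vaddr & ~(PAGE - 1)
              # Both caches are probed for the same request (conceptually in parallel).
              entry = self.requested.get(vpage)
              if entry is None:
                  entry = self.predicted.get(vpage)
              if entry is None:
                  entry = self.page_table[vpage]    # miss in both: walk the page table
              self.requested[vpage] = entry
              next_page = vpage + PAGE              # predict the next sequential page
              if next_page in self.page_table:
                  self.predicted[next_page] = self.page_table[next_page]
              return entry + (vaddr & (PAGE - 1))

      table = {0x0000: 0x80000, 0x1000: 0x90000, 0x2000: 0xA0000}
      tlb = DualTranslationCache(table)
      print(hex(tlb.translate(0x0010)))   # 0x80010; 0x1000's translation is predicted
      print(hex(tlb.translate(0x1020)))   # 0x90020, served from the predicted cache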
  • Patent number: 10366012
    Abstract: A method of GPU virtualization comprises the hypervisor allocating an identifier to each virtual machine (or operating system running on a VM); this identifier is then used to tag every transaction deriving from a GPU workload operating within a given VM context (i.e. every GPU transaction on the system bus which interconnects the CPU, GPU and other peripherals). Additionally, dedicated portions of a memory resource (which may be GPU registers or RAM) are provided for each VM, and while each VM can only see its allocated portion of the memory, a microprocessor within the GPU can see all of the memory. Access control is achieved using root memory management units which are configured by the hypervisor and which map guest physical addresses to actual memory addresses based on the identifier associated with the transaction.
    Type: Grant
    Filed: December 2, 2016
    Date of Patent: July 30, 2019
    Assignee: Imagination Technologies Limited
    Inventors: Dave Roberts, Mario Sopena Novales, John W. Howson
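    A minimal Python sketch of the access-control idea this abstract describes, not Imagination's hardware: each transaction carries the identifier of the VM it came from, and a root MMU configured by the hypervisor translates guest physical addresses through the page table selected by that identifier. The table contents and 4 KB pages are illustrative assumptions.

      class RootMMU:
          def __init__(self):
              self.per_vm_tables = {}          # VM id -> {guest PA page: real PA page}

          def configure(self, vm_id, table):   # performed by the hypervisor only
              self.per_vm_tables[vm_id] = dict(table)

          def translate(self, vm_id, guest_pa, page=0x1000):
              gpage, offset = divmod(guest_pa, page)
              table = self.per_vm_tables[vm_id]
              if gpage not in table:
                  raise PermissionError(f"VM {vm_id} has no mapping for page {gpage:#x}")
              return table[gpage] * page + offset

      mmu = RootMMU()
      mmu.configure(vm_id=1, table={0x10: 0x200})   # VM 1: guest page 0x10 -> real 0x200
      mmu.configure(vm_id=2, table={0x10: 0x300})   # same guest page, different real page

      print(hex(mmu.translate(1, 0x10004)))   # 0x200004
      print(hex(mmu.translate(2, 0x10004)))   # 0x300004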
  • Patent number: 10127708
    Abstract: The invention notably relates to a computer-implemented method for managing a plurality of graphic cards, a graphic card comprising one or more graphic processing units, comprising loading a scene in a Render Engine, the scene comprising at least one graphic data to be used for rendering a view of the scene; creating an abstract graphic resource for a graphic resource of the at least one graphic data, the abstract graphic resource storing an identifier of a graphic resource for at least one graphic card; copying, on the said at least one graphic card, the said graphic resource of the at least one graphic data; and providing the Render Engine with access to the abstract graphic resource for handling the said graphic resource.
    Type: Grant
    Filed: December 29, 2016
    Date of Patent: November 13, 2018
    Assignee: DASSAULT SYSTEMES
    Inventors: Victor Bachet, Nicolas Jean, Nicolas Colombe
  • Patent number: 10116519
    Abstract: Aspects are described for managing a network of things and applications that are distributed, such as geographically or globally distributed. One exemplary aspect of the system and method is based on a centralized cloud-based processing unit that implements a Rule Processing Application (RPA) and compiles a set of User Rules. The execution of the User Rules is distributed across a number of independent Decision Making Algorithms (DMA). Each DMA can be implemented in one or more devices (e.g., servers, gateways, processing units, etc.) distributed across the network such as a worldwide network. One exemplary method also utilizes gateways within Local Area Networks (LANs) with the characteristics that (i) each gateway communicates with a centralized cloud-based processing unit and (ii) each gateway can respond to commands from the centralized cloud-based processing unit to alter the gateway's functionality and implement a DMA (in whole or in part).
    Type: Grant
    Filed: March 28, 2016
    Date of Patent: October 30, 2018
    Assignee: YODIWO AB
    Inventors: George Papadopoulos, Alexandros Maniatopoulos, Nikolaos Kostis, Petros Vasileiou, Sofia-Maria Dima, Per Mårtensson, Emmanouil Galetakis
  • Patent number: 9672583
    Abstract: In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for implementing GPU (Graphics Processing Unit) accelerated address translation for graphics virtualization. In one embodiment, such a system includes a main memory having a plurality of machine physical addresses; a graphics processor unit having graphics memory therein; an address translation service integrated with the graphics processor unit; a hypervisor to manage one or more guest machines; wherein the hypervisor is to configure a lookup table within the graphics memory of the graphics processor unit; and further wherein the address translation service of the graphics processor unit is to translate a guest physical address for one of the one or more guest machines to a corresponding machine physical address within the main memory. Such a graphics processor unit may be implemented separate from a system, for example, embodied within a silicon integrated circuit.
    Type: Grant
    Filed: December 21, 2011
    Date of Patent: June 6, 2017
    Assignee: Intel Corporation
    Inventors: Yunbiao Ben Lin, Jianghong Julie Du
  • Patent number: 9390462
    Abstract: An electronic device is described herein. The electronic device may include a page walker module to receive a page request of a graphics processing unit (GPU). The page walker module may detect a page fault associated with the page request. The electronic device may include a controller, at least partially comprising hardware logic. The controller is to monitor execution of the page request having the page fault. The controller determines whether to suspend execution of a work item at the GPU associated with the page request having the page fault, or to continue execution of the work item based on factors associated with the page request.
    Type: Grant
    Filed: March 27, 2013
    Date of Patent: July 12, 2016
    Assignee: Intel Corporation
    Inventors: Altug Koker, Balaji Vembu, Murali Ramadoss, Aditya Navale
  • Patent number: 9390007
    Abstract: A display system comprises a mapping memory comprising a plurality of memory banks configured to store a plurality of image tiles corresponding to an image, and an image mapping component configured to assign each of the plurality of tiles to one of the plurality of memory banks according to a first mapping or a second mapping, wherein the image mapping component determines whether to use the first or second mapping based on a bank interleaving metric of the first and second mappings.
    Type: Grant
    Filed: August 6, 2014
    Date of Patent: July 12, 2016
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jae Young Hur, Sang woo Rhim, Beom Hak Lee
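    A minimal Python sketch of choosing between two tile-to-bank mappings with a bank interleaving metric, in the spirit of this abstract rather than Samsung's actual metric: here the metric counts adjacent tiles that land in the same bank, and the mapping with fewer clashes wins. The 4-bank, 4x4-tile setup and both candidate mappings are illustrative assumptions.

      BANKS, TILES_W, TILES_H = 4, 4, 4

      def mapping_a(x, y):                   # plain row-major modulo assignment
          return (y * TILES_W + x) % BANKS

      def mapping_b(x, y):                   # row-rotated assignment
          return (x + y) % BANKS

      def same_bank_neighbours(mapping):
          clashes = 0
          for y in range(TILES_H):           # horizontal neighbours
              for x in range(TILES_W - 1):
                  clashes += mapping(x, y) == mapping(x + 1, y)
          for y in range(TILES_H - 1):       # vertical neighbours
              for x in range(TILES_W):
                  clashes += mapping(x, y) == mapping(x, y + 1)
          return clashes

      scores = {"first": same_bank_neighbours(mapping_a),
                "second": same_bank_neighbours(mapping_b)}
      print(scores)                              # {'first': 12, 'second': 0}
      print("use the", min(scores, key=scores.get), "mapping")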
  • Patent number: 9310219
    Abstract: In a system that uses tile-based road network rendering for displaying map information to a user, a tile is rendered for display by rendering a front tile (20) with an appropriate texture to depict, e.g., the “ground” and a see-through shape (22) representing a feature that is below the ground level, and rendering a rear tile (21) that has drawn on it an image region (23) representing the intended below ground level feature, surrounded by a color or texture (24) that represents a border for that feature, to appear behind and slightly offset relative to the front tile (20), such that the image region (23) and its corresponding border (24) on the rear tile (21) can be seen through the see-through shape (22) in the front tile (20). In this way, a more visually appealing depiction of the below ground level feature can be achieved.
    Type: Grant
    Filed: October 4, 2010
    Date of Patent: April 12, 2016
    Assignees: TomTom International B.V., TomTom Software Ltd
    Inventors: Gary Pallett, Breght Roderick Boschker
  • Patent number: 9299121
    Abstract: Methods, systems, and computer readable media embodiments are disclosed for preemptive context-switching of processes running on an accelerated processing device. Embodiments include detecting, by an accelerated processing device, a memory exception, and preempting a process from running on the accelerated processing device based upon the detected exception.
    Type: Grant
    Filed: November 4, 2011
    Date of Patent: March 29, 2016
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Robert Scott Hartog, Ralph Clay Taylor, Michael Mantor, Kevin McGrath, Sebastien Nussbaum, Nuwan Jayasena, Rex McCrary, Mark Leather, Philip J. Rogers, Thomas R. Woller
  • Patent number: 9256465
    Abstract: Methods, systems, and computer readable media embodiments are disclosed for preemptive context-switching of processes running on an accelerated processing device. A method includes, responsive to an exception upon access to a memory by a process running on an accelerated processing device, determining whether to preempt the process based on the exception, and preempting, based upon the determining, the process from running on the accelerated processing device.
    Type: Grant
    Filed: November 4, 2011
    Date of Patent: February 9, 2016
    Assignee: Advanced Micro Devices, Inc.
    Inventors: Robert Scott Hartog, Ralph Clay Taylor, Michael Mantor, Kevin McGrath, Sebastien Nussbaum, Nuwan Jayasena, Rex McCrary, Mark Leather, Philip J. Rogers, Thomas R. Woller
  • Patent number: 9245371
    Abstract: One embodiment of the present invention sets forth a method for storing processed data within buffer objects stored in buffer object memory from within shader engines executing on a GPU. The method comprises the steps of receiving a stream of one or more shading program commands via a graphics driver, executing, within a shader engine, at least one of the one or more shading program commands to generate processed data, determining from the stream of one or more shading program commands an address associated with a first data object stored within the buffer memory, and storing, from within the shader engine, the processed data in the first data object stored within the buffer memory.
    Type: Grant
    Filed: August 3, 2010
    Date of Patent: January 26, 2016
    Assignee: NVIDIA Corporation
    Inventors: Jeffrey A. Bolz, Patrick R. Brown
  • Patent number: 9176794
    Abstract: A method, system, and computer program product are disclosed for providing improved access to accelerated processing device compute resources to user mode applications. The functionality disclosed allows user mode applications to provide commands to an accelerated processing device without the need for kernel mode transitions in order to access a unified ring buffer. Instead, applications are each provided with their own buffers, which the accelerated processing device hardware can access to process commands. With full operating system support, user mode applications are able to utilize the accelerated processing device in much the same way as a CPU.
    Type: Grant
    Filed: November 4, 2011
    Date of Patent: November 3, 2015
    Assignees: Advanced Micro Devices, Inc., ATI Technologies ULC
    Inventors: Jeffrey Gongxian Cheng, Paul Blinzer, Mark Hummel, Leendert Peter Van Doorn
  • Patent number: 9142053
    Abstract: Systems and methods for compositing an image from display planes are disclosed. An internal matrix having transparency data indicating transparency of a macro block of a digital representation of a display is accessed. An external matrix is accessed if the internal matrix indicates the macro block includes a transparent and opaque pixel, wherein the external matrix has transparency data indicating transparency of each pixel in the macro block. A first display plane is read based on the transparency data indicating opaque pixels and the first display plane data is sent to a first buffer. Second display plane data of a second display plane is read and sent to a second buffer if the transparency data indicates transparent pixels. Control data is inserted into the first buffer accordingly such that an image is generated based on at least one of the first and second display plane data and the control data.
    Type: Grant
    Filed: November 15, 2013
    Date of Patent: September 22, 2015
    Assignee: nComputing, Inc.
    Inventors: Subir Ghosh, Anita Chowdhry
  • Patent number: 9123183
    Abstract: A multi-layer digital elevation model (DEM) structure is disclosed. A device may access a first structure that comprises a plurality of first elevation values and a plurality of location identifiers that may correspond to a geographic region. The first elevation values may be associated with a first layer in the geographic region and correspond to respective location identifiers. The device may access a second structure that identifies a second layer in the geographic region. Second elevation values that may correspond to at least some of the plurality of location identifiers may be determined. A multi-layer DEM structure may be generated that stores the first elevation values and the second elevation values in association with corresponding location identifiers.
    Type: Grant
    Filed: September 12, 2012
    Date of Patent: September 1, 2015
    Assignee: Lockheed Martin Corporation
    Inventors: Howell Hollis, Zach Barth
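    A minimal Python sketch of a two-layer DEM structure in the spirit of this abstract, not Lockheed Martin's format: first-layer elevations exist for every location identifier, and second-layer elevations are stored only for the locations where that layer exists. The location ids, layer names, and values are illustrative assumptions.

      first_layer = {loc: 100.0 + loc for loc in range(6)}     # e.g. ground surface
      second_layer_locs = {2, 3, 4}                            # e.g. an overpass deck

      multi_layer_dem = {
          loc: {"ground": first_layer[loc],
                **({"overpass": first_layer[loc] + 12.0}
                   if loc in second_layer_locs else {})}
          for loc in first_layer
      }

      print(multi_layer_dem[1])   # {'ground': 101.0}
      print(multi_layer_dem[3])   # {'ground': 103.0, 'overpass': 115.0}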
  • Patent number: 9123160
    Abstract: A mechanism for concurrently generating a plurality of meshes is disclosed. A region of a simulated environment for a simulation is determined. An area that bounds the region is determined. The area is decomposed into a plurality of polygons. Data identifying a first elevation layer at locations in the region and a second elevation layer at the locations in the region is accessed. At least some of the polygons are processed based on a first elevation layer metric associated with the first elevation layer and a second elevation layer metric associated with the second elevation layer to concurrently generate a first mesh and a second mesh that include the at least some of the polygons.
    Type: Grant
    Filed: September 12, 2012
    Date of Patent: September 1, 2015
    Assignee: Lockheed Martin Corporation
    Inventors: Howell Hollis, Sean McVey, Zach Barth
  • Patent number: 8994741
    Abstract: In an embodiment, a display pipe includes one or more translation units corresponding to images that the display pipe is reading for display. Each translation unit may be configured to prefetch translations ahead of the image data fetches, which may prevent translation misses in the display pipe (at least in most cases). The translation units may maintain translations in first-in, first-out (FIFO) fashion, and the display pipe fetch hardware may inform the translation unit when a given translation or translations are no longer needed. The translation unit may invalidate the identified translations and prefetch additional translations for virtual pages that are contiguous with the most recently prefetched virtual page.
    Type: Grant
    Filed: February 26, 2013
    Date of Patent: March 31, 2015
    Assignee: Apple Inc.
    Inventors: Joseph P. Bratt, Peter F. Holland
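    A minimal Python sketch of the FIFO prefetch behaviour this abstract describes, not Apple's display pipe: translations are kept in arrival order, the fetch hardware reports when the oldest one is no longer needed, and the freed slot is used to prefetch the translation for the next contiguous virtual page. The FIFO depth, page size, and page table are illustrative assumptions.

      from collections import deque

      PAGE, DEPTH = 0x1000, 4

      class TranslationFifo:
          def __init__(self, page_table, first_vpage):
              self.page_table = page_table
              self.fifo = deque()
              self.next_vpage = first_vpage
              while len(self.fifo) < DEPTH:    # prefetch ahead of the image fetches
                  self._prefetch()

          def _prefetch(self):
              self.fifo.append((self.next_vpage, self.page_table[self.next_vpage]))
              self.next_vpage += PAGE

          def translation_done(self):
              """Fetch hardware signals the oldest translation is no longer needed."""
              self.fifo.popleft()              # invalidate it...
              self._prefetch()                 # ...and prefetch the next contiguous page

      table = {0x1000 * i: 0x40000 + 0x1000 * i for i in range(16)}
      fifo = TranslationFifo(table, first_vpage=0x0000)
      print([hex(v) for v, _ in fifo.fifo])    # pages 0x0 through 0x3000 ready
      fifo.translation_done()
      print([hex(v) for v, _ in fifo.fifo])    # 0x1000 through 0x4000, refilled in order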
  • Patent number: 8970616
    Abstract: A method of displaying images, which includes displaying a first image in a first display portion, receiving a first image-quality condition for setting a first image quality of the first image displayed in the first display portion, generating, via a first display controller of the first display portion, a first image-quality setting image by applying the first image-quality condition to the first image displayed in the first display portion, transmitting the generated first image-quality setting image from the first display controller to a second display controller of a second display portion, and displaying the transmitted first image-quality setting image on the second display portion.
    Type: Grant
    Filed: March 3, 2010
    Date of Patent: March 3, 2015
    Assignee: LG Electronics Inc.
    Inventors: Jong Ha Lee, Duk Jun Jo
  • Publication number: 20150002526
    Abstract: In one embodiment, the present invention includes a device that has a device processor and a device memory. The device can couple to a host with a host processor and host memory. Both of the memories can have page tables to map virtual addresses to physical addresses of the corresponding memory, and the two memories may appear to a user-level application as a single virtual memory space. Other embodiments are described and claimed.
    Type: Application
    Filed: September 17, 2014
    Publication date: January 1, 2015
    Inventor: Boris Ginzburg
  • Publication number: 20140354667
    Abstract: In accordance with embodiments disclosed herein, there are provided methods, systems, mechanisms, techniques, and apparatuses for implementing GPU (Graphics Processing Unit) accelerated address translation for graphics virtualization. In one embodiment, such a system includes a main memory having a plurality of machine physical addresses; a graphics processor unit having graphics memory therein; an address translation service integrated with the graphics processor unit; a hypervisor to manage one or more guest machines; wherein the hypervisor is to configure a lookup table within the graphics memory of the graphics processor unit; and further wherein the address translation service of the graphics processor unit is to translate a guest physical address for one of the one or more guest machines to a corresponding machine physical address within the main memory. Such a graphics processor unit may be implemented separate from a system, for example, embodied within a silicon integrated circuit.
    Type: Application
    Filed: December 21, 2011
    Publication date: December 4, 2014
    Inventors: Yunbiao Lin, Jianghong Du
  • Patent number: 8810591
    Abstract: Virtualization of graphics resources and thread blocking is disclosed. In one exemplary embodiment, a system and method of a kernel in an operating system including generating a data structure having an identifier of a graphics resource assigned to a physical memory location in video memory, and blocking access to the physical memory location if data within the physical memory location is in transition between video memory and system memory, wherein a client application accesses memory in the system memory directly and accesses memory in the video memory through a virtual memory map.
    Type: Grant
    Filed: February 8, 2013
    Date of Patent: August 19, 2014
    Assignee: Apple Inc.
    Inventors: John Stauffer, Robert Beretta
  • Patent number: 8780129
    Abstract: A method and apparatus for hardware rotation is described. In one embodiment, the invention is an apparatus. The apparatus includes a direct access address translation component. The apparatus also includes a frame buffer coupled to the direct access address translation component. The apparatus further includes a 2D coordinate translation component. The apparatus also includes a 2D engine coupled to the 2D coordinate translation component and to the frame buffer. The apparatus further includes a 3D engine. The apparatus also includes a 3D coordinate translation component coupled to the 3D engine and the frame buffer. As will be appreciated, further embodiments of the invention are within the spirit and scope of the claimed invention, and the specific details of a specific embodiment as described need not be present in all embodiments of the invention.
    Type: Grant
    Filed: May 5, 2010
    Date of Patent: July 15, 2014
    Assignee: Silicon Motion, Inc.
    Inventor: Frido Garritsen
  • Patent number: 8711156
    Abstract: A method and system for remapping units that are disabled to active units in a 3-D graphics pipeline. Specifically, in one embodiment, a method remaps processing elements in a pipeline of a graphics pipeline unit. Graphical input data are received. Then the number of enabled processing elements is determined from a plurality of processing elements. Each of the enabled processing elements is virtually addressed above a translator to virtually process the graphical input data. Then, the virtual addresses of each of the enabled processing elements are mapped to physical addresses of the enabled processing elements at the translator. The graphical input data are physically processed at the physical addresses of the enabled processing elements. In addition, each of the enabled processing elements is physically addressed below the translator to further process the graphical input data.
    Type: Grant
    Filed: September 30, 2004
    Date of Patent: April 29, 2014
    Assignee: Nvidia Corporation
    Inventors: Dominic Acocella, Timothy J. McDonald, Robert W. Gimby, Thomas H. Kong
  • Patent number: 8711161
    Abstract: A memory cell reconfiguration process is performed in accordance with the operational characteristic settings determined based upon the results of analysis and/or testing of memory cell operations. The memory circuit can include a plurality of memory cells and memory cell configuration controller. The memory cells store information associated with a variety of operations. The memory cell configuration controller coordinates selective enablement and disablement of each of the plurality of memory cells, which can be done on a subset or group basis (e.g., enables and/or disables memory cells on a word length or row by row basis). The address mapping can be adjusted so that the memory space appears continuous to external components. The memory cell configuration controller can also forward configuration information to upstream and/or downstream components that can adjust operations to compensate for the memory cell configuration (e.g., to prevent overflow).
    Type: Grant
    Filed: June 21, 2006
    Date of Patent: April 29, 2014
    Assignee: Nvidia Corporation
    Inventors: Stefan Scotzniovsky, Bruce Cory, Charles Chew-Yuen Young, Anthony M. Tamasi, James M. Van Dyke, John S. Montrym, Sean J. Treicher
  • Patent number: 8593472
    Abstract: One embodiment of the invention sets forth a mechanism for retrieving and storing data from/to a frame buffer via a storage driver included in a GPU driver. The storage driver includes three separate routines, the registration engine, the page-fault routine and the write-back routine, that facilitate the transfer of data between the frame buffer and the system memory. The registration engine registers a file system, corresponding to the frame buffer, the page-fault routine and the write-back routine with the VMM. The page-fault routine causes a portion of data stored in a specific memory location in the frame buffer to be transmitted to a corresponding memory location in the application memory. The write-back routine causes data stored in a particular memory location in the application memory to be transmitted to a corresponding memory location in the frame buffer.
    Type: Grant
    Filed: July 31, 2009
    Date of Patent: November 26, 2013
    Assignee: Nvidia Corporation
    Inventor: Franck Diard
  • Patent number: 8537169
    Abstract: One embodiment of the present invention sets forth a method for accessing, from within a graphics processing unit (GPU), data objects stored in a memory accessible by the GPU. The method comprises the steps of creating a data object in the memory based on a command received from an application program, transmitting an address associated with the data object to the application program for providing data associated with different draw commands to the GPU, receiving a first draw command and the address associated with the data object from the application program, and transmitting the first draw command and the address associated with the data object to the GPU for processing.
    Type: Grant
    Filed: March 1, 2010
    Date of Patent: September 17, 2013
    Assignee: Nvidia Corporation
    Inventors: Jeffrey A. Bolz, Eric S. Werness, Jason Sams
  • Patent number: 8531471
    Abstract: Embodiments of the invention provide a programming model for CPU-GPU platforms. In particular, embodiments of the invention provide a uniform programming model for both integrated and discrete devices. The model also works uniformly for multiple GPU cards and hybrid GPU systems (discrete and integrated). This allows software vendors to write a single application stack and target it to all the different platforms. Additionally, embodiments of the invention provide a shared memory model between the CPU and GPU. Instead of sharing the entire virtual address space, only a part of the virtual address space needs to be shared. This allows efficient implementation in both discrete and integrated settings.
    Type: Grant
    Filed: December 30, 2008
    Date of Patent: September 10, 2013
    Assignee: Intel Corporation
    Inventors: Hu Chen, Ying Gao, Zhou Xiaocheng, Shoumeng Yan, Peinan Zhang, Mohan Rajagopalan, Jesse Fang, Avi Mendelson, Bratin Saha
  • Patent number: 8504791
    Abstract: Intercepting a requested memory operation corresponding to a conventional memory is disclosed. The requested memory operation is translated to be applied to a structured memory.
    Type: Grant
    Filed: September 27, 2012
    Date of Patent: August 6, 2013
    Assignee: Hicamp Systems, Inc.
    Inventors: David R. Cheriton, Alexandre Y. Solomatnikov
  • Patent number: 8477145
    Abstract: A method and apparatus for creating, updating, and using guest physical address (GPA) to host physical address (HPA) shadow translation tables for translating GPAs of graphics data direct memory access (DMA) requests of a computing environment implementing a virtual machine monitor to support virtual machines. The requests may be sent through a render or display path of the computing environment from one or more virtual machines, transparently with respect to the virtual machine monitor. The creating, updating, and using may be performed by a memory controller detecting entries sent to existing global and page directory tables, forking off shadow table entries from the detected entries, and translating GPAs to HPAs for the shadow table entries.
    Type: Grant
    Filed: February 9, 2012
    Date of Patent: July 2, 2013
    Assignee: Intel Corporation
    Inventors: Balaji Vembu, Aditya Navale, Wishwesh Gandhi
  • Patent number: 8405668
    Abstract: In an embodiment, a display pipe includes one or more translation units corresponding to images that the display pipe is reading for display. Each translation unit may be configured to prefetch translations ahead of the image data fetches, which may prevent translation misses in the display pipe (at least in most cases). The translation units may maintain translations in first-in, first-out (FIFO) fashion, and the display pipe fetch hardware may inform the translation unit when a given translation or translations are no longer needed. The translation unit may invalidate the identified translations and prefetch additional translations for virtual pages that are contiguous with the most recently prefetched virtual page.
    Type: Grant
    Filed: November 19, 2010
    Date of Patent: March 26, 2013
    Assignee: Apple Inc.
    Inventors: Joseph P. Bratt, Peter F. Holland
  • Patent number: 8395635
    Abstract: A method for storing interpolation data is provided. The method uses a buffer in a cache memory and the concept of memory overlap record for storing previously calculated interpolation data, so as to avoid repeated interpolation, thereby decreasing the amount of system operation and the frequency of reading integer points for calculating interpolation from an external memory. Furthermore, a method of data storage for the buffer is provided. The storage method uses the concept of memory address rotation to store interpolation data beyond the boundary of the buffer. Moreover, another storage method is provided, which distributes interpolation data into a plurality of regions in the buffer according to different combinations of decimal coordinates of the interpolation points for economizing the use of memory space and simplifying the search of interpolation data in the buffer.
    Type: Grant
    Filed: November 22, 2006
    Date of Patent: March 12, 2013
    Assignee: Industrial Technology Research Institute
    Inventor: Jung-Yang Kao
  • Patent number: 8390633
    Abstract: A memory device comprises a memory array and a processing device. The memory array is configured to store a graphic data set. The processing device is configured to initiate outputting of data of the graphic data set from the memory array and to combine the outputted data in response to a read request for providing a graphic content.
    Type: Grant
    Filed: June 29, 2007
    Date of Patent: March 5, 2013
    Assignee: Qimonda AG
    Inventors: Christoph Bilger, Rex Kho, Achim Schramm, Martin Maier, Yann Zinzius, Armin Kohlhase
  • Patent number: 8373714
    Abstract: Virtualization of graphics resources and thread blocking is disclosed. In one exemplary embodiment, a system and method of a kernel in an operating system including generating a data structure having an identifier of a graphics resource assigned to a physical memory location in video memory, and blocking access to the physical memory location if data within the physical memory location is in transition between video memory and system memory, wherein a client application accesses memory in the system memory directly and accesses memory in the video memory through a virtual memory map.
    Type: Grant
    Filed: July 30, 2010
    Date of Patent: February 12, 2013
    Assignee: Apple Inc.
    Inventors: John Stauffer, Bob Beretta
  • Patent number: 8310495
    Abstract: In one aspect, an apparatus for driving display data includes an address mapping unit which generates second address units by dividing gradation data displayed on a plurality of pixels in a display panel into a plurality of first address units that are in the form of an a×b matrix, and mapping addresses of the gradation data in each of the first address units into the form of a b×a matrix, wherein the plurality of the first and second address units are arranged in the form of an M×N matrix, wherein a, b, M and N are natural numbers, and a is greater than b. The apparatus further includes a memory unit which stores the second address units having the mapped addresses in the form of a b×a matrix as units in the form of an M×N matrix, a data output unit which receives the data in a×N columns output from the memory unit and outputs the data as data in b×N columns, and a source driver block which receives the data in the b×N columns and transmits the data to the display panel.
    Type: Grant
    Filed: September 19, 2007
    Date of Patent: November 13, 2012
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jong-kon Bae, Kyu-young Chung
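    A minimal Python sketch of the address remapping this abstract describes, not Samsung's driver circuit: the addresses of gradation data inside an a×b first address unit are rewritten as a b×a second address unit, i.e. each value's (row, column) index is transposed before it is stored. The values a = 3, b = 2 and the sample addresses are illustrative assumptions.

      a, b = 3, 2

      def remap_unit(first_unit):
          """first_unit is an a x b matrix of addresses; return the b x a mapping."""
          return [[first_unit[row][col] for row in range(a)] for col in range(b)]

      first_unit = [[0, 1],
                    [2, 3],
                    [4, 5]]        # a x b (3 x 2) addresses of gradation data
      second_unit = remap_unit(first_unit)
      print(second_unit)           # [[0, 2, 4], [1, 3, 5]]  -> b x a (2 x 3)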
  • Patent number: 8253734
    Abstract: The present invention is a system that grids original data, maps the data at the grid locations to height values at corresponding landscape image pixel locations and renders the landscape pixels into a three-dimensional (3D) landscape image. The landscape pixels can have arbitrary shapes and can be augmented with additional 3D information from the original data, such as an offset providing additional information, or generated from processing of the original data, such as to alert when a threshold is exceeded, or added for other purposes such as to point out a feature. The pixels can also convey additional information from the original data using other pixel characteristics such as texture, color, transparency, etc.
    Type: Grant
    Filed: July 23, 2010
    Date of Patent: August 28, 2012
    Assignee: Graphics Properties Holdings, Inc.
    Inventor: David William Hughes