Patents by Inventor Meghal Varia

Meghal Varia has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180302625
    Abstract: An exemplary method for intelligent compression defines a threshold value for a key performance indicator. Based on the key performance indicator value, data blocks generated by a producer component may be scaled down, to reduce power and/or bandwidth consumption, before being compressed by a lossless compression module. The compressed data blocks are then stored in a memory component along with metadata that signals the scaling factor used prior to compression. Consumer components later retrieving the compressed data blocks from the memory component may decompress the data blocks and upscale them, if required, based on the scaling factor signaled by the metadata.
    Type: Application
    Filed: April 18, 2017
    Publication date: October 18, 2018
    Inventors: SERAG GADELRAB, CHINCHUAN CHIU, MOINUL KHAN, KYLE ERNEWEIN, TOM LONGO, SIMON BOOTH, MEGHAL VARIA, MILIVOJE ALEKSIC, KING-CHUNG LAI
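
A minimal Python sketch of the producer/consumer flow this abstract describes; the KPI threshold, the 2:1 decimation used as the scaling step, and zlib standing in for the lossless compression module are all illustrative assumptions, not the patented design.

```python
import zlib

KPI_THRESHOLD = 0.8  # assumed threshold for the key performance indicator


def produce(block: bytes, kpi: float):
    """Scale the block down when the KPI exceeds the threshold, then compress it
    losslessly and record the scaling factor as metadata alongside the payload."""
    scale = 2 if kpi > KPI_THRESHOLD else 1       # illustrative 2:1 downscale
    scaled = block[::scale]                       # crude decimation stands in for real scaling
    return zlib.compress(scaled), {"scale": scale}


def consume(payload: bytes, metadata: dict) -> bytes:
    """Decompress the block and upscale it if the metadata says it was scaled."""
    data = zlib.decompress(payload)
    if metadata["scale"] > 1:
        # Naive upscale: repeat each byte to restore the original block length.
        data = bytes(b for byte in data for b in [byte] * metadata["scale"])
    return data


if __name__ == "__main__":
    block = bytes(range(16)) * 4
    payload, meta = produce(block, kpi=0.9)       # KPI above threshold -> scaled path
    print(meta, len(block), len(consume(payload, meta)))
```
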
  • Publication number: 20180302624
    Abstract: An exemplary method for intelligent compression defines a threshold value for a temperature reading generated by a temperature sensor. Data blocks received into the compression module are compressed according to either a first mode or a second mode, selected by comparing the active level of the temperature reading to the defined threshold value. The first compression mode may be associated with a lossless compression algorithm while the second compression mode is associated with a lossy compression algorithm. Alternatively, both compression modes may be associated with a lossless compression algorithm; in that case, the received data blocks are produced at a default high quality level setting for the first compression mode and at a reduced quality level setting for the second compression mode.
    Type: Application
    Filed: April 18, 2017
    Publication date: October 18, 2018
    Inventors: SERAG GADELRAB, CHINCHUAN CHIU, MOINUL KHAN, KYLE ERNEWEIN, TOM LONGO, SIMON BOOTH, MEGHAL VARIA, MILIVOJE ALEKSIC
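
A minimal Python sketch of the mode selection described in the abstract above; the threshold value is assumed, and zlib compression levels plus simple decimation stand in for the high-quality and reduced-quality settings.

```python
import zlib

TEMP_THRESHOLD_C = 70.0  # assumed threshold for the temperature sensor reading


def compress_block(block: bytes, temperature_c: float) -> bytes:
    """Pick the compression mode by comparing the active temperature reading to the
    threshold: below it, the first mode compresses the block produced at its default
    high quality; at or above it, the second mode works on a reduced-quality block."""
    if temperature_c < TEMP_THRESHOLD_C:
        return zlib.compress(block, level=9)      # first mode (stand-in)
    reduced = block[::2]                          # stand-in for a reduced quality level setting
    return zlib.compress(reduced, level=1)        # second mode (stand-in)


if __name__ == "__main__":
    block = bytes(range(256)) * 8
    print(len(compress_block(block, temperature_c=55.0)),
          len(compress_block(block, temperature_c=82.0)))
```
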
  • Publication number: 20180253236
    Abstract: A method and system for dynamic control of shared memory resources within a portable computing device (“PCD”) are disclosed. A limit request of an unacceptable deadline miss (“UDM”) engine of the portable computing device may be determined with a limit request sensor within the UDM element. Next, a memory management unit modifies a shared memory resource arbitration policy in view of the limit request. By modifying the shared memory resource arbitration policy, the memory management unit may smartly allocate resources to service translation requests separately queued based on having emanated from either a flooding engine or a non-flooding engine.
    Type: Application
    Filed: March 2, 2017
    Publication date: September 6, 2018
    Inventors: SERAG GADELRAB, Jason Edward Podaima, Kyle Ernewein, Meghal Varia
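
A toy Python sketch of the arbitration idea in this entry: translation requests are queued separately by whether they came from a flooding engine, and a limit request from a UDM element shrinks the flooding queue's share of service slots. The default share, the severity scaling, and all names are assumptions for illustration.

```python
from collections import deque


class ArbitrationSketch:
    """Shared-resource arbiter with separate queues for flooding and non-flooding engines."""

    def __init__(self):
        self.flooding = deque()
        self.non_flooding = deque()
        self.flooding_share = 0.5            # assumed default arbitration policy

    def submit(self, request, from_flooding_engine: bool):
        (self.flooding if from_flooding_engine else self.non_flooding).append(request)

    def apply_limit_request(self, severity: float):
        # A UDM limit request tightens the share of slots granted to flooding engines.
        self.flooding_share = max(0.1, 0.5 * (1.0 - severity))

    def service(self, slots: int):
        """Hand out the available service slots according to the current policy."""
        served = []
        flood_slots = int(slots * self.flooding_share)
        for queue, count in ((self.flooding, flood_slots),
                             (self.non_flooding, slots - flood_slots)):
            for _ in range(count):
                if queue:
                    served.append(queue.popleft())
        return served


if __name__ == "__main__":
    arb = ArbitrationSketch()
    for i in range(8):
        arb.submit(f"req{i}", from_flooding_engine=(i % 2 == 0))
    arb.apply_limit_request(severity=0.8)    # a UDM engine is close to missing its deadline
    print(arb.service(slots=4))              # mostly non-flooding requests get served
```
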
  • Patent number: 10067691
    Abstract: A method and system for dynamic control of shared memory resources within a portable computing device (“PCD”) are disclosed. A limit request of an unacceptable deadline miss (“UDM”) engine of the portable computing device may be determined with a limit request sensor within the UDM element. Next, a memory management unit modifies a shared memory resource arbitration policy in view of the limit request. By modifying the shared memory resource arbitration policy, the memory management unit may smartly allocate resources to service translation requests separately queued based on having emanated from either a flooding engine or a non-flooding engine.
    Type: Grant
    Filed: March 2, 2017
    Date of Patent: September 4, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Serag Gadelrab, Jason Edward Podaima, Kyle Ernewein, Meghal Varia
  • Patent number: 10037280
    Abstract: Systems and methods for pre-fetching address translations in a memory management unit (MMU) are disclosed. The MMU detects a triggering condition related to one or more translation caches associated with the MMU, the triggering condition associated with a trigger address, generates a sequence descriptor describing a sequence of address translations to pre-fetch into the one or more translation caches, the sequence of address translations comprising a plurality of address translations corresponding to a plurality of address ranges adjacent to an address range containing the trigger address, and issues an address translation request to the one or more translation caches for each of the plurality of address translations, wherein the one or more translation caches pre-fetch at least one address translation of the plurality of address translations into the one or more translation caches when the at least one address translation is not present in the one or more translation caches.
    Type: Grant
    Filed: May 29, 2015
    Date of Patent: July 31, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Jason Edward Podaima, Paul Christopher John Wiercienski, Kyle John Ernewein, Carlos Javier Moreira, Meghal Varia, Serag Gadelrab, Muhammad Umar Choudry
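
A small Python sketch of the pre-fetch sequence described in the abstract above; the 4 KB granule, the number of adjacent ranges, and the fake table-walk result are assumptions.

```python
PAGE_SIZE = 4096          # assumed translation granule
PREFETCH_COUNT = 4        # assumed number of adjacent ranges per sequence descriptor

translation_cache = {}    # page base address -> translated address (toy)


def make_sequence_descriptor(trigger_address: int):
    """List the address ranges adjacent to the range containing the trigger address."""
    base = (trigger_address // PAGE_SIZE) * PAGE_SIZE
    return [base + i * PAGE_SIZE for i in range(1, PREFETCH_COUNT + 1)]


def prefetch_on_trigger(trigger_address: int):
    """Issue a translation request for each range in the sequence descriptor; only the
    translations not already present are filled into the cache."""
    for page in make_sequence_descriptor(trigger_address):
        if page not in translation_cache:
            translation_cache[page] = page | 0x80000000   # stand-in for a table walk


if __name__ == "__main__":
    prefetch_on_trigger(0x1234)
    print(sorted(hex(p) for p in translation_cache))
```
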
  • Patent number: 10019380
    Abstract: Providing memory management functionality using aggregated memory management units (MMUs), and related apparatuses and methods, are disclosed. In one aspect, an aggregated MMU is provided, comprising a plurality of input data paths, each including a plurality of input transaction buffers, and a plurality of output paths, each including a plurality of output transaction buffers. Some aspects of the aggregated MMU additionally provide one or more translation caches and/or one or more hardware page table walkers. The aggregated MMU further includes an MMU management circuit configured to retrieve a memory address translation request (MATR) from an input transaction buffer, perform a memory address translation operation based on the MATR to generate a translated memory address field (TMAF), and provide the TMAF to an output transaction buffer. The aggregated MMU also provides a plurality of output data paths, each configured to output transactions with resulting memory address translations.
    Type: Grant
    Filed: September 25, 2015
    Date of Patent: July 10, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Serag Monier GadelRab, Jason Edward Podaima, Ruolong Liu, Alexander Miretsky, Paul Christopher John Wiercienski, Kyle John Ernewein, Carlos Javier Moreira, Simon Peter William Booth, Meghal Varia, Thomas David Dryburgh
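
A toy Python model of the MATR-to-TMAF flow sketched in the abstract above: requests wait in per-path input transaction buffers, a shared management loop translates them through a small cache backed by a stand-in table walk, and results land in the matching output transaction buffers. The page size and the fake translation are assumptions.

```python
from collections import deque

PAGE = 4096   # assumed translation granule


class AggregatedMMUSketch:
    """Several input/output data paths sharing one translation cache and walker."""

    def __init__(self, num_paths: int):
        self.inputs = [deque() for _ in range(num_paths)]    # input transaction buffers
        self.outputs = [deque() for _ in range(num_paths)]   # output transaction buffers
        self.tlb = {}                                        # shared translation cache

    def submit(self, path: int, virtual_address: int):
        self.inputs[path].append(virtual_address)            # a MATR enters its input buffer

    def _translate(self, va: int) -> int:
        page = va // PAGE
        if page not in self.tlb:
            self.tlb[page] = page ^ 0x1000                   # stand-in for a hardware table walk
        return self.tlb[page] * PAGE + va % PAGE

    def step(self):
        """Retrieve one MATR per path, translate it, and emit the TMAF on the same path."""
        for path, in_buf in enumerate(self.inputs):
            if in_buf:
                self.outputs[path].append(self._translate(in_buf.popleft()))


if __name__ == "__main__":
    mmu = AggregatedMMUSketch(num_paths=2)
    mmu.submit(0, 0x4010)
    mmu.submit(1, 0x8020)
    mmu.step()
    print([list(q) for q in mmu.outputs])
```
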
  • Patent number: 10007619
    Abstract: Systems and methods relate to performing address translations in a multithreaded memory management unit (MMU). Two or more address translation requests can be received by the multithreaded MMU and processed in parallel to retrieve address translations to addresses of a system memory. If the address translations are present in a translation cache of the multithreaded MMU, the address translations can be received from the translation cache and scheduled for access of the system memory using the translated addresses. If there is a miss in the translation cache, two or more address translation requests can be scheduled in two or more translation table walks in parallel.
    Type: Grant
    Filed: September 20, 2015
    Date of Patent: June 26, 2018
    Assignee: QUALCOMM Incorporated
    Inventors: Jason Edward Podaima, Paul Christopher John Wiercienski, Carlos Javier Moreira, Alexander Miretsky, Meghal Varia, Kyle John Ernewein, Manokanthan Somasundaram, Muhammad Umar Choudry, Serag Monier Gadelrab
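
A minimal Python sketch of the parallelism described in the abstract above, with worker threads standing in for the multithreaded MMU's translation contexts and a dictionary for the translation cache; the page size and fake table walk are assumptions.

```python
from concurrent.futures import ThreadPoolExecutor
from threading import Lock

PAGE = 4096       # assumed translation granule
tlb = {}          # translation cache shared by the parallel contexts
tlb_lock = Lock()


def table_walk(page: int) -> int:
    """Stand-in for a translation table walk; several may be in flight in parallel."""
    return page ^ 0x2000


def translate(virtual_address: int) -> int:
    """Serve a hit from the translation cache, or walk the tables on a miss."""
    page = virtual_address // PAGE
    with tlb_lock:
        physical_page = tlb.get(page)
    if physical_page is None:                 # miss: this request gets its own walk
        physical_page = table_walk(page)
        with tlb_lock:
            tlb[page] = physical_page
    return physical_page * PAGE + virtual_address % PAGE


if __name__ == "__main__":
    requests = [0x1000, 0x2000, 0x2010, 0x5000]
    with ThreadPoolExecutor(max_workers=4) as pool:   # two or more requests processed in parallel
        print(list(pool.map(translate, requests)))
```
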
  • Publication number: 20180165789
    Abstract: Techniques are described in which a device is configured to retrieve a metadata buffer for rendering a sub-frame of a set of sub-frames for a frame. A data block of a data buffer is configured to store image data for rendering the sub-frame. In response to determining, based on the metadata buffer for rendering the sub-frame, that the sub-frame includes a color pattern, fixed color value, or combination thereof, the device refrains from retrieving the image data from the data block of the data buffer and determines the image data for rendering the sub-frame based on the metadata buffer.
    Type: Application
    Filed: December 13, 2016
    Publication date: June 14, 2018
    Inventors: Andrew Evan Gruber, Serag GadelRab, Zhenbiao Ma, Meghal Varia, Tao Wang, Tom Longo, Mark Sternberg, Paul Chow
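
A short Python sketch of the data-fetch skip described in the abstract above; the metadata field names and the fixed-color case are illustrative assumptions (the abstract also covers simple color patterns).

```python
def subframe_pixels(metadata: dict, data_buffer: dict, subframe_id: int, size: int):
    """Return the pixels for one sub-frame. When the metadata buffer says the sub-frame
    is a fixed color, derive the pixels from the metadata alone and skip reading the
    data block of the data buffer entirely."""
    entry = metadata[subframe_id]
    if entry.get("fixed_color") is not None:
        return [entry["fixed_color"]] * size      # no data-buffer access needed
    return data_buffer[subframe_id]               # otherwise fetch the stored image data


if __name__ == "__main__":
    metadata = {0: {"fixed_color": 0x000000}, 1: {"fixed_color": None}}
    data_buffer = {1: [0x10, 0x20, 0x30, 0x40]}
    print(subframe_pixels(metadata, data_buffer, 0, size=4))   # served from metadata
    print(subframe_pixels(metadata, data_buffer, 1, size=4))   # served from the data buffer
```
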
  • Patent number: 9792215
    Abstract: Methods and systems for pre-fetching address translations in a memory management unit (MMU) of a device are disclosed. In an embodiment, the MMU receives a pre-fetch command from an upstream component of the device, the pre-fetch command including an address of an instruction, pre-fetches a translation of the instruction from a translation table in a memory of the device, and stores the translation of the instruction in a translation cache associated with the MMU.
    Type: Grant
    Filed: March 28, 2015
    Date of Patent: October 17, 2017
    Assignee: QUALCOMM Incorporated
    Inventors: Jason Edward Podaima, Bohuslav Rychlik, Paul Christopher John Wiercienski, Kyle John Ernewein, Carlos Javier Moreira, Meghal Varia, Serag Gadelrab
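
A minimal Python sketch of the pre-fetch command path in this entry; the page size and the contents of the translation table are placeholders.

```python
PAGE = 4096                                                     # assumed translation granule
translation_table = {page: page + 0x100 for page in range(64)}  # stand-in for tables in memory
translation_cache = {}                                          # cache associated with the MMU


def handle_prefetch_command(instruction_address: int):
    """On a pre-fetch command from an upstream component, fetch the translation for the
    page containing the instruction address and store it in the translation cache."""
    page = instruction_address // PAGE
    if page not in translation_cache:
        translation_cache[page] = translation_table[page]


if __name__ == "__main__":
    handle_prefetch_command(0x3040)
    print(translation_cache)      # the translation is resident before it is needed
```
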
  • Publication number: 20170228252
    Abstract: Various embodiments of methods and systems for managing compressed data transaction sizes in a system on a chip (“SoC”) in a portable computing device (“PCD”) are disclosed. Based on the lengths of compressed data tiles associated in a group, where the compressed data tiles are part of a compressed image file, multiple compressed data tiles may be aggregated into a single multi-tile transaction. A metadata file may be generated in association with the single multi-tile transaction to identify the transaction as a multi-tile transaction and to provide offset data that distinguishes the data associated with each compressed data tile. Using the metadata, embodiments of the solution may provide for random access and modification of the compressed data stored in association with a multi-tile transaction.
    Type: Application
    Filed: January 13, 2017
    Publication date: August 10, 2017
    Inventors: SERAG GADELRAB, MEGHAL VARIA, WISNU WURJANTARA, CLARA KA WAH SUNG, MARK STERNBERG, VLADAN ANDRIJANIC, ANTONIO RINALDI, VINOD CHAMARTY, POOJA SINHA, TAO WANG, ANDREW GRUBER
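
A small Python sketch of the multi-tile aggregation and the offset metadata that enables random access, as described in the abstract above; zlib stands in for the tile compressor, and the metadata layout is an assumption.

```python
import zlib


def build_multi_tile_transaction(tiles):
    """Aggregate several compressed tiles into one transaction, recording in the
    metadata that it is multi-tile plus the offset of each tile in the payload."""
    compressed = [zlib.compress(tile) for tile in tiles]
    offsets, position = [], 0
    for chunk in compressed:
        offsets.append(position)
        position += len(chunk)
    metadata = {"multi_tile": True, "offsets": offsets, "total_length": position}
    return b"".join(compressed), metadata


def read_tile(payload: bytes, metadata: dict, index: int) -> bytes:
    """Random access to one tile: the offsets locate just that tile's bytes."""
    start = metadata["offsets"][index]
    end = (metadata["offsets"][index + 1] if index + 1 < len(metadata["offsets"])
           else metadata["total_length"])
    return zlib.decompress(payload[start:end])


if __name__ == "__main__":
    tiles = [bytes([i]) * 64 for i in range(4)]
    payload, meta = build_multi_tile_transaction(tiles)
    print(read_tile(payload, meta, 2) == tiles[2], meta["offsets"])
```
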
  • Publication number: 20170091116
    Abstract: Providing memory management functionality using aggregated memory management units (MMUs), and related apparatuses and methods, are disclosed. In one aspect, an aggregated MMU is provided, comprising a plurality of input data paths, each including a plurality of input transaction buffers, and a plurality of output paths, each including a plurality of output transaction buffers. Some aspects of the aggregated MMU additionally provide one or more translation caches and/or one or more hardware page table walkers. The aggregated MMU further includes an MMU management circuit configured to retrieve a memory address translation request (MATR) from an input transaction buffer, perform a memory address translation operation based on the MATR to generate a translated memory address field (TMAF), and provide the TMAF to an output transaction buffer. The aggregated MMU also provides a plurality of output data paths, each configured to output transactions with resulting memory address translations.
    Type: Application
    Filed: September 25, 2015
    Publication date: March 30, 2017
    Inventors: Serag Monier GadelRab, Jason Edward Podaima, Ruolong Liu, Alexander Miretsky, Paul Christopher John Wiercienski, Kyle John Ernewein, Carlos Javier Moreira, Simon Peter William Booth, Meghal Varia, Thomas David Dryburgh
  • Publication number: 20170083262
    Abstract: Systems, methods, and computer programs are disclosed for controlling memory frequency. One method comprises a first memory client generating a compressed data buffer and compression statistics related to the compressed data buffer. The compressed data buffer and the compression statistics are stored in a memory device. Based on the stored compression statistics, a frequency or voltage setting of the memory device is adjusted for enabling a second memory client to read the compressed data buffer.
    Type: Application
    Filed: January 13, 2016
    Publication date: March 23, 2017
    Inventors: SERAG GADELRAB, SUDEEP RAVI KOTTILINGAL, MEGHAL VARIA, POOJA SINHA, UJWAL PATEL, RUOLONG LIU, JEFFREY CHU, SINA GHOLAMIAN, HYUKJUNE CHUNG, DAVID STRASSER, RAGHAVENDRA NAGARAJ, ERIC DEMERS
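
A toy Python sketch of the idea in the abstract above: the first client stores compression statistics with the buffer, and a frequency setting is later chosen from them before the second client reads. The ratio-to-frequency bins and the zlib stand-in are assumptions.

```python
import zlib


def write_compressed_buffer(memory: dict, block: bytes):
    """First memory client: store the compressed buffer and its compression statistics."""
    compressed = zlib.compress(block)
    memory["buffer"] = compressed
    memory["stats"] = {"ratio": len(block) / len(compressed)}


def pick_frequency_mhz(stats: dict) -> int:
    """Map the stored compression ratio to a memory clock (the bins are assumptions):
    a well-compressed buffer needs less read bandwidth, so a lower clock suffices."""
    if stats["ratio"] >= 4.0:
        return 400
    if stats["ratio"] >= 2.0:
        return 800
    return 1600


if __name__ == "__main__":
    memory = {}
    write_compressed_buffer(memory, bytes(1024))   # highly compressible block
    print(pick_frequency_mhz(memory["stats"]))     # clock chosen before the second client reads
```
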
  • Publication number: 20170024145
    Abstract: Systems, methods, and computer program products are disclosed for reducing latency in a system that includes one or more processing devices, a system memory, and a cache memory. A pre-fetch command that identifies requested data is received from a requestor device. The requested data is pre-fetched from the system memory into the cache memory in response to the pre-fetch command. The data pre-fetch may be preceded by a pre-fetch of an address translation. A data access request corresponding to the pre-fetch command is then received, and in response to the data access request the data is provided from the cache memory to the requestor device.
    Type: Application
    Filed: July 23, 2015
    Publication date: January 26, 2017
    Inventors: TAREK ZGHAL, Alain Dominique Artieri, Jason Edward Podaima, Meghal Varia, Serag GadelRab
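
A minimal Python sketch of the pre-fetch/access pairing in this entry; the backing-store contents and the cache structure are placeholders, and the address-translation pre-fetch that may precede the data pre-fetch is omitted.

```python
system_memory = {address: address * 3 for address in range(64)}   # stand-in backing store
cache = {}                                                        # cache memory


def handle_prefetch(address: int):
    """Pre-fetch command from the requestor device: pull the identified data into the
    cache ahead of the access."""
    if address not in cache:
        cache[address] = system_memory[address]


def handle_access(address: int):
    """The later data access request is served from the cache when the pre-fetch landed."""
    if address in cache:
        return cache[address]          # low-latency path
    return system_memory[address]      # fallback: fetch on demand


if __name__ == "__main__":
    handle_prefetch(7)
    print(handle_access(7))            # served from the cache
```
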
  • Publication number: 20160350234
    Abstract: Systems and methods relate to performing address translations in a multithreaded memory management unit (MMU). Two or more address translation requests can be received by the multithreaded MMU and processed in parallel to retrieve address translations to addresses of a system memory. If the address translations are present in a translation cache of the multithreaded MMU, the address translations can be received from the translation cache and scheduled for access of the system memory using the translated addresses. If there is a miss in the translation cache, two or more address translation requests can be scheduled in two or more translation table walks in parallel.
    Type: Application
    Filed: September 20, 2015
    Publication date: December 1, 2016
    Inventors: Jason Edward PODAIMA, Paul Christopher John WIERCIENSKI, Carlos Javier MOREIRA, Alexander MIRETSKY, Meghal VARIA, Kyle John ERNEWEIN, Manokanthan SOMASUNDARAM, Muhammad Umar CHOUDRY, Serag Monier GADELRAB
  • Publication number: 20160350225
    Abstract: Systems and methods for pre-fetching address translations in a memory management unit (MMU) are disclosed. The MMU detects a triggering condition related to one or more translation caches associated with the MMU, the triggering condition associated with a trigger address, generates a sequence descriptor describing a sequence of address translations to pre-fetch into the one or more translation caches, the sequence of address translations comprising a plurality of address translations corresponding to a plurality of address ranges adjacent to an address range containing the trigger address, and issues an address translation request to the one or more translation caches for each of the plurality of address translations, wherein the one or more translation caches pre-fetch at least one address translation of the plurality of address translations into the one or more translation caches when the at least one address translation is not present in the one or more translation caches.
    Type: Application
    Filed: May 29, 2015
    Publication date: December 1, 2016
    Inventors: Jason Edward PODAIMA, Paul Christopher John WIERCIENSKI, Kyle John ERNEWEIN, Carlos Javier MOREIRA, Meghal VARIA, Serag GADELRAB, Muhammad Umar CHOUDRY
  • Publication number: 20160283384
    Abstract: Methods and systems for pre-fetching address translations in a memory management unit (MMU) of a device are disclosed. In an embodiment, the MMU receives a pre-fetch command from an upstream component of the device, the pre-fetch command including an address of an instruction, pre-fetches a translation of the instruction from a translation table in a memory of the device, and stores the translation of the instruction in a translation cache associated with the MMU.
    Type: Application
    Filed: March 28, 2015
    Publication date: September 29, 2016
    Inventors: Jason Edward PODAIMA, Bohuslav RYCHLIK, Paul Christopher John WIERCIENSKI, Kyle John ERNEWEIN, Carlos Javier MOREIRA, Meghal VARIA, Serag GADELRAB
  • Patent number: 8898603
    Abstract: A method for processing signals in a system includes deriving a signal activity for a signal from a timing requirement assignment for the signal.
    Type: Grant
    Filed: May 1, 2006
    Date of Patent: November 25, 2014
    Assignee: Altera Corporation
    Inventors: David Neto, Vaughn Betz, Jennifer Farrugia, Meghal Varia
  • Patent number: 8250500
    Abstract: A method for managing simulation includes modifying a design for a system to allow for a path pulse filter to filter a pathpulse delay, on a signal transmitted to a component, that is greater than an IOpath delay.
    Type: Grant
    Filed: May 1, 2006
    Date of Patent: August 21, 2012
    Assignee: Altera Corporation
    Inventors: David Neto, Vaughn Betz, Jennifer Farrugia, Meghal Varia
  • Patent number: 7877710
    Abstract: A method for managing vectorless estimation includes identifying a semantic structure. A signal activity is assigned to an output of the semantic structure. Vectorless estimation is performed on non-semantic structures.
    Type: Grant
    Filed: May 1, 2006
    Date of Patent: January 25, 2011
    Assignee: Altera Corporation
    Inventors: David Neto, Vaughn Betz, Meghal Varia, Gregg William Baeckler