Patents by Inventor Murali Sundaresan
Murali Sundaresan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240354559
Abstract: A mechanism is described for facilitating smart distribution of resources for deep learning autonomous machines. A method of embodiments, as described herein, includes detecting one or more sets of data from one or more sources over one or more networks, and introducing a library to a neural network application to determine an optimal point at which to apply frequency scaling without degrading performance of the neural network application at a computing device.
Type: Application
Filed: April 25, 2024
Publication date: October 24, 2024
Applicant: Intel Corporation
Inventors: Rajkishore Barik, Brian T. Lewis, Murali Sundaresan, Jeffrey Jackson, Feng Chen, Xiaoming Chen, Mike Macpherson
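This entry (and the related entries for patent 12001944 and publications 20230017304 and 20180314936 below) describes inserting a library into a neural network application to find a frequency-scaling point that does not hurt performance. As a very rough sketch of that idea, not the patented mechanism, the loop below sweeps candidate device frequencies and keeps the lowest one whose measured batch time stays within a tolerance of the fastest setting; set_device_frequency_mhz() and run_inference_batch() are hypothetical stubs standing in for driver and runtime hooks.

```c
/* Illustrative sketch only: pick the lowest frequency whose measured batch
 * time stays within a tolerance of the time at the highest frequency.
 * set_device_frequency_mhz() and run_inference_batch() are hypothetical
 * hooks a real library would supply; they are empty stubs here. */
#define _POSIX_C_SOURCE 199309L
#include <stdio.h>
#include <time.h>

void set_device_frequency_mhz(int mhz) { (void)mhz; /* driver call would go here */ }
void run_inference_batch(void)         { /* one batch through the network */ }

static double timed_batch_ms(void) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    run_inference_batch();
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) * 1e3 + (t1.tv_nsec - t0.tv_nsec) / 1e6;
}

/* Return the lowest candidate frequency whose batch time is within
 * `tolerance` (e.g. 1.02 = 2% slowdown) of the fastest setting. */
int find_scaling_point(const int *freqs_mhz, int n, double tolerance) {
    set_device_frequency_mhz(freqs_mhz[n - 1]);        /* highest frequency first */
    double baseline = timed_batch_ms();
    for (int i = 0; i < n; i++) {                      /* then lowest upward */
        set_device_frequency_mhz(freqs_mhz[i]);
        if (timed_batch_ms() <= baseline * tolerance)
            return freqs_mhz[i];
    }
    return freqs_mhz[n - 1];
}

int main(void) {
    const int freqs[] = { 300, 600, 900, 1200 };       /* ascending MHz, made-up values */
    printf("chosen frequency: %d MHz\n", find_scaling_point(freqs, 4, 1.02));
    return 0;
}
```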
-
Patent number: 12001944
Abstract: A mechanism is described for facilitating smart distribution of resources for deep learning autonomous machines. A method of embodiments, as described herein, includes detecting one or more sets of data from one or more sources over one or more networks, and introducing a library to a neural network application to determine an optimal point at which to apply frequency scaling without degrading performance of the neural network application at a computing device.
Type: Grant
Filed: July 27, 2022
Date of Patent: June 4, 2024
Assignee: Intel Corporation
Inventors: Rajkishore Barik, Brian T. Lewis, Murali Sundaresan, Jeffrey Jackson, Feng Chen, Xiaoming Chen, Mike Macpherson
-
Publication number: 20230017304
Abstract: A mechanism is described for facilitating smart distribution of resources for deep learning autonomous machines. A method of embodiments, as described herein, includes detecting one or more sets of data from one or more sources over one or more networks, and introducing a library to a neural network application to determine an optimal point at which to apply frequency scaling without degrading performance of the neural network application at a computing device.
Type: Application
Filed: July 27, 2022
Publication date: January 19, 2023
Applicant: Intel Corporation
Inventors: Rajkishore Barik, Brian T. Lewis, Murali Sundaresan, Jeffrey Jackson, Feng Chen, Xiaoming Chen, Mike Macpherson
-
Patent number: 11531623
Abstract: A method and system for sharing memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a physical memory and mapping the surface to a plurality of virtual memory addresses within a CPU page table. The method also includes mapping the surface to a plurality of graphics virtual memory addresses within an I/O device page table.
Type: Grant
Filed: February 19, 2021
Date of Patent: December 20, 2022
Assignee: Intel Corporation
Inventors: Jayanth N. Rao, Murali Sundaresan
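The CPU/GPU memory-sharing entries (this patent and the related publications and patents below) describe a single surface mapped into both a CPU page table and an I/O device page table. The real mapping is done by the operating system and GPU driver; the toy model below only mirrors the bookkeeping, using malloc'd blocks as stand-in physical pages and plain arrays as the two page tables.

```c
/* Toy model of the dual mapping described above: one physical surface mapped
 * into both a CPU page table and an I/O device (GPU) page table. In a real
 * system these tables live in the OS and GPU MMU; here they are plain arrays,
 * and "physical" pages are simulated with malloc. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define PAGE_SIZE 4096
#define MAX_PAGES 64

typedef struct { uint64_t virt; void *phys; } Pte;              /* one page-table entry */
typedef struct { Pte entries[MAX_PAGES]; int count; } PageTable;
typedef struct { void *pages[MAX_PAGES]; int num_pages; } Surface;

/* Allocate a surface backed by `num_pages` simulated physical pages. */
Surface alloc_surface(int num_pages) {
    Surface s = { .num_pages = num_pages };
    for (int i = 0; i < num_pages; i++)
        s.pages[i] = malloc(PAGE_SIZE);
    return s;
}

/* Map every page of the surface at consecutive virtual addresses
 * starting at `base` in the given page table. */
void map_surface(PageTable *pt, const Surface *s, uint64_t base) {
    for (int i = 0; i < s->num_pages; i++)
        pt->entries[pt->count++] = (Pte){ base + (uint64_t)i * PAGE_SIZE, s->pages[i] };
}

int main(void) {
    Surface surf = alloc_surface(4);
    PageTable cpu_pt = {0}, gpu_pt = {0};

    map_surface(&cpu_pt, &surf, 0x7f0000000000ULL);   /* CPU virtual addresses */
    map_surface(&gpu_pt, &surf, 0x0000100000ULL);     /* GPU virtual addresses */

    /* Both tables point at the same physical pages, so CPU and GPU share data. */
    printf("page 0: CPU VA %#llx and GPU VA %#llx -> same physical %p\n",
           (unsigned long long)cpu_pt.entries[0].virt,
           (unsigned long long)gpu_pt.entries[0].virt,
           cpu_pt.entries[0].phys);
    for (int i = 0; i < surf.num_pages; i++) free(surf.pages[i]);
    return 0;
}
```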
-
Patent number: 11488005
Abstract: A mechanism is described for facilitating smart collection of data and smart management of autonomous machines. A method of embodiments, as described herein, includes detecting one or more sets of data from one or more sources over one or more networks, and combining a first computation directed to be performed locally at a local computing device with a second computation directed to be performed remotely at a remote computing device in communication with the local computing device over the one or more networks, where the first computation consumes low power and the second computation consumes high power.
Type: Grant
Filed: July 22, 2019
Date of Patent: November 1, 2022
Assignee: Intel Corporation
Inventors: Brian T. Lewis, Feng Chen, Jeffrey R. Jackson, Justin E. Gottschlich, Rajkishore Barik, Xiaoming Chen, Prasoonkumar Surti, Mike B. Macpherson, Murali Sundaresan
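This patent and the related entries below describe pairing a low-power local computation with a high-power remote one. A minimal sketch of that split, with process_remotely() as a stub standing in for the networked heavy computation and a threshold filter standing in for the cheap local pass, might look like this:

```c
/* Illustrative sketch of splitting work between a low-power local step and a
 * high-power remote step. The local pass cheaply filters sensor samples; only
 * samples that survive the filter are forwarded for the expensive pass.
 * process_remotely() is a stub, not an actual RPC. */
#include <stdio.h>

#define N 8

/* Cheap, low-power local computation: a simple threshold filter. */
static int local_filter(double sample, double threshold) {
    return sample >= threshold;
}

/* Stand-in for the expensive remote computation (e.g. full model inference). */
static double process_remotely(double sample) {
    return sample * sample;   /* placeholder for the heavy result */
}

int main(void) {
    const double samples[N] = { 0.1, 0.8, 0.3, 0.95, 0.2, 0.7, 0.05, 0.6 };
    int sent = 0;
    for (int i = 0; i < N; i++) {
        if (!local_filter(samples[i], 0.5))
            continue;                          /* handled entirely on-device */
        double result = process_remotely(samples[i]);
        printf("sample %d -> remote result %.3f\n", i, result);
        sent++;
    }
    printf("forwarded %d of %d samples to the remote device\n", sent, N);
    return 0;
}
```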
-
Patent number: 11410024
Abstract: A mechanism is described for facilitating smart distribution of resources for deep learning autonomous machines. A method of embodiments, as described herein, includes detecting one or more sets of data from one or more sources over one or more networks, and introducing a library to a neural network application to determine an optimal point at which to apply frequency scaling without degrading performance of the neural network application at a computing device.
Type: Grant
Filed: April 28, 2017
Date of Patent: August 9, 2022
Assignee: Intel Corporation
Inventors: Rajkishore Barik, Brian T. Lewis, Murali Sundaresan, Jeffrey Jackson, Feng Chen, Xiaoming Chen, Mike Macpherson
-
Publication number: 20210350215Abstract: A mechanism is described for facilitating efficient training of neural networks at computing devices. A method of embodiments, as described herein, includes detecting one or more inputs for training of a neural network, and introducing randomness in floating point (FP) numbers to prevent overtraining of the neural network, where introducing randomness includes replacing less-significant low-order bits of operand and result values with new low-order bits during the training of the neural network.Type: ApplicationFiled: May 18, 2021Publication date: November 11, 2021Applicant: Intel CorporationInventors: Brian T. Lewis, Rajkishore Barik, Murali Sundaresan, Leonard Truong, Feng Chen, Xiaoming Chen, Mike B. MacPherson
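This publication and the granted patent 11017291 below describe replacing less-significant low-order bits of floating point operands and results with new bits during training. A standalone sketch of that bit replacement for a 32-bit float, outside any training framework, could look like the following; the helper name and the choice of eight perturbed bits are illustrative assumptions, not details from the patent.

```c
/* Minimal sketch of the bit-replacement idea: overwrite the k least-significant
 * mantissa bits of a 32-bit float with pseudo-random bits. A training framework
 * would apply this to operands and results; here it is a standalone helper. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Replace the low `k` mantissa bits of x with random bits (0 < k < 23). */
float randomize_low_bits(float x, int k) {
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);            /* type-pun without aliasing issues */
    uint32_t mask = (1u << k) - 1u;
    bits = (bits & ~mask) | ((uint32_t)rand() & mask);
    memcpy(&x, &bits, sizeof x);
    return x;
}

int main(void) {
    srand(42);
    float w = 0.123456789f;
    printf("original:   %.9f\n", w);
    printf("randomized: %.9f\n", randomize_low_bits(w, 8));  /* perturb 8 low bits */
    return 0;
}
```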
-
Publication number: 20210286733
Abstract: A method and system for sharing memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a physical memory and mapping the surface to a plurality of virtual memory addresses within a CPU page table. The method also includes mapping the surface to a plurality of graphics virtual memory addresses within an I/O device page table.
Type: Application
Filed: February 19, 2021
Publication date: September 16, 2021
Applicant: Intel Corporation
Inventors: Jayanth N. Rao, Murali Sundaresan
-
Patent number: 11017291
Abstract: A mechanism is described for facilitating efficient training of neural networks at computing devices. A method of embodiments, as described herein, includes detecting one or more inputs for training of a neural network, and introducing randomness in floating point (FP) numbers to prevent overtraining of the neural network, where introducing randomness includes replacing less-significant low-order bits of operand and result values with new low-order bits during the training of the neural network.
Type: Grant
Filed: April 28, 2017
Date of Patent: May 25, 2021
Assignee: Intel Corporation
Inventors: Brian T. Lewis, Rajkishore Barik, Murali Sundaresan, Leonard Truong, Feng Chen, Xiaoming Chen, Mike B. Macpherson
-
Patent number: 10929304
Abstract: A method and system for sharing memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a physical memory and mapping the surface to a plurality of virtual memory addresses within a CPU page table. The method also includes mapping the surface to a plurality of graphics virtual memory addresses within an I/O device page table.
Type: Grant
Filed: December 13, 2018
Date of Patent: February 23, 2021
Assignee: Intel Corporation
Inventors: Jayanth N. Rao, Murali Sundaresan
-
Publication number: 20200019844
Abstract: A mechanism is described for facilitating smart collection of data and smart management of autonomous machines. A method of embodiments, as described herein, includes detecting one or more sets of data from one or more sources over one or more networks, and combining a first computation directed to be performed locally at a local computing device with a second computation directed to be performed remotely at a remote computing device in communication with the local computing device over the one or more networks, where the first computation consumes low power and the second computation consumes high power.
Type: Application
Filed: July 22, 2019
Publication date: January 16, 2020
Applicant: Intel Corporation
Inventors: Brian T. Lewis, Feng Chen, Jeffrey R. Jackson, Justin E. Gottschlich, Rajkishore Barik, Xiaoming Chen, Prasoonkumar Surti, Mike B. Macpherson, Murali Sundaresan
-
Patent number: 10410115
Abstract: A mechanism is described for facilitating smart collection of data and smart management of autonomous machines. A method of embodiments, as described herein, includes detecting one or more sets of data from one or more sources over one or more networks, and combining a first computation directed to be performed locally at a local computing device with a second computation directed to be performed remotely at a remote computing device in communication with the local computing device over the one or more networks, where the first computation consumes low power and the second computation consumes high power.
Type: Grant
Filed: April 28, 2017
Date of Patent: September 10, 2019
Assignee: Intel Corporation
Inventors: Brian T. Lewis, Feng Chen, Jeffrey R. Jackson, Justin E. Gottschlich, Rajkishore Barik, Xiaoming Chen, Prasoonkumar Surti, Mike B. Macpherson, Murali Sundaresan
-
Publication number: 20190114267
Abstract: A method and system for sharing memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a physical memory and mapping the surface to a plurality of virtual memory addresses within a CPU page table. The method also includes mapping the surface to a plurality of graphics virtual memory addresses within an I/O device page table.
Type: Application
Filed: December 13, 2018
Publication date: April 18, 2019
Applicant: Intel Corporation
Inventors: Jayanth N. Rao, Murali Sundaresan
-
Patent number: 10198361
Abstract: A method and system for sharing memory between a central processing unit (CPU) and a graphics processing unit (GPU) of a computing device are disclosed herein. The method includes allocating a surface within a physical memory and mapping the surface to a plurality of virtual memory addresses within a CPU page table. The method also includes mapping the surface to a plurality of graphics virtual memory addresses within an I/O device page table.
Type: Grant
Filed: June 30, 2016
Date of Patent: February 5, 2019
Assignee: Intel Corporation
Inventors: Jayanth N. Rao, Murali Sundaresan
-
Publication number: 20180314250
Abstract: A mechanism is described for facilitating smart collection of data and smart management of autonomous machines. A method of embodiments, as described herein, includes detecting one or more sets of data from one or more sources over one or more networks, and combining a first computation directed to be performed locally at a local computing device with a second computation directed to be performed remotely at a remote computing device in communication with the local computing device over the one or more networks, where the first computation consumes low power and the second computation consumes high power.
Type: Application
Filed: April 28, 2017
Publication date: November 1, 2018
Applicant: Intel Corporation
Inventors: Brian T. Lewis, Feng Chen, Jeffrey R. Jackson, Justin E. Gottschlich, Rajkishore Barik, Xiaoming Chen, Prasoonkumar Surti, Mike B. Macpherson, Murali Sundaresan
-
Publication number: 20180314935
Abstract: A mechanism is described for facilitating efficient training of neural networks at computing devices. A method of embodiments, as described herein, includes detecting one or more inputs for training of a neural network, and introducing randomness in floating point (FP) numbers to prevent overtraining of the neural network, where introducing randomness includes replacing less-significant low-order bits of operand and result values with new low-order bits during the training of the neural network.
Type: Application
Filed: April 28, 2017
Publication date: November 1, 2018
Applicant: Intel Corporation
Inventors: Brian T. Lewis, Rajkishore Barik, Murali Sundaresan, Leonard Truong
-
Publication number: 20180314936
Abstract: A mechanism is described for facilitating smart distribution of resources for deep learning autonomous machines. A method of embodiments, as described herein, includes detecting one or more sets of data from one or more sources over one or more networks, and introducing a library to a neural network application to determine an optimal point at which to apply frequency scaling without degrading performance of the neural network application at a computing device.
Type: Application
Filed: April 28, 2017
Publication date: November 1, 2018
Applicant: Intel Corporation
Inventors: Rajkishore Barik, Brian T. Lewis, Murali Sundaresan, Jeffrey Jackson, Feng Chen, Xiaoming Chen, Mike Macpherson
-
Patent number: 9928170
Abstract: In accordance with some embodiments, a scatter/gather memory approach may be enabled that is exposed or backed by system memory and uses conventional tags and addresses. Such a technique may therefore be more amenable to software developers and their conventional techniques.
Type: Grant
Filed: September 13, 2016
Date of Patent: March 27, 2018
Assignee: Intel Corporation
Inventors: Altug Koker, Thomas A. Piazza, Murali Sundaresan
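The abstract is brief, but a plain software analogue of gather/scatter over ordinary system memory addresses, shown below, may help make the idea concrete. It is only an illustration of the access pattern, not the hardware mechanism the patent covers.

```c
/* Simple software gather/scatter over ordinary system memory using plain
 * addresses and an index array. */
#include <stdio.h>

/* Gather: dst[i] = src[idx[i]].  Scatter: dst[idx[i]] = src[i]. */
void gather(float *dst, const float *src, const int *idx, int n) {
    for (int i = 0; i < n; i++) dst[i] = src[idx[i]];
}
void scatter(float *dst, const float *src, const int *idx, int n) {
    for (int i = 0; i < n; i++) dst[idx[i]] = src[i];
}

int main(void) {
    float mem[8] = { 10, 11, 12, 13, 14, 15, 16, 17 };
    int   idx[4] = { 6, 1, 4, 3 };
    float regs[4];

    gather(regs, mem, idx, 4);                  /* pull scattered elements together */
    for (int i = 0; i < 4; i++) regs[i] += 100; /* operate on the gathered values   */
    scatter(mem, regs, idx, 4);                 /* write them back to their slots   */

    for (int i = 0; i < 8; i++) printf("%g ", mem[i]);
    printf("\n");
    return 0;
}
```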
-
Patent number: 9645795
Abstract: Determining a class of an object is disclosed. A pointer of the object is obtained. One or more bits that are not implemented as address bits are extracted from the pointer. The one or more bits are interpreted as an identifier of the class of the object. The class of the object is determined to correspond to the identifier.
Type: Grant
Filed: August 12, 2014
Date of Patent: May 9, 2017
Assignee: Azul Systems, Inc.
Inventors: Gil Tene, Murali Sundaresan, Michael A. Wolf
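As a rough illustration of reading a class identifier from pointer bits that are not implemented as address bits, the sketch below packs a class id into the high 16 bits of a 64-bit pointer on a machine that implements 48 address bits; the encode/class_of/address_of helpers are illustrative names, not details from the patent, and a real runtime would strip the tag before every dereference.

```c
/* Illustrative sketch: carry a class id in the unused high bits of a pointer
 * on a machine where only the low 48 bits are implemented as address bits. */
#include <stdint.h>
#include <stdio.h>

#define ADDR_BITS 48
#define ADDR_MASK ((1ULL << ADDR_BITS) - 1)

/* Pack a class id into the non-address bits of an object pointer. */
static uintptr_t encode(void *obj, unsigned class_id) {
    return ((uintptr_t)obj & ADDR_MASK) | ((uintptr_t)class_id << ADDR_BITS);
}

static unsigned class_of(uintptr_t ref)    { return (unsigned)(ref >> ADDR_BITS); }
static void    *address_of(uintptr_t ref)  { return (void *)(ref & ADDR_MASK); }

int main(void) {
    int payload = 7;
    uintptr_t ref = encode(&payload, 5);       /* tag the object with class id 5 */
    printf("class id: %u\n", class_of(ref));
    printf("value:    %d\n", *(int *)address_of(ref));  /* strip tag, then deref */
    return 0;
}
```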
-
Patent number: 9632801
Abstract: Conversion of an array of structures (AOS) to a structure of arrays (SOA) improves the efficiency of transfer from the AOS to the SOA. A similar technique can be used to convert efficiently from an SOA to an AOS. The controller performing the conversion computes a partition size as the highest common factor between the structure size of structures in the AOS and the number of banks in a first memory device, and transfers data based on the partition size rather than on the structure size. The controller can read a partition-size number of elements from multiple different structures to ensure that full data transfer bandwidth is used for each transfer.
Type: Grant
Filed: April 9, 2014
Date of Patent: April 25, 2017
Assignee: Intel Corporation
Inventors: Supratim Pal, Murali Sundaresan
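A minimal sketch of the partition-size idea follows: the copy loop moves gcd(structure size, number of banks) elements per step instead of whole structures. The structure size, bank count, and data layout here are made-up examples, and the code only models the access pattern, not an actual memory controller.

```c
/* Illustrative AOS -> SOA conversion keyed off the abstract: the copy loop is
 * organized around partition = gcd(elements per structure, number of banks)
 * rather than around whole structures. */
#include <stdio.h>

static int gcd(int a, int b) { while (b) { int t = a % b; a = b; b = t; } return a; }

#define NUM_STRUCTS 4
#define FIELDS      6          /* elements per structure (structure size) */
#define NUM_BANKS   8          /* banks in the destination memory device  */

int main(void) {
    int partition = gcd(FIELDS, NUM_BANKS);      /* = 2 for these example sizes */
    float aos[NUM_STRUCTS][FIELDS];
    float soa[FIELDS][NUM_STRUCTS];

    for (int s = 0; s < NUM_STRUCTS; s++)        /* fill the AOS with sample data */
        for (int f = 0; f < FIELDS; f++)
            aos[s][f] = 10.0f * s + f;

    /* Move `partition` consecutive fields of one structure per step, so each
     * step is the same size regardless of how large the structure is. */
    for (int s = 0; s < NUM_STRUCTS; s++)
        for (int f = 0; f < FIELDS; f += partition)
            for (int p = 0; p < partition; p++)
                soa[f + p][s] = aos[s][f + p];

    printf("partition size: %d elements\n", partition);
    printf("soa[3] = ");
    for (int s = 0; s < NUM_STRUCTS; s++) printf("%g ", soa[3][s]);
    printf("\n");
    return 0;
}
```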