Patents by Inventor Mohammad R. Haghighat
Mohammad R. Haghighat has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12167014
Abstract: Systems, apparatuses and methods may provide for source device technology that identifies a plurality of object regions in a video frame, automatically generates context information for the video frame on a per-object region basis and embeds the context information in a signal containing the video frame. Additionally, playback device technology may decode a signal containing a video frame and embedded context information, identify a plurality of object regions in the video frame based on the embedded context information, and automatically select one or more post-processing configurations for the video frame on a per-object region basis.
Type: Grant
Filed: May 22, 2020
Date of Patent: December 10, 2024
Assignee: Intel Corporation
Inventors: Changliang Wang, Mohammad R. Haghighat, Wei Hu, Tao Xu, Tianmi Chen, Bin Yang, Jia Bao, Raul Diaz
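A minimal sketch of the playback-side idea described in the abstract, assuming hypothetical region labels and configuration names; it is illustrative only, not the patented implementation. It shows how embedded per-region context could drive a different post-processing configuration for each object region.

```python
# Sketch: select a post-processing config per object region from embedded context.
# All labels and config fields below are assumptions for illustration.
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class RegionContext:
    bbox: Tuple[int, int, int, int]   # (x, y, width, height) of the object region
    label: str                        # e.g. "face", "text", "sky" -- assumed labels

# Hypothetical mapping from region label to a post-processing configuration.
POST_PROCESSING: Dict[str, dict] = {
    "face": {"denoise": 0.2, "sharpen": 0.6},
    "text": {"denoise": 0.0, "sharpen": 0.9},
    "sky":  {"denoise": 0.5, "sharpen": 0.1},
}
DEFAULT_CONFIG = {"denoise": 0.3, "sharpen": 0.3}

def select_configs(regions: List[RegionContext]) -> List[Tuple[RegionContext, dict]]:
    """Pick a post-processing configuration independently for each object region."""
    return [(r, POST_PROCESSING.get(r.label, DEFAULT_CONFIG)) for r in regions]

regions = [RegionContext((0, 0, 64, 64), "face"), RegionContext((64, 0, 256, 64), "text")]
for region, config in select_configs(regions):
    print(region.label, config)
```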
-
Patent number: 12086657
Abstract: Systems, apparatuses and methods may provide for technology that detects a tensor operation in an application, wherein the tensor operation has an unspecified tensor input size, determines the input tensor size at runtime, and selects a partition configuration for the tensor operation based at least in part on the input tensor size and one or more runtime conditions. In one example, the technology searches a lookup table for the input tensor size and at least one of the runtime condition(s) to select the partition configuration.
Type: Grant
Filed: April 24, 2023
Date of Patent: September 10, 2024
Assignee: Intel Corporation
Inventors: Sara Baghsorkhi, Mohammad R. Haghighat
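A minimal sketch of the lookup-table idea in the abstract, under assumed names: the table keys, size buckets, and the "idle cores" runtime condition are all placeholders chosen for illustration, not the actual table used by the invention.

```python
# Sketch: pick a partition configuration at runtime from the observed input size
# and a runtime condition (idle cores). Table contents are hypothetical.
import os

# Hypothetical lookup table: (size bucket, core bucket) -> partition config.
PARTITION_TABLE = {
    ("small", "few_cores"):  {"tiles": 1,  "threads": 1},
    ("small", "many_cores"): {"tiles": 2,  "threads": 2},
    ("large", "few_cores"):  {"tiles": 4,  "threads": 2},
    ("large", "many_cores"): {"tiles": 16, "threads": 8},
}

def select_partition(input_tensor_shape, idle_cores=None):
    """Choose a partition configuration once the input size is known at runtime."""
    if idle_cores is None:
        idle_cores = os.cpu_count() or 1
    numel = 1
    for dim in input_tensor_shape:
        numel *= dim
    size_bucket = "large" if numel > 1_000_000 else "small"
    core_bucket = "many_cores" if idle_cores >= 4 else "few_cores"
    return PARTITION_TABLE[(size_bucket, core_bucket)]

# Example: a 1024x1024 input on a machine with 8 idle cores.
print(select_partition((1024, 1024), idle_cores=8))  # -> {'tiles': 16, 'threads': 8}
```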
-
Patent number: 12045677
Abstract: Systems, apparatuses and methods may provide for technology that detects a generic cloud service call in an application, wherein platform-specific parameters are unspecified in the cloud service call. The technology may also select a first cloud platform based on one or more performance constraints associated with the first cloud platform and automatically generate a first platform-specific service call based on the cloud service call and a first set of platform-specific parameters. In one example, the technology also maps the cloud service call to the first platform-specific service call. Additionally, the technology may migrate the cloud service call to a second cloud platform without rewriting the generic cloud service call.
Type: Grant
Filed: December 20, 2019
Date of Patent: July 23, 2024
Assignee: Intel Corporation
Inventors: Sara Baghsorkhi, Mohammad R. Haghighat
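A minimal sketch of the dispatch pattern described above, with entirely hypothetical adapters and latency figures (real implementations would call provider SDKs); it shows a generic call being mapped to a platform-specific call chosen by a performance constraint, without the application naming a provider.

```python
# Sketch: a generic object-store "put" that is mapped to a platform-specific call.
# Adapter functions, latency numbers, and the constraint are illustrative assumptions.
from typing import Callable, Dict

def aws_put_object(bucket: str, key: str, data: bytes) -> str:
    return f"aws://{bucket}/{key}"        # placeholder for a provider SDK call

def azure_put_blob(container: str, name: str, data: bytes) -> str:
    return f"azure://{container}/{name}"  # placeholder for a provider SDK call

ADAPTERS: Dict[str, Callable[[str, str, bytes], str]] = {
    "aws": aws_put_object,
    "azure": azure_put_blob,
}

# Assumed per-platform metrics used to satisfy a latency constraint.
PLATFORM_LATENCY_MS = {"aws": 12.0, "azure": 9.5}

def put_object(bucket: str, key: str, data: bytes, max_latency_ms: float = 10.0) -> str:
    """Generic call: pick the first platform meeting the constraint and dispatch."""
    for platform, latency in PLATFORM_LATENCY_MS.items():
        if latency <= max_latency_ms:
            return ADAPTERS[platform](bucket, key, data)
    raise RuntimeError("no cloud platform satisfies the latency constraint")

print(put_object("photos", "cat.png", b"..."))  # dispatches to the azure adapter here
```

Because the application only ever calls put_object, "migrating" to another platform amounts to changing the adapter table and metrics, not rewriting the call.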
-
Patent number: 11922220
Abstract: Embodiments of systems, apparatuses and methods provide enhanced function as a service (FaaS) to users, e.g., computer developers and cloud service providers (CSPs). A computing system configured to provide such enhanced FaaS service includes one or more architectural subsystems, software and orchestration subsystems, network and storage subsystems, and security subsystems. The computing system executes functions in response to events triggered by the users in an execution environment provided by the architectural subsystems, which represent an abstraction of execution management and shield the users from the burden of managing the execution. The software and orchestration subsystems allocate computing resources for the function execution by intelligently spinning up and down containers for function code with decreased instantiation latency and increased execution scalability while maintaining secured execution.
Type: Grant
Filed: April 16, 2019
Date of Patent: March 5, 2024
Assignee: Intel Corporation
Inventors: Mohammad R. Haghighat, Kshitij Doshi, Andrew J. Herdrich, Anup Mohan, Ravishankar R. Iyer, Mingqiu Sun, Krishna Bhuyan, Teck Joo Goh, Mohan J. Kumar, Michael Prinke, Michael Lemay, Leeor Peled, Jr-Shian Tsai, David M. Durham, Jeffrey D. Chamberlain, Vadim A. Sukhomlinov, Eric J. Dahlen, Sara Baghsorkhi, Harshad Sane, Areg Melik-Adamyan, Ravi Sahita, Dmitry Yurievich Babokin, Ian M. Steiner, Alexander Bachmutsky, Anil Rao, Mingwei Zhang, Nilesh K. Jain, Amin Firoozshahian, Baiju V. Patel, Wenyong Huang, Yeluri Raghuram
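A toy sketch of one narrow aspect mentioned in the abstract, event-triggered execution against a warm-container pool to reduce instantiation latency; the Container and FaasRuntime classes are placeholders invented for illustration and stand in for the far richer subsystems the patent describes.

```python
# Sketch: event-triggered function invocation with a pre-warmed container pool.
# Everything here is a simplified stand-in, not the patented FaaS architecture.
import queue

class Container:
    def run(self, handler, event):
        return handler(event)

class FaasRuntime:
    def __init__(self, warm_pool_size=2):
        self.pool = queue.SimpleQueue()
        for _ in range(warm_pool_size):        # pre-warm containers up front
            self.pool.put(Container())

    def invoke(self, handler, event):
        try:
            container = self.pool.get_nowait() # reuse a warm container if one is free
        except queue.Empty:
            container = Container()            # otherwise pay a cold start
        try:
            return container.run(handler, event)
        finally:
            self.pool.put(container)           # return the container to the pool

runtime = FaasRuntime()
print(runtime.invoke(lambda evt: evt["x"] * 2, {"x": 21}))  # -> 42
```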
-
Patent number: 11880669
Abstract: Systems, apparatuses and methods may provide for technology that generates a first compiler output based on input code that includes dynamically typed variable information and generates a second compiler output based on the input code, wherein the second compiler output includes type check code to verify one or more type inferences associated with the first compiler output. The technology may also execute the first compiler output and the second compiler output in parallel via different threads.
Type: Grant
Filed: October 8, 2019
Date of Patent: January 23, 2024
Assignee: Intel Corporation
Inventors: Shiyu Zhang, Junyong Ding, Tianyou Li, Mohammad R. Haghighat
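A minimal sketch of the two-output idea, assuming a hand-written "fast" path and "check" path rather than real compiler outputs: the type-specialized path and the type-check path run on different threads, and a failed check signals that a generic fallback would be needed.

```python
# Sketch: run a type-specialized output and its type-check output in parallel threads.
# fast_add / type_check_add are hypothetical stand-ins for generated compiler outputs.
from concurrent.futures import ThreadPoolExecutor

def fast_add(a, b):
    # "First compiler output": specialized on the inference that both operands are ints.
    return a + b

def type_check_add(a, b):
    # "Second compiler output": verifies the inference the fast path relied on.
    return isinstance(a, int) and isinstance(b, int)

def guarded_add(a, b):
    with ThreadPoolExecutor(max_workers=2) as pool:
        result = pool.submit(fast_add, a, b)       # thread 1: specialized code
        check = pool.submit(type_check_add, a, b)  # thread 2: type check code
        if not check.result():
            raise TypeError("type inference violated; fall back to a generic code path")
        return result.result()

print(guarded_add(2, 3))  # -> 5
```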
-
Patent number: 11798118
Abstract: Systems, apparatuses and methods may provide for technology that sends a first message via an input output (IO) link, wherein the first message includes a first rendering asset and an identifier (ID) associated with the first rendering asset. The technology may also exclude a second rendering asset from a second message in response to the ID being shared by the first rendering asset and the second rendering asset and send the second message via the IO link, wherein the second message includes the ID. In one example, the ID is a hash ID.
Type: Grant
Filed: December 20, 2019
Date of Patent: October 24, 2023
Assignee: Intel Corporation
Inventors: Changliang Wang, Mohammad R. Haghighat, Yong Yao, Xiaocheng Mao, Yifei Xue, Bin Yang, Jia Bao, Raul Diaz
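A minimal sketch of the deduplication idea using a hash ID; the message layout is a hypothetical wire format invented here, not the patented protocol. Only the first message for an asset carries the bytes, and later messages for an asset with the same ID carry just the ID.

```python
# Sketch: exclude a rendering asset from a message when its hash ID was already sent.
import hashlib

class AssetSender:
    def __init__(self):
        self.sent_ids = set()

    def build_message(self, asset_bytes: bytes) -> dict:
        asset_id = hashlib.sha256(asset_bytes).hexdigest()   # hash ID for the asset
        if asset_id in self.sent_ids:
            return {"id": asset_id}                          # asset excluded, ID only
        self.sent_ids.add(asset_id)
        return {"id": asset_id, "asset": asset_bytes}        # first send carries the bytes

sender = AssetSender()
texture = b"...texture data..."
print("asset" in sender.build_message(texture))  # True  (first message includes the asset)
print("asset" in sender.build_message(texture))  # False (second message sends only the ID)
```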
-
Publication number: 20230289243
Abstract: Systems, apparatuses and methods may provide for technology that detects a tensor operation in an application, wherein the tensor operation has an unspecified tensor input size, determines the input tensor size at runtime, and selects a partition configuration for the tensor operation based at least in part on the input tensor size and one or more runtime conditions. In one example, the technology searches a lookup table for the input tensor size and at least one of the runtime condition(s) to select the partition configuration.
Type: Application
Filed: April 24, 2023
Publication date: September 14, 2023
Applicant: Intel Corporation
Inventors: Sara Baghsorkhi, Mohammad R. Haghighat
-
Patent number: 11704601
Abstract: Systems, apparatuses and methods may provide for technology that generates inclusion data in accordance with a Poisson distribution, wherein the inclusion data specifies a number of inclusions for each observation in a set of observations. The technology may also train a first decision tree in a random forest based at least in part on the inclusion data.
Type: Grant
Filed: December 16, 2019
Date of Patent: July 18, 2023
Assignee: Intel Corporation
Inventors: Mikhail Averbukh, Mohammad R. Haghighat
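A minimal sketch of the Poisson-based inclusion idea on toy data, assuming a Poisson(1) draw per observation (a common form of Poisson bootstrap) and an off-the-shelf decision tree; the data, rate, and classifier choice are assumptions, not the patented implementation.

```python
# Sketch: Poisson-distributed inclusion counts resample observations for one forest tree.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))                  # toy feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)        # toy labels

# Inclusion data: how many times each observation appears in this tree's sample.
inclusions = rng.poisson(lam=1.0, size=len(X))

# Expand observations by their inclusion counts and fit the first tree of the forest.
idx = np.repeat(np.arange(len(X)), inclusions)
tree = DecisionTreeClassifier(max_features="sqrt", random_state=0).fit(X[idx], y[idx])
print(tree.score(X, y))
```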
-
Publication number: 20230171420
Abstract: Systems, apparatuses and methods may provide for source device technology that identifies a plurality of object regions in a video frame, automatically generates context information for the video frame on a per-object region basis and embeds the context information in a signal containing the video frame. Additionally, playback device technology may decode a signal containing a video frame and embedded context information, identify a plurality of object regions in the video frame based on the embedded context information, and automatically select one or more post-processing configurations for the video frame on a per-object region basis.
Type: Application
Filed: May 22, 2020
Publication date: June 1, 2023
Inventors: Changliang Wang, Mohammad R. Haghighat, Wei Hu, Tao Xu, Tianmi Chen, Bin Yang, Jia Bao, Raul Diaz
-
Patent number: 11663056
Abstract: Systems, apparatuses and methods may provide for technology that detects a tensor operation in an application, wherein the tensor operation has an unspecified tensor input size, determines the input tensor size at runtime, and selects a partition configuration for the tensor operation based at least in part on the input tensor size and one or more runtime conditions. In one example, the technology searches a lookup table for the input tensor size and at least one of the runtime condition(s) to select the partition configuration.
Type: Grant
Filed: December 20, 2019
Date of Patent: May 30, 2023
Assignee: Intel Corporation
Inventors: Sara Baghsorkhi, Mohammad R. Haghighat
-
Patent number: 11520501
Abstract: Systems, apparatuses and methods may provide for technology that identifies a prioritization data structure associated with a function, wherein the prioritization data structure lists hardware resource types in priority order. The technology may also allocate a first type of hardware resource to the function if the first type of hardware resource is available, wherein the first type of hardware resource has a highest priority in the prioritization data structure. Additionally, the technology may allocate, in the priority order, a second type of hardware resource to the function if the first type of hardware resource is not available.
Type: Grant
Filed: December 20, 2019
Date of Patent: December 6, 2022
Assignee: Intel Corporation
Inventors: Mohammad R. Haghighat, Sara Baghsorkhi
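A minimal sketch of priority-ordered allocation, with assumed resource names and availability counts chosen for illustration: walk the function's prioritization list in order and grant the first resource type that is available.

```python
# Sketch: allocate the highest-priority available hardware resource type to a function.
# Resource names and counts are hypothetical.
available = {"gpu": 0, "fpga": 1, "cpu": 8}

# Prioritization data structure for one function: resource types in priority order.
priority_order = ["gpu", "fpga", "cpu"]

def allocate(priorities, pool):
    for resource_type in priorities:          # highest priority first
        if pool.get(resource_type, 0) > 0:
            pool[resource_type] -= 1
            return resource_type
    raise RuntimeError("no hardware resource available for this function")

print(allocate(priority_order, available))    # -> "fpga" (no GPU free, FPGA is next)
```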
-
Publication number: 20220374158
Abstract: Systems, apparatuses and methods may provide technology for managing a runtime computing environment having tiered object memory placement that assigns a hotness score to an object having an object type based on an invocation count of objects referenced by a hot method, allocates a newly-created object to either a hot object heap, assigned to store hot objects in a first memory tier, or a cold object heap, assigned to store cold objects in a second memory tier, based on the hotness score associated with the object type for the newly-created object, and migrates a plurality of objects between the hot object heap and the cold object heap based on a hotness score associated with each object. The technology may also operate the object migration in an execution thread independent of an execution thread for the object allocation.
Type: Application
Filed: December 20, 2019
Publication date: November 24, 2022
Inventors: Bin Yang, Chao Xie, Dong-Yuan Chen, Jia Bao, Mingqiu Sun, Mohammad R. Haghighat, Qiming Shi, Zhen Zhou
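A heavily simplified sketch of the placement and migration idea: per-type hotness scores decide whether a new object goes on the hot heap (fast tier) or the cold heap, while a separate thread rebalances objects as scores change. The scores, threshold, and type names are assumptions for illustration, and Python lists stand in for the memory tiers.

```python
# Sketch: tiered object placement by hotness score, with migration on its own thread.
import threading
import time

HOT_THRESHOLD = 100
hotness_by_type = {"Order": 500, "AuditRecord": 3}   # assumed invocation-derived scores
hot_heap, cold_heap = [], []                          # stand-ins for the two memory tiers
heap_lock = threading.Lock()

def is_hot(obj_type):
    return hotness_by_type.get(obj_type, 0) >= HOT_THRESHOLD

def allocate(obj, obj_type):
    """Place a newly created object on the hot or cold heap by its type's score."""
    with heap_lock:
        (hot_heap if is_hot(obj_type) else cold_heap).append((obj_type, obj))

def migrate_once():
    """Move objects whose hotness no longer matches the heap they live on."""
    with heap_lock:
        for heap, other, want_hot in ((hot_heap, cold_heap, True), (cold_heap, hot_heap, False)):
            for item in list(heap):
                if is_hot(item[0]) != want_hot:
                    heap.remove(item)
                    other.append(item)

def migration_loop(interval=1.0):
    while True:                                       # runs independently of allocation
        migrate_once()
        time.sleep(interval)

threading.Thread(target=migration_loop, daemon=True).start()
allocate({"id": 1}, "Order")        # lands on the hot heap
allocate({"id": 2}, "AuditRecord")  # lands on the cold heap
```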
-
Publication number: 20220342684
Abstract: Methods, systems and apparatuses may include technology that identifies a second computing platform that includes a second hardware device that satisfies one or more conditions. The second hardware device is associated with a hardware abstraction layer on the second computing platform. The second computing platform is coupled to a first computing platform. The technology may further include generating a virtual hardware abstraction layer that is to represent the hardware abstraction layer on the second computing platform.
Type: Application
Filed: December 19, 2019
Publication date: October 27, 2022
Applicant: Intel Corporation
Inventors: Mohammad R. Haghighat, Yong Yao, Bin Yang, Ignacio Alvarez, Jia Bao
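A toy sketch of the virtual-HAL notion, with an entirely hypothetical API: a local facade on the first platform represents the HAL of a second platform whose device satisfies a required condition (here, a capability string), forwarding calls to it.

```python
# Sketch: a virtual hardware abstraction layer that fronts a remote platform's HAL.
# Classes, capability strings, and operations are invented for illustration.
class RemoteHal:
    """Stands in for the real HAL on the second computing platform."""
    def __init__(self, capabilities):
        self.capabilities = set(capabilities)
    def call(self, op, *args):
        return f"remote:{op}{args}"

class VirtualHal:
    """Local facade that represents the remote HAL on the first platform."""
    def __init__(self, remote: RemoteHal):
        self.remote = remote
    def call(self, op, *args):
        return self.remote.call(op, *args)   # in practice this would cross the platform link

platforms = [RemoteHal({"camera"}), RemoteHal({"camera", "npu"})]
# Identify a second platform whose device satisfies the condition ("npu" support).
chosen = next(p for p in platforms if "npu" in p.capabilities)
vhal = VirtualHal(chosen)
print(vhal.call("run_inference", "model.tflite"))
```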
-
Publication number: 20220326921
Abstract: Systems, apparatuses and methods may provide for technology that generates a first compiler output based on input code that includes dynamically typed variable information and generates a second compiler output based on the input code, wherein the second compiler output includes type check code to verify one or more type inferences associated with the first compiler output. The technology may also execute the first compiler output and the second compiler output in parallel via different threads.
Type: Application
Filed: October 8, 2019
Publication date: October 13, 2022
Inventors: Shiyu Zhang, Junyong Ding, Tianyou Li, Mohammad R. Haghighat
-
Publication number: 20220318943
Abstract: Systems, apparatuses and methods may provide for technology that sends a first message via an input output (IO) link, wherein the first message includes a first rendering asset and an identifier (ID) associated with the first rendering asset. The technology may also exclude a second rendering asset from a second message in response to the ID being shared by the first rendering asset and the second rendering asset and send the second message via the IO link, wherein the second message includes the ID. In one example, the ID is a hash ID.
Type: Application
Filed: December 20, 2019
Publication date: October 6, 2022
Inventors: Changliang Wang, Mohammad R. Haghighat, Yong Yao, Xiaocheng Mao, Yifei Xue, Bin Yang, Jia Bao, Raul Diaz
-
Patent number: 11249910
Abstract: Systems, apparatuses and methods may provide for technology that detects a runtime call to a communication library, wherein the runtime call identifies a memory buffer, determines that a class of service (CLOS) attribute is associated with the memory buffer, and issues a driver instruction to modify the CLOS attribute in response to the runtime call.
Type: Grant
Filed: December 17, 2019
Date of Patent: February 15, 2022
Assignee: Intel Corporation
Inventors: Aravindh Anantaraman, Srinivas Sridharan, Ajaya Durg, Mohammad R. Haghighat, Mikhail E. Smorkalov, Sudarshan Srinivasan
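A minimal sketch of the runtime-hook pattern described above; every name here is hypothetical, and print statements stand in for the actual communication library call and the driver instruction that would modify the class-of-service (CLOS) attribute.

```python
# Sketch: intercept a communication-library send and adjust a buffer's CLOS first.
# clos_by_buffer, driver_set_clos, and comm_send are illustrative placeholders.
clos_by_buffer = {}            # id(buffer) -> CLOS value, registered elsewhere

def driver_set_clos(buffer, clos):
    # Placeholder for the driver instruction that would modify the CLOS attribute.
    print(f"driver: set CLOS {clos} for buffer of {len(buffer)} bytes")

def comm_send(buffer):
    # Placeholder for the underlying communication-library call.
    print(f"sending {len(buffer)} bytes")

def traced_send(buffer):
    """Runtime hook around the library call."""
    clos = clos_by_buffer.get(id(buffer))
    if clos is not None:                       # buffer has an associated CLOS attribute
        driver_set_clos(buffer, clos)
    return comm_send(buffer)

payload = bytearray(1024)
clos_by_buffer[id(payload)] = 2
traced_send(payload)
```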
-
Publication number: 20210263779
Abstract: Embodiments of systems, apparatuses and methods provide enhanced function as a service (FaaS) to users, e.g., computer developers and cloud service providers (CSPs). A computing system configured to provide such enhanced FaaS service includes one or more architectural subsystems, software and orchestration subsystems, network and storage subsystems, and security subsystems. The computing system executes functions in response to events triggered by the users in an execution environment provided by the architectural subsystems, which represent an abstraction of execution management and shield the users from the burden of managing the execution. The software and orchestration subsystems allocate computing resources for the function execution by intelligently spinning up and down containers for function code with decreased instantiation latency and increased execution scalability while maintaining secured execution.
Type: Application
Filed: April 16, 2019
Publication date: August 26, 2021
Applicant: Intel Corporation
Inventors: Mohammad R. Haghighat, Kshitij Doshi, Andrew J. Herdrich, Anup Mohan, Ravishankar R. Iyer, Mingqiu Sun, Krishna Bhuyan, Teck Joo Goh, Mohan J. Kumar, Michael Prinke, Michael Lemay, Leeor Peled, Jr-Shian Tsai, David M. Durham, Jeffrey D. Chamberlain, Vadim A. Sukhomlinov, Eric J. Dahlen, Sara Baghsorkhi, Harshad Sane, Areg Melik-Adamyan, Ravi Sahita, Dmitry Yurievich Babokin, Ian M. Steiner, Alexander Bachmutsky, Anil Rao, Mingwei Zhang, Nilesh K. Jain, Amin Firoozshahian, Baiju V. Patel, Wenyong Huang, Yeluri Raghuram
-
Publication number: 20200133537
Abstract: Systems, apparatuses and methods may provide for technology that identifies a prioritization data structure associated with a function, wherein the prioritization data structure lists hardware resource types in priority order. The technology may also allocate a first type of hardware resource to the function if the first type of hardware resource is available, wherein the first type of hardware resource has a highest priority in the prioritization data structure. Additionally, the technology may allocate, in the priority order, a second type of hardware resource to the function if the first type of hardware resource is not available.
Type: Application
Filed: December 20, 2019
Publication date: April 30, 2020
Inventors: Mohammad R. Haghighat, Sara Baghsorkhi
-
Publication number: 20200137163
Abstract: Systems, apparatuses and methods may provide for technology that detects a generic cloud service call in an application, wherein platform-specific parameters are unspecified in the cloud service call. The technology may also select a first cloud platform based on one or more performance constraints associated with the first cloud platform and automatically generate a first platform-specific service call based on the cloud service call and a first set of platform-specific parameters. In one example, the technology also maps the cloud service call to the first platform-specific service call. Additionally, the technology may migrate the cloud service call to a second cloud platform without rewriting the generic cloud service call.
Type: Application
Filed: December 20, 2019
Publication date: April 30, 2020
Inventors: Sara Baghsorkhi, Mohammad R. Haghighat
-
Publication number: 20200133743
Abstract: Systems, apparatuses and methods may provide for technology that detects a tensor operation in an application, wherein the tensor operation has an unspecified tensor input size, determines the input tensor size at runtime, and selects a partition configuration for the tensor operation based at least in part on the input tensor size and one or more runtime conditions. In one example, the technology searches a lookup table for the input tensor size and at least one of the runtime condition(s) to select the partition configuration.
Type: Application
Filed: December 20, 2019
Publication date: April 30, 2020
Inventors: Sara Baghsorkhi, Mohammad R. Haghighat