Patents by Inventor Ganesh Ananthanarayanan
Ganesh Ananthanarayanan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240421859
Abstract: A real-time radio intelligent controller (RIC) executes in parallel with one or more virtual radio access network functions to provide real-time analytics and control of the virtual radio access network functions. At least a first processor core is configured to execute a radio network virtual function. The radio network virtual function is configured with a codelet to output selected operational data to a first stream associated with a first stream ID and receive control information from a control stream associated with a second stream ID. At least a second processor core is configured to execute the real-time RIC isolated from the at least the first processor core. The real-time RIC includes one or more dynamically loaded programs configured to: access the first stream; perform processing on the operational data; and write commands for the radio network virtual function to the control stream.
Type: Application
Filed: June 13, 2023
Publication date: December 19, 2024
Inventors: Bozidar RADUNOVIC, Daehyeok Kim, Ganesh Ananthanarayanan, Xenofon Foukas
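The stream-based decoupling the abstract describes can be illustrated with a minimal sketch: a codelet inside the RAN function writes telemetry to a stream identified by one ID, and an isolated RIC program reads it, analyzes it, and writes a command to a control stream with a second ID. All class, function, and field names here are illustrative, not taken from the patent.

```python
from collections import defaultdict, deque

# Hypothetical in-memory stream fabric keyed by stream ID.
class StreamFabric:
    def __init__(self):
        self._streams = defaultdict(deque)

    def write(self, stream_id, item):
        self._streams[stream_id].append(item)

    def read_all(self, stream_id):
        items = list(self._streams[stream_id])
        self._streams[stream_id].clear()
        return items

def ran_codelet(fabric, telemetry_stream_id, metrics):
    # The codelet inside the RAN virtual function exports selected
    # operational data to the stream with the first stream ID.
    for m in metrics:
        fabric.write(telemetry_stream_id, m)

def ric_program(fabric, telemetry_stream_id, control_stream_id, cpu_limit):
    # A dynamically loaded RIC program: read the operational data, analyze
    # it, and write a control command back on the control stream.
    samples = fabric.read_all(telemetry_stream_id)
    avg_load = sum(samples) / len(samples) if samples else 0.0
    if avg_load > cpu_limit:
        fabric.write(control_stream_id, {"cmd": "shed_load", "avg": avg_load})
    return avg_load
```

In the patented design the two sides run on separate, isolated processor cores; the shared streams are the only coupling point, which is what the two stream IDs model here.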
-
Publication number: 20240422576
Abstract: A system, method, and computer-readable media for executing applications for radio interface controller (RIC) management are disclosed. The system includes one or more far-edge datacenters including first computing resources configured to execute a radio access network (RAN) function and a real-time RIC; one or more near-edge datacenters including second computing resources configured to execute a core network function and at least one of a near-real-time RIC or a non-real-time RIC; and a central controller. The central controller is configured to: receive inputs of application requirements, hardware constraints, and a capacity of the first and the second computing resources; select, based on a policy applied to the inputs, a location, either a far-edge datacenter or a near-edge datacenter, for executing each of a plurality of applications to form a pipeline; and deploy each of the applications to the real-time RIC, the near-real-time RIC, or the non-real-time RIC based on the selected location.
Type: Application
Filed: June 13, 2023
Publication date: December 19, 2024
Inventors: Daehyeok KIM, Ganesh ANANTHANARAYANAN, Bozidar RADUNOVIC, Xenofon FOUKAS
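The central controller's placement decision can be sketched as a small policy function. This is a toy policy under assumed inputs (a per-app latency requirement and CPU demand, and capacity numbers for each tier); the real controller applies a richer policy over application requirements and hardware constraints.

```python
def place_applications(apps, far_edge_capacity, near_edge_capacity):
    """Toy policy: apps with tight latency bounds go to the far edge while
    capacity lasts; everything else (and overflow) goes to the near edge.
    The 10 ms real-time threshold is an assumed, illustrative cutoff."""
    placement = {}
    for app in apps:
        needs_far_edge = app["max_latency_ms"] < 10
        if needs_far_edge and far_edge_capacity >= app["cpu"]:
            placement[app["name"]] = "far-edge"
            far_edge_capacity -= app["cpu"]
        elif near_edge_capacity >= app["cpu"]:
            placement[app["name"]] = "near-edge"
            near_edge_capacity -= app["cpu"]
        else:
            placement[app["name"]] = "unscheduled"
    return placement
```

Each placed application would then be deployed to the RIC tier (real-time, near-real-time, or non-real-time) that runs at its selected location.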
-
Publication number: 20240419698
Abstract: A context analysis system receives a query from a user. The context analysis system generates one or multiple context profiles and generates a prompt for a foundation model for each of the context profiles. The context analysis system analyzes each of the context profiles and generates a relevancy score. The context analysis system selects one of the context profiles based on the relevancy score. In some examples, the context analysis system iteratively determines predicted latencies and relevancies of processing a query in conjunction with a generated context and, based on the predicted latencies and/or relevancies, processes the query using a foundation model, such as a large language model (LLM).
Type: Application
Filed: June 15, 2023
Publication date: December 19, 2024
Inventors: Ganesh ANANTHANARAYANAN, Manikanta KOTARU, Paramvir BAHL
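The score-then-select flow can be sketched as follows. The relevancy metric here (term overlap between query and context) is a deliberately simple stand-in for whatever scoring the system actually uses, and the profile/prompt shapes are assumptions.

```python
def relevancy_score(query, profile):
    # Toy relevancy: fraction of query terms covered by the profile's context.
    query_terms = set(query.lower().split())
    context_terms = set(profile["context"].lower().split())
    return len(query_terms & context_terms) / len(query_terms)

def select_profile(query, profiles):
    # Score every context profile and pick the one with the best score.
    scored = [(relevancy_score(query, p), p) for p in profiles]
    best_score, best = max(scored, key=lambda s: s[0])
    return best, best_score

def build_prompt(query, profile):
    # One foundation-model prompt per context profile, as in the abstract.
    return f"Context: {profile['context']}\nQuestion: {query}"
```

The iterative latency/relevancy prediction the abstract also mentions would wrap this loop, trading prompt size (and thus LLM latency) against relevancy before dispatching the query.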
-
Publication number: 20240412096
Abstract: Optimizing ML pipeline deployment using an ML pipeline management system. A method includes receiving an indication of an input data source and input data type from the input data source. An indication of a plurality of filters to be included in the pipeline, an ML model, and predetermined performance criteria is received. The method includes determining a physical topology of the ML pipeline and configuration of the filters or the ML model. The determined physical topology includes placement of the filters and the model, and the configuration. The determined physical topology satisfies the performance criteria. The filters and ML model are placed across an infrastructure, comprising a plurality of tiers, according to the determined physical topology.
Type: Application
Filed: June 9, 2023
Publication date: December 12, 2024
Inventors: Anand PADMANABHA IYER, Ganesh ANANTHANARAYANAN, Yiwen ZHANG
-
Publication number: 20240256922
Abstract: Systems and methods are provided for dynamically adapting a configuration setting associated with capturing content as input data for inferencing in a Multi-Access Edge Computing environment in a 5G telecommunication network. The inferencing is based on the use of a deep neural network. In particular, the method includes determining a gradient of a change in inference data over a change in the configuration setting for capturing input data (the inference-configuration gradient). The method further updates the configuration setting based on the gradient of a change in inference data over a change in the configuration setting. The inference-configuration gradient is based on a combination of an input-configuration gradient and an inference-input gradient. The input-configuration gradient indicates a change in input data as the configuration setting value changes. The inference-input gradient indicates, as a saliency of the deep neural network, a change in inference result of the input data as the input data changes.
Type: Application
Filed: January 31, 2023
Publication date: August 1, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Ganesh ANANTHANARAYANAN, Yuanchao SHU
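The gradient combination the abstract describes is the chain rule: if the configuration setting c determines the input x, and the inference result y depends on x, then dy/dc = (dy/dx)·(dx/dc). A minimal numeric sketch with made-up toy functions for the two stages:

```python
def finite_diff(f, x, eps=1e-6):
    # Central finite difference, used to estimate both component gradients.
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# Assumed toy pipeline: a configuration knob c (e.g. capture resolution)
# determines an input statistic x, and the inference score y depends on x.
def input_from_config(c):      # x(c)
    return 2.0 * c + 1.0

def inference_from_input(x):   # y(x)
    return x ** 2

def inference_config_gradient(c):
    # Chain rule: the inference-configuration gradient is the product of the
    # inference-input gradient (dy/dx) and the input-configuration gradient
    # (dx/dc), exactly as the abstract combines them.
    x = input_from_config(c)
    dy_dx = finite_diff(inference_from_input, x)   # inference-input gradient
    dx_dc = finite_diff(input_from_config, c)      # input-configuration gradient
    return dy_dx * dx_dc
```

At c = 1 the composite is y(c) = (2c + 1)², so dy/dc = 4(2c + 1) = 12, which the product of the two estimated gradients reproduces. In the patented system the inference-input gradient would come from the DNN's saliency rather than a finite difference.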
-
Publication number: 20240171391
Abstract: The techniques described herein use an edge device to manage the security for a data stream being ingested by a tenant and a cloud platform. The creation of the data stream for ingestion occurs in an environment that is trusted by a tenant (e.g., an on-premises enterprise network). The cloud platform that is part of the data stream ingestion process is outside this trusted environment, and thus, the tenant loses an element of security when ingesting data streams for cloud storage and/or cloud processing. Accordingly, the edge device is configured on a trust boundary so that the data stream ingestion process associated with a cloud platform is secured, or trusted by the tenant. The edge device is configured to encrypt the data stream using a data encryption key and/or manage the protection of the data encryption key.
Type: Application
Filed: November 18, 2022
Publication date: May 23, 2024
Inventors: Ganesh ANANTHANARAYANAN, Ramarathnam VENKATESAN, Yuanchao SHU, Kiran MUTHABATULLA, Yoganand RAJASEKARAN
-
Publication number: 20240121081
Abstract: An access control system is disclosed for controlling access to a resource. A request is received by a location attribute policy (LAP) server to access an encrypted resource. The LAP server accesses a resource policy that identifies requirements for granting access to the encrypted resource, such as a list of attributes of the requestor that are required and a dynamic attribute requirement of the requestor. The LAP server receives a cryptographic proof from the computing device that the requestor possesses the attributes and validates the proof based at least on information obtained from a trusted ledger. Once the proof is validated, the LAP server provides a shared secret associated with the dynamic attribute requirement to a decryption algorithm. The decryption algorithm uses the dynamic attribute shared secret in combination with one or more attribute shared secrets from the requestor to generate a decryption key for the encrypted resource.
Type: Application
Filed: October 10, 2022
Publication date: April 11, 2024
Inventors: Ramarathnam VENKATESAN, Nishanth CHANDRAN, Ganesh ANANTHANARAYANAN, Panagiotis ANTONOPOULOS, Srinath T.V. SETTY, Daniel John CARROLL, JR., Kiran MUTHABATULLA, Yuanchao SHU, Sanjeev MEHROTRA
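The key-derivation step at the end of the abstract, where attribute shared secrets and the dynamic-attribute shared secret combine into one decryption key, can be sketched with a hash-based KDF. This is illustrative only; the patent's actual decryption algorithm and secret formats are not specified here, and a real system would use an authenticated KDF, not a bare hash.

```python
import hashlib

def derive_decryption_key(attribute_secrets, dynamic_secret):
    """Illustrative KDF: fold the requestor's per-attribute shared secrets
    together with the dynamic-attribute shared secret supplied by the LAP
    server. Sorting makes the key independent of attribute order, so any
    requestor holding the same secrets derives the same key."""
    h = hashlib.sha256()
    for secret in sorted(attribute_secrets):
        h.update(secret)
    h.update(dynamic_secret)
    return h.digest()
```

The important property the sketch preserves is that neither the attribute secrets alone nor the dynamic secret alone suffice: changing any one input yields a different key, so the encrypted resource opens only when both requirements are satisfied.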
-
Publication number: 20240119089
Abstract: This document relates to performing live video stream analytics on edge devices. One example determines resources available to the system, and a video analytics configuration is selected that distributes work between edge devices and cloud devices in a cascading manner, where edge device processing is prioritized over cloud processing in order to conserve resources. This example can dynamically modify the allocation of processing depending on changing conditions, such as network availability.
Type: Application
Filed: December 12, 2023
Publication date: April 11, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Ganesh ANANTHANARAYANAN, Yuanchao SHU, Shadi NOGHABI, Paramvir BAHL, Landon COX, Alexander CROWN
-
Publication number: 20240104248
Abstract: Systems and methods are provided for performing privacy transformation of data to protect privacy in data analytics under the multi-access edge computing environment. In particular, a policy receiver in an edge server receives privacy instructions. An inference determiner in the edge server in a data analytics pipeline receives data from an IoT device and evaluates the data to recognize data associated with personally identifiable information. A privacy data transformer transforms the received data with inference for protecting data privacy by preventing exposure of private information from the edge server. In particular, the privacy data transformer dynamically selects a technique from among several techniques for removing information that is subject to privacy protection and transforms the received data using the selected technique.
Type: Application
Filed: September 28, 2023
Publication date: March 28, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Ganesh ANANTHANARAYANAN, Landon Prentice COX, Paramvir BAHL
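The dynamic technique selection can be sketched as a registry of redaction functions keyed by the kind of PII the inference step flagged. The technique names, record fields, and redaction behaviors below are invented for illustration; the patent does not enumerate specific techniques.

```python
# Hypothetical registry of privacy techniques; the transformer picks one
# dynamically based on what kind of private information was inferred.
def blur_faces(record):
    return {**record, "faces": ["<blurred>"] * len(record["faces"])}

def drop_plates(record):
    return {**record, "license_plates": []}

TECHNIQUES = {
    "face": blur_faces,
    "license_plate": drop_plates,
}

def transform_for_privacy(record, inferred_pii_types):
    # Apply the selected technique for each kind of PII flagged by the
    # inference determiner, before the data leaves the edge server.
    for pii_type in inferred_pii_types:
        technique = TECHNIQUES.get(pii_type)
        if technique:
            record = technique(record)
    return record
```

Because the transformation runs on the edge server itself, nothing downstream of it in the analytics pipeline ever sees the unredacted data.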
-
Publication number: 20240096063
Abstract: Systems and methods are provided for reusing and retraining an image recognition model for video analytics. The image recognition model is used for inferring a frame of video data that is captured at edge devices. The edge devices periodically, or under predetermined conditions, transmit a captured frame of video data to perform inferencing. The disclosed technology is directed to selecting an image recognition model from a model store for reuse or for retraining. A model selector uses a gating network model to determine ranked candidate models for validation. The validation includes iterations of retraining the image recognition model and stopping the iteration when the rate of accuracy improvement from retraining becomes smaller than in the previous iteration step. Retraining a model includes generating reference data using a teacher model and retraining the model using the reference data. Integrating reuse and retraining of models enables improvement in accuracy and efficiency.
Type: Application
Filed: December 9, 2022
Publication date: March 21, 2024
Applicant: Microsoft Technology Licensing, LLC
Inventors: Ganesh ANANTHANARAYANAN, Yuanchao SHU, Paramvir BAHL, Tsuwang HSIEH
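The stopping rule in the validation step can be sketched directly: keep retraining while each iteration improves accuracy at least as fast as the previous one, and stop once the improvement rate shrinks. The accuracy list below stands in for real retraining results.

```python
def retrain_with_early_stop(accuracy_per_iteration):
    """Illustrative version of the abstract's stopping rule: iterate over
    retraining rounds and stop at the first round whose accuracy gain is
    smaller than the previous round's gain. Returns the stopping round and
    the accuracy reached there."""
    prev_gain = None
    stopped_at = len(accuracy_per_iteration) - 1
    for i in range(1, len(accuracy_per_iteration)):
        gain = accuracy_per_iteration[i] - accuracy_per_iteration[i - 1]
        if prev_gain is not None and gain < prev_gain:
            stopped_at = i
            break
        prev_gain = gain
    return stopped_at, accuracy_per_iteration[stopped_at]
```

In the full system each retraining round would use reference labels generated by the teacher model, and the gating network's ranking decides which candidate models are worth validating this way at all.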
-
Patent number: 11860975
Abstract: Provided are aspects relating to methods and computing devices for allocating computing resources and selecting hyperparameter configurations during continuous retraining and operation of a machine learning model. In one example, a computing device configured to be located at a network edge between a local network and a cloud service includes a processor and a memory storing instructions executable by the processor to operate a machine learning model. During a retraining window, a selected portion of a video stream is selected for labeling. At least a portion of a labeled retraining data set is selected for profiling a superset of hyperparameter configurations. For each configuration of the superset of hyperparameter configurations, a profiling test is performed. The profiling test is terminated, and a change in inference accuracy that resulted from the profiling test is extrapolated. Based upon the extrapolated inference accuracies, a set of selected hyperparameter configurations is output.
Type: Grant
Filed: September 20, 2022
Date of Patent: January 2, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ganesh Ananthanarayanan, Yuanchao Shu, Tsu-wang Hsieh, Nikolaos Karianakis, Paramvir Bahl, Romil Bhardwaj
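The profile-terminate-extrapolate loop can be sketched as follows: run a short, early-terminated profiling test per hyperparameter configuration, linearly extrapolate the observed accuracy change to the full retraining window, and keep the top configurations. The linear extrapolation and all names are illustrative assumptions, as is the simulated profiling function.

```python
def profile_and_select(configs, profile_fn, total_epochs, profile_epochs, k):
    """Sketch: profile each hyperparameter configuration for only
    profile_epochs, extrapolate its accuracy gain out to total_epochs,
    and return the k configurations with the best extrapolated accuracy."""
    extrapolated = {}
    for name, cfg in configs.items():
        start_acc, end_acc = profile_fn(cfg, profile_epochs)  # short test
        gain_per_epoch = (end_acc - start_acc) / profile_epochs
        extrapolated[name] = start_acc + gain_per_epoch * total_epochs
    ranked = sorted(extrapolated, key=extrapolated.get, reverse=True)
    return ranked[:k], extrapolated

def fake_profile(cfg, epochs):
    # Stand-in for an actual short training run: accuracy starts at 0.5
    # and grows by cfg["rate"] per epoch (a toy, unbounded model).
    return 0.5, 0.5 + cfg["rate"] * epochs
```

The point of the early termination is that the expensive full retraining budget is spent only on the few configurations the extrapolation ranks highly.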
-
Patent number: 11831698
Abstract: Systems and methods are provided for reducing stream data according to a data streaming protocol under a multi-access edge computing environment. In particular, an IoT device, such as a video image sensing device, may capture stream data and generate inference data by applying a machine-learning model trained to infer data based on the captured stream data. The inference data represents the captured stream data in a reduced data size based on performing data analytics on the captured data. The IoT device formats the inference data according to the data streaming protocol. In contrast to video data compression, the data streaming protocol includes instructions for transmitting the reduced volume of inference data through a data analytics pipeline.
Type: Grant
Filed: June 29, 2021
Date of Patent: November 28, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ganesh Ananthanarayanan, Yu Yan, Yuanchao Shu
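The core idea, sending compact inference results instead of compressed video, can be sketched with a toy message format. The field names and JSON encoding are assumptions for illustration; the patent's actual streaming protocol is not specified here.

```python
import json

def to_inference_message(frame_id, detections):
    """Sketch: the IoT device runs the ML model locally and ships only the
    inference results, formatted per an assumed lightweight protocol, so a
    frame's worth of pixels shrinks to a few dozen bytes of detections."""
    return json.dumps({
        "frame": frame_id,
        "detections": [
            {"label": d["label"], "conf": round(d["conf"], 2)}
            for d in detections
        ],
    }, separators=(",", ":")).encode()
```

Even a single row of raw 1080p pixels is kilobytes, while a typical detection message like the one above is tens of bytes, which is the data-size reduction the abstract contrasts with ordinary video compression.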
-
Patent number: 11822698
Abstract: Systems and methods are provided for performing privacy transformation of data to protect privacy in data analytics under the multi-access edge computing environment. In particular, a policy receiver in an edge server receives privacy instructions. An inference determiner in the edge server in a data analytics pipeline receives data from an IoT device and evaluates the data to recognize data associated with personally identifiable information. A privacy data transformer transforms the received data with inference for protecting data privacy by preventing exposure of private information from the edge server. In particular, the privacy data transformer dynamically selects a technique from among several techniques for removing information that is subject to privacy protection and transforms the received data using the selected technique.
Type: Grant
Filed: June 28, 2021
Date of Patent: November 21, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ganesh Ananthanarayanan, Landon Prentice Cox, Paramvir Bahl
-
Patent number: 11627095
Abstract: Computing resources are managed in a computing environment comprising a computing service provider and an edge computing network. The edge computing network comprises computing and storage devices configured to extend computing resources of the computing service provider to remote users of the computing service provider. The edge computing network collects capacity and usage data for computing and network resources at the edge computing network. The capacity and usage data is sent to the computing service provider. Based on the capacity and usage data, the computing service provider, using a cost function, determines a distribution of workloads pertaining to a processing pipeline that has been partitioned into the workloads. The workloads can be executed at the computing service provider or the edge computing network.
Type: Grant
Filed: June 15, 2021
Date of Patent: April 11, 2023
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Ganesh Ananthanarayanan, Yuanchao Shu, Paramvir Bahl
-
Publication number: 20230106959
Abstract: Identifying, by a sender and for each frame i of a plurality of frames of a video stream, a partition of a set of video data symbols D[i] into a first set of video data symbols U[i] and a second set of video data symbols V[i]. Generating, by the sender and for each frame i, a set of one or more streaming forward error correction (FEC) code parity symbols P[i] based on the symbols: V[i−τ] through V[i−1], U[i−τ], and the symbols D[i], wherein τ is a function of a maximum tolerable latency of the video stream expressed as a whole number of frames. Encoding, by the sender and for each frame i, packets carrying the symbols D[i] and P[i]. Transmitting, by the sender, each frame i of encoded packets in frame order to one or more receivers.
Type: Application
Filed: September 28, 2022
Publication date: April 6, 2023
Inventors: Ganesh ANANTHANARAYANAN, Yu YAN, Martin ELLIS, Michael Harrison RUDOW
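The sliding-window structure, where each frame's parity covers data from the previous τ frames as well as the current one, can be illustrated with a drastically simplified XOR code. Here each frame is treated as a single integer symbol; real streaming FEC codes use structured erasure codes over the U/V partitions, not a plain XOR, so this shows only the windowing and single-loss recovery.

```python
def parity_for_frame(frame_symbols, i, tau):
    """Toy streaming FEC sketch: emit, with frame i, one parity symbol
    computed over the last tau frames plus frame i itself (a stand-in for
    the abstract's V[i-tau]..V[i-1], U[i-tau], D[i] inputs)."""
    p = 0
    for j in range(max(0, i - tau), i + 1):
        p ^= frame_symbols[j]
    return p

def recover_lost_frame(frame_symbols, parity, lost, i, tau):
    # With exactly one frame lost inside the window, XOR the parity with
    # every surviving frame in the window to reconstruct the lost symbol.
    p = parity
    for j in range(max(0, i - tau), i + 1):
        if j != lost:
            p ^= frame_symbols[j]
    return p
```

Because τ is derived from the maximum tolerable latency, a frame lost at time i is repairable as soon as the parities within the next τ frames arrive, without waiting for a retransmission round trip.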
-
Publication number: 20230062931
Abstract: The systems and methods may use a data reduction engine to reduce a volume of input data for machine learning exploration for computer networking related problems. The systems and methods may receive input data related to a network and obtain a network topology. The systems and methods may perform a structured search of a plurality of reduction functions based on a grammar to identify a subset of reduction functions. The systems and methods may generate transformed data by applying the subset of reduction functions to the input data and may determine whether the transformed data meets or exceeds a threshold. The systems and methods may output the transformed data in response to the transformed data meeting or exceeding the threshold.
Type: Application
Filed: August 24, 2021
Publication date: March 2, 2023
Inventors: Behnaz ARZANI, Ganesh ANANTHANARAYANAN
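The grammar-driven search can be sketched as enumerating pipelines of reduction functions from a small catalog and accepting the first pipeline whose reduction meets the threshold. The catalog, the score (fraction of data removed), and the depth-bounded enumeration are all illustrative assumptions.

```python
import itertools

# Toy grammar: a reduction is a sequence of unary functions drawn from
# this catalog, applied left-to-right to a list of numeric samples.
CATALOG = {
    "dedup": lambda xs: sorted(set(xs)),
    "clip": lambda xs: [min(x, 100) for x in xs],
    "downsample": lambda xs: xs[::2],
}

def reduction_score(xs, ys):
    # Score = fraction of the input the reduction removed (higher is better).
    return 1.0 - len(ys) / len(xs)

def search_reductions(xs, max_depth, threshold):
    """Structured search over the toy grammar: try every pipeline of
    catalog functions up to max_depth and return the first one whose
    reduction score meets or exceeds the threshold."""
    for depth in range(1, max_depth + 1):
        for names in itertools.product(CATALOG, repeat=depth):
            ys = xs
            for name in names:
                ys = CATALOG[name](ys)
            if reduction_score(xs, ys) >= threshold:
                return list(names), ys
    return None, xs
```

A real engine would also check that the transformed data preserves whatever signal the downstream ML exploration needs, not just its size.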
-
Publication number: 20230030499
Abstract: Examples are disclosed that relate to methods and computing devices for allocating computing resources and selecting hyperparameter configurations during continuous retraining and operation of a machine learning model. In one example, a computing device configured to be located at a network edge between a local network and a cloud service comprises a processor and a memory storing instructions executable by the processor to operate a machine learning model. During a retraining window, a selected portion of a video stream is selected for labeling. At least a portion of a labeled retraining data set is selected for profiling a superset of hyperparameter configurations. For each configuration of the superset of hyperparameter configurations, a profiling test is performed. The profiling test is terminated, and a change in inference accuracy that resulted from the profiling test is extrapolated. Based upon the extrapolated inference accuracies, a set of selected hyperparameter configurations is output.
Type: Application
Filed: September 20, 2022
Publication date: February 2, 2023
Applicant: Microsoft Technology Licensing, LLC
Inventors: Ganesh ANANTHANARAYANAN, Yuanchao SHU, Tsu-wang HSIEH, Nikolaos KARIANAKIS, Paramvir BAHL, Romil BHARDWAJ
-
Publication number: 20220414264
Abstract: Systems and methods are provided for performing privacy transformation of data to protect privacy in data analytics under the multi-access edge computing environment. In particular, a policy receiver in an edge server receives privacy instructions. An inference determiner in the edge server in a data analytics pipeline receives data from an IoT device and evaluates the data to recognize data associated with personally identifiable information. A privacy data transformer transforms the received data with inference for protecting data privacy by preventing exposure of private information from the edge server. In particular, the privacy data transformer dynamically selects a technique from among several techniques for removing information that is subject to privacy protection and transforms the received data using the selected technique.
Type: Application
Filed: June 28, 2021
Publication date: December 29, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Ganesh ANANTHANARAYANAN, Landon Prentice COX, Paramvir BAHL
-
Publication number: 20220414534
Abstract: Systems and methods are provided for continuous learning of models across hierarchies under multi-access edge computing. In particular, an on-premises edge server, using a model, generates inference data associated with captured stream data. A data drift determiner determines a data drift in the inference data by comparing the data against reference data generated using a golden model. The data drift indicates a loss of accuracy in the inference data. A gateway model maintains one or more models in a model cache for updating the model. The gateway model instructs the one or more servers to train the new model. The gateway model transmits the trained model to update the model in the on-premises edge server. Training the new model includes determining an on-premises edge server with computing resources available to train the new model while generating other inference data for incoming stream data in the data analytic pipeline.
Type: Application
Filed: June 29, 2021
Publication date: December 29, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Ganesh ANANTHANARAYANAN, Yuanchao SHU, Paramvir BAHL, Tsuwang HSIEH
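The drift check that triggers retraining can be sketched by comparing the edge model's inferences against the golden model's reference labels. The disagreement-rate metric and the max_drift knob are illustrative assumptions; the patent does not fix a specific drift measure here.

```python
def drift_rate(edge_labels, golden_labels):
    # Fraction of samples where the edge model's inference disagrees with
    # the golden model's reference label for the same input.
    disagreements = sum(1 for e, g in zip(edge_labels, golden_labels) if e != g)
    return disagreements / len(golden_labels)

def needs_retraining(edge_labels, golden_labels, max_drift=0.2):
    """Sketch of the data-drift determiner: a disagreement rate above
    max_drift signals a loss of accuracy, so the gateway should pick a
    cached model and schedule a retrain on an edge server with spare
    compute. max_drift is an assumed, tunable threshold."""
    return drift_rate(edge_labels, golden_labels) > max_drift
```

On a positive signal, the gateway in the abstract selects an edge server that can retrain the new model while still serving inference for the incoming stream.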
-
Publication number: 20220417306
Abstract: Systems and methods are provided for reducing stream data according to a data streaming protocol under a multi-access edge computing environment. In particular, an IoT device, such as a video image sensing device, may capture stream data and generate inference data by applying a machine-learning model trained to infer data based on the captured stream data. The inference data represents the captured stream data in a reduced data size based on performing data analytics on the captured data. The IoT device formats the inference data according to the data streaming protocol. In contrast to video data compression, the data streaming protocol includes instructions for transmitting the reduced volume of inference data through a data analytics pipeline.
Type: Application
Filed: June 29, 2021
Publication date: December 29, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Ganesh ANANTHANARAYANAN, Yu YAN, Yuanchao SHU