Patents by Inventor Aravind Ravi
Aravind Ravi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12228395
Abstract: Methods and apparatus for substrate position calibration for substrate supports in substrate processing systems are provided herein. In some embodiments, a method for positioning a substrate on a substrate support includes: obtaining a plurality of backside pressure values corresponding to a plurality of different substrate positions on a substrate support by repeatedly placing a substrate in a position on the substrate support, vacuum chucking the substrate to the substrate support, and measuring a backside pressure; and analyzing the plurality of backside pressure values to determine a calibrated substrate position.
Type: Grant
Filed: November 19, 2021
Date of Patent: February 18, 2025
Assignee: APPLIED MATERIALS, INC.
Inventors: Tomoharu Matsushita, Aravind Kamath, Jallepally Ravi, Cheng-Hsiung Tsai, Hiroyuki Takahama
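The calibration loop this abstract describes (place the substrate, vacuum-chuck it, measure the backside pressure, repeat at different positions, then analyze the readings) can be sketched as below. The pressure model, units, and grid of candidate offsets are illustrative assumptions, not details from the patent.

```python
# Hypothetical sketch of the calibration loop: try several candidate
# offsets, record the backside pressure after chucking at each one,
# and pick the offset with the best vacuum seal (lowest pressure).

def measure_backside_pressure(offset_mm):
    """Stand-in for a real chamber measurement: in this toy model the
    backside pressure rises quadratically as the substrate drifts
    off-center and the seal degrades."""
    x, y = offset_mm
    return 0.5 + 4.0 * (x ** 2 + y ** 2)  # illustrative units

def calibrate_position(candidates):
    """Measure every candidate position and return the best one."""
    readings = {pos: measure_backside_pressure(pos) for pos in candidates}
    return min(readings, key=readings.get), readings

# 5x5 grid of candidate offsets around the nominal center, in mm.
grid = [(x * 0.1, y * 0.1) for x in range(-2, 3) for y in range(-2, 3)]
best, readings = calibrate_position(grid)
print(best)  # → (0.0, 0.0): the offset with the lowest backside pressure
```

In practice the analysis step would fit a model to the pressure surface rather than take a simple minimum, but the structure of the loop is the same.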
-
Publication number: 20240288940
Abstract: A method and system are disclosed using steady-state motion visual evoked potential stimuli in an augmented reality environment. Requested stimuli data are received from a user application on a smart device. Sensor data and other context data are also received, where the other context data include data that are un-sensed. The requested stimuli data are transformed into modified stimuli based on the sensor data and the other context data. Modified stimuli and environmental stimuli are presented to the user with a rendering device configured to mix the modified stimuli and the environmental stimuli, thereby resulting in rendered stimuli. Biosignals generated in response to the rendered stimuli are received from the user via a wearable biosignal sensing device. Received biosignals are classified based on the modified stimuli, resulting in a classified selection, which is returned to the user application.
Type: Application
Filed: May 3, 2024
Publication date: August 29, 2024
Applicant: Cognixion Corporation
Inventors: Sarah Pearce, Aravind Ravi, Jing Lu, Ning Jiang, Joseph Andreas Forsland, Christopher Jason Ullrich
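The final step in the abstract, classifying the received biosignals against the modified stimuli to produce a selection, is commonly done in evoked-potential interfaces by detecting which stimulus frequency dominates the recorded response. The sketch below scores each candidate frequency by sine/cosine correlation and picks the strongest; the function names, sampling rate, and frequencies are assumptions, and a real SSMVEP pipeline (e.g. CCA-based) is considerably more involved.

```python
import math

# Illustrative classifier for frequency-tagged stimuli: correlate the
# biosignal with sine/cosine references at each stimulus frequency and
# select the frequency with the strongest response power.

def classify_selection(signal, fs, stimulus_freqs):
    n = len(signal)
    scores = {}
    for f in stimulus_freqs:
        s = sum(signal[i] * math.sin(2 * math.pi * f * i / fs) for i in range(n))
        c = sum(signal[i] * math.cos(2 * math.pi * f * i / fs) for i in range(n))
        scores[f] = s * s + c * c  # response power at frequency f
    return max(scores, key=scores.get)

# Synthetic one-second trial: the user attends the 10 Hz target
# among stimuli flickering at 8, 10, and 12 Hz.
fs = 256
signal = [math.sin(2 * math.pi * 10 * i / fs) for i in range(fs)]
print(classify_selection(signal, fs, [8.0, 10.0, 12.0]))  # → 10.0
```

The winning frequency identifies which modified stimulus the user attended, which is what gets returned to the user application as the classified selection.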
-
Patent number: 12008162
Abstract: A method and system are disclosed using steady-state motion visual evoked potential stimuli in an augmented reality environment. Requested stimuli data are received from a user application on a smart device. Sensor data and other context data are also received, where the other context data include data that are un-sensed. The requested stimuli data are transformed into modified stimuli based on the sensor data and the other context data. Modified stimuli and environmental stimuli are presented to the user with a rendering device configured to mix the modified stimuli and the environmental stimuli, thereby resulting in rendered stimuli. Biosignals generated in response to the rendered stimuli are received from the user via a wearable biosignal sensing device. Received biosignals are classified based on the modified stimuli, resulting in a classified selection, which is returned to the user application.
Type: Grant
Filed: April 5, 2022
Date of Patent: June 11, 2024
Assignee: COGNIXION CORPORATION
Inventors: Sarah Pearce, Aravind Ravi, Jing Lu, Ning Jiang, Andreas Forsland, Chris Ullrich
-
Publication number: 20230309887
Abstract: Brain modelling includes receiving time-coded bio-signal data associated with a user; receiving time-coded stimulus event data; projecting the time-coded bio-signal data into a lower-dimensioned feature space; extracting features from the lower-dimensioned feature space that correspond to time codes of the time-coded stimulus event data to identify a brain response; generating a training data set for the brain response using the features; training a brain model using the training set, the brain model being unique to the user; generating a brain state prediction for the user output from the trained brain model; automatically computing similarity metrics of the brain model as compared to other user data; and inputting the brain state prediction to a feedback model to determine a feedback stimulus for the user, wherein the feedback model is associated with a target brain state.
Type: Application
Filed: May 24, 2023
Publication date: October 5, 2023
Inventors: Christopher Aimone, Graeme Moffat, Hubert Jacob Banville, Sean Wood, Subash Padmanaban, Sam Kerr, Aravind Ravi
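The pipeline in the abstract (project the bio-signal into a lower-dimensioned space, extract features at the stimulus time codes, build a training set, train a per-user model, then predict) can be illustrated with a deliberately tiny sketch. The chunk-averaging projection and nearest-centroid model below are stand-ins chosen for brevity, not the patented method.

```python
# Illustrative brain-modelling pipeline on a 1-D toy signal.

def project(window, dim=2):
    """Crude dimensionality reduction: mean of equal-length chunks."""
    chunk = len(window) // dim
    return [sum(window[i * chunk:(i + 1) * chunk]) / chunk for i in range(dim)]

def extract_features(biosignal, events, win=4):
    """events: list of (time_code, label); extract the projected window
    starting at each stimulus time code."""
    return [(project(biosignal[t:t + win]), label) for t, label in events]

def train_centroids(dataset):
    """Fit a per-user model: the mean feature vector per label."""
    sums, counts = {}, {}
    for feats, label in dataset:
        acc = sums.setdefault(label, [0.0] * len(feats))
        for i, v in enumerate(feats):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def predict(centroids, feats):
    """Brain state prediction: nearest centroid by squared distance."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda lab: dist(centroids[lab], feats))

signal = [0, 0, 0, 0, 5, 5, 5, 5, 0, 0, 0, 0, 5, 5, 5, 5]
events = [(0, "rest"), (4, "active"), (8, "rest")]
model = train_centroids(extract_features(signal, events))
print(predict(model, project(signal[12:16])))  # → "active"
```

The predicted state would then feed the feedback model described at the end of the abstract, which selects a stimulus intended to move the user toward the target brain state.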
-
Patent number: 11696714
Abstract: Brain modelling includes receiving time-coded bio-signal data associated with a user; receiving time-coded stimulus event data; projecting the time-coded bio-signal data into a lower-dimensioned feature space; extracting features from the lower-dimensioned feature space that correspond to time codes of the time-coded stimulus event data to identify a brain response; generating a training data set for the brain response using the features; training a brain model using the training set, the brain model being unique to the user; generating a brain state prediction for the user output from the trained brain model; automatically computing similarity metrics of the brain model as compared to other user data; and inputting the brain state prediction to a feedback model to determine a feedback stimulus for the user, wherein the feedback model is associated with a target brain state.
Type: Grant
Filed: April 24, 2020
Date of Patent: July 11, 2023
Assignee: INTERAXON INC.
Inventors: Christopher Allen Aimone, Graeme Moffat, Hubert Jacob Banville, Sean Wood, Subash Padmanaban, Sam Kerr, Aravind Ravi
-
Publication number: 20220326771
Abstract: A method and system are disclosed using steady-state motion visual evoked potential stimuli in an augmented reality environment. Requested stimuli data are received from a user application on a smart device. Sensor data and other context data are also received, where the other context data include data that are un-sensed. The requested stimuli data are transformed into modified stimuli based on the sensor data and the other context data. Modified stimuli and environmental stimuli are presented to the user with a rendering device configured to mix the modified stimuli and the environmental stimuli, thereby resulting in rendered stimuli. Biosignals generated in response to the rendered stimuli are received from the user via a wearable biosignal sensing device. Received biosignals are classified based on the modified stimuli, resulting in a classified selection, which is returned to the user application.
Type: Application
Filed: April 5, 2022
Publication date: October 13, 2022
Applicant: Cognixion Corporation
Inventors: Sarah Pearce, Aravind Ravi, Jing Lu, Ning Jiang, Andreas Forsland, Chris Ullrich
-
Patent number: 11049040
Abstract: The disclosed subject matter relates to supervised machine learning, including a method and system for generating a synchronized labelled training dataset for building a learning model. The training data generation system determines a timing advance factor to achieve time synchronization between User Equipment (UE) and network nodes by signalling the UE to initiate playback of the multimedia content based on the timing advance factor. The training data generation system receives network Key Performance Indicator (KPI) data from the network nodes and user experience data from the UE, concurrently, for the streamed multimedia content, and performs timestamp-based correlation to generate a synchronized labelled training dataset for building a learning model.
Type: Grant
Filed: March 26, 2018
Date of Patent: June 29, 2021
Assignee: Wipro Limited
Inventors: Subhas Chandra Mondal, Aravind Ravi, Pallavi Suresh Mastiholimath
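The timestamp-based correlation step can be sketched as a nearest-timestamp join between the two streams: each network-KPI sample is paired with the user-experience sample whose timestamp is closest, within a tolerance, to produce a labelled row. The field names, tolerance, and sample data below are illustrative assumptions, not the patented scheme.

```python
# Sketch of timestamp-based correlation between two concurrent streams:
# network KPI samples and user-experience labels from the UE.

def correlate(kpi_samples, ux_samples, tolerance=0.5):
    """kpi_samples and ux_samples are lists of (timestamp, value),
    sorted by timestamp. Returns labelled training rows."""
    dataset = []
    for t_kpi, kpi in kpi_samples:
        # Find the user-experience sample nearest in time to this KPI sample.
        nearest = min(ux_samples, key=lambda s: abs(s[0] - t_kpi))
        if abs(nearest[0] - t_kpi) <= tolerance:
            dataset.append({"t": t_kpi, "kpi": kpi, "label": nearest[1]})
    return dataset

kpi = [(0.0, {"throughput_mbps": 42}), (1.0, {"throughput_mbps": 3})]
ux = [(0.1, "good"), (1.2, "stalled")]
rows = correlate(kpi, ux)
print(rows)  # two labelled rows: "good" at t=0.0, "stalled" at t=1.0
```

The timing advance factor in the abstract serves the same goal upstream: by telling the UE when to start playback, it keeps the two streams' clocks aligned so that this join is meaningful.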
-
Publication number: 20200337625
Abstract: Brain modelling includes receiving time-coded bio-signal data associated with a user; receiving time-coded stimulus event data; projecting the time-coded bio-signal data into a lower-dimensioned feature space; extracting features from the lower-dimensioned feature space that correspond to time codes of the time-coded stimulus event data to identify a brain response; generating a training data set for the brain response using the features; training a brain model using the training set, the brain model being unique to the user; generating a brain state prediction for the user output from the trained brain model; automatically computing similarity metrics of the brain model as compared to other user data; and inputting the brain state prediction to a feedback model to determine a feedback stimulus for the user, wherein the feedback model is associated with a target brain state.
Type: Application
Filed: April 24, 2020
Publication date: October 29, 2020
Inventors: Christopher Allen Aimone, Graeme Moffat, Hubert Jacob Banville, Sean Wood, Subash Padmanaban, Sam Kerr, Aravind Ravi
-
Publication number: 20190287031
Abstract: The disclosed subject matter relates to supervised machine learning, including a method and system for generating a synchronized labelled training dataset for building a learning model. The training data generation system determines a timing advance factor to achieve time synchronization between User Equipment (UE) and network nodes by signalling the UE to initiate playback of the multimedia content based on the timing advance factor. The training data generation system receives network Key Performance Indicator (KPI) data from the network nodes and user experience data from the UE, concurrently, for the streamed multimedia content, and performs timestamp-based correlation to generate a synchronized labelled training dataset for building a learning model.
Type: Application
Filed: March 26, 2018
Publication date: September 19, 2019
Inventors: Subhas Chandra Mondal, Aravind Ravi, Pallavi Suresh Mastiholimath
-
Publication number: 20150113364
Abstract: The present disclosure relates to document generation, and more particularly to a system and method for generating an audio-animated document. In one embodiment, a method for generating an audio-animated document is disclosed, comprising: obtaining an extensible markup language (XML) file from a database, wherein the XML file comprises data corresponding to transactional activities over a time interval; identifying a set of phrases and one or more images from a resource library based on the XML file; generating a playback text using the set of phrases, the one or more images, the data, and a set of rules; providing one or more audio files corresponding to the playback text; and generating the audio-animated document based on the data, the one or more images, and the one or more audio files.
Type: Application
Filed: October 21, 2013
Publication date: April 23, 2015
Inventors: Vidya Sagar Thatiparthi, Aravind Ravi
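The early steps of this method (parse the XML transaction data, look up phrases in a resource library, apply rules to generate the playback text that would later be narrated) can be sketched as follows. The XML schema and phrase templates are hypothetical illustrations, not the format claimed in the application.

```python
import xml.etree.ElementTree as ET

# Hypothetical "resource library" mapping transaction types to phrases.
PHRASES = {
    "deposit": "On {date}, {amount} was deposited.",
    "withdrawal": "On {date}, {amount} was withdrawn.",
}

def playback_text(xml_string):
    """Parse transactional data from XML and render it as the text
    that a narration engine would later turn into audio files."""
    root = ET.fromstring(xml_string)
    lines = []
    for txn in root.findall("transaction"):
        template = PHRASES[txn.get("type")]
        lines.append(template.format(date=txn.get("date"), amount=txn.get("amount")))
    return " ".join(lines)

doc = """<statement>
  <transaction type="deposit" date="2013-10-01" amount="$100"/>
  <transaction type="withdrawal" date="2013-10-05" amount="$40"/>
</statement>"""
print(playback_text(doc))
# → On 2013-10-01, $100 was deposited. On 2013-10-05, $40 was withdrawn.
```

The remaining steps, synthesizing audio for the playback text and binding the audio, images, and data into the final animated document, depend on the rendering stack and are omitted here.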