Patents by Inventor Aravind Ravi

Aravind Ravi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12228395
    Abstract: Methods and apparatus for substrate position calibration for substrate supports in substrate processing systems are provided herein. In some embodiments, a method for positioning a substrate on a substrate support includes: obtaining a plurality of backside pressure values corresponding to a plurality of different substrate positions on a substrate support by repeatedly placing a substrate in a position on the substrate support, vacuum chucking the substrate to the substrate support, and measuring a backside pressure; and analyzing the plurality of backside pressure values to determine a calibrated substrate position.
    Type: Grant
    Filed: November 19, 2021
    Date of Patent: February 18, 2025
    Assignee: APPLIED MATERIALS, INC.
    Inventors: Tomoharu Matsushita, Aravind Kamath, Jallepally Ravi, Cheng-Hsiung Tsai, Hiroyuki Takahama
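The calibration loop this abstract describes can be sketched roughly as follows. This is an illustrative sketch, not the patented implementation: `place_substrate` and `measure_backside_pressure` are hypothetical stand-ins for the robot/chuck hardware interface, and the scoring rule (lower backside pressure means a better vacuum seal, hence better centering) is an assumption made for the example.

```python
def calibrate_position(candidate_positions, place_substrate, measure_backside_pressure):
    """Try each candidate (x, y) offset, vacuum-chuck the wafer, record the
    backside pressure, and return the offset with the lowest reading."""
    readings = []
    for pos in candidate_positions:
        place_substrate(pos)  # place and vacuum-chuck the wafer at this offset
        readings.append((measure_backside_pressure(), pos))
    best_pressure, best_pos = min(readings)  # lowest pressure = best seal
    return best_pos

# Toy usage with a fake pressure model: seal quality degrades with
# distance from an assumed true center at (0.5, -0.25).
def fake_place(pos):
    fake_place.current = pos  # hardware no-op in this sketch

def fake_pressure():
    x, y = fake_place.current
    return ((x - 0.5) ** 2 + (y + 0.25) ** 2) ** 0.5

grid = [(x * 0.25, y * 0.25) for x in range(-4, 5) for y in range(-4, 5)]
print(calibrate_position(grid, fake_place, fake_pressure))  # → (0.5, -0.25)
```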
  • Publication number: 20240288940
    Abstract: A method and system are disclosed using steady-state motion visual evoked potential stimuli in an augmented reality environment. Requested stimuli data are received from a user application on a smart device. Sensor data and other context data are also received, where other context data includes data that is un-sensed. The requested stimuli data are transformed into modified stimuli based on the sensor data and the other context data. Modified stimuli and environmental stimuli are presented to the user with a rendering device configured to mix the modified stimuli and the environmental stimuli, thereby resulting in rendered stimuli. Biosignals generated in response to the rendered stimuli are received from the user by a wearable biosignal sensing device. Received biosignals are classified based on the modified stimuli, resulting in a classified selection, which is returned to the user application.
    Type: Application
    Filed: May 3, 2024
    Publication date: August 29, 2024
    Applicant: Cognixion Corporation
    Inventors: Sarah Pearce, Aravind Ravi, Jing Lu, Ning Jiang, Joseph Andreas Forsland, Christopher Jason Ullrich
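One common way to decode steady-state evoked potentials, shown here only to illustrate the "classify biosignals based on the modified stimuli" step in general terms, is to compare the biosignal's spectral power at each candidate stimulus frequency (and its second harmonic) and pick the strongest. This is a textbook frequency-tagging decoder, not necessarily the classifier claimed in the patent.

```python
import cmath
import math
import random

def power_at(signal, fs, f):
    """Magnitude-squared of the DFT of `signal` (sampled at fs Hz) at frequency f."""
    acc = sum(x * cmath.exp(-2j * math.pi * f * k / fs)
              for k, x in enumerate(signal))
    return abs(acc) ** 2

def classify_ssvep(signal, fs, stimulus_freqs):
    """Return the stimulus frequency whose power (fundamental + 2nd harmonic)
    dominates the recorded biosignal."""
    scores = [power_at(signal, fs, f) + power_at(signal, fs, 2 * f)
              for f in stimulus_freqs]
    return stimulus_freqs[scores.index(max(scores))]

# Toy usage: 2 s of a noisy 12 Hz oscillation is correctly attributed
# to the 12 Hz stimulus among three candidates.
random.seed(0)
fs = 256
sig = [math.sin(2 * math.pi * 12 * k / fs) + 0.5 * random.gauss(0, 1)
       for k in range(fs * 2)]
print(classify_ssvep(sig, fs, [10.0, 12.0, 15.0]))  # → 12.0
```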
  • Patent number: 12008162
    Abstract: A method and system are disclosed using steady-state motion visual evoked potential stimuli in an augmented reality environment. Requested stimuli data are received from a user application on a smart device. Sensor data and other context data are also received, where other context data includes data that is un-sensed. The requested stimuli data are transformed into modified stimuli based on the sensor data and the other context data. Modified stimuli and environmental stimuli are presented to the user with a rendering device configured to mix the modified stimuli and the environmental stimuli, thereby resulting in rendered stimuli. Biosignals generated in response to the rendered stimuli are received from the user by a wearable biosignal sensing device. Received biosignals are classified based on the modified stimuli, resulting in a classified selection, which is returned to the user application.
    Type: Grant
    Filed: April 5, 2022
    Date of Patent: June 11, 2024
    Assignee: COGNIXION CORPORATION
    Inventors: Sarah Pearce, Aravind Ravi, Jing Lu, Ning Jiang, Andreas Forsland, Chris Ullrich
  • Publication number: 20230309887
    Abstract: Brain modelling includes receiving time-coded bio-signal data associated with a user; receiving time-coded stimulus event data; projecting the time-coded bio-signal data into a lower dimensioned feature space; extracting features from the lower dimensioned feature space that correspond to time codes of the time-coded stimulus event data to identify a brain response; generating a training data set for the brain response using the features; training a brain model using the training data set, the brain model unique to the user; generating a brain state prediction for the user output from the trained brain model; automatically computing similarity metrics of the brain model as compared to other user data; and inputting the brain state prediction to a feedback model to determine a feedback stimulus for the user, wherein the feedback model is associated with a target brain state.
    Type: Application
    Filed: May 24, 2023
    Publication date: October 5, 2023
    Inventors: Christopher Aimone, Graeme Moffat, Hubert Jacob Banville, Sean Wood, Subash Padmanaban, Sam Kerr, Aravind Ravi
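The pipeline in this abstract can be sketched generically: project the multi-channel bio-signal into a lower-dimensioned space, extract feature windows at the stimulus time codes, and fit a per-user model. Every concrete choice below is an illustrative assumption, not the claimed method: channel averaging stands in for a learned projection, and a nearest-centroid classifier stands in for the trained brain model.

```python
def project(samples):
    """Reduce each multi-channel sample to one dimension (channel mean)."""
    return [sum(ch) / len(ch) for ch in samples]

def extract_features(signal, event_times, window=4):
    """Window of the projected signal following each stimulus time code."""
    return [signal[t:t + window] for t in event_times]

def train_centroids(features, labels):
    """Per-user 'brain model': one centroid per brain-response label."""
    groups = {}
    for feat, lab in zip(features, labels):
        groups.setdefault(lab, []).append(feat)
    return {lab: [sum(col) / len(col) for col in zip(*feats)]
            for lab, feats in groups.items()}

def predict(model, feature):
    """Brain-state prediction: label of the nearest centroid."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda lab: dist(model[lab], feature))

# Toy run: two stimulus events (time codes 3 and 11) evoke distinct responses.
raw = [[0.1, -0.1]] * 20
for t in (3, 11):
    for i in range(4):
        raw[t + i] = [1.0, 1.0] if t == 3 else [-1.0, -1.0]
sig = project(raw)
feats = extract_features(sig, [3, 11])
model = train_centroids(feats, ["relaxed", "focused"])
print(predict(model, [1.0, 1.0, 1.0, 1.0]))  # → relaxed
```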
  • Patent number: 11696714
    Abstract: Brain modelling includes receiving time-coded bio-signal data associated with a user; receiving time-coded stimulus event data; projecting the time-coded bio-signal data into a lower dimensioned feature space; extracting features from the lower dimensioned feature space that correspond to time codes of the time-coded stimulus event data to identify a brain response; generating a training data set for the brain response using the features; training a brain model using the training data set, the brain model unique to the user; generating a brain state prediction for the user output from the trained brain model; automatically computing similarity metrics of the brain model as compared to other user data; and inputting the brain state prediction to a feedback model to determine a feedback stimulus for the user, wherein the feedback model is associated with a target brain state.
    Type: Grant
    Filed: April 24, 2020
    Date of Patent: July 11, 2023
    Assignee: INTERAXON INC.
    Inventors: Christopher Allen Aimone, Graeme Moffat, Hubert Jacob Banville, Sean Wood, Subash Padmanaban, Sam Kerr, Aravind Ravi
  • Publication number: 20220326771
    Abstract: A method and system are disclosed using steady-state motion visual evoked potential stimuli in an augmented reality environment. Requested stimuli data are received from a user application on a smart device. Sensor data and other context data are also received, where other context data includes data that is un-sensed. The requested stimuli data are transformed into modified stimuli based on the sensor data and the other context data. Modified stimuli and environmental stimuli are presented to the user with a rendering device configured to mix the modified stimuli and the environmental stimuli, thereby resulting in rendered stimuli. Biosignals generated in response to the rendered stimuli are received from the user by a wearable biosignal sensing device. Received biosignals are classified based on the modified stimuli, resulting in a classified selection, which is returned to the user application.
    Type: Application
    Filed: April 5, 2022
    Publication date: October 13, 2022
    Applicant: Cognixion Corporation
    Inventors: Sarah Pearce, Aravind Ravi, Jing Lu, Ning Jiang, Andreas Forsland, Chris Ullrich
  • Patent number: 11049040
    Abstract: Disclosed subject matter relates to supervised machine learning, including a method and system for generating a synchronized labelled training dataset for building a learning model. The training data generation system determines a timing advance factor to achieve time synchronization between User Equipment (UE) and network nodes by signalling the UE to initiate playback of the multimedia content based on the timing advance factor. The training data generation system receives network Key Performance Indicator (KPI) data from the network nodes and user experience data from the UE, concurrently, for the streamed multimedia content, and performs timestamp-based correlation to generate a synchronized labelled training dataset for building a learning model.
    Type: Grant
    Filed: March 26, 2018
    Date of Patent: June 29, 2021
    Assignee: Wipro Limited
    Inventors: Subhas Chandra Mondal, Aravind Ravi, Pallavi Suresh Mastiholimath
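The timestamp-based correlation this abstract describes can be sketched as follows. The field names, the timing-advance sign convention, and the nearest-timestamp matching rule are all illustrative assumptions; the sketch only shows the general idea of aligning two clocks and joining KPI records with user-experience labels.

```python
def synchronize(kpi_records, ue_records, timing_advance):
    """Return (kpi, label) training pairs.

    kpi_records: list of (timestamp, kpi_dict) from the network nodes.
    ue_records:  list of (timestamp, experience_label) from the UE.
    timing_advance: seconds added to UE timestamps to align the two clocks.
    """
    aligned = [(t + timing_advance, label) for t, label in ue_records]
    dataset = []
    for t_kpi, kpi in kpi_records:
        # Pair each KPI record with the nearest aligned experience label.
        _, label = min(aligned, key=lambda rec: abs(rec[0] - t_kpi))
        dataset.append((kpi, label))
    return dataset

# Toy usage: the UE clock lags the network clock by 2 s.
kpi = [(10.0, {"throughput_mbps": 40}), (20.0, {"throughput_mbps": 5})]
ue = [(8.1, "good"), (18.2, "poor")]  # raw (unaligned) UE timestamps
print(synchronize(kpi, ue, timing_advance=2.0))
# → [({'throughput_mbps': 40}, 'good'), ({'throughput_mbps': 5}, 'poor')]
```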
  • Publication number: 20200337625
    Abstract: Brain modelling includes receiving time-coded bio-signal data associated with a user; receiving time-coded stimulus event data; projecting the time-coded bio-signal data into a lower dimensioned feature space; extracting features from the lower dimensioned feature space that correspond to time codes of the time-coded stimulus event data to identify a brain response; generating a training data set for the brain response using the features; training a brain model using the training data set, the brain model unique to the user; generating a brain state prediction for the user output from the trained brain model; automatically computing similarity metrics of the brain model as compared to other user data; and inputting the brain state prediction to a feedback model to determine a feedback stimulus for the user, wherein the feedback model is associated with a target brain state.
    Type: Application
    Filed: April 24, 2020
    Publication date: October 29, 2020
    Inventors: Christopher Allen Aimone, Graeme Moffat, Hubert Jacob Banville, Sean Wood, Subash Padmanaban, Sam Kerr, Aravind Ravi
  • Publication number: 20190287031
    Abstract: Disclosed subject matter relates to supervised machine learning, including a method and system for generating a synchronized labelled training dataset for building a learning model. The training data generation system determines a timing advance factor to achieve time synchronization between User Equipment (UE) and network nodes by signalling the UE to initiate playback of the multimedia content based on the timing advance factor. The training data generation system receives network Key Performance Indicator (KPI) data from the network nodes and user experience data from the UE, concurrently, for the streamed multimedia content, and performs timestamp-based correlation to generate a synchronized labelled training dataset for building a learning model.
    Type: Application
    Filed: March 26, 2018
    Publication date: September 19, 2019
    Inventors: Subhas Chandra Mondal, Aravind Ravi, Pallavi Suresh Mastiholimath
  • Publication number: 20150113364
    Abstract: The present disclosure relates to document generation, and more particularly to a system and method for generating an audio-animated document. In one embodiment, a method for generating an audio-animated document is disclosed, comprising: obtaining an extensible markup language (XML) file from a database, wherein the XML file comprises data corresponding to transactional activities over a time interval; identifying a set of phrases and one or more images from a resource library based on the XML file; generating a playback text using the set of phrases, the one or more images, the data, and a set of rules; providing one or more audio files corresponding to the playback text; and generating the audio-animated document based on the data, the one or more images, and the one or more audio files.
    Type: Application
    Filed: October 21, 2013
    Publication date: April 23, 2015
    Inventors: Vidya Sagar Thatiparthi, Aravind Ravi
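The generation flow in this abstract (XML in, phrases looked up in a resource library, playback text out) can be sketched with Python's standard-library XML parser. The XML schema, the phrase-template keys, and the rule (one sentence per transaction) are illustrative assumptions; audio-file and image handling are omitted.

```python
import xml.etree.ElementTree as ET

# Hypothetical resource library: phrase templates keyed by activity type.
RESOURCE_LIBRARY = {
    "deposit": "On {date}, {amount} was deposited to your account.",
    "withdrawal": "On {date}, {amount} was withdrawn from your account.",
}

def generate_playback_text(xml_string):
    """Map each transaction element to a phrase and join into playback text."""
    root = ET.fromstring(xml_string)
    lines = []
    for txn in root.findall("transaction"):
        template = RESOURCE_LIBRARY[txn.get("type")]
        lines.append(template.format(date=txn.get("date"),
                                     amount=txn.get("amount")))
    return " ".join(lines)

# Toy statement covering a time interval of transactional activities.
doc = """<statement>
  <transaction type="deposit" date="2013-10-01" amount="$500"/>
  <transaction type="withdrawal" date="2013-10-05" amount="$120"/>
</statement>"""
print(generate_playback_text(doc))
# → On 2013-10-01, $500 was deposited to your account. On 2013-10-05, $120 was withdrawn from your account.
```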