Patents by Inventor Utkarsh GAUR

Utkarsh GAUR has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250077323
    Abstract: In some examples, a method for developing an application programming interface (API) is provided. The method is performed by a computing system running an API Forge Accelerator. The method comprises: receiving, by the API Forge Accelerator, a request for generating an API for a service based on a type of a first gateway, wherein the request indicates a selection of the first gateway among a plurality of gateways integrated with the API Forge Accelerator; generating, by the API Forge Accelerator, based on the first gateway, a specification template corresponding to the first gateway for generating the API, wherein the specification template indicates specification information that is used for the API generation; obtaining, by the API Forge Accelerator, based on the specification template, specification information from user input and API tools; and generating, by the API Forge Accelerator, based on the specification information, the API for the service.
    Type: Application
    Filed: August 31, 2023
    Publication date: March 6, 2025
    Inventors: Dinesh Prabhu Palanivel, Debadipta Basu, Srikant Rajagopalan, Hemalatha Ravishankar, Reema Sharma, Rinu G. Dhanaraj, Gabriel Ferreri, Utkarsh Gaur, Michael Tripp, Siddharth Dubey, Yanay Ibanez, Rohith Yeedulapalli, Brad Cain, Anubha Gaur, Wade Prestridge
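
The application above describes a template-driven flow: select a gateway, derive that gateway's specification template, fill it from user input and API tools, then generate the API. The Python sketch below illustrates that flow under stated assumptions; the class name ApiForgeAccelerator comes from the abstract, but the gateway names, template fields, and output format are invented for illustration and are not from the filing.

# Hypothetical sketch of a gateway-driven API generation flow (names are illustrative).

from dataclasses import dataclass, field
from typing import Dict

# Per-gateway specification templates: the fields the accelerator needs before it can
# generate an API definition for that gateway type.
GATEWAY_TEMPLATES: Dict[str, Dict[str, str]] = {
    "aws_api_gateway": {"service_name": "", "base_path": "", "auth_scheme": "", "stage": ""},
    "apigee":          {"service_name": "", "base_path": "", "auth_scheme": "", "target_url": ""},
}

@dataclass
class ApiForgeAccelerator:
    generated: list = field(default_factory=list)

    def spec_template(self, gateway: str) -> Dict[str, str]:
        """Return the specification template corresponding to the selected gateway."""
        return dict(GATEWAY_TEMPLATES[gateway])

    def generate_api(self, gateway: str, user_input: Dict[str, str]) -> Dict:
        """Fill the gateway-specific template from user input and emit a minimal API spec."""
        template = self.spec_template(gateway)
        missing = [k for k in template if k not in user_input]
        if missing:
            raise ValueError(f"missing specification fields: {missing}")
        template.update({k: user_input[k] for k in template})
        spec = {
            "gateway": gateway,
            "openapi": "3.0.0",
            "info": {"title": template["service_name"], "version": "1.0.0"},
            "servers": [{"url": template["base_path"]}],
            "x-gateway-config": template,
        }
        self.generated.append(spec)
        return spec

if __name__ == "__main__":
    forge = ApiForgeAccelerator()
    api = forge.generate_api(
        "aws_api_gateway",
        {"service_name": "orders", "base_path": "/orders/v1",
         "auth_scheme": "oauth2", "stage": "dev"},
    )
    print(api["info"], api["x-gateway-config"]["auth_scheme"])
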
  • Patent number: 12126667
    Abstract: Systems and methods for streaming video content include downscaling video content using a downscaling model to generate downscaled video content and downloading the downscaled video content as a video stream and corresponding upscaling model to a client device. The system converts received video frames to a video memory format comprising channels having the same memory allocation size, each subsequent channel arranged in an adjacent memory location, for input to the downscaling model. The client device upscales the video stream using the received upscaling model for display by the client device in real-time. A training system trains the downscaling model to generate the downscaled video content, based on associated metadata identifying a type of video content. The downscaled video content and associated upscaling models are stored for access by an edge server, which downloads upscaling models to a client device to select an upscaling model.
    Type: Grant
    Filed: August 9, 2023
    Date of Patent: October 22, 2024
    Assignee: Synaptics Incorporated
    Inventors: Vladan Petrovic, Utkarsh Gaur, Pontus Lidman
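
The key data-layout detail in this family of filings is converting incoming frames to a memory format in which every channel has the same allocation size and successive channels occupy adjacent memory locations. The sketch below shows one plausible reading of that layout (a contiguous planar buffer) using NumPy, with simple average pooling standing in for the learned downscaling model; the function names and parameters are assumptions, not taken from the patent.

# Illustrative sketch (not the patented implementation): pack a video frame into a planar
# layout where every channel plane has the same allocation size and the planes sit in
# adjacent memory locations, then downscale it with a stand-in for the learned model.

import numpy as np

def to_planar(frame_hwc: np.ndarray) -> np.ndarray:
    """Convert an interleaved H x W x C frame into one contiguous C x H x W buffer.

    Each channel plane occupies H*W elements (identical allocation size), and plane
    c+1 immediately follows plane c in memory.
    """
    return np.ascontiguousarray(frame_hwc.transpose(2, 0, 1))

def downscale_2x(frame_chw: np.ndarray) -> np.ndarray:
    """Stand-in for the downscaling model: 2x2 average pooling per channel."""
    c, h, w = frame_chw.shape
    return frame_chw.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))

if __name__ == "__main__":
    frame = np.random.randint(0, 256, size=(1080, 1920, 3), dtype=np.uint8)
    planar = to_planar(frame.astype(np.float32))
    # Adjacent planes: plane 1 starts exactly one plane-size after plane 0.
    plane_size = planar[0].nbytes
    start0 = planar[0].__array_interface__["data"][0]
    start1 = planar[1].__array_interface__["data"][0]
    assert start1 - start0 == plane_size
    small = downscale_2x(planar)
    print(planar.shape, small.shape)  # (3, 1080, 1920) (3, 540, 960)
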
  • Patent number: 11907475
    Abstract: In some examples, an electronic device can use machine learning techniques, such as convolutional neural networks, to estimate the distance between a stylus tip and a touch sensitive surface (e.g., stylus z-height). A subset of stylus data sensed at electrodes closest to the location of the stylus at the touch sensitive surface including data having multiple phases and frequencies can be provided to the machine learning algorithm. The estimated stylus z-height can be compared to one or more thresholds to determine whether or not the stylus is in contact with the touch sensitive surface. In some examples, the electronic device can use machine learning techniques to estimate the (x, y) position and/or tilt and/or azimuth angles of the stylus tip at the touch sensitive surface based on a subset of stylus data.
    Type: Grant
    Filed: September 24, 2021
    Date of Patent: February 20, 2024
    Assignee: Apple Inc.
    Inventors: Hojjat Seyed Mousavi, Behrooz Shahsavari, Bongsoo Suh, Utkarsh Gaur, Nima Ferdosi, Baboo V. Gowreesunker
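
As a rough illustration of the approach in this patent, the sketch below regresses stylus z-height from a small patch of electrode measurements with a tiny convolutional network and compares the estimate against a contact threshold. The architecture, the channel count (phases times frequencies), the patch size, and the threshold are assumptions made for the example, not values from the patent.

# Hypothetical sketch: a small CNN estimates stylus z-height from a local patch of
# electrode data, and a threshold decides whether the stylus is in contact.

import torch
import torch.nn as nn

class ZHeightNet(nn.Module):
    def __init__(self, in_channels: int = 4):  # e.g., 2 phases x 2 stimulation frequencies
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # regressed z-height (e.g., millimetres)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1)).squeeze(-1)

def is_in_contact(z_height_mm: torch.Tensor, threshold_mm: float = 0.2) -> torch.Tensor:
    """Compare the estimated z-height against a contact threshold."""
    return z_height_mm < threshold_mm

if __name__ == "__main__":
    model = ZHeightNet()
    # A 5x5 patch of electrodes nearest the detected stylus location, one input plane
    # per phase/frequency combination.
    patch = torch.randn(1, 4, 5, 5)
    z = model(patch)
    print(float(z), bool(is_in_contact(z)))
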
  • Publication number: 20230396665
    Abstract: Systems and methods for streaming video content include downscaling video content using a downscaling model to generate downscaled video content and downloading the downscaled video content as a video stream and corresponding upscaling model to a client device. The system converts received video frames to a video memory format comprising channels having the same memory allocation size, each subsequent channel arranged in an adjacent memory location, for input to the downscaling model. The client device upscales the video stream using the received upscaling model for display by the client device in real-time. A training system trains the downscaling model to generate the downscaled video content, based on associated metadata identifying a type of video content. The downscaled video content and associated upscaling models are stored for access by an edge server, which downloads upscaling models to a client device to select an upscaling model.
    Type: Application
    Filed: August 9, 2023
    Publication date: December 7, 2023
    Applicant: Synaptics Incorporated
    Inventors: Vladan PETROVIC, Utkarsh GAUR, Pontus LIDMAN
  • Patent number: 11785068
    Abstract: Systems and methods for streaming video content include downscaling video content using a downscaling model to generate downscaled video content and downloading the downscaled video content as a video stream and corresponding upscaling model to a client device. The system converts received video frames to a video memory format comprising channels having the same memory allocation size, each subsequent channel arranged in an adjacent memory location, for input to the downscaling model. The client device upscales the video stream using the received upscaling model for display by the client device in real-time. A training system trains the downscaling model to generate the downscaled video content, based on associated metadata identifying a type of video content. The downscaled video content and associated upscaling models are stored for access by an edge server, which downloads upscaling models to a client device to select an upscaling model.
    Type: Grant
    Filed: December 31, 2020
    Date of Patent: October 10, 2023
    Assignee: Synaptics Incorporated
    Inventors: Vladan Petrovic, Utkarsh Gaur, Pontus Lidman
  • Patent number: 11589120
    Abstract: A method and apparatus for deep content tagging. A media device receives one or more first frames of a content item, where the one or more first frames span a duration of a scene in the content item. The media device detects one or more objects or features in each of the first frames using a neural network model and identifies one or more first genres associated with the first frames based at least in part on the detected objects or features in each of the first frames. The media device further controls playback of the content item based at least in part on the identified first genres.
    Type: Grant
    Filed: February 24, 2020
    Date of Patent: February 21, 2023
    Assignee: Synaptics Incorporated
    Inventors: Utkarsh Gaur, Adil Ilyas Jagmag, Gaurav Arora
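
The tagging pipeline described above (detect objects in each frame of a scene, map the detections to genres, and steer playback from the identified genres) can be sketched as follows. The object detector is stubbed out, and the object-to-genre table and playback rule are invented for illustration; the patent does not specify them.

# Illustrative sketch of scene-level genre tagging driving a playback decision.

from collections import Counter
from typing import Iterable, List

# Invented mapping from detected objects to candidate genres.
OBJECT_TO_GENRES = {
    "ball": ["sports"], "stadium": ["sports"],
    "guitar": ["music", "concert"], "explosion": ["action"],
}

def detect_objects(frame) -> List[str]:
    """Stand-in for a neural-network detector; a real system would run a trained model."""
    return frame  # in this sketch a "frame" is simply its list of detected labels

def tag_scene(frames: Iterable) -> List[str]:
    """Aggregate per-frame detections over a scene and return its most likely genres."""
    votes = Counter()
    for frame in frames:
        for obj in detect_objects(frame):
            votes.update(OBJECT_TO_GENRES.get(obj, []))
    return [genre for genre, _ in votes.most_common(2)]

def control_playback(genres: List[str], blocked: set) -> str:
    """Example playback decision keyed off the identified genres."""
    return "skip scene" if blocked.intersection(genres) else "play"

if __name__ == "__main__":
    scene = [["ball", "stadium"], ["ball"], ["guitar"]]
    genres = tag_scene(scene)
    print(genres, control_playback(genres, blocked={"action"}))
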
  • Publication number: 20220210213
    Abstract: Systems and methods for streaming video content include downscaling video content using a downscaling model to generate downscaled video content and downloading the downscaled video content as a video stream and corresponding upscaling model to a client device. The system converts received video frames to a video memory format comprising channels having the same memory allocation size, each subsequent channel arranged in an adjacent memory location, for input to the downscaling model. The client device upscales the video stream using the received upscaling model for display by the client device in real-time. A training system trains the downscaling model to generate the downscaled video content, based on associated metadata identifying a type of video content. The downscaled video content and associated upscaling models are stored for access by an edge server, which downloads upscaling models to a client device to select an upscaling model.
    Type: Application
    Filed: December 31, 2020
    Publication date: June 30, 2022
    Inventors: Vladan PETROVIC, Utkarsh GAUR, Pontus LIDMAN
  • Publication number: 20220100341
    Abstract: In some examples, an electronic device can use machine learning techniques, such as convolutional neural networks, to estimate the distance between a stylus tip and a touch sensitive surface (e.g., stylus z-height). A subset of stylus data sensed at electrodes closest to the location of the stylus at the touch sensitive surface including data having multiple phases and frequencies can be provided to the machine learning algorithm. The estimated stylus z-height can be compared to one or more thresholds to determine whether or not the stylus is in contact with the touch sensitive surface. In some examples, the electronic device can use machine learning techniques to estimate the (x, y) position and/or tilt and/or azimuth angles of the stylus tip at the touch sensitive surface based on a subset of stylus data.
    Type: Application
    Filed: September 24, 2021
    Publication date: March 31, 2022
    Inventors: Hojjat SEYED MOUSAVI, Behrooz SHAHSAVARI, Bongsoo SUH, Utkarsh GAUR, Nima FERDOSI, Baboo V. GOWREESUNKER
  • Publication number: 20220100310
    Abstract: In some examples, an electronic device can use machine learning techniques, such as convolutional neural networks, to estimate the distance between a stylus tip and a touch sensitive surface (e.g., stylus z-height). A subset of stylus data sensed at electrodes closest to the location of the stylus at the touch sensitive surface including data having multiple phases and frequencies can be provided to the machine learning algorithm. The estimated stylus z-height can be compared to one or more thresholds to determine whether or not the stylus is in contact with the touch sensitive surface.
    Type: Application
    Filed: January 28, 2021
    Publication date: March 31, 2022
    Inventors: Behrooz SHAHSAVARI, Bongsoo SUH, Utkarsh GAUR, Nima FERDOSI, Baboo V. GOWREESUNKER
  • Patent number: 11287926
    Abstract: In some examples, an electronic device can use machine learning techniques, such as convolutional neural networks, to estimate the distance between a stylus tip and a touch sensitive surface (e.g., stylus z-height). A subset of stylus data sensed at electrodes closest to the location of the stylus at the touch sensitive surface including data having multiple phases and frequencies can be provided to the machine learning algorithm. The estimated stylus z-height can be compared to one or more thresholds to determine whether or not the stylus is in contact with the touch sensitive surface.
    Type: Grant
    Filed: January 28, 2021
    Date of Patent: March 29, 2022
    Assignee: Apple Inc.
    Inventors: Behrooz Shahsavari, Bongsoo Suh, Utkarsh Gaur, Nima Ferdosi, Baboo V. Gowreesunker
  • Patent number: 11120569
    Abstract: A method and apparatus for estimating a user's head pose relative to a sensing device. The sensing device detects a face of the user in an image. The sensing device further identifies a plurality of points in the image corresponding to respective features of the detected face. The plurality of points includes at least a first point corresponding to a location of a first facial feature. The sensing device determines a position of the face relative to the sensing device based at least in part on a distance between the first point in the image and one or more of the remaining points. For example, the sensing device may determine a pitch, yaw, distance, or location of the user's face relative to the sensing device.
    Type: Grant
    Filed: June 24, 2019
    Date of Patent: September 14, 2021
    Assignee: Synaptics Incorporated
    Inventors: Boyan Ivanov Bonev, Utkarsh Gaur
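
The abstract above estimates pose from distances between a reference facial landmark and the remaining landmarks. The sketch below is a minimal geometric reading of that idea: yaw and pitch from the nose tip's offset relative to the eye midpoint, and distance from the pixel size of a known inter-ocular gap. The specific ratios, the neutral-pose offset, and the camera constants are illustrative assumptions, not values from the patent.

# Minimal geometric sketch of estimating head pose from 2D facial landmarks (pixels).

import math

def estimate_pose(landmarks: dict, focal_px: float = 600.0, eye_gap_mm: float = 63.0) -> dict:
    """Estimate yaw, pitch, and camera distance from three facial landmarks."""
    lx, ly = landmarks["left_eye"]
    rx, ry = landmarks["right_eye"]
    nx, ny = landmarks["nose_tip"]

    eye_mid = ((lx + rx) / 2.0, (ly + ry) / 2.0)
    eye_gap_px = math.hypot(rx - lx, ry - ly)

    # Yaw: horizontal offset of the nose tip from the eye midpoint, relative to eye gap.
    yaw = math.degrees(math.atan2(nx - eye_mid[0], eye_gap_px))
    # Pitch: vertical offset of the nose tip below the eye midpoint, relative to eye gap,
    # minus an assumed neutral-pose offset of roughly 30 degrees.
    pitch = math.degrees(math.atan2(ny - eye_mid[1], eye_gap_px)) - 30.0
    # Distance: pinhole-camera relation between the known eye gap and its pixel size.
    distance_mm = focal_px * eye_gap_mm / eye_gap_px

    return {"yaw_deg": yaw, "pitch_deg": pitch, "distance_mm": distance_mm}

if __name__ == "__main__":
    face = {"left_eye": (300, 240), "right_eye": (360, 240), "nose_tip": (335, 275)}
    print(estimate_pose(face))
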
  • Patent number: 11079911
    Abstract: A method and apparatus for device personalization. A device is configured to receive first sensor data from one or more sensors, detect biometric information in the first sensor data, encode the biometric information as a first vector using one or more neural network models stored on the device, and configure a user interface of the device based at least in part on the first vector. For example, the profile information may include configurations, settings, preferences, or content to be displayed or rendered via the user interface. In some implementations, the first sensor data may comprise an image of a scene and the biometric information may comprise one or more facial features of a user in the scene.
    Type: Grant
    Filed: August 28, 2019
    Date of Patent: August 3, 2021
    Assignee: Synaptics Incorporated
    Inventors: Utkarsh Gaur, Gaurav Arora
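
A minimal sketch of the personalization loop described above, assuming a stubbed encoder in place of the on-device neural network models: biometric input is encoded as a vector, matched against stored profile embeddings by cosine similarity, and the winning profile's interface settings are applied. All names and settings below are invented for the example.

# Hypothetical sketch: encode biometric features as a vector and select the matching
# user profile to configure the user interface.

import numpy as np

def encode(features: np.ndarray) -> np.ndarray:
    """Stand-in for the neural encoder: fixed random projection plus L2 normalisation."""
    rng = np.random.default_rng(0)                 # fixed seed so the projection is stable
    projection = rng.standard_normal((features.size, 8))
    v = features @ projection
    return v / np.linalg.norm(v)

def closest_profile(vector: np.ndarray, profiles: dict) -> str:
    """Return the profile whose stored embedding has the highest cosine similarity."""
    return max(profiles, key=lambda name: float(vector @ profiles[name]["embedding"]))

if __name__ == "__main__":
    alice_face = np.linspace(0.0, 1.0, 32)
    bob_face = np.linspace(1.0, 0.0, 32)
    profiles = {
        "alice": {"embedding": encode(alice_face), "ui": {"theme": "dark", "row": "sci-fi"}},
        "bob":   {"embedding": encode(bob_face),   "ui": {"theme": "light", "row": "sports"}},
    }
    observed = encode(alice_face + np.random.default_rng(1).normal(0, 0.01, 32))
    user = closest_profile(observed, profiles)
    print(user, profiles[user]["ui"])  # expected: alice's interface configuration
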
  • Patent number: 11082460
    Abstract: Systems and methods for audio signal enhancement facilitated using video data are provided. In one example, a method includes receiving a multi-channel audio signal including audio inputs detected by a plurality of audio input devices. The method further includes receiving an image captured by a video input device. The method further includes determining a first signal based at least in part on the image. The first signal is indicative of a likelihood associated with a target audio source. The method further includes determining a second signal based at least in part on the multi-channel audio signal and the first signal. The second signal is indicative of a likelihood associated with an audio component attributed to the target audio source. The method further includes processing the multi-channel audio signal based at least in part on the second signal to generate an output audio signal.
    Type: Grant
    Filed: June 27, 2019
    Date of Patent: August 3, 2021
    Assignee: Synaptics Incorporated
    Inventors: Francesco Nesta, Boyan Bonev, Utkarsh Gaur
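
The method above derives a video-based likelihood that the target source is present and combines it with the multi-channel audio into a second likelihood that drives the enhancement. The sketch below is a heavily simplified stand-in: a scalar visual cue gates a per-frame gain on the microphone mix. The detectors and the gain rule are assumptions, not the patent's estimators.

# Simplified sketch of video-guided audio enhancement.

import numpy as np

def visual_likelihood(image: np.ndarray) -> float:
    """Stand-in for a face/lip-activity detector: likelihood the target source is present."""
    return float(np.clip(image.mean(), 0.0, 1.0))

def audio_likelihood(frame: np.ndarray, visual: float) -> float:
    """Combine multi-channel frame energy with the visual cue into a target likelihood."""
    energy = float(np.mean(frame ** 2))
    return visual * energy / (energy + 1e-3)

def enhance(frames: np.ndarray, images: np.ndarray) -> np.ndarray:
    """Apply a likelihood-derived gain to each multi-channel frame and mix to one channel."""
    out = []
    for frame, image in zip(frames, images):
        gain = audio_likelihood(frame, visual_likelihood(image))
        out.append(gain * frame.mean(axis=0))        # naive channel mix after gating
    return np.concatenate(out)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.normal(0, 0.1, size=(50, 4, 256))   # 50 frames, 4 microphones, 256 samples
    images = rng.uniform(0, 1, size=(50, 8, 8))      # synthetic low-resolution video frames
    print(enhance(frames, images).shape)             # (12800,)
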
  • Publication number: 20200412772
    Abstract: Systems and methods for audio signal enhancement facilitated using video data are provided. In one example, a method includes receiving a multi-channel audio signal including audio inputs detected by a plurality of audio input devices. The method further includes receiving an image captured by a video input device. The method further includes determining a first signal based at least in part on the image. The first signal is indicative of a likelihood associated with a target audio source. The method further includes determining a second signal based at least in part on the multi-channel audio signal and the first signal. The second signal is indicative of a likelihood associated with an audio component attributed to the target audio source. The method further includes processing the multi-channel audio signal based at least in part on the second signal to generate an output audio signal.
    Type: Application
    Filed: June 27, 2019
    Publication date: December 31, 2020
    Inventors: Francesco Nesta, Boyan Bonev, Utkarsh Gaur
  • Publication number: 20200402253
    Abstract: A method and apparatus for estimating a user's head pose relative to a sensing device. The sensing device detects a face of the user in an image. The sensing device further identifies a plurality of points in the image corresponding to respective features of the detected face. The plurality of points includes at least a first point corresponding to a location of a first facial feature. The sensing device determines a position of the face relative to the sensing device based at least in part on a distance between the first point in the image and one or more of the remaining points. For example, the sensing device may determine a pitch, yaw, distance, or location of the user's face relative to the sensing device.
    Type: Application
    Filed: June 24, 2019
    Publication date: December 24, 2020
    Inventors: Boyan IVANOV BONEV, Utkarsh GAUR
  • Publication number: 20200273485
    Abstract: A method and apparatus for user engagement detection. A media device captures sensor data via one or more sensors while concurrently playing back a first content item. The media device detects one or more reactions to the first content item by one or more users based at least in part on the sensor data and controls a media playback interface used to play back the first content item based at least in part on the detected reactions.
    Type: Application
    Filed: February 24, 2020
    Publication date: August 27, 2020
    Inventors: Adil Ilyas Jagmag, Utkarsh Gaur, Gaurav Arora
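
As a rough sketch of the engagement loop above: sample sensors during playback, classify coarse viewer reactions, and adjust the playback interface from the recent reaction history. The reaction labels and control rules below are invented for illustration and are not from the application.

# Illustrative sketch of reaction-driven playback control.

from typing import List

def detect_reaction(sample: dict) -> str:
    """Stand-in for a sensor-based reaction classifier (camera, microphone, etc.)."""
    if not sample.get("viewer_present", True):
        return "absent"
    return "engaged" if sample.get("facing_screen", False) else "distracted"

def control_playback(reactions: List[str]) -> str:
    """Map the recent reaction history to a playback-interface action."""
    if reactions and reactions[-1] == "absent":
        return "pause"
    if reactions.count("distracted") > len(reactions) // 2:
        return "lower volume / surface recap"
    return "continue"

if __name__ == "__main__":
    history = [
        {"viewer_present": True, "facing_screen": True},
        {"viewer_present": True, "facing_screen": False},
        {"viewer_present": False},
    ]
    reactions = [detect_reaction(s) for s in history]
    print(reactions, "->", control_playback(reactions))
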
  • Publication number: 20200275158
    Abstract: A method and apparatus for deep content tagging. A media device receives one or more first frames of a content item, where the one or more first frames span a duration of a scene in the content item. The media device detects one or more objects or features in each of the first frames using a neural network model and identifies one or more first genres associated with the first frames based at least in part on the detected objects or features in each of the first frames. The media device further controls playback of the content item based at least in part on the identified first genres.
    Type: Application
    Filed: February 24, 2020
    Publication date: August 27, 2020
    Inventors: Utkarsh Gaur, Adil Ilyas Jagmag, Gaurav Arora
  • Publication number: 20200210035
    Abstract: A method and apparatus for device personalization. A device is configured to receive first sensor data from one or more sensors, detect biometric information in the first sensor data, encode the biometric information as a first vector using one or more neural network models stored on the device, and configure a user interface of the device based at least in part on the first vector. For example, the profile information may include configurations, settings, preferences, or content to be displayed or rendered via the user interface. In some implementations, the first sensor data may comprise an image of a scene and the biometric information may comprise one or more facial features of a user in the scene.
    Type: Application
    Filed: August 28, 2019
    Publication date: July 2, 2020
    Inventors: Utkarsh GAUR, Gaurav ARORA