Patents by Inventor Jamil DHANANI

Jamil DHANANI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240118744
    Abstract: Systems and processes for an integrated sensor framework are provided. For example, a first electronic device receives at least one input including sensor data from a second device. A representation of a physical environment associated with the first electronic device is obtained based on sensor data from the first electronic device and the sensor data from the second device. Movement information corresponding to movement of an object within the physical environment is identified. Event information is determined corresponding to activity within the physical environment, wherein the event information is determined based on the identified movement information and the representation of the physical environment. Accordingly, an output is provided to the user based on the event information.
    Type: Application
    Filed: September 22, 2023
    Publication date: April 11, 2024
    Inventors: Richard T. VAUGHAN, Jamil DHANANI, Juan C. GARCIA, SeyedMehdi MOHAIMENIANPOUR, Geoffrey NAGY, Timothy S. PAEK, Naga Rama Abhishek PRATAPA, Muhammad Amir SHAFIQ
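The abstract above (publication 20240118744) describes a flow in which one device fuses its own sensor data with data relayed from a second device, builds a representation of the physical environment, extracts movement information, and reports an event to the user. The Python sketch below is a minimal, hypothetical illustration of that flow; the class and function names (SensorReading, EnvironmentModel, detect_event), the shared-coordinate positions, and the displacement-threshold heuristic are assumptions made for illustration, not details taken from the filing.

```python
from dataclasses import dataclass, field


@dataclass
class SensorReading:
    """A single reading from either the local (first) or the remote (second) device."""
    device_id: str
    position: tuple[float, float, float]  # object position estimate in a shared frame
    timestamp: float


@dataclass
class EnvironmentModel:
    """Toy 'representation of the physical environment' built from both devices' data."""
    readings: list[SensorReading] = field(default_factory=list)

    def add(self, reading: SensorReading) -> None:
        self.readings.append(reading)

    def movement(self) -> float:
        """Crude movement estimate: displacement between the first and last readings."""
        if len(self.readings) < 2:
            return 0.0
        (x0, y0, z0) = self.readings[0].position
        (x1, y1, z1) = self.readings[-1].position
        return ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5


def detect_event(model: EnvironmentModel, threshold: float = 1.0) -> str | None:
    """Map movement information to a coarse event label to surface to the user."""
    return "activity_detected" if model.movement() > threshold else None


# Usage: fuse a local reading with one relayed from a second device.
env = EnvironmentModel()
env.add(SensorReading("first_device", (0.0, 0.0, 0.0), timestamp=0.0))
env.add(SensorReading("second_device", (1.5, 0.2, 0.0), timestamp=1.0))
event = detect_event(env)
if event:
    print(f"Output to user: {event}")
```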
  • Publication number: 20230401486
    Abstract: The subject technology receives, from a first sensor of a device, first sensor output of a first type. The subject technology receives, from a second sensor of the device, second sensor output of a second type, the first and second sensors being non-touch sensors. The subject technology provides the first sensor output and the second sensor output as inputs to a machine learning model, the machine learning model having been trained to output a predicted touch-based gesture based on sensor output of the first type and sensor output of the second type. The subject technology provides a predicted touch-based gesture based on output from the machine learning model. Further, the subject technology adjusts an audio output level of the device based on the predicted gesture, where the device is an audio output device.
    Type: Application
    Filed: May 30, 2023
    Publication date: December 14, 2023
    Inventors: Keith P. AVERY, Jamil DHANANI, Harveen KAUR, Varun MAUDGALYA, Timothy S. PAEK, Dmytro RUDCHENKO, Brandt M. WESTING, Minwoo JEONG
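Publication 20230401486 and the related grant below describe feeding two non-touch sensor streams into a trained model that predicts a touch-based gesture, then adjusting the device's audio output level from that prediction. The sketch below is a hypothetical, heavily simplified stand-in: gesture_model is a hand-written placeholder for the trained machine learning model, and the sensor windows, gesture labels, and volume deltas are illustrative assumptions, not details from the filings.

```python
import random


def gesture_model(accel_sample: list[float], proximity_sample: list[float]) -> str:
    """Placeholder for a trained model that maps two non-touch sensor windows
    to a predicted touch-based gesture ('swipe_up', 'tap', or 'none')."""
    energy = sum(abs(v) for v in accel_sample) + sum(proximity_sample)
    if energy > 6.0:
        return "swipe_up"
    if energy > 3.0:
        return "tap"
    return "none"


def adjust_volume(current_level: float, gesture: str) -> float:
    """Translate the predicted gesture into an audio output level change, clamped to [0, 1]."""
    deltas = {"swipe_up": 0.1, "tap": 0.05, "none": 0.0}
    return min(1.0, max(0.0, current_level + deltas[gesture]))


# Usage: feed one window of readings from each non-touch sensor through the pipeline.
accel = [random.uniform(-1, 1) for _ in range(8)]      # e.g. an accelerometer window
proximity = [random.uniform(0, 1) for _ in range(8)]   # e.g. an optical/proximity window
gesture = gesture_model(accel, proximity)
volume = adjust_volume(0.5, gesture)
print(f"predicted gesture: {gesture}, new audio level: {volume:.2f}")
```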
  • Patent number: 11704592
    Abstract: The subject technology receives, from a first sensor of a device, first sensor output of a first type. The subject technology receives, from a second sensor of the device, second sensor output of a second type, the first and second sensors being non-touch sensors. The subject technology provides the first sensor output and the second sensor output as inputs to a machine learning model, the machine learning model having been trained to output a predicted touch-based gesture based on sensor output of the first type and sensor output of the second type. The subject technology provides a predicted touch-based gesture based on output from the machine learning model. Further, the subject technology adjusts an audio output level of the device based on the predicted gesture, where the device is an audio output device.
    Type: Grant
    Filed: July 23, 2020
    Date of Patent: July 18, 2023
    Assignee: Apple Inc.
    Inventors: Keith P. Avery, Jamil Dhanani, Harveen Kaur, Varun Maudgalya, Timothy S. Paek, Dmytro Rudchenko, Brandt M. Westing, Minwoo Jeong
  • Patent number: 11416136
    Abstract: The present disclosure generally relates to assigning tasks to various user inputs, and detecting and responding to user inputs. In some embodiments, the present disclosure relates to assigning tasks to various user inputs received on a back surface of a device, and detecting and responding to user inputs on the back surface of the device.
    Type: Grant
    Filed: February 4, 2021
    Date of Patent: August 16, 2022
    Assignee: Apple Inc.
    Inventors: John M. Nefulda, Keith P. Avery, Madhu Chinthakunta, Christopher B. Fleizach, Varun Maudgalya, Sommer E. Panage, Xinyi Yan, Garrett L. Weinberg, Michal K. Wegrzynski, William Caruso, Kenneth S. Friedman, Jamil Dhanani, Muhammad Amir Shafiq, Minwoo Jeong, Timothy S. Paek, Viet Huy Le, Heriberto Nieto, Brandt M. Westing, Rishabh Yadav
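Patent 11416136 covers assigning tasks to user inputs received on the back surface of a device and responding when those inputs are detected. The Python sketch below shows one hypothetical way such an assignment registry could look; the input names, example tasks, and registry API are invented for illustration and are not taken from the patent.

```python
from typing import Callable

# Hypothetical registry mapping back-surface inputs (e.g. a double or triple tap)
# to user-assigned tasks.
task_registry: dict[str, Callable[[], None]] = {}


def assign_task(back_input: str, task: Callable[[], None]) -> None:
    """Let the user assign a task to a particular back-surface input."""
    task_registry[back_input] = task


def on_back_surface_input(back_input: str) -> None:
    """Respond to a detected back-surface input by running the assigned task, if any."""
    task = task_registry.get(back_input)
    if task is not None:
        task()


# Usage: assign tasks, then simulate a detected double tap on the back of the device.
assign_task("double_tap", lambda: print("taking screenshot"))
assign_task("triple_tap", lambda: print("toggling accessibility shortcut"))
on_back_surface_input("double_tap")
```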
  • Patent number: 11175898
    Abstract: The subject technology receives a neural network model in a model format, the model format including information for a set of layers of the neural network model, each layer of the set of layers including a set of respective operations. The subject technology generates neural network (NN) code from the neural network model, the NN code being in a programming language distinct from the model format, and the NN code comprising a respective memory allocation for each respective layer of the set of layers of the neural network model, where the generating comprises determining the respective memory allocation for each respective layer based at least in part on a resource constraint of a target device. The subject technology compiles the NN code into a binary format. The subject technology generates a package for deploying the compiled NN code on the target device.
    Type: Grant
    Filed: September 25, 2019
    Date of Patent: November 16, 2021
    Assignee: Apple Inc.
    Inventors: Timothy S. Paek, Francesco Rossi, Jamil Dhanani, Keith P. Avery, Minwoo Jeong, Xiaojin Shi, Harveen Kaur, Brandt M. Westing
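Patent 11175898 (and the related application below) describes taking a neural network model in a model format, generating code with a per-layer memory allocation chosen under a target device's resource constraint, compiling that code to a binary, and packaging it for deployment. The following Python sketch walks through a toy version of that pipeline; the Layer and TargetDevice structures, the C-style buffer emission, and the byte-encoding "compiler" are all simplifying assumptions for illustration, not the patented method.

```python
from dataclasses import dataclass


@dataclass
class Layer:
    name: str
    output_elements: int  # number of activations this layer produces


@dataclass
class TargetDevice:
    name: str
    max_buffer_bytes: int  # resource constraint used when sizing per-layer buffers


def generate_nn_code(layers: list[Layer], device: TargetDevice,
                     bytes_per_element: int = 4) -> str:
    """Emit C-like source with one statically sized buffer per layer,
    checking each allocation against the target device's resource constraint."""
    lines = [f"/* generated for {device.name} */"]
    for layer in layers:
        allocation = layer.output_elements * bytes_per_element
        if allocation > device.max_buffer_bytes:
            raise ValueError(f"{layer.name} exceeds {device.name}'s buffer limit")
        lines.append(f"static unsigned char {layer.name}_buf[{allocation}];")
    return "\n".join(lines)


def compile_to_binary(source: str) -> bytes:
    """Stand-in 'compiler': here it simply encodes the generated source as bytes."""
    return source.encode("utf-8")


def package_for_deployment(binary: bytes, device: TargetDevice) -> dict:
    """Bundle the compiled artifact with deployment metadata for the target device."""
    return {"target": device.name, "size_bytes": len(binary), "binary": binary}


# Usage: model format -> generated code -> binary -> deployable package.
model = [Layer("conv1", 4096), Layer("fc1", 1024)]
device = TargetDevice("embedded_dsp", max_buffer_bytes=65536)
code = generate_nn_code(model, device)
package = package_for_deployment(compile_to_binary(code), device)
print(code)
print(package["target"], package["size_bytes"])
```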
  • Publication number: 20210027199
    Abstract: The subject technology receives, from a first sensor of a device, first sensor output of a first type. The subject technology receives, from a second sensor of the device, second sensor output of a second type, the first and second sensors being non-touch sensors. The subject technology provides the first sensor output and the second sensor output as inputs to a machine learning model, the machine learning model having been trained to output a predicted touch-based gesture based on sensor output of the first type and sensor output of the second type. The subject technology provides a predicted touch-based gesture based on output from the machine learning model. Further, the subject technology adjusts an audio output level of the device based on the predicted gesture, where the device is an audio output device.
    Type: Application
    Filed: July 23, 2020
    Publication date: January 28, 2021
    Inventors: Keith P. AVERY, Jamil DHANANI, Harveen KAUR, Varun MAUDGALYA, Timothy S. PAEK, Dmytro RUDCHENKO, Brandt M. WESTING, Minwoo JEONG
  • Publication number: 20200379740
    Abstract: The subject technology receives a neural network model in a model format, the model format including information for a set of layers of the neural network model, each layer of the set of layers including a set of respective operations. The subject technology generates neural network (NN) code from the neural network model, the NN code being in a programming language distinct from the model format, and the NN code comprising a respective memory allocation for each respective layer of the set of layers of the neural network model, where the generating comprises determining the respective memory allocation for each respective layer based at least in part on a resource constraint of a target device. The subject technology compiles the NN code into a binary format. The subject technology generates a package for deploying the compiled NN code on the target device.
    Type: Application
    Filed: September 25, 2019
    Publication date: December 3, 2020
    Inventors: Timothy S. PAEK, Francesco ROSSI, Jamil DHANANI, Keith P. AVERY, Minwoo JEONG, Xiaojin SHI, Harveen KAUR, Brandt M. WESTING