Patents by Inventor Aditya Sankar

Aditya Sankar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230194115
    Abstract: Changing environmental characteristics of an enclosure are controlled to promote health, wellness, and/or performance for occupant(s) of the enclosure using sensor data, three-dimensional modeling, physical properties of the enclosure, and machine learning (e.g., Artificial Intelligence).
    Type: Application
    Filed: May 21, 2021
    Publication date: June 22, 2023
    Inventors: Ajay Malik, Mingzhu Lu, Keivan Ebrahimi, Luis Miguel Candanedo Ibarra, Bhabani Sankar Nayak, Anurag Gupta, Nitesh Trikha, Brandon Dillan Tinianov, Aditya Dayal
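To make the control idea in this abstract concrete, here is a minimal Python sketch of one sensing-and-actuation step. The dataclass, the `predict_comfort` stand-in model, the thresholds, and the action names are all hypothetical illustrations, not the disclosed system, which additionally uses 3D modeling of the enclosure and trained ML models.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    """Hypothetical snapshot of the enclosure's environment."""
    temperature_c: float
    illuminance_lux: float
    co2_ppm: float

def predict_comfort(reading: SensorReading) -> float:
    """Stand-in for a learned model that scores occupant comfort on [0, 1].
    A real system would use a trained ML model plus a 3D model of the space."""
    score = 1.0
    score -= abs(reading.temperature_c - 21.0) * 0.05    # penalize deviation from setpoint
    score -= max(0.0, reading.co2_ppm - 800.0) * 0.0005  # penalize stale air
    return max(0.0, min(1.0, score))

def control_step(reading: SensorReading) -> dict:
    """One loop iteration: read sensors, score comfort, pick actions."""
    comfort = predict_comfort(reading)
    actions = {}
    if reading.illuminance_lux > 2000:
        actions["window_tint"] = "darken"    # reduce glare for occupants
    if reading.co2_ppm > 1000:
        actions["ventilation"] = "increase"  # promote wellness via fresh air
    return {"comfort": comfort, "actions": actions}

print(control_step(SensorReading(temperature_c=24.5, illuminance_lux=3200, co2_ppm=1150)))
```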
  • Publication number: 20230094061
    Abstract: Various implementations disclosed herein include devices, systems, and methods that provide measurements of objects based on a location of a surface of the objects. An exemplary process may include obtaining a three-dimensional (3D) representation of a physical environment that was generated based on depth data and light intensity image data, generating a 3D bounding box corresponding to an object in the physical environment based on the 3D representation, determining a class of the object based on the 3D semantic data, determining a location of a surface of the object based on the class of the object, the location determined by identifying a plane within the 3D bounding box having semantics in the 3D semantic data satisfying surface criteria for the object, and providing a measurement of the object, the measurement of the object determined based on the location of the surface of the object.
    Type: Application
    Filed: December 1, 2022
    Publication date: March 30, 2023
    Inventors: Amit Jain, Aditya Sankar, Qi Shan, Alexandre Da Veiga, Shreyas V. Joshi
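The plane-selection step in this abstract can be illustrated with a small sketch: scan horizontal slices of a 3D bounding box and keep the slice whose points best satisfy a semantic surface criterion. The label values, bin size, and "dominant label count" criterion are assumptions made for the example, not the patented method.

```python
import numpy as np

# Hypothetical semantic label per 3D point; a real system would get these
# from a semantic segmentation of the fused depth + RGB reconstruction.
TABLE_TOP = 1

def find_surface_height(points: np.ndarray, labels: np.ndarray,
                        surface_label: int, bin_size: float = 0.02) -> float:
    """Scan horizontal slices of a bounding box and return the height of the
    slice whose points best satisfy the surface criterion (here: the largest
    count of `surface_label` points), mirroring the plane-selection idea."""
    heights = points[:, 2]
    best_z, best_count = heights.max(), -1
    for z0 in np.arange(heights.min(), heights.max() + bin_size, bin_size):
        in_slice = (heights >= z0) & (heights < z0 + bin_size)
        count = int(np.sum(labels[in_slice] == surface_label))
        if count > best_count:
            best_z, best_count = z0, count
    return best_z

# Toy point cloud: a cluster of "table top" points near z = 0.75 m.
rng = np.random.default_rng(0)
pts = rng.uniform([0, 0, 0], [1, 1, 0.8], size=(500, 3))
lbl = np.where(np.abs(pts[:, 2] - 0.75) < 0.02, TABLE_TOP, 0)
print(f"estimated table height: {find_surface_height(pts, lbl, TABLE_TOP):.2f} m")
```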
  • Patent number: 11574485
    Abstract: Various implementations disclosed herein include devices, systems, and methods that obtain a three-dimensional (3D) representation of a physical environment that was generated based on depth data and light intensity image data, generate a 3D bounding box corresponding to an object in the physical environment based on the 3D representation, classify the object based on the 3D bounding box and the 3D semantic data, and display a measurement of the object, where the measurement of the object is determined using one of a plurality of class-specific neural networks selected based on the classifying of the object.
    Type: Grant
    Filed: January 14, 2021
    Date of Patent: February 7, 2023
    Assignee: Apple Inc.
    Inventors: Amit Jain, Aditya Sankar, Qi Shan, Alexandre Da Veiga, Shreyas V. Joshi
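The class-specific dispatch described above lends itself to a short sketch: classify first, then route the object's bounding box to a measurement routine chosen for that class. The classes, the routing table, and the per-class functions below are hypothetical placeholders for the trained class-specific neural networks the patent describes.

```python
# Hypothetical per-class "measurement networks", reduced to simple functions.

def measure_chair(bbox: dict) -> dict:
    # A chair-specific network might report seat height; here we fake it
    # as a fraction of the bounding-box height.
    return {"seat_height_m": bbox["height_m"] * 0.45}

def measure_table(bbox: dict) -> dict:
    # A table-specific network might report the tabletop height directly.
    return {"surface_height_m": bbox["height_m"]}

CLASS_SPECIFIC_MODELS = {"chair": measure_chair, "table": measure_table}

def measure_object(object_class: str, bbox: dict) -> dict:
    """Select one of several class-specific models based on the classification."""
    model = CLASS_SPECIFIC_MODELS.get(object_class)
    if model is None:
        return {"bbox_height_m": bbox["height_m"]}  # generic fallback
    return model(bbox)

print(measure_object("chair", {"height_m": 0.92}))
```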
  • Publication number: 20220262025
    Abstract: Various implementations disclosed herein include devices, systems, and methods that determine a wrist measurement or watch band size using depth data captured by a depth sensor from one or more rotational orientations of the wrist. In some implementations, depth data captured by a depth sensor including at least two depth map images of a wrist from different angles is obtained. In some implementations, an output is generated based on inputting the depth data into a machine learning model, the output corresponding to circumference of the wrist or a watch band size of the wrist. Then, a watch band size recommendation is provided based on the output.
    Type: Application
    Filed: February 14, 2022
    Publication date: August 18, 2022
    Inventors: Aditya Sankar, Qi Shan, Shreyas V. Joshi, David Guera Cobo, Fareeha Irfan, Bryan M. Perfetti
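A toy sketch of the pipeline in this abstract: depth maps of the wrist captured from different angles go into a model that outputs a circumference, which is then mapped to a band size. The `predict_circumference` stand-in and the size table are invented for illustration; the disclosure uses a trained machine-learning model on real depth-sensor captures.

```python
import numpy as np

# Hypothetical band-size table: (max circumference in mm, size label).
BAND_SIZES = [(150.0, "S"), (170.0, "M"), (190.0, "L")]

def predict_circumference(depth_maps: list) -> float:
    """Stand-in for the ML model: takes depth maps of the wrist from at least
    two rotational orientations and returns a circumference in mm. Here we
    just average a fake per-map estimate to keep the sketch self-contained."""
    estimates = [float(np.median(d)) * 1000.0 for d in depth_maps]
    return sum(estimates) / len(estimates)

def recommend_band(depth_maps: list) -> str:
    """Map the model output to a watch band size recommendation."""
    circumference_mm = predict_circumference(depth_maps)
    for limit_mm, size in BAND_SIZES:
        if circumference_mm <= limit_mm:
            return size
    return "XL"

# Two toy 8x8 "depth maps" from different rotational orientations of the wrist.
maps = [np.full((8, 8), 0.160), np.full((8, 8), 0.168)]
print(recommend_band(maps))  # -> "M" for this toy input
```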
  • Publication number: 20210248811
    Abstract: The subject technology provides a framework for learning neural scene representations directly from images, without three-dimensional (3D) supervision, by a machine-learning model. In the disclosed systems and methods, 3D structure can be imposed by ensuring that the learned representation transforms like a real 3D scene. For example, a loss function can be provided which enforces equivariance of the scene representation with respect to 3D rotations. Because naive tensor rotations may not be used to define models that are equivariant with respect to 3D rotations, a new operation called an invertible shear rotation is disclosed, which has the desired equivariance property. In some implementations, the model can be used to generate a 3D representation, such as a mesh, of an object from an image of the object.
    Type: Application
    Filed: January 8, 2021
    Publication date: August 12, 2021
    Inventors: Qi Shan, Joshua Susskind, Aditya Sankar, Robert Alex Colburn, Emilien Dupont, Miguel Angel Bautista Martin
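The equivariance loss mentioned in this abstract can be pictured with a toy example: the encoding of a rotated input should equal the rotation of the encoding. The sketch below uses a plain 2D rotation and an identity "encoder" purely for illustration; it does not implement the disclosed invertible shear rotation, which exists precisely because naive tensor rotations fall short in 3D.

```python
import numpy as np

def rotate2d(z: np.ndarray, theta: float) -> np.ndarray:
    """Rotate a 2D vector; the disclosure uses an invertible *shear* rotation
    on scene tensors, for which this plain rotation is only a stand-in."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]]) @ z

def encoder(image: np.ndarray) -> np.ndarray:
    """Toy 'encoder': our fake images are already 2-vectors."""
    return image

def equivariance_loss(image: np.ndarray, theta: float) -> float:
    """Penalize encode(rotate(x)) != rotate(encode(x)): the latent must
    transform like the scene does, imposing geometric structure on the
    representation without explicit 3D supervision."""
    rot_of_latent = rotate2d(encoder(image), theta)
    latent_of_rot = encoder(rotate2d(image, theta))
    return float(np.sum((rot_of_latent - latent_of_rot) ** 2))

# The identity encoder is trivially equivariant, so the loss is zero.
print(equivariance_loss(np.array([1.0, 0.0]), np.pi / 4))
```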
  • Publication number: 20210224516
    Abstract: Various implementations disclosed herein include devices, systems, and methods that obtain a three-dimensional (3D) representation of a physical environment that was generated based on depth data and light intensity image data, generate a 3D bounding box corresponding to an object in the physical environment based on the 3D representation, classify the object based on the 3D bounding box and the 3D semantic data, and display a measurement of the object, where the measurement of the object is determined using one of a plurality of class-specific neural networks selected based on the classifying of the object.
    Type: Application
    Filed: January 14, 2021
    Publication date: July 22, 2021
    Inventors: Amit Jain, Aditya Sankar, Qi Shan, Alexandre Da Veiga, Shreyas V. Joshi
  • Patent number: 10409836
    Abstract: The subject disclosure is directed towards a sensor fusion interface that enables interaction between one or more entities of a physical environment and a computerized device component. A plurality of sensor modules generate multiple sensor input data associated with one or more entities in an environment and store such data in a shared library in accordance with a uniform and common schema. The multiple sensor input data is refined until a certain level of accuracy is achieved. Using the sensor fusion interface, entity state data is extracted from the shared library and exposed to the computerized device component.
    Type: Grant
    Filed: June 20, 2016
    Date of Patent: September 10, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Aditya Sankar, William Lawrence Portnoy
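A compact sketch of the sensor fusion interface idea: sensor modules write observations into a shared store under one common schema, and entity state is exposed to device components only once a confidence bar is met. The field names, the confidence rule, and the "best observation wins" fusion are all assumptions for the example, not the patented design.

```python
from dataclasses import dataclass

@dataclass
class EntityObservation:
    """Uniform schema every sensor module writes into the shared library.
    Field names are hypothetical; the patent only requires a common schema."""
    entity_id: str
    position: tuple
    confidence: float
    source: str  # e.g. "camera", "depth", "microphone"

class SharedLibrary:
    """Shared store that accumulates observations until a confidence bar is met."""

    def __init__(self, min_confidence: float = 0.8):
        self.min_confidence = min_confidence
        self._observations = {}  # entity_id -> list[EntityObservation]

    def add(self, obs: EntityObservation) -> None:
        self._observations.setdefault(obs.entity_id, []).append(obs)

    def entity_state(self, entity_id: str):
        """Sensor fusion interface: expose fused entity state only once the
        confidence is high enough (here, the best single observation wins)."""
        obs = self._observations.get(entity_id, [])
        best = max(obs, key=lambda o: o.confidence, default=None)
        if best is None or best.confidence < self.min_confidence:
            return None  # keep refining before exposing to device components
        return {"entity_id": entity_id, "position": best.position}

lib = SharedLibrary()
lib.add(EntityObservation("person-1", (1.0, 2.0), 0.6, "camera"))
lib.add(EntityObservation("person-1", (1.1, 2.0), 0.9, "depth"))
print(lib.entity_state("person-1"))
```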
  • Patent number: 9888215
    Abstract: An indoor scene capture system is provided that, using a handheld device with a camera, collects videos of rooms, spatially indexes the frames of the videos, marks doorways between rooms, and collects videos of transitions from room to room via doorways. The indoor scene capture system may assign a direction to at least some of the frames based on the angle of rotation as determined by an inertial sensor (e.g., gyroscope) of the handheld device. The indoor scene capture system marks doorways within the frames of the videos. For each doorway between rooms, the indoor scene capture system collects a video of transitioning through the doorway as the camera moves from the point within a room through the doorway to a point within the adjoining room.
    Type: Grant
    Filed: April 25, 2014
    Date of Patent: February 6, 2018
    Assignee: University of Washington
    Inventors: Aditya Sankar, Steven Maxwell Seitz
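The frame-direction step in this abstract can be sketched by integrating gyroscope yaw-rate samples between frames so each frame gets a heading, which is how frames can be spatially indexed around a room. The sample rate, the simple Euler integration, and the values below are illustrative only, not the patented pipeline.

```python
def assign_frame_directions(gyro_yaw_rates_dps, frame_interval_s=1.0 / 30.0):
    """Return a heading in degrees for each frame, given one gyroscope
    yaw-rate sample (degrees/second) per frame, via Euler integration."""
    heading = 0.0
    headings = []
    for rate in gyro_yaw_rates_dps:
        heading = (heading + rate * frame_interval_s) % 360.0
        headings.append(heading)
    return headings

# A camera panning right at ~90 deg/s for one second of 30 fps video.
rates = [90.0] * 30
directions = assign_frame_directions(rates)
print(f"first frame: {directions[0]:.1f} deg, last frame: {directions[-1]:.1f} deg")
```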
  • Publication number: 20160299959
    Abstract: The subject disclosure is directed towards a sensor fusion interface that enables interaction between one or more entities of a physical environment and a computerized device component. A plurality of sensor modules generate multiple sensor input data associated with one or more entities in an environment and store such data in a shared library in accordance with a uniform and common schema. The multiple sensor input data is refined until a certain level of accuracy is achieved. Using the sensor fusion interface, entity state data is extracted from the shared library and exposed to the computerized device component.
    Type: Application
    Filed: June 20, 2016
    Publication date: October 13, 2016
    Applicant: Microsoft Corporation
    Inventors: Aditya Sankar, William Lawrence Portnoy
  • Patent number: 9389681
    Abstract: The subject disclosure is directed towards a sensor fusion interface that enables interaction between one or more entities of a physical environment and a computerized device component. A plurality of sensor modules generate multiple sensor input data associated with one or more entities in an environment and store such data in a shared library in accordance with a uniform and common schema. The multiple sensor input data is refined until a certain level of accuracy is achieved. Using the sensor fusion interface, entity state data is extracted from the shared library and exposed to the computerized device component.
    Type: Grant
    Filed: December 19, 2011
    Date of Patent: July 12, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Aditya Sankar, William Lawrence Portnoy
  • Patent number: 9092437
    Abstract: “Experience Streams” (ESs) are used by a “rich interactive narrative” (RIN) data model as basic building blocks that are combined in a variety of ways to enable or construct a large number of RIN scenarios for presenting interactive narratives to the user. In general, various ES types contain all the information required to define and populate a particular RIN, as well as the information (in the form of a series of navigable states) that charts an animated and interactive course through each RIN. In other words, combinations of various ESs provide a scripted path through a RIN environment, as well as various UI controls and/or toolbars that enable user interaction with the interactive narrative provided by each RIN. Example ESs include, but are not limited to, content browser experience streams, zoomable media experience streams, relationship graph experience streams, player-controls/toolbar experience streams, etc.
    Type: Grant
    Filed: January 18, 2011
    Date of Patent: July 28, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Joseph M. Joy, Narendranath Datha, Eric J. Stollnitz, Aditya Sankar, Vinay Krishnaswamy, Sujith Radhakrishnan Warrier, Kanchen Rajanna, Tanuja Abhay Joshi
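A minimal data-structure sketch of the experience stream idea: a typed stream carrying a series of navigable states that chart a scripted path through the narrative. The field names and the `state_at` lookup are hypothetical, chosen only to echo the abstract's "series of navigable states".

```python
from dataclasses import dataclass

@dataclass
class NavigableState:
    """One point along an experience stream's scripted path."""
    time_s: float
    viewport: str  # e.g. which region of a zoomable image is on screen

@dataclass
class ExperienceStream:
    """Basic RIN building block: a typed stream plus its navigable states."""
    stream_type: str  # e.g. "zoomable-media", "content-browser"
    states: list

    def state_at(self, t: float) -> NavigableState:
        """Return the most recent state at time t: following the states in
        order plays back the scripted path through the RIN environment."""
        past = [s for s in self.states if s.time_s <= t]
        return past[-1] if past else self.states[0]

# One stream type charting its own interactive course through the narrative.
zoom = ExperienceStream("zoomable-media",
                        [NavigableState(0.0, "full image"),
                         NavigableState(5.0, "detail: top-left")])
print(zoom.state_at(6.0))
```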
  • Publication number: 20140320661
    Abstract: An indoor scene capture system is provided that, using a handheld device with a camera, collects videos of rooms, spatially indexes the frames of the videos, marks doorways between rooms, and collects videos of transitions from room to room via doorways. The indoor scene capture system may assign a direction to at least some of the frames based on the angle of rotation as determined by an inertial sensor (e.g., gyroscope) of the handheld device. The indoor scene capture system marks doorways within the frames of the videos. For each doorway between rooms, the indoor scene capture system collects a video of transitioning through the doorway as the camera moves from the point within a room through the doorway to a point within the adjoining room.
    Type: Application
    Filed: April 25, 2014
    Publication date: October 30, 2014
    Applicant: University of Washington through its Center for Commercialization
    Inventors: Aditya Sankar, Steven Maxwell Seitz
  • Publication number: 20130159350
    Abstract: The subject disclosure is directed towards a sensor fusion interface that enables interaction between one or more entities of a physical environment and a computerized device component. A plurality of sensor modules generate multiple sensor input data associated with one or more entities in an environment and store such data in a shared library in accordance with a uniform and common schema. The multiple sensor input data is refined until a certain level of accuracy is achieved. Using the sensor fusion interface, entity state data is extracted from the shared library and exposed to the computerized device component.
    Type: Application
    Filed: December 19, 2011
    Publication date: June 20, 2013
    Applicant: Microsoft Corporation
    Inventors: Aditya Sankar, William Lawrence Portnoy
  • Patent number: 8046691
    Abstract: A multimedia system specifies a multimedia experience stream by a narrative definition that defines a narrative presentation having sub-narratives. Each sub-narrative may incorporate multiple streams of different types of media with each stream defining a “path” through content of that media type. The multimedia system directs the presentation of the sub-narratives by interfacing with presentation components for each media type through a custom interface component for that media type that implements a common application interface. When a user pauses a presentation, the user can manually navigate around the content of a stream from the current location at the time of the pause to another location. When the user resumes the presentation, the multimedia system automatically transitions from the navigated-to location to the current location at the time of the pause to resume the presentation from where it was paused.
    Type: Grant
    Filed: December 31, 2008
    Date of Patent: October 25, 2011
    Assignee: Microsoft Corporation
    Inventors: Aditya Sankar, Archana Prasad, Narendranath D. Govindachetty, Joseph Joy
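The pause/navigate/resume behavior described above reduces to a small state machine, sketched below. The `location` abstraction and the printed "transition" are simplifications invented for the example; the actual system animates the transition through each media type's presentation component behind a common application interface.

```python
class NarrativePlayer:
    """Toy sketch of the pause/navigate/resume behavior in the abstract."""

    def __init__(self):
        self.location = 0.0   # current position along the stream's "path"
        self.paused_at = None

    def pause(self):
        """Remember where playback stopped so we can return to it later."""
        self.paused_at = self.location

    def navigate_to(self, location: float):
        """While paused, the user may browse the stream's content freely."""
        if self.paused_at is None:
            raise RuntimeError("manual navigation is only allowed while paused")
        self.location = location

    def resume(self):
        """Transition from the navigated-to location back to the pause point,
        then continue the presentation from where it left off."""
        print(f"transitioning {self.location} -> {self.paused_at}")
        self.location = self.paused_at
        self.paused_at = None

player = NarrativePlayer()
player.pause()
player.navigate_to(42.0)
player.resume()  # prints: transitioning 42.0 -> 0.0
```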
  • Publication number: 20110113334
    Abstract: “Experience Streams” (ESs) are used by a “rich interactive narrative” (RIN) data model as basic building blocks that are combined in a variety of ways to enable or construct a large number of RIN scenarios for presenting interactive narratives to the user. In general, various ES types contain all the information required to define and populate a particular RIN, as well as the information (in the form of a series of navigable states) that charts an animated and interactive course through each RIN. In other words, combinations of various ESs provide a scripted path through a RIN environment, as well as various UI controls and/or toolbars that enable user interaction with the interactive narrative provided by each RIN. Example ESs include, but are not limited to, content browser experience streams, zoomable media experience streams, relationship graph experience streams, player-controls/toolbar experience streams, etc.
    Type: Application
    Filed: January 18, 2011
    Publication date: May 12, 2011
    Applicant: Microsoft Corporation
    Inventors: Joseph M. Joy, Narendranath Datha, Eric J. Stollnitz, Aditya Sankar, Vinay Krishnaswamy, Sujith Radhakrishnan Warrier, Kanchen Rajanna, Tanuja Abhay Joshi
  • Publication number: 20100169776
    Abstract: A multimedia system specifies a multimedia experience stream by a narrative definition that defines a narrative presentation having sub-narratives. Each sub-narrative may incorporate multiple streams of different types of media with each stream defining a “path” through content of that media type. The multimedia system directs the presentation of the sub-narratives by interfacing with presentation components for each media type through a custom interface component for that media type that implements a common application interface. When a user pauses a presentation, the user can manually navigate around the content of a stream from the current location at the time of the pause to another location. When the user resumes the presentation, the multimedia system automatically transitions from the navigated-to location to the current location at the time of the pause to resume the presentation from where it was paused.
    Type: Application
    Filed: December 31, 2008
    Publication date: July 1, 2010
    Applicant: Microsoft Corporation
    Inventors: Aditya Sankar, Archana Prasad, Narendranath D. Govindachetty, Joseph Joy