Patents by Inventor Aditya Sankar
Aditya Sankar has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230194115
Abstract: Changing environmental characteristics of an enclosure are controlled to promote health, wellness, and/or performance for occupant(s) of the enclosure using sensor data, three-dimensional modeling, physical properties of the enclosure, and machine learning (e.g., artificial intelligence).
Type: Application
Filed: May 21, 2021
Publication date: June 22, 2023
Inventors: Ajay Malik, Mingzhu Lu, Keivan Ebrahimi, Luis Miguel Candanedo Ibarra, Bhabani Sankar Nayak, Anurag Gupta, Nitesh Trikha, Brandon Dillan Tinianov, Aditya Dayal
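As a rough illustration of the control pattern this abstract describes (and not the claimed system), the sketch below scores candidate environmental setpoints with a stand-in learned comfort model; all names (comfort_model, choose_setpoint) and thresholds are hypothetical.

```python
# Minimal sketch, assuming a trained comfort model exists: pick the setpoint
# that a learned model scores highest for current sensor readings.
# Everything here is illustrative, not the patented method.

def comfort_model(temp_c: float, lux: float, co2_ppm: float) -> float:
    """Stand-in for a trained model: higher score = better for occupants."""
    return -abs(temp_c - 21.5) - abs(lux - 500) / 200 - max(co2_ppm - 800, 0) / 400

def choose_setpoint(lux: float, co2_ppm: float, candidates=range(18, 26)) -> int:
    """Evaluate candidate temperature setpoints against current sensor data."""
    return max(candidates, key=lambda t: comfort_model(t, lux, co2_ppm))

if __name__ == "__main__":
    print(choose_setpoint(lux=350.0, co2_ppm=950.0))  # e.g. 21
```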
-
Publication number: 20230094061
Abstract: Various implementations disclosed herein include devices, systems, and methods that provide measurements of objects based on a location of a surface of the objects. An exemplary process may include obtaining a three-dimensional (3D) representation of a physical environment that was generated based on depth data and light intensity image data, generating a 3D bounding box corresponding to an object in the physical environment based on the 3D representation, determining a class of the object based on the 3D semantic data, determining a location of a surface of the object based on the class of the object, the location determined by identifying a plane within the 3D bounding box having semantics in the 3D semantic data satisfying surface criteria for the object, and providing a measurement of the object, the measurement of the object determined based on the location of the surface of the object.
Type: Application
Filed: December 1, 2022
Publication date: March 30, 2023
Inventors: Amit Jain, Aditya Sankar, Qi Shan, Alexandre Da Veiga, Shreyas V. Joshi
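To make the surface-location idea concrete, here is a minimal numpy sketch of one way to find a labeled plane inside an object's bounding box and report its height as a measurement; the height-histogram heuristic, the semantic labels, and the synthetic data are illustrative assumptions, not the disclosed implementation.

```python
# Hedged sketch: among points inside an object's 3D bounding box, keep those
# whose semantic label satisfies the class's surface criteria (e.g. "table top"),
# then take the densest height band as the surface plane's location.
import numpy as np

def surface_height(points: np.ndarray, labels: np.ndarray, surface_label: int) -> float:
    """points: (N, 3) xyz inside the bounding box; labels: (N,) semantic ids."""
    surf = points[labels == surface_label]
    if surf.size == 0:
        raise ValueError("no points satisfy the surface criteria for this class")
    # Histogram the point heights; the densest bin approximates the plane.
    counts, edges = np.histogram(surf[:, 1], bins=20)
    i = int(np.argmax(counts))
    return float((edges[i] + edges[i + 1]) / 2)

# Synthetic "table": a top near y = 0.74 m plus unlabeled leg points.
rng = np.random.default_rng(0)
top = np.c_[rng.random(200), 0.74 + rng.normal(0, 0.005, 200), rng.random(200)]
legs = np.c_[rng.random(50), rng.random(50) * 0.70, rng.random(50)]
pts, lbl = np.vstack([top, legs]), np.r_[np.full(200, 1), np.full(50, 0)]
print(round(surface_height(pts, lbl, surface_label=1), 2))  # ~0.74
```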
-
Patent number: 11574485
Abstract: Various implementations disclosed herein include devices, systems, and methods that obtain a three-dimensional (3D) representation of a physical environment that was generated based on depth data and light intensity image data, generate a 3D bounding box corresponding to an object in the physical environment based on the 3D representation, classify the object based on the 3D bounding box and the 3D semantic data, and display a measurement of the object, where the measurement of the object is determined using one of a plurality of class-specific neural networks selected based on the classifying of the object.
Type: Grant
Filed: January 14, 2021
Date of Patent: February 7, 2023
Assignee: Apple Inc.
Inventors: Amit Jain, Aditya Sankar, Qi Shan, Alexandre Da Veiga, Shreyas V. Joshi
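The dispatch pattern in this claim (classify, then route to a class-specific measurement model) can be sketched as below; the per-class "heads" here are stand-in functions rather than trained neural networks, and all names are hypothetical.

```python
# Hedged sketch of class-conditional measurement dispatch: the object's class
# selects which measurement model runs on its bounding-box dimensions.

def measure_table(dims):   # class-specific head: tables report surface height
    return {"height_m": dims[1]}

def measure_tv(dims):      # class-specific head: TVs report screen diagonal
    return {"diagonal_m": (dims[0] ** 2 + dims[1] ** 2) ** 0.5}

CLASS_HEADS = {"table": measure_table, "tv": measure_tv}

def measure(obj_class: str, bbox_dims: tuple) -> dict:
    """Select the class-specific measurement model and apply it."""
    head = CLASS_HEADS.get(obj_class)
    if head is None:
        raise KeyError(f"no measurement head for class {obj_class!r}")
    return head(bbox_dims)

print(measure("tv", (1.2, 0.7, 0.05)))  # {'diagonal_m': ~1.39}
```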
-
Publication number: 20220262025
Abstract: Various implementations disclosed herein include devices, systems, and methods that determine a wrist measurement or watch band size using depth data captured by a depth sensor from one or more rotational orientations of the wrist. In some implementations, depth data captured by a depth sensor including at least two depth map images of a wrist from different angles is obtained. In some implementations, an output is generated based on inputting the depth data into a machine learning model, the output corresponding to a circumference of the wrist or a watch band size of the wrist. Then, a watch band size recommendation is provided based on the output.
Type: Application
Filed: February 14, 2022
Publication date: August 18, 2022
Inventors: Aditya Sankar, Qi Shan, Shreyas V. Joshi, David Guera Cobo, Fareeha Irfan, Bryan M. Perfetti
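An illustrative end-to-end sketch of the pipeline shape (two depth views in, circumference out, band size recommended) follows; the feature extraction, the ellipse-perimeter stand-in for the learned model, and the size table are all assumptions for demonstration, not Apple's pipeline.

```python
# Hedged sketch: crude features from two wrist depth maps feed a stand-in
# circumference regressor, whose output maps to a band size.
import numpy as np

BAND_SIZES = [(0.150, "S"), (0.170, "M"), (0.190, "L")]  # max circumference (m)

def wrist_features(depth_a: np.ndarray, depth_b: np.ndarray) -> np.ndarray:
    """Per-view depth spread as a crude proxy for wrist extent at each angle."""
    return np.array([np.ptp(depth_a), np.ptp(depth_b)])

def predict_circumference(features: np.ndarray) -> float:
    """Stand-in for the learned model: Ramanujan's ellipse-perimeter formula."""
    a, b = features / 2
    return float(np.pi * (3 * (a + b) - np.sqrt((3 * a + b) * (a + 3 * b))))

def recommend_band(depth_a: np.ndarray, depth_b: np.ndarray) -> str:
    c = predict_circumference(wrist_features(depth_a, depth_b))
    return next((size for limit, size in BAND_SIZES if c <= limit), "XL")

# Two fake depth maps whose spread stands in for wrist extent per view.
rng = np.random.default_rng(1)
a = rng.uniform(0.300, 0.355, (64, 64))
b = rng.uniform(0.300, 0.345, (64, 64))
print(recommend_band(a, b))  # e.g. "M"
```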
-
Publication number: 20210248811
Abstract: The subject technology provides a framework for learning neural scene representations directly from images, without three-dimensional (3D) supervision, by a machine-learning model. In the disclosed systems and methods, 3D structure can be imposed by ensuring that the learned representation transforms like a real 3D scene. For example, a loss function can be provided which enforces equivariance of the scene representation with respect to 3D rotations. Because naive tensor rotations may not be used to define models that are equivariant with respect to 3D rotations, a new operation called an invertible shear rotation is disclosed, which has the desired equivariance property. In some implementations, the model can be used to generate a 3D representation, such as a mesh, of an object from an image of the object.
Type: Application
Filed: January 8, 2021
Publication date: August 12, 2021
Inventors: Qi Shan, Joshua Susskind, Aditya Sankar, Robert Alex Colburn, Emilien Dupont, Miguel Angel Bautista Martin
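The equivariance constraint itself is easy to sketch: the loss penalizes the gap between "rotate the input, then encode" and "encode, then rotate the latent." The toy encoder and the 90-degree latent rotation below are placeholders (the disclosure's invertible shear rotation would take the place of latent_rotate); this is not the paper's code.

```python
# Hedged sketch of a rotation-equivariance loss:
#   || encode(rotate(x)) - rotate_latent(encode(x)) ||^2
import numpy as np

def encoder(image: np.ndarray) -> np.ndarray:
    """Placeholder encoder: any map from images to a latent tensor."""
    return image.mean(axis=-1)                      # (H, W) latent

def latent_rotate(latent: np.ndarray, k: int) -> np.ndarray:
    """Placeholder latent-space rotation (90-degree steps for simplicity)."""
    return np.rot90(latent, k)

def equivariance_loss(image: np.ndarray, k: int) -> float:
    lhs = encoder(np.rot90(image, k, axes=(0, 1)))  # rotate, then encode
    rhs = latent_rotate(encoder(image), k)          # encode, then rotate
    return float(np.mean((lhs - rhs) ** 2))

img = np.random.rand(32, 32, 3)
print(equivariance_loss(img, k=1))  # 0.0: this toy encoder happens to be equivariant
```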
-
Publication number: 20210224516
Abstract: Various implementations disclosed herein include devices, systems, and methods that obtain a three-dimensional (3D) representation of a physical environment that was generated based on depth data and light intensity image data, generate a 3D bounding box corresponding to an object in the physical environment based on the 3D representation, classify the object based on the 3D bounding box and the 3D semantic data, and display a measurement of the object, where the measurement of the object is determined using one of a plurality of class-specific neural networks selected based on the classifying of the object.
Type: Application
Filed: January 14, 2021
Publication date: July 22, 2021
Inventors: Amit Jain, Aditya Sankar, Qi Shan, Alexandre Da Veiga, Shreyas V. Joshi
-
Patent number: 10409836
Abstract: The subject disclosure is directed towards a sensor fusion interface that enables interaction between one or more entities of a physical environment and a computerized device component. A plurality of sensor modules generate multiple sensor input data associated with one or more entities in an environment and store such data in a shared library in accordance with a uniform and common schema. The multiple sensor input data is refined until a certain level of accuracy is achieved. Using the sensor fusion interface, entity state data is extracted from the shared library and exposed to the computerized device component.
Type: Grant
Filed: June 20, 2016
Date of Patent: September 10, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Aditya Sankar, William Lawrence Portnoy
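A minimal sketch of this pattern appears below: sensor modules write observations into a shared store under one common schema, readings are refined by a confidence threshold, and a fusion interface exposes consolidated per-entity state. The schema, class, and method names are illustrative assumptions, not Microsoft's actual interface.

```python
# Hedged sketch of a sensor fusion store with a uniform schema:
# every record is (entity, field, value, confidence).
from collections import defaultdict
from statistics import mean

class SharedLibrary:
    def __init__(self):
        self._records = defaultdict(lambda: defaultdict(list))

    def write(self, entity: str, field: str, value: float, confidence: float):
        """Sensor modules append readings under the common schema."""
        self._records[entity][field].append((value, confidence))

    def entity_state(self, entity: str, min_conf: float = 0.5) -> dict:
        """Refine: keep readings above the confidence threshold, then average."""
        state = {}
        for field, readings in self._records[entity].items():
            kept = [v for v, c in readings if c >= min_conf]
            if kept:
                state[field] = mean(kept)
        return state

lib = SharedLibrary()
lib.write("person_1", "x_m", 2.10, confidence=0.9)  # e.g. from a depth camera
lib.write("person_1", "x_m", 2.04, confidence=0.8)  # e.g. from a microphone array
lib.write("person_1", "x_m", 5.00, confidence=0.2)  # low-confidence outlier, dropped
print(lib.entity_state("person_1"))                 # x_m ≈ 2.07
```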
-
Patent number: 9888215
Abstract: An indoor scene capture system is provided that, with a handheld device with a camera, collects videos of rooms, spatially indexes the frames of the videos, marks doorways between rooms, and collects videos of transitions from room to room via doorways. The indoor scene capture system may assign a direction to at least some of the frames based on the angle of rotation as determined by an inertial sensor (e.g., gyroscope) of the handheld device. The indoor scene capture system marks doorways within the frames of the videos. For each doorway between rooms, the indoor scene capture system collects a video of transitioning through the doorway as the camera moves from a point within a room through the doorway to a point within the adjoining room.
Type: Grant
Filed: April 25, 2014
Date of Patent: February 6, 2018
Assignee: University of Washington
Inventors: Aditya Sankar, Steven Maxwell Seitz
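One detail of this abstract, assigning each frame a direction from the gyroscope's angle of rotation, can be sketched as a simple integration of yaw rate over frame time; the per-frame rates, timestep, and rectangular integration are illustrative assumptions.

```python
# Hedged sketch: derive a heading for each video frame by integrating
# gyroscope yaw rate, wrapping to [0, 360).

def frame_headings(yaw_rates_dps, dt_s: float, start_deg: float = 0.0):
    """yaw_rates_dps: per-frame yaw rate in degrees/second from the gyroscope."""
    heading, headings = start_deg, []
    for rate in yaw_rates_dps:
        heading = (heading + rate * dt_s) % 360.0
        headings.append(heading)
    return headings

# A camera panning right at 30 deg/s for 10 frames at 30 fps:
print(frame_headings([30.0] * 10, dt_s=1 / 30))  # 1.0, 2.0, ..., 10.0
```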
-
Publication number: 20160299959
Abstract: The subject disclosure is directed towards a sensor fusion interface that enables interaction between one or more entities of a physical environment and a computerized device component. A plurality of sensor modules generate multiple sensor input data associated with one or more entities in an environment and store such data in a shared library in accordance with a uniform and common schema. The multiple sensor input data is refined until a certain level of accuracy is achieved. Using the sensor fusion interface, entity state data is extracted from the shared library and exposed to the computerized device component.
Type: Application
Filed: June 20, 2016
Publication date: October 13, 2016
Applicant: Microsoft Corporation
Inventors: Aditya Sankar, William Lawrence Portnoy
-
Patent number: 9389681
Abstract: The subject disclosure is directed towards a sensor fusion interface that enables interaction between one or more entities of a physical environment and a computerized device component. A plurality of sensor modules generate multiple sensor input data associated with one or more entities in an environment and store such data in a shared library in accordance with a uniform and common schema. The multiple sensor input data is refined until a certain level of accuracy is achieved. Using the sensor fusion interface, entity state data is extracted from the shared library and exposed to the computerized device component.
Type: Grant
Filed: December 19, 2011
Date of Patent: July 12, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Aditya Sankar, William Lawrence Portnoy
-
Patent number: 9092437
Abstract: “Experience Streams” (ESs) are used by a “rich interactive narrative” (RIN) data model as basic building blocks that are combined in a variety of ways to enable or construct a large number of RIN scenarios for presenting interactive narratives to the user. In general, various ES types contain all the information required to define and populate a particular RIN, as well as the information (in the form of a series of navigable states) that charts an animated and interactive course through each RIN. In other words, combinations of various ESs provide a scripted path through a RIN environment, as well as various UI controls and/or toolbars that enable user interaction with the interactive narrative provided by each RIN. Example ESs include, but are not limited to, content browser experience streams, zoomable media experience streams, relationship graph experience streams, player-controls/toolbar experience streams, etc.
Type: Grant
Filed: January 18, 2011
Date of Patent: July 28, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Joseph M. Joy, Narendranath Datha, Eric J. Stollnitz, Aditya Sankar, Vinay Krishnaswamy, Sujith Radhakrishnan Warrier, Kanchen Rajanna, Tanuja Abhay Joshi
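The shape of the data model, a narrative composed of typed experience streams, each a series of navigable states walked in script order, can be sketched as below; the field names and the toy player are illustrative, not the RIN schema itself.

```python
# Hedged sketch of a RIN-like data model: typed experience streams, each a
# scripted sequence of navigable states, stepped through by a simple player.
from dataclasses import dataclass, field

@dataclass
class ExperienceStream:
    kind: str                                          # e.g. "zoomable-media"
    states: list = field(default_factory=list)         # navigable states, in order

@dataclass
class RinNarrative:
    title: str
    streams: list = field(default_factory=list)

    def play(self):
        """Walk every stream's scripted path; a real player would interleave them."""
        for s in self.streams:
            for state in s.states:
                yield s.kind, state

rin = RinNarrative("demo", [
    ExperienceStream("zoomable-media", [{"zoom": 1.0}, {"zoom": 2.5}]),
    ExperienceStream("player-controls", [{"toolbar": "visible"}]),
])
for kind, state in rin.play():
    print(kind, state)
```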
-
Publication number: 20140320661
Abstract: An indoor scene capture system is provided that, with a handheld device with a camera, collects videos of rooms, spatially indexes the frames of the videos, marks doorways between rooms, and collects videos of transitions from room to room via doorways. The indoor scene capture system may assign a direction to at least some of the frames based on the angle of rotation as determined by an inertial sensor (e.g., gyroscope) of the handheld device. The indoor scene capture system marks doorways within the frames of the videos. For each doorway between rooms, the indoor scene capture system collects a video of transitioning through the doorway as the camera moves from a point within a room through the doorway to a point within the adjoining room.
Type: Application
Filed: April 25, 2014
Publication date: October 30, 2014
Applicant: University of Washington through its Center for Commercialization
Inventors: Aditya Sankar, Steven Maxwell Seitz
-
Publication number: 20130159350
Abstract: The subject disclosure is directed towards a sensor fusion interface that enables interaction between one or more entities of a physical environment and a computerized device component. A plurality of sensor modules generate multiple sensor input data associated with one or more entities in an environment and store such data in a shared library in accordance with a uniform and common schema. The multiple sensor input data is refined until a certain level of accuracy is achieved. Using the sensor fusion interface, entity state data is extracted from the shared library and exposed to the computerized device component.
Type: Application
Filed: December 19, 2011
Publication date: June 20, 2013
Applicant: Microsoft Corporation
Inventors: Aditya Sankar, William Lawrence Portnoy
-
Patent number: 8046691
Abstract: A multimedia system specifies a multimedia experience stream by a narrative definition that defines a narrative presentation having sub-narratives. Each sub-narrative may incorporate multiple streams of different types of media with each stream defining a “path” through content of that media type. The multimedia system directs the presentation of the sub-narratives by interfacing with presentation components for each media type through a custom interface component for that media type that implements a common application interface. When a user pauses a presentation, the user can manually navigate around the content of a stream from the current location at the time of the pause to another location. When the user resumes the presentation, the multimedia system automatically transitions from the navigated-to location to the current location at the time of the pause to resume the presentation from where it was paused.
Type: Grant
Filed: December 31, 2008
Date of Patent: October 25, 2011
Assignee: Microsoft Corporation
Inventors: Aditya Sankar, Archana Prasad, Narendranath D. Govindachetty, Joseph Joy
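The pause/navigate/resume behavior in the last two sentences can be sketched directly; the player class, its method names, and the linear easing are hypothetical stand-ins for whatever transition the system actually uses.

```python
# Hedged sketch: the player remembers where it was paused; resuming first
# transitions from the navigated-to location back to the pause point.

class StreamPlayer:
    def __init__(self):
        self.location = 0.0        # current position along the stream's path
        self._paused_at = None

    def pause(self):
        self._paused_at = self.location

    def navigate(self, location: float):
        self.location = location   # manual browsing while paused

    def resume(self, steps: int = 5):
        """Animate from the navigated-to spot back to the pause point, then play."""
        start, end = self.location, self._paused_at
        for i in range(1, steps + 1):
            self.location = start + (end - start) * i / steps
            print(f"transition -> {self.location:.2f}")
        self._paused_at = None

p = StreamPlayer()
p.location = 12.0
p.pause(); p.navigate(40.0); p.resume()  # eases from 40.0 back to 12.0
```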
-
Publication number: 20110113334
Abstract: “Experience Streams” (ESs) are used by a “rich interactive narrative” (RIN) data model as basic building blocks that are combined in a variety of ways to enable or construct a large number of RIN scenarios for presenting interactive narratives to the user. In general, various ES types contain all the information required to define and populate a particular RIN, as well as the information (in the form of a series of navigable states) that charts an animated and interactive course through each RIN. In other words, combinations of various ESs provide a scripted path through a RIN environment, as well as various UI controls and/or toolbars that enable user interaction with the interactive narrative provided by each RIN. Example ESs include, but are not limited to, content browser experience streams, zoomable media experience streams, relationship graph experience streams, player-controls/toolbar experience streams, etc.
Type: Application
Filed: January 18, 2011
Publication date: May 12, 2011
Applicant: Microsoft Corporation
Inventors: Joseph M. Joy, Narendranath Datha, Eric J. Stollnitz, Aditya Sankar, Vinay Krishnaswamy, Sujith Radhakrishnan Warrier, Kanchen Rajanna, Tanuja Abhay Joshi
-
Publication number: 20100169776
Abstract: A multimedia system specifies a multimedia experience stream by a narrative definition that defines a narrative presentation having sub-narratives. Each sub-narrative may incorporate multiple streams of different types of media with each stream defining a “path” through content of that media type. The multimedia system directs the presentation of the sub-narratives by interfacing with presentation components for each media type through a custom interface component for that media type that implements a common application interface. When a user pauses a presentation, the user can manually navigate around the content of a stream from the current location at the time of the pause to another location. When the user resumes the presentation, the multimedia system automatically transitions from the navigated-to location to the current location at the time of the pause to resume the presentation from where it was paused.
Type: Application
Filed: December 31, 2008
Publication date: July 1, 2010
Applicant: Microsoft Corporation
Inventors: Aditya Sankar, Archana Prasad, Narendranath D. Govindachetty, Joseph Joy