Patents by Inventor Alex A. Kipman
Alex A. Kipman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11928856
Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of computer vision and speech algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities, facilitates engineering design for a manufactural solution that has computer vision and speech capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and computer vision and speech algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
Type: Grant
Filed: May 5, 2022
Date of Patent: March 12, 2024
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Ebstyne, Pedro Urbina Escos, Yuri Pekelny, Jonathan Chi Hang Chan, Emanuel Shalev, Alex Kipman, Mark Flick
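The experiment generator/runner pattern that this family of abstracts describes can be sketched roughly as follows. This is a hypothetical illustration only, not the patented implementation; every class and function name here is invented for the example.

```python
import itertools
from dataclasses import dataclass

# Hypothetical sketch: candidate sensor configurations are crossed with
# motion and environment scenarios, and each combination is "run" in a
# virtual environment instead of on physical hardware.

@dataclass(frozen=True)
class Experiment:
    sensor: str
    motion: str
    environment: str

def generate_experiments(sensors, motions, environments):
    """Enumerate every sensor/motion/environment combination."""
    return [Experiment(s, m, e)
            for s, m, e in itertools.product(sensors, motions, environments)]

def run_experiment(exp):
    """Stand-in for a simulated run; returns a fake quality score."""
    return len(exp.sensor) + len(exp.motion) + len(exp.environment)

experiments = generate_experiments(
    sensors=["rgb", "depth"],
    motions=["walk", "pan"],
    environments=["office"],
)
results = {exp: run_experiment(exp) for exp in experiments}
```

The point of the pattern is that the cross-product of designs and scenarios is cheap to enumerate and run synthetically, which is where the claimed development-speed and cost advantages come from.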
-
Publication number: 20240062528
Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of localization algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities, facilitates engineering design for a manufactural solution that has localization capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and localization algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
Type: Application
Filed: October 30, 2023
Publication date: February 22, 2024
Inventors: Michael Ebstyne, Pedro Urbina Escos, Emanuel Shalev, Alex Kipman, Yuri Pekelny, Jonathan Chi Hang Chan
-
Patent number: 11842529
Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of localization algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities, facilitates engineering design for a manufactural solution that has localization capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and localization algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
Type: Grant
Filed: July 8, 2021
Date of Patent: December 12, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Ebstyne, Pedro Urbina Escos, Emanuel Shalev, Alex Kipman, Yuri Pekelny, Jonathan Chi Hang Chan
-
Publication number: 20220261516
Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of computer vision and speech algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities, facilitates engineering design for a manufactural solution that has computer vision and speech capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and computer vision and speech algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
Type: Application
Filed: May 5, 2022
Publication date: August 18, 2022
Inventors: Michael Ebstyne, Pedro Urbina Escos, Yuri Pekelny, Jonathan Chi Hang Chan, Emanuel Shalev, Alex Kipman, Mark Flick
-
Patent number: 11354459
Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of computer vision and speech algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities, facilitates engineering design for a manufactural solution that has computer vision and speech capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and computer vision and speech algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
Type: Grant
Filed: September 21, 2018
Date of Patent: June 7, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Ebstyne, Pedro Urbina Escos, Yuri Pekelny, Jonathan Chi Hang Chan, Emanuel Shalev, Alex Kipman, Mark Flick
-
Publication number: 20210334601
Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of localization algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities, facilitates engineering design for a manufactural solution that has localization capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and localization algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
Type: Application
Filed: July 8, 2021
Publication date: October 28, 2021
Inventors: Michael Ebstyne, Pedro Urbina Escos, Emanuel Shalev, Alex Kipman, Yuri Pekelny, Jonathan Chi Hang Chan
-
Patent number: 11087176
Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of localization algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities, facilitates engineering design for a manufactural solution that has localization capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and localization algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
Type: Grant
Filed: May 8, 2018
Date of Patent: August 10, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Ebstyne, Pedro Urbina Escos, Emanuel Shalev, Alex Kipman, Yuri Pekelny, Jonathan Chi Hang Chan
-
Patent number: 10573085
Abstract: A mixed-reality display device comprises an input system, a display, and a graphics processor. The input system is configured to receive a parameter value, the parameter value being one of a plurality of values of a predetermined range receivable by the input system. The display is configured to display virtual image content that adds an augmentation to a real-world environment viewed by a user of the mixed-reality display device. The graphics processor is coupled operatively to the input system and to the display; it is configured to render the virtual image content so as to variably change the augmentation, to variably change a perceived realism of the real world environment in correlation to the parameter value.
Type: Grant
Filed: November 19, 2018
Date of Patent: February 25, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Alex Kipman, Purnima M. Rao, Rebecca Haruyama, Shih-Sang Carnaven Chiu, Stuart Mayhew, Oscar E. Murillo, Carlos Fernando Faria Costa
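The core idea in this abstract — one scalar parameter from a predetermined range variably controlling how strongly the augmentation changes the perceived scene — can be sketched as a simple clamp-and-blend. This is an illustrative assumption about one way such a correlation could work, not the patented rendering pipeline; all names and ranges are invented.

```python
# Hypothetical sketch: a single parameter value within a predetermined
# range is mapped to a weight, and that weight controls how much the
# virtual image content alters the real-world view.
PARAM_MIN, PARAM_MAX = 0, 100

def augmentation_strength(param_value):
    """Map a received parameter value to a 0.0-1.0 augmentation weight."""
    clamped = max(PARAM_MIN, min(PARAM_MAX, param_value))
    return (clamped - PARAM_MIN) / (PARAM_MAX - PARAM_MIN)

def blend_pixel(real, virtual, weight):
    """Linearly blend a real-world pixel value with virtual content."""
    return real * (1.0 - weight) + virtual * weight
```

With this mapping, a parameter at the bottom of the range leaves the real-world view untouched, and the top of the range fully replaces it with the virtual content.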
-
Patent number: 10486065
Abstract: A system to present the user a 3-D virtual environment as well as non-visual sensory feedback for interactions that user makes with virtual objects in that environment is disclosed. In an exemplary embodiment, a system comprising a depth camera that captures user position and movement, a three-dimensional (3-D) display device that presents the user a virtual environment in 3-D and a haptic feedback device provides haptic feedback to the user as he interacts with a virtual object in the virtual environment. As the user moves through his physical space, he is captured by the depth camera. Data from that depth camera is parsed to correlate a user position with a position in the virtual environment. Where the user position or movement causes the user to touch the virtual object, that is determined, and corresponding haptic feedback is provided to the user.
Type: Grant
Filed: July 27, 2011
Date of Patent: November 26, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Alex Kipman, Kudo Tsunoda, Todd Eric Holmdahl, John Clavin, Kathryn Stone Perez
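The chain described in this abstract — depth-camera position mapped into the virtual environment, touch detected against a virtual object, haptic feedback fired — could be sketched like this. All coordinate conventions, function names, and the bounding-sphere touch test are assumptions made for the example, not the claimed method.

```python
import math

# Hypothetical sketch: map a tracked hand position from physical space
# into the virtual scene, then fire haptic feedback when it intersects
# a virtual object.

def to_virtual(camera_pos, scale=1.0, offset=(0.0, 0.0, 0.0)):
    """Correlate a physical-space position with a virtual-space position."""
    return tuple(c * scale + o for c, o in zip(camera_pos, offset))

def touches(hand, obj_center, obj_radius):
    """True when the hand position is inside the object's bounding sphere."""
    return math.dist(hand, obj_center) <= obj_radius

def haptic_command(hand, obj_center, obj_radius):
    """Return a feedback intensity in [0, 1]: stronger the deeper the touch."""
    if not touches(hand, obj_center, obj_radius):
        return 0.0
    return 1.0 - math.dist(hand, obj_center) / obj_radius
```

A real system would run this per frame against parsed depth-camera data; the sphere test stands in for whatever collision geometry the virtual object actually has.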
-
Publication number: 20190347369
Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of localization algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities, facilitates engineering design for a manufactural solution that has localization capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and localization algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
Type: Application
Filed: May 8, 2018
Publication date: November 14, 2019
Inventors: Michael Ebstyne, Pedro Urbina Escos, Emanuel Shalev, Alex Kipman, Yuri Pekelny, Jonathan Chi Hang Chan
-
Publication number: 20190347372
Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of computer vision and speech algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities, facilitates engineering design for a manufactural solution that has computer vision and speech capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and computer vision and speech algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
Type: Application
Filed: September 21, 2018
Publication date: November 14, 2019
Inventors: Michael Ebstyne, Pedro Urbina Escos, Yuri Pekelny, Jonathan Chi Hang Chan, Emanuel Shalev, Alex Kipman, Mark Flick
-
Publication number: 20190340317
Abstract: Systems and methods are disclosed for using a synthetic world interface to model environments, sensors, and platforms, such as for computer vision sensor platform design. Digital models may be passed through a simulation service to generate synthetic experiment data. Systematic sweeps of parameters for various components of the sensor or platform design under test, under multiple environmental conditions, can facilitate time- and cost-efficient engineering efforts by revealing parameter sensitivities and environmental effects for multiple proposed configurations. Searches through the generated synthetic experimental data results can permit rapid identification of desirable design configuration candidates.
Type: Application
Filed: May 7, 2018
Publication date: November 7, 2019
Inventors: Jonathan Chi Hang Chan, Michael Ebstyne, Alex A. Kipman, Pedro U. Escos, Andrew C. Goris
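A systematic parameter sweep across environmental conditions followed by a search of the synthetic results, as this abstract describes, can be sketched as follows. The parameter names, the toy evaluator, and the scoring are all invented for illustration; the publication does not specify this interface.

```python
import itertools

# Hypothetical sketch: sweep design parameters under multiple
# environmental conditions, collect synthetic results, then search
# for the strongest configuration candidates.

def sweep(param_grid, conditions, evaluate):
    """Run every parameter combination under every condition."""
    keys = sorted(param_grid)
    results = []
    for values in itertools.product(*(param_grid[k] for k in keys)):
        config = dict(zip(keys, values))
        for cond in conditions:
            results.append((config, cond, evaluate(config, cond)))
    return results

def top_candidates(results, n=3):
    """Search the synthetic data for the highest-scoring configurations."""
    return sorted(results, key=lambda r: r[2], reverse=True)[:n]

# Toy evaluator: prefers a wide field of view, penalized under fog.
score = lambda cfg, cond: cfg["fov"] - (20 if cond == "fog" else 0)
data = sweep({"fov": [60, 90, 120]}, ["clear", "fog"], score)
best = top_candidates(data, n=1)
```

Because each (configuration, condition) pair is scored independently, the same result set can be re-searched later for different design criteria — that is the "rapid identification" the abstract refers to.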
-
Patent number: 10398972
Abstract: Techniques for assigning a gesture dictionary in a gesture-based system to a user comprise capturing data representative of a user in a physical space. In a gesture-based system, gestures may control aspects of a computing environment or application, where the gestures may be derived from a user's position or movement in a physical space. In an example embodiment, the system may monitor a user's gestures and select a particular gesture dictionary in response to the manner in which the user performs the gestures. The gesture dictionary may be assigned in real time with respect to the capture of the data representative of a user's gesture. The system may generate calibration tests for assigning a gesture dictionary. The system may track the user during a set of short gesture calibration tests and assign the gesture dictionary based on a compilation of the data captured that represents the user's gestures.
Type: Grant
Filed: September 16, 2016
Date of Patent: September 3, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Oscar E. Murillo, Andy D. Wilson, Alex A. Kipman, Janet E. Galore
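Assigning a gesture dictionary from a compilation of calibration data, as this abstract describes, amounts to scoring each candidate dictionary against the captured samples and keeping the best fit. The similarity measure and data shapes below are invented for the sketch; the patent does not prescribe them.

```python
# Hypothetical sketch: score each gesture dictionary against captured
# calibration samples and assign the best-matching one to the user.

def similarity(sample, template):
    """Toy similarity: 1 / (1 + summed absolute coordinate difference)."""
    diff = sum(abs(a - b) for a, b in zip(sample, template))
    return 1.0 / (1.0 + diff)

def assign_dictionary(calibration_samples, dictionaries):
    """Pick the dictionary whose gesture templates best fit the samples."""
    def fit(dictionary):
        return sum(
            max(similarity(s, t) for t in dictionary["templates"])
            for s in calibration_samples
        )
    return max(dictionaries, key=fit)

broad = {"name": "broad", "templates": [(0.0, 0.0), (1.0, 1.0)]}
subtle = {"name": "subtle", "templates": [(0.1, 0.1), (0.2, 0.2)]}
chosen = assign_dictionary([(0.15, 0.12)], [broad, subtle])
```

A user who makes small, restrained movements during the calibration tests would be assigned the "subtle" dictionary here, matching the abstract's idea of selecting a dictionary in response to the manner in which gestures are performed.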
-
Patent number: 10388076
Abstract: An optical see-through head-mounted display device includes a see-through lens which combines an augmented reality image with light from a real-world scene, while an opacity filter is used to selectively block portions of the real-world scene so that the augmented reality image appears more distinctly. The opacity filter can be a see-through LCD panel, for instance, where each pixel of the LCD panel can be selectively controlled to be transmissive or opaque, based on a size, shape and position of the augmented reality image. Eye tracking can be used to adjust the position of the augmented reality image and the opaque pixels. Peripheral regions of the opacity filter, which are not behind the augmented reality image, can be activated to provide a peripheral cue or a representation of the augmented reality image. In another aspect, opaque pixels are provided at a time when an augmented reality image is not present.
Type: Grant
Filed: January 31, 2018
Date of Patent: August 20, 2019
Assignee: Telefonaktiebolaget LM Ericsson (publ)
Inventors: Avi Bar-Zeev, Bob Crocco, Alex Kipman, John Lewis
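The per-pixel control the abstract describes — each opacity-filter pixel driven transmissive or opaque based on the size, shape, and position of the augmented-reality image — reduces to computing a 2-D mask. The sketch below assumes a simple rectangular augmentation for clarity; it is an illustration of the idea, not the device's actual driving logic.

```python
# Hypothetical sketch of the per-pixel opacity mask: pixels behind the
# augmented-reality image are driven opaque (1) so the augmentation
# appears distinctly; all other pixels stay transmissive (0).

def opacity_mask(panel_w, panel_h, image_rect):
    """Build a panel_h x panel_w mask from the augmentation's rectangle.

    image_rect is (x, y, width, height) of the augmentation on the panel.
    """
    x0, y0, w, h = image_rect
    return [
        [1 if x0 <= x < x0 + w and y0 <= y < y0 + h else 0
         for x in range(panel_w)]
        for y in range(panel_h)
    ]

# A 4x3 panel with a 2x1 augmentation starting at column 1, row 1.
mask = opacity_mask(4, 3, (1, 1, 2, 1))
```

Eye tracking, per the abstract, would shift `image_rect` (and hence the opaque region) as the user's gaze moves, and non-rectangular augmentations would need a shape test in place of the rectangle check.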
-
Publication number: 20190197784
Abstract: A mixed-reality display device comprises an input system, a display, and a graphics processor. The input system is configured to receive a parameter value, the parameter value being one of a plurality of values of a predetermined range receivable by the input system. The display is configured to display virtual image content that adds an augmentation to a real-world environment viewed by a user of the mixed-reality display device. The graphics processor is coupled operatively to the input system and to the display; it is configured to render the virtual image content so as to variably change the augmentation, to variably change a perceived realism of the real world environment in correlation to the parameter value.
Type: Application
Filed: November 19, 2018
Publication date: June 27, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Alex Kipman, Purnima M. Rao, Rebecca Haruyama, Shih-Sang Carnaven Chiu, Stuart Mayhew, Oscar E. Murillo, Carlos Fernando Faria Costa
-
Patent number: 10169922
Abstract: A mixed-reality display device comprises an input system, a display, and a graphics processor. The input system is configured to receive a parameter value, the parameter value being one of a plurality of values of a predetermined range receivable by the input system. The display is configured to display virtual image content that adds an augmentation to a real-world environment viewed by a user of the mixed-reality display device. The graphics processor is coupled operatively to the input system and to the display; it is configured to render the virtual image content so as to variably change the augmentation, to variably change a perceived realism of the real world environment in correlation to the parameter value.
Type: Grant
Filed: October 21, 2016
Date of Patent: January 1, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Alex Kipman, Purnima M. Rao, Rebecca Haruyama, Shih-Sang Carnaven Chiu, Stuart Mayhew, Oscar E. Murillo, Carlos Fernando Faria Costa
-
Patent number: 10055888
Abstract: A computing system and method for producing and consuming metadata within multi-dimensional data is provided. The computing system comprising a see-through display, a sensor system, and a processor configured to: in a recording phase, generate an annotation at a location in a three dimensional environment, receive, via the sensor system, a stream of telemetry data recording movement of a first user in the three dimensional environment, receive a message to be recorded from the first user, and store, in memory as annotation data for the annotation, the stream of telemetry data and the message, and in a playback phase, display a visual indicator of the annotation at the location, receive a selection of the visual indicator by a second user, display a simulacrum superimposed onto the three dimensional environment and animated according to the telemetry data, and present the message via the animated simulacrum.
Type: Grant
Filed: April 28, 2015
Date of Patent: August 21, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jonathan Christen, John Charles Howard, Marcus Tanner, Ben Sugden, Robert C. Memmott, Kenneth Charles Ouellette, Alex Kipman, Todd Alan Omotani, James T. Reichert, Jr.
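The record/playback split in this abstract — a telemetry stream plus a message stored as annotation data, then replayed later — suggests a simple data structure. The sketch below is a hypothetical reduction of that idea to code; field names and the replay order are assumptions, and the rendering of the animated simulacrum is out of scope.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the annotation data: a recording phase stores
# a telemetry stream and a message; a playback phase replays the stream
# (which would drive the animated simulacrum) and then the message.

@dataclass
class Annotation:
    location: tuple                # where the visual indicator is shown
    message: str = ""
    telemetry: list = field(default_factory=list)  # stream of positions

    def record(self, position):
        """Recording phase: append one telemetry sample."""
        self.telemetry.append(position)

    def playback(self):
        """Playback phase: yield each stored sample, then the message."""
        yield from self.telemetry
        yield self.message

note = Annotation(location=(2.0, 0.0, 1.5), message="check this valve")
note.record((2.0, 0.0, 1.0))
note.record((2.0, 0.1, 1.2))
replay = list(note.playback())
```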
-
Publication number: 20180211448
Abstract: An optical see-through head-mounted display device includes a see-through lens which combines an augmented reality image with light from a real-world scene, while an opacity filter is used to selectively block portions of the real-world scene so that the augmented reality image appears more distinctly. The opacity filter can be a see-through LCD panel, for instance, where each pixel of the LCD panel can be selectively controlled to be transmissive or opaque, based on a size, shape and position of the augmented reality image. Eye tracking can be used to adjust the position of the augmented reality image and the opaque pixels. Peripheral regions of the opacity filter, which are not behind the augmented reality image, can be activated to provide a peripheral cue or a representation of the augmented reality image. In another aspect, opaque pixels are provided at a time when an augmented reality image is not present.
Type: Application
Filed: January 31, 2018
Publication date: July 26, 2018
Inventors: Avi Bar-Zeev, Bob Crocco, Alex Kipman, John Lewis
-
Patent number: 9977492
Abstract: Embodiments that relate to presenting a mixed reality environment via a mixed reality display device are disclosed. For example, one disclosed embodiment provides a method for presenting a mixed reality environment via a head-mounted display device. The method includes using head pose data to generally identify one or more gross selectable targets within a sub-region of a spatial region occupied by the mixed reality environment. The method further includes specifically identifying a fine selectable target from among the gross selectable targets based on eye-tracking data. Gesture data is then used to identify a gesture, and an operation associated with the identified gesture is performed on the fine selectable target.
Type: Grant
Filed: December 6, 2012
Date of Patent: May 22, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Peter Tobias Kinnebrew, Alex Kipman
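The coarse-to-fine selection pipeline in this abstract — head pose narrows the scene to a sub-region, eye tracking picks one target from the candidates, a gesture triggers an operation on it — can be sketched in three small stages. The 2-D region model and all names below are invented for the example.

```python
# Hypothetical sketch of the three-stage pipeline: head pose -> gross
# targets, eye gaze -> fine target, gesture -> operation on that target.

def gross_targets(targets, head_region):
    """Head-pose stage: keep targets inside the gazed sub-region."""
    (x0, x1), (y0, y1) = head_region
    return [t for t in targets if x0 <= t["x"] <= x1 and y0 <= t["y"] <= y1]

def fine_target(candidates, eye_point):
    """Eye-tracking stage: the candidate closest to the gaze point."""
    return min(candidates,
               key=lambda t: (t["x"] - eye_point[0]) ** 2
                             + (t["y"] - eye_point[1]) ** 2)

def apply_gesture(target, gesture, operations):
    """Gesture stage: run the operation bound to the recognized gesture."""
    return operations[gesture](target)

targets = [{"name": "door", "x": 1, "y": 1}, {"name": "lamp", "x": 5, "y": 5}]
coarse = gross_targets(targets, head_region=((0, 3), (0, 3)))
chosen = fine_target(coarse, eye_point=(1.2, 0.9))
result = apply_gesture(chosen, "tap", {"tap": lambda t: f"opened {t['name']}"})
```

The staging matters because head pose is a cheap, coarse signal while eye tracking is precise but noisy; filtering first keeps the fine selection among plausible candidates only.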
-
Patent number: 9943755
Abstract: A system recognizes human beings in their natural environment, without special sensing devices attached to the subjects, uniquely identifies them and tracks them in three dimensional space. The resulting representation is presented directly to applications as a multi-point skeletal model delivered in real-time. The device efficiently tracks humans and their natural movements by understanding the natural mechanics and capabilities of the human muscular-skeletal system. The device also uniquely recognizes individuals in order to allow multiple people to interact with the system via natural movements of their limbs and body as well as voice commands/responses.
Type: Grant
Filed: April 19, 2017
Date of Patent: April 17, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: R. Stephen Polzin, Alex A. Kipman, Mark J. Finocchio, Ryan Michael Geiss, Kathryn Stone Perez, Kudo Tsunoda, Darren Alexander Bennett
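The "multi-point skeletal model delivered in real-time" that this abstract describes is, at its simplest, a per-user mapping from named joints to 3-D positions updated as the sensor tracks movement. The joint list and structure below are an illustrative assumption, not the actual model the patent claims.

```python
# Hypothetical sketch: a per-user multi-point skeletal model that an
# application consumes directly, updated joint-by-joint in real time.

JOINTS = ["head", "shoulder_l", "shoulder_r", "elbow_l", "elbow_r",
          "hand_l", "hand_r", "hip", "knee_l", "knee_r"]

def make_skeleton(user_id):
    """A per-user skeleton: unique identity plus joint -> 3-D position."""
    return {"user": user_id,
            "joints": {j: (0.0, 0.0, 0.0) for j in JOINTS}}

def update_joint(skeleton, joint, position):
    """Apply one tracked joint observation from the sensor."""
    if joint not in skeleton["joints"]:
        raise KeyError(f"unknown joint: {joint}")
    skeleton["joints"][joint] = position
    return skeleton

player = make_skeleton("player-1")
update_joint(player, "hand_r", (0.4, 1.1, 0.8))
```

Keying skeletons by user identity is what lets multiple people interact with the system simultaneously, as the abstract notes.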