Patents by Inventor Alex A. Kipman

Alex A. Kipman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11928856
    Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of computer vision and speech algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities facilitates engineering design for a manufacturable solution that has computer vision and speech capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and computer vision and speech algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
    Type: Grant
    Filed: May 5, 2022
    Date of Patent: March 12, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael Ebstyne, Pedro Urbina Escos, Yuri Pekelny, Jonathan Chi Hang Chan, Emanuel Shalev, Alex Kipman, Mark Flick
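The experiment-generator and experiment-runner pipeline this abstract describes can be sketched roughly as follows. All function names, the stub simulator, and the trivial scoring "algorithm" are hypothetical illustrations of the general idea, not the patented implementation:

```python
from itertools import product

def generate_experiments(sensor_configs, motions, environments):
    # Cross every candidate sensor configuration with every motion and
    # environment -- a minimal stand-in for the "experiment generator".
    return [{"sensor": s, "motion": m, "environment": e}
            for s, m, e in product(sensor_configs, motions, environments)]

def simulate(experiment):
    # Stub for the synthetic data cloud service: a real service would
    # render sensor frames for the chosen environment and motion path.
    return [f"{experiment['sensor']}-frame-{i}" for i in range(3)]

def run_experiments(experiments, algorithm):
    # "Experiment runner": score the algorithm under test on the
    # synthetic data generated for each candidate configuration.
    return [(e, algorithm(simulate(e))) for e in experiments]

experiments = generate_experiments(["rgb", "depth"], ["walk"], ["office", "street"])
results = run_experiments(experiments, algorithm=len)  # placeholder "algorithm"
```

Sweeping configurations in a virtual environment like this is what lets candidate hardware be compared without building physical prototypes.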
  • Publication number: 20240062528
    Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of localization algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities facilitates engineering design for a manufacturable solution that has localization capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and localization algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
    Type: Application
    Filed: October 30, 2023
    Publication date: February 22, 2024
    Inventors: Michael Ebstyne, Pedro Urbina Escos, Emanuel Shalev, Alex Kipman, Yuri Pekelny, Jonathan Chi Hang Chan
  • Patent number: 11842529
    Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of localization algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities facilitates engineering design for a manufacturable solution that has localization capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and localization algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
    Type: Grant
    Filed: July 8, 2021
    Date of Patent: December 12, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael Ebstyne, Pedro Urbina Escos, Emanuel Shalev, Alex Kipman, Yuri Pekelny, Jonathan Chi Hang Chan
  • Publication number: 20220261516
    Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of computer vision and speech algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities facilitates engineering design for a manufacturable solution that has computer vision and speech capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and computer vision and speech algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
    Type: Application
    Filed: May 5, 2022
    Publication date: August 18, 2022
    Inventors: Michael Ebstyne, Pedro Urbina Escos, Yuri Pekelny, Jonathan Chi Hang Chan, Emanuel Shalev, Alex Kipman, Mark Flick
  • Patent number: 11354459
    Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of computer vision and speech algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities facilitates engineering design for a manufacturable solution that has computer vision and speech capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and computer vision and speech algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
    Type: Grant
    Filed: September 21, 2018
    Date of Patent: June 7, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael Ebstyne, Pedro Urbina Escos, Yuri Pekelny, Jonathan Chi Hang Chan, Emanuel Shalev, Alex Kipman, Mark Flick
  • Publication number: 20210334601
    Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of localization algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities facilitates engineering design for a manufacturable solution that has localization capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and localization algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
    Type: Application
    Filed: July 8, 2021
    Publication date: October 28, 2021
    Inventors: Michael Ebstyne, Pedro Urbina Escos, Emanuel Shalev, Alex Kipman, Yuri Pekelny, Jonathan Chi Hang Chan
  • Patent number: 11087176
    Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of localization algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities facilitates engineering design for a manufacturable solution that has localization capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and localization algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
    Type: Grant
    Filed: May 8, 2018
    Date of Patent: August 10, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael Ebstyne, Pedro Urbina Escos, Emanuel Shalev, Alex Kipman, Yuri Pekelny, Jonathan Chi Hang Chan
  • Patent number: 10573085
    Abstract: A mixed-reality display device comprises an input system, a display, and a graphics processor. The input system is configured to receive a parameter value, the parameter value being one of a plurality of values of a predetermined range receivable by the input system. The display is configured to display virtual image content that adds an augmentation to a real-world environment viewed by a user of the mixed-reality display device. The graphics processor is coupled operatively to the input system and to the display; it is configured to render the virtual image content so as to variably change the augmentation, thereby variably changing the perceived realism of the real-world environment in correlation to the parameter value.
    Type: Grant
    Filed: November 19, 2018
    Date of Patent: February 25, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alex Kipman, Purnima M. Rao, Rebecca Haruyama, Shih-Sang Carnaven Chiu, Stuart Mayhew, Oscar E. Murillo, Carlos Fernando Faria Costa
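The core idea of this abstract, a single parameter value from a predetermined range that continuously scales how strongly the augmentation alters the perceived scene, can be illustrated with a simple normalize-and-blend sketch. The range bounds, function names, and per-pixel linear blend are illustrative assumptions, not the patent's rendering method:

```python
def augmentation_weight(value, lo=0.0, hi=100.0):
    # Clamp the received parameter value to its predetermined range,
    # then normalize it to [0, 1].
    value = min(max(value, lo), hi)
    return (value - lo) / (hi - lo)

def blend_pixel(real, virtual, weight):
    # Linear per-channel blend: weight 0 leaves the real-world pixel
    # untouched; weight 1 replaces it with the virtual image content.
    return tuple(round((1 - weight) * r + weight * v)
                 for r, v in zip(real, virtual))

w = augmentation_weight(25.0)                       # quarter-strength augmentation
pixel = blend_pixel((200, 200, 200), (0, 0, 255), w)
```

Driving the weight from an input system (a dial, slider, or voice command) gives the user direct control over how "real" the mixed scene feels.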
  • Patent number: 10486065
    Abstract: A system is disclosed that presents the user a 3-D virtual environment as well as non-visual sensory feedback for interactions the user makes with virtual objects in that environment. In an exemplary embodiment, the system comprises a depth camera that captures user position and movement, a three-dimensional (3-D) display device that presents the user a virtual environment in 3-D, and a haptic feedback device that provides haptic feedback to the user as he interacts with a virtual object in the virtual environment. As the user moves through his physical space, he is captured by the depth camera. Data from that depth camera is parsed to correlate a user position with a position in the virtual environment. Where the user's position or movement causes the user to touch the virtual object, that contact is determined, and corresponding haptic feedback is provided to the user.
    Type: Grant
    Filed: July 27, 2011
    Date of Patent: November 26, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alex Kipman, Kudo Tsunoda, Todd Eric Holmdahl, John Clavin, Kathryn Stone Perez
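The pipeline this abstract describes, depth-camera position mapped into virtual coordinates, then a touch test against a virtual object triggering haptic output, can be sketched as below. The bounding-sphere collision test, the identity coordinate mapping, and all names are simplifying assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class Sphere:
    # A virtual object approximated by a bounding sphere.
    center: tuple
    radius: float

def to_virtual(point, scale=1.0, offset=(0.0, 0.0, 0.0)):
    # Correlate a depth-camera joint position with a position in the
    # virtual environment (here: a trivial scale-and-offset mapping).
    return tuple(scale * p + o for p, o in zip(point, offset))

def touches(hand, obj):
    # Collision test: is the hand inside the object's bounding sphere?
    dist2 = sum((h - c) ** 2 for h, c in zip(hand, obj.center))
    return dist2 <= obj.radius ** 2

def haptic_feedback(hand_physical, obj):
    # When the mapped hand position touches the virtual object,
    # emit a haptic event; otherwise nothing.
    hand = to_virtual(hand_physical)
    return "pulse" if touches(hand, obj) else None
```

A production system would of course calibrate the physical-to-virtual mapping and use mesh-level collision, but the control flow is the same.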
  • Publication number: 20190347369
    Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of localization algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities facilitates engineering design for a manufacturable solution that has localization capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and localization algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
    Type: Application
    Filed: May 8, 2018
    Publication date: November 14, 2019
    Inventors: Michael Ebstyne, Pedro Urbina Escos, Emanuel Shalev, Alex Kipman, Yuri Pekelny, Jonathan Chi Hang Chan
  • Publication number: 20190347372
    Abstract: A synthetic world interface may be used to model digital environments, sensors, and motions for the evaluation, development, and improvement of computer vision and speech algorithms. A synthetic data cloud service with a library of sensor primitives, motion generators, and environments with procedural and game-like capabilities facilitates engineering design for a manufacturable solution that has computer vision and speech capabilities. In some embodiments, a sensor platform simulator operates with a motion orchestrator, an environment orchestrator, an experiment generator, and an experiment runner to test various candidate hardware configurations and computer vision and speech algorithms in a virtual environment, advantageously speeding development and reducing cost. Thus, examples disclosed herein may relate to virtual reality (VR) or mixed reality (MR) implementations.
    Type: Application
    Filed: September 21, 2018
    Publication date: November 14, 2019
    Inventors: Michael Ebstyne, Pedro Urbina Escos, Yuri Pekelny, Jonathan Chi Hang Chan, Emanuel Shalev, Alex Kipman, Mark Flick
  • Publication number: 20190340317
    Abstract: Systems and methods are disclosed for using a synthetic world interface to model environments, sensors, and platforms, such as for computer vision sensor platform design. Digital models may be passed through a simulation service to generate synthetic experiment data. Systematic sweeps of parameters for various components of the sensor or platform design under test, under multiple environmental conditions, can facilitate time- and cost-efficient engineering efforts by revealing parameter sensitivities and environmental effects for multiple proposed configurations. Searches through the generated synthetic experimental data results can permit rapid identification of desirable design configuration candidates.
    Type: Application
    Filed: May 7, 2018
    Publication date: November 7, 2019
    Inventors: Jonathan Chi Hang Chan, Michael Ebstyne, Alex A. Kipman, Pedro U. Escos, Andrew C. Goris
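The systematic parameter sweep and result search this abstract describes can be sketched in a few lines: enumerate every combination of design-parameter values, score each with synthetic experiments, then filter for desirable candidates. The parameter names, the toy score, and the selection threshold are hypothetical:

```python
from itertools import product

def sweep(param_grid):
    # Systematic sweep: every combination of design-parameter values.
    keys = list(param_grid)
    return [dict(zip(keys, vals)) for vals in product(*param_grid.values())]

def search(results, predicate):
    # Search the synthetic-experiment results for desirable candidates.
    return [cfg for cfg, score in results if predicate(score)]

configs = sweep({"fov_deg": [60, 90], "fps": [30, 60]})
# A real pipeline would score each config by simulation; this toy
# "score" just stands in for that step.
results = [(c, c["fov_deg"] / c["fps"]) for c in configs]
good = search(results, lambda s: s >= 1.5)
```

Because the sweep is exhaustive, the search step also reveals parameter sensitivities: how the score moves as each parameter varies while others are held fixed.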
  • Patent number: 10398972
    Abstract: Techniques for assigning a gesture dictionary to a user in a gesture-based system comprise capturing data representative of the user in a physical space. In a gesture-based system, gestures may control aspects of a computing environment or application, where the gestures may be derived from a user's position or movement in a physical space. In an example embodiment, the system may monitor a user's gestures and select a particular gesture dictionary in response to the manner in which the user performs the gestures. The gesture dictionary may be assigned in real time with respect to the capture of the data representative of a user's gesture. The system may generate calibration tests for assigning a gesture dictionary. The system may track the user during a set of short gesture calibration tests and assign the gesture dictionary based on a compilation of the captured data representing the user's gestures.
    Type: Grant
    Filed: September 16, 2016
    Date of Patent: September 3, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Oscar E. Murillo, Andy D. Wilson, Alex A. Kipman, Janet E. Galore
  • Patent number: 10388076
    Abstract: An optical see-through head-mounted display device includes a see-through lens which combines an augmented reality image with light from a real-world scene, while an opacity filter is used to selectively block portions of the real-world scene so that the augmented reality image appears more distinctly. The opacity filter can be a see-through LCD panel, for instance, where each pixel of the LCD panel can be selectively controlled to be transmissive or opaque, based on a size, shape and position of the augmented reality image. Eye tracking can be used to adjust the position of the augmented reality image and the opaque pixels. Peripheral regions of the opacity filter, which are not behind the augmented reality image, can be activated to provide a peripheral cue or a representation of the augmented reality image. In another aspect, opaque pixels are provided at a time when an augmented reality image is not present.
    Type: Grant
    Filed: January 31, 2018
    Date of Patent: August 20, 2019
    Assignee: TELEFONAKTIEBOLAGET LM ERICSSON (PUBL)
    Inventors: Avi Bar-Zeev, Bob Crocco, Alex Kipman, John Lewis
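The opacity filter described here, per-pixel LCD cells switched opaque behind the augmented-reality image and left transmissive elsewhere, can be sketched with a rectangular mask. The axis-aligned bounding box and 0/1 values are simplifying assumptions; the patent allows the mask to follow the image's actual size, shape, and position:

```python
def opacity_mask(width, height, box):
    # box = (x0, y0, x1, y1): position and extent of the augmented-
    # reality image on the filter. Pixels behind the image are driven
    # opaque (1) so the virtual content stands out against the scene;
    # every other pixel stays transmissive (0).
    x0, y0, x1, y1 = box
    return [[1 if x0 <= x < x1 and y0 <= y < y1 else 0
             for x in range(width)]
            for y in range(height)]

mask = opacity_mask(4, 3, (1, 1, 3, 2))
```

Per the abstract, eye tracking would shift both the image and the mask together, and peripheral filter regions could be activated separately as cues.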
  • Publication number: 20190197784
    Abstract: A mixed-reality display device comprises an input system, a display, and a graphics processor. The input system is configured to receive a parameter value, the parameter value being one of a plurality of values of a predetermined range receivable by the input system. The display is configured to display virtual image content that adds an augmentation to a real-world environment viewed by a user of the mixed-reality display device. The graphics processor is coupled operatively to the input system and to the display; it is configured to render the virtual image content so as to variably change the augmentation, thereby variably changing the perceived realism of the real-world environment in correlation to the parameter value.
    Type: Application
    Filed: November 19, 2018
    Publication date: June 27, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Alex Kipman, Purnima M. Rao, Rebecca Haruyama, Shih-Sang Carnaven Chiu, Stuart Mayhew, Oscar E. Murillo, Carlos Fernando Faria Costa
  • Patent number: 10169922
    Abstract: A mixed-reality display device comprises an input system, a display, and a graphics processor. The input system is configured to receive a parameter value, the parameter value being one of a plurality of values of a predetermined range receivable by the input system. The display is configured to display virtual image content that adds an augmentation to a real-world environment viewed by a user of the mixed-reality display device. The graphics processor is coupled operatively to the input system and to the display; it is configured to render the virtual image content so as to variably change the augmentation, thereby variably changing the perceived realism of the real-world environment in correlation to the parameter value.
    Type: Grant
    Filed: October 21, 2016
    Date of Patent: January 1, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alex Kipman, Purnima M. Rao, Rebecca Haruyama, Shih-Sang Carnaven Chiu, Stuart Mayhew, Oscar E. Murillo, Carlos Fernando Faria Costa
  • Patent number: 10055888
    Abstract: A computing system and method for producing and consuming metadata within multi-dimensional data is provided. The computing system comprises a see-through display, a sensor system, and a processor configured to: in a recording phase, generate an annotation at a location in a three-dimensional environment, receive, via the sensor system, a stream of telemetry data recording movement of a first user in the three-dimensional environment, receive a message to be recorded from the first user, and store, in memory as annotation data for the annotation, the stream of telemetry data and the message; and in a playback phase, display a visual indicator of the annotation at the location, receive a selection of the visual indicator by a second user, display a simulacrum superimposed onto the three-dimensional environment and animated according to the telemetry data, and present the message via the animated simulacrum.
    Type: Grant
    Filed: April 28, 2015
    Date of Patent: August 21, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jonathan Christen, John Charles Howard, Marcus Tanner, Ben Sugden, Robert C. Memmott, Kenneth Charles Ouellette, Alex Kipman, Todd Alan Omotani, James T. Reichert, Jr.
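The record/playback structure of this abstract, a telemetry stream plus a message stored as annotation data, then replayed to animate a simulacrum, maps naturally onto a small data type. The class shape and event-tuple protocol are hypothetical, not the patented data format:

```python
class Annotation:
    # Recording phase: accumulate a telemetry stream and a message at a
    # 3-D location. Playback phase: replay poses to animate a simulacrum,
    # then present the message.
    def __init__(self, location, message):
        self.location = location
        self.message = message
        self.telemetry = []

    def record(self, pose):
        # Append one telemetry sample (e.g. a head or body pose).
        self.telemetry.append(pose)

    def playback(self):
        # Yield poses in recorded order, then the message event.
        for pose in self.telemetry:
            yield ("pose", pose)
        yield ("message", self.message)

note = Annotation(location=(2.0, 0.0, 1.5), message="check this valve")
note.record((2.0, 0.0, 1.4))
note.record((2.1, 0.0, 1.5))
events = list(note.playback())
```

A second user selecting the annotation's visual indicator would simply iterate these events, driving the simulacrum from the pose stream.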
  • Publication number: 20180211448
    Abstract: An optical see-through head-mounted display device includes a see-through lens which combines an augmented reality image with light from a real-world scene, while an opacity filter is used to selectively block portions of the real-world scene so that the augmented reality image appears more distinctly. The opacity filter can be a see-through LCD panel, for instance, where each pixel of the LCD panel can be selectively controlled to be transmissive or opaque, based on a size, shape and position of the augmented reality image. Eye tracking can be used to adjust the position of the augmented reality image and the opaque pixels. Peripheral regions of the opacity filter, which are not behind the augmented reality image, can be activated to provide a peripheral cue or a representation of the augmented reality image. In another aspect, opaque pixels are provided at a time when an augmented reality image is not present.
    Type: Application
    Filed: January 31, 2018
    Publication date: July 26, 2018
    Inventors: Avi Bar-Zeev, Bob Crocco, Alex Kipman, John Lewis
  • Patent number: 9977492
    Abstract: Embodiments that relate to presenting a mixed reality environment via a mixed reality display device are disclosed. For example, one disclosed embodiment provides a method for presenting a mixed reality environment via a head-mounted display device. The method includes using head pose data to generally identify one or more gross selectable targets within a sub-region of a spatial region occupied by the mixed reality environment. The method further includes specifically identifying a fine selectable target from among the gross selectable targets based on eye-tracking data. Gesture data is then used to identify a gesture, and an operation associated with the identified gesture is performed on the fine selectable target.
    Type: Grant
    Filed: December 6, 2012
    Date of Patent: May 22, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Peter Tobias Kinnebrew, Alex Kipman
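The coarse-to-fine selection this abstract describes, head pose narrowing to a sub-region of gross targets, then eye tracking picking the single fine target, can be sketched with 2-D points. The rectangular region, distance metric, and target names are illustrative assumptions:

```python
def gross_targets(targets, head_region):
    # Coarse pass: head pose data selects a sub-region of the spatial
    # region; keep only the gross selectable targets inside it.
    x0, y0, x1, y1 = head_region
    return {name: (x, y) for name, (x, y) in targets.items()
            if x0 <= x <= x1 and y0 <= y <= y1}

def fine_target(candidates, gaze):
    # Fine pass: eye-tracking data picks the closest remaining target.
    gx, gy = gaze
    return min(candidates, key=lambda n: (candidates[n][0] - gx) ** 2 +
                                         (candidates[n][1] - gy) ** 2)

targets = {"button": (1, 1), "slider": (2, 1), "menu": (8, 8)}
coarse = gross_targets(targets, head_region=(0, 0, 4, 4))
chosen = fine_target(coarse, gaze=(1.9, 1.2))
```

A recognized gesture would then dispatch its associated operation on `chosen`, completing the head-eye-hand interaction chain.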
  • Patent number: 9943755
    Abstract: A system recognizes human beings in their natural environment, without special sensing devices attached to the subjects, uniquely identifies them, and tracks them in three-dimensional space. The resulting representation is presented directly to applications as a multi-point skeletal model delivered in real time. The device efficiently tracks humans and their natural movements by understanding the natural mechanics and capabilities of the human muscular-skeletal system. The device also uniquely recognizes individuals in order to allow multiple people to interact with the system via natural movements of their limbs and body as well as voice commands/responses.
    Type: Grant
    Filed: April 19, 2017
    Date of Patent: April 17, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: R. Stephen Polzin, Alex A. Kipman, Mark J. Finocchio, Ryan Michael Geiss, Kathryn Stone Perez, Kudo Tsunoda, Darren Alexander Bennett
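The "multi-point skeletal model delivered in real-time" that this abstract presents to applications can be sketched as a per-user joint map keyed by a unique identity, updated each frame. The class, joint names, and frame callback are hypothetical illustrations, not the patented representation:

```python
from dataclasses import dataclass, field

@dataclass
class Skeleton:
    # Multi-point skeletal model handed to applications each frame.
    user_id: str
    joints: dict = field(default_factory=dict)  # joint name -> (x, y, z)

    def update(self, name, position):
        self.joints[name] = position

# Unique identification lets multiple people interact at once:
# one skeleton per recognized individual.
tracked = {}

def on_frame(user_id, joint_positions):
    # Upsert the skeleton for this uniquely identified person and
    # apply the latest joint positions from the sensor.
    skel = tracked.setdefault(user_id, Skeleton(user_id))
    for name, pos in joint_positions.items():
        skel.update(name, pos)
    return skel

skel = on_frame("user-1", {"head": (0.0, 1.7, 2.0), "hand_r": (0.3, 1.1, 1.8)})
```

An application consuming this model sees only named joints in 3-D space, regardless of which sensing hardware produced them.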