Patents by Inventor Mark J. Finocchio

Mark J. Finocchio has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10671841
    Abstract: Attribute state classification techniques are described. In one or more implementations, one or more pixels of an image are classified by a computing device as having one or more states for one or more attributes that do not identify corresponding body parts of a user. A gesture is recognized by the computing device that is operable to initiate one or more operations of the computing device based at least in part on the state classifications of the one or more pixels for the one or more attributes.
    Type: Grant
    Filed: May 2, 2011
    Date of Patent: June 2, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alexandru O. Balan, Richard E. Moore, Mark J. Finocchio
  • Patent number: 9943755
    Abstract: A system recognizes human beings in their natural environment, without special sensing devices attached to the subjects, uniquely identifies them, and tracks them in three-dimensional space. The resulting representation is presented directly to applications as a multi-point skeletal model delivered in real time. The device efficiently tracks humans and their natural movements by understanding the natural mechanics and capabilities of the human musculoskeletal system. The device also uniquely recognizes individuals in order to allow multiple people to interact with the system via natural movements of their limbs and body as well as voice commands/responses.
    Type: Grant
    Filed: April 19, 2017
    Date of Patent: April 17, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: R. Stephen Polzin, Alex A. Kipman, Mark J. Finocchio, Ryan Michael Geiss, Kathryn Stone Perez, Kudo Tsunoda, Darren Alexander Bennett
  • Publication number: 20170216718
    Abstract: A system recognizes human beings in their natural environment, without special sensing devices attached to the subjects, uniquely identifies them, and tracks them in three-dimensional space. The resulting representation is presented directly to applications as a multi-point skeletal model delivered in real time. The device efficiently tracks humans and their natural movements by understanding the natural mechanics and capabilities of the human musculoskeletal system. The device also uniquely recognizes individuals in order to allow multiple people to interact with the system via natural movements of their limbs and body as well as voice commands/responses.
    Type: Application
    Filed: April 19, 2017
    Publication date: August 3, 2017
    Inventors: R. Stephen Polzin, Alex A. Kipman, Mark J. Finocchio, Ryan Michael Geiss, Kathryn Stone Perez, Kudo Tsunoda, Darren Alexander Bennett
  • Patent number: 9656162
    Abstract: A system recognizes human beings in their natural environment, without special sensing devices attached to the subjects, uniquely identifies them, and tracks them in three-dimensional space. The resulting representation is presented directly to applications as a multi-point skeletal model delivered in real time. The device efficiently tracks humans and their natural movements by understanding the natural mechanics and capabilities of the human musculoskeletal system. The device also uniquely recognizes individuals in order to allow multiple people to interact with the system via natural movements of their limbs and body as well as voice commands/responses.
    Type: Grant
    Filed: April 14, 2014
    Date of Patent: May 23, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: R. Stephen Polzin, Alex A. Kipman, Mark J. Finocchio, Ryan Michael Geiss, Kathryn Stone Perez, Kudo Tsunoda, Darren Alexander Bennett
  • Patent number: 9529513
    Abstract: Two-handed interactions with a natural user interface are disclosed. For example, one embodiment provides a method comprising detecting via image data received by the computing device a context-setting input performed by a first hand of a user, and sending to a display a user interface positioned based on a virtual interaction coordinate system, the virtual coordinate system being positioned based upon a position of the first hand of the user. The method further includes detecting via image data received by the computing device an action input performed by a second hand of the user, the action input performed while the first hand of the user is performing the context-setting input, and sending to the display a response based on the context-setting input and an interaction between the action input and the virtual interaction coordinate system.
    Type: Grant
    Filed: August 5, 2013
    Date of Patent: December 27, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alexandru Balan, Mark J. Finocchio, Kyungsuk David Lee
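The two-handed interaction described above hinges on a virtual interaction coordinate system anchored to the context-setting hand. A minimal sketch of that idea follows; the axis-aligned basis and the function names are illustrative assumptions, not taken from the patent, which would derive the frame's orientation from the sensed hand pose.

```python
import numpy as np

def make_interaction_frame(first_hand_pos):
    """Anchor a virtual interaction coordinate system at the
    context-setting (first) hand. The axis-aligned basis is a
    simplification; a real system would orient it from hand pose."""
    return np.asarray(first_hand_pos, dtype=float), np.eye(3)

def to_interaction_coords(second_hand_pos, frame):
    """Express the action (second) hand's position in the hand-anchored
    frame, so the UI responds the same wherever the user stands."""
    origin, basis = frame
    return basis.T @ (np.asarray(second_hand_pos, dtype=float) - origin)

frame = make_interaction_frame([0.2, 1.0, 0.5])
local = to_interaction_coords([0.3, 1.1, 0.5], frame)
print(local)  # offset of the action hand in the anchored frame
```

Because the action input is interpreted relative to the first hand rather than in world coordinates, the same gesture produces the same response anywhere in the sensor's field of view.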
  • Patent number: 9424490
    Abstract: Embodiments are disclosed that relate to processing image pixels. For example, one disclosed embodiment provides a system for classifying pixels comprising retrieval logic; a pixel storage allocation including a plurality of pixel slots, each pixel slot being associated individually with a pixel, where the retrieval logic is configured to cause the pixels to be allocated into the pixel slots in an input sequence; pipelined processing logic configured to output, for each of the pixels, classification information associated with the pixel; and scheduling logic configured to control dispatches from the pixel slots to the pipelined processing logic, where the scheduling logic and pipelined processing logic are configured to act in concert to generate the classification information for the pixels in an output sequence that differs from and is independent of the input sequence.
    Type: Grant
    Filed: June 27, 2014
    Date of Patent: August 23, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adam James Muff, John Allen Tardif, Susan Carrie, Mark J. Finocchio, Kyungsuk David Lee, Christopher Douglas Edmonds, Randy Crane
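The core property of the pipeline above is that results retire in an order independent of the order in which pixels entered their slots. A toy software model of that behavior, assuming a heap-based scheduler and a parity "classification" as stand-ins for the patent's hardware slot/dispatch logic:

```python
import heapq

def classify_out_of_order(pixels, latency):
    """Pixels are allocated to slots in input order, but each result is
    retired as soon as its (variable-latency) pipeline pass completes,
    so the output sequence is independent of the input sequence."""
    queue = [(latency(p), slot, p) for slot, p in enumerate(pixels)]
    heapq.heapify(queue)
    results = []
    while queue:
        _, slot, p = heapq.heappop(queue)
        results.append((slot, p % 2))  # stand-in classification: parity
    return results

out = classify_out_of_order([7, 2, 9, 4], latency=lambda p: p)
print([slot for slot, _ in out])  # [1, 3, 0, 2] — differs from input order
```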
  • Patent number: 9378590
    Abstract: An augmented reality submission includes a hologram to virtually augment a world space object and a compensation offer for presenting the hologram to a viewer of the world space object. The augmented reality submission is selected as a winning submission if the submission satisfies a selection criterion.
    Type: Grant
    Filed: April 23, 2013
    Date of Patent: June 28, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kyungsuk David Lee, Alexandru Balan, Mark J. Finocchio
  • Publication number: 20160132786
    Abstract: Various embodiments relating to partitioning a data set for training machine-learning classifiers based on an output of a globally trained machine-learning classifier are disclosed. In one embodiment, a first machine-learning classifier may be trained on a set of training data to produce a corresponding set of output data. The set of training data may be partitioned into a plurality of subsets based on the set of output data. Each subset may correspond to a different class. A second machine-learning classifier may be trained on the set of training data using a plurality of classes corresponding to the plurality of subsets to produce, for each data object of the set of training data, a probability distribution having for each class a probability that the data object is a member of the class.
    Type: Application
    Filed: November 12, 2014
    Publication date: May 12, 2016
    Inventors: Alexandru Balan, Bradford Jason Snow, Christopher Douglas Edmonds, Henry Nelson Jerez, Kyungsuk David Lee, Mark J. Finocchio, Miguel Susffalich, Cem Keskin
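The two-stage scheme above — partition the training set using a first classifier's output, then train a second classifier whose classes are those partitions — can be sketched in a few lines. The distance-from-origin scoring rule and the nearest-centroid second stage below are stand-ins chosen for illustration; the patent leaves the classifier families open.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))  # unlabeled training data

# Stage 1: a globally trained classifier. A stand-in scoring rule
# (distance from the origin) is used here in place of a real model.
scores = np.linalg.norm(X, axis=1)

# Partition the training set into subsets based on the stage-1 output;
# each subset corresponds to a different class for stage 2.
labels = np.digitize(scores, bins=[0.5, 1.5])  # subsets 0, 1, 2

# Stage 2: train on the derived classes. A nearest-centroid model with
# a softmax over negative distances yields, for each data object, a
# probability that the object is a member of each class.
centroids = np.array([X[labels == k].mean(axis=0) for k in range(3)])

def predict_proba(x):
    d = np.linalg.norm(centroids - np.asarray(x, float), axis=1)
    e = np.exp(-d)
    return e / e.sum()

p = predict_proba([0.0, 0.0])
print(p)  # a 3-class probability distribution over the derived classes
```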
  • Publication number: 20150379376
    Abstract: Embodiments are disclosed that relate to processing image pixels. For example, one disclosed embodiment provides a system for classifying pixels comprising retrieval logic; a pixel storage allocation including a plurality of pixel slots, each pixel slot being associated individually with a pixel, where the retrieval logic is configured to cause the pixels to be allocated into the pixel slots in an input sequence; pipelined processing logic configured to output, for each of the pixels, classification information associated with the pixel; and scheduling logic configured to control dispatches from the pixel slots to the pipelined processing logic, where the scheduling logic and pipelined processing logic are configured to act in concert to generate the classification information for the pixels in an output sequence that differs from and is independent of the input sequence.
    Type: Application
    Filed: June 27, 2014
    Publication date: December 31, 2015
    Inventors: Adam James Muff, John Allen Tardif, Susan Carrie, Mark J. Finocchio, Kyungsuk David Lee, Christopher Douglas Edmonds, Randy Crane
  • Patent number: 9215478
    Abstract: A media feed interface may be provided that may be used to extract a media frame from a media feed. The media feed interface may access a capture device, a file, and/or a network resource. Upon accessing the capture device, file, and/or network resource, the media feed interface may populate buffers with data and then may create a media feed from the buffers. Upon request, the media feed interface may isolate a media frame within the media feed. For example, the media feed interface may analyze media frames in the media feed to determine whether a media frame includes information associated with, for example, the request. If the media frame includes the requested information, the media feed interface may isolate the media frame associated with the information and may provide access to the isolated media frame.
    Type: Grant
    Filed: November 27, 2013
    Date of Patent: December 15, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mark J. Finocchio, Jeffrey Margolis
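The frame-isolation step above reduces to scanning buffered frames for the first one whose metadata satisfies the request. A minimal stand-in (the frame dictionaries and the `has_skeleton` field are assumed for illustration):

```python
def isolate_frame(media_feed, wanted):
    """Scan buffered media frames and return the first one whose
    metadata satisfies the caller's request, or None if none matches."""
    for frame in media_feed:
        if wanted(frame):
            return frame
    return None

feed = [{"t": 0, "has_skeleton": False},
        {"t": 1, "has_skeleton": True},
        {"t": 2, "has_skeleton": True}]
frame = isolate_frame(feed, lambda f: f["has_skeleton"])
print(frame["t"])  # 1
```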
  • Patent number: 9183676
    Abstract: Technology is described for displaying a collision between objects by an augmented reality display device system. A collision between a real object and a virtual object is identified based on three dimensional space position data of the objects. At least one effect on at least one physical property of the real object is determined based on physical properties of the real object, like a change in surface shape, and physical interaction characteristics of the collision. Simulation image data is generated and displayed simulating the effect on the real object by the augmented reality display. Virtual objects under control of different executing applications can also interact with one another in collisions.
    Type: Grant
    Filed: April 27, 2012
    Date of Patent: November 10, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel J. McCulloch, Stephen G. Latta, Brian J. Mount, Kevin A. Geisner, Roger Sebastian Kevin Sylvan, Arnulfo Zepeda Navratil, Jason Scott, Jonathan T. Steed, Ben J. Sugden, Britta Silke Hummel, Kyungsuk David Lee, Mark J. Finocchio, Alex Aben-Athar Kipman, Jeffrey N. Margolis
  • Patent number: 9182814
    Abstract: A depth image of a scene may be received, observed, or captured by a device. The depth image may include a human target that may have, for example, a portion thereof non-visible or occluded. For example, a user may be turned such that a body part may not be visible to the device, may have one or more body parts partially outside a field of view of the device, may have a body part or a portion of a body part behind another body part or object, or the like, such that the human target associated with the user may also have a portion of a body part or a body part non-visible or occluded in the depth image. A position or location of the non-visible or occluded portion or body part of the human target associated with the user may then be estimated.
    Type: Grant
    Filed: June 26, 2009
    Date of Patent: November 10, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Alex A. Kipman, Kathryn Stone Perez, Mark J. Finocchio, Ryan Michael Geiss, Kudo Tsunoda
  • Patent number: 9153035
    Abstract: Techniques for efficiently tracking points on a depth map using an optical flow are disclosed. In order to optimize the use of optical flow, isolated regions of the depth map may be tracked. The sampling regions may comprise a 3-dimensional box (width, height and depth). Each region may be “colored” as a function of depth information to generate a “zebra” pattern as a function of depth data for each sample. The disclosed techniques may provide for handling optical flow tracking when occlusion occurs by utilizing a weighting process for application of optical flow vs. velocity prediction to stabilize tracking.
    Type: Grant
    Filed: October 2, 2014
    Date of Patent: October 6, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventor: Mark J. Finocchio
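The occlusion handling above weights optical flow against velocity prediction. A minimal sketch of that blend, assuming a scalar visibility score as the weighting input (the function name and the score are illustrative, not from the patent):

```python
import numpy as np

def blend_track(flow_estimate, velocity_prediction, visibility):
    """Mix the optical-flow estimate with a constant-velocity
    prediction: trust the prediction more as the tracked region
    becomes occluded (visibility near 0) and the flow more when the
    region is fully visible (visibility near 1)."""
    w = float(np.clip(visibility, 0.0, 1.0))
    return (w * np.asarray(flow_estimate, dtype=float)
            + (1.0 - w) * np.asarray(velocity_prediction, dtype=float))

print(blend_track([1.0, 0.0], [0.5, 0.5], 1.0))  # fully visible: flow
print(blend_track([1.0, 0.0], [0.5, 0.5], 0.0))  # occluded: prediction
```

Falling back to the velocity prediction when flow becomes unreliable is what stabilizes the track through brief occlusions.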
  • Patent number: 9122053
    Abstract: Technology is described for providing realistic occlusion between a virtual object displayed by a head mounted, augmented reality display system and a real object visible to the user's eyes through the display. A spatial occlusion in a user field of view of the display is typically a three dimensional occlusion determined based on a three dimensional space mapping of real and virtual objects. An occlusion interface between a real object and a virtual object can be modeled at a level of detail determined based on criteria such as distance within the field of view, display size or position with respect to a point of gaze. Technology is also described for providing three dimensional audio occlusion based on an occlusion between a real object and a virtual object in the user environment.
    Type: Grant
    Filed: April 10, 2012
    Date of Patent: September 1, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin A. Geisner, Brian J. Mount, Stephen G. Latta, Daniel J. McCulloch, Kyungsuk David Lee, Ben J. Sugden, Jeffrey N. Margolis, Kathryn Stone Perez, Sheridan Martin Small, Mark J. Finocchio, Robert L. Crocco, Jr.
  • Publication number: 20150084967
    Abstract: Techniques for efficiently tracking points on a depth map using an optical flow are disclosed. In order to optimize the use of optical flow, isolated regions of the depth map may be tracked. The sampling regions may comprise a 3-dimensional box (width, height and depth). Each region may be “colored” as a function of depth information to generate a “zebra” pattern as a function of depth data for each sample. The disclosed techniques may provide for handling optical flow tracking when occlusion occurs by utilizing a weighting process for application of optical flow vs. velocity prediction to stabilize tracking.
    Type: Application
    Filed: October 2, 2014
    Publication date: March 26, 2015
    Inventor: Mark J. Finocchio
  • Patent number: 8988345
    Abstract: A system and related methods for adaptive event recognition are provided. In one example, a selected sensor of a head-mounted display device is operated at a first polling rate corresponding to a higher potential latency. Initial user-related information is received. Where the initial user-related information matches a pre-event, the selected sensor is operated at a second polling rate faster than the first polling rate and corresponding to a lower potential latency. Subsequent user-related information is received. Where the subsequent user-related information matches a selected target event, feedback associated with the selected target event is provided to the user via the head-mounted display device.
    Type: Grant
    Filed: June 25, 2013
    Date of Patent: March 24, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Nathan Ackerman, Mark J. Finocchio, Andrew Bert Hodge
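The adaptive-event-recognition flow above is a small state machine over polling rates. A sketch, assuming dictionary-valued sensor samples; the event names (`hand_raised`, `pinch`) and the feedback token are hypothetical placeholders, not from the patent:

```python
def step(sample, hz, slow_hz=10, fast_hz=120):
    """Poll at the slow rate (higher potential latency) until a
    pre-event is seen, then at the fast rate (lower latency) until the
    target event fires, at which point feedback is issued and the
    sensor re-arms at the slow rate."""
    if hz == slow_hz and sample.get("hand_raised"):
        return fast_hz, None                # pre-event: speed up polling
    if hz == fast_hz and sample.get("pinch"):
        return slow_hz, "haptic_feedback"   # target event: feedback, re-arm
    return hz, None                         # no transition

hz, fb = step({"hand_raised": True}, 10)
print(hz, fb)  # 120 None
hz, fb = step({"pinch": True}, hz)
print(hz, fb)  # 10 haptic_feedback
```

Polling slowly by default and fast only inside the pre-event window is what lets the device keep latency low for the target event without paying the fast rate's power cost continuously.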
  • Publication number: 20150040040
    Abstract: Two-handed interactions with a natural user interface are disclosed. For example, one embodiment provides a method comprising detecting via image data received by the computing device a context-setting input performed by a first hand of a user, and sending to a display a user interface positioned based on a virtual interaction coordinate system, the virtual coordinate system being positioned based upon a position of the first hand of the user. The method further includes detecting via image data received by the computing device an action input performed by a second hand of the user, the action input performed while the first hand of the user is performing the context-setting input, and sending to the display a response based on the context-setting input and an interaction between the action input and the virtual interaction coordinate system.
    Type: Application
    Filed: August 5, 2013
    Publication date: February 5, 2015
    Inventors: Alexandru Balan, Mark J. Finocchio, Kyungsuk David Lee
  • Publication number: 20140375545
    Abstract: A system and related methods for adaptive event recognition are provided. In one example, a selected sensor of a head-mounted display device is operated at a first polling rate corresponding to a higher potential latency. Initial user-related information is received. Where the initial user-related information matches a pre-event, the selected sensor is operated at a second polling rate faster than the first polling rate and corresponding to a lower potential latency. Subsequent user-related information is received. Where the subsequent user-related information matches a selected target event, feedback associated with the selected target event is provided to the user via the head-mounted display device.
    Type: Application
    Filed: June 25, 2013
    Publication date: December 25, 2014
    Inventors: Nathan Ackerman, Mark J. Finocchio, Andrew Bert Hodge
  • Patent number: 8878906
    Abstract: Technology is described for determining and using invariant features for computer vision. A local orientation may be determined for each depth pixel in a subset of the depth pixels in a depth map. The local orientation may be an in-plane orientation, an out-of-plane orientation, or both. A local coordinate system is determined for each of the depth pixels in the subset based on the local orientation of the corresponding depth pixel. A feature region is defined relative to the local coordinate system for each of the depth pixels in the subset. The feature region for each of the depth pixels in the subset is transformed from the local coordinate system to an image coordinate system of the depth map. The transformed feature regions are used to process the depth map.
    Type: Grant
    Filed: November 28, 2012
    Date of Patent: November 4, 2014
    Assignee: Microsoft Corporation
    Inventors: Jamie D. J. Shotton, Mark J. Finocchio, Richard E. Moore, Alexandru O. Balan, Kyungsuk David Lee
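The orientation-then-transform pipeline above can be illustrated for the in-plane case. The central-difference gradient and the rotation below are simplifying assumptions for a minimal sketch; the patent covers out-of-plane orientation and richer feature regions as well.

```python
import numpy as np

def local_in_plane_angle(depth, y, x):
    """Estimate a per-pixel in-plane orientation from central depth
    differences, a simplification of the local orientation step."""
    gy = depth[y + 1, x] - depth[y - 1, x]
    gx = depth[y, x + 1] - depth[y, x - 1]
    return np.arctan2(gy, gx)

def to_image_coords(offset, angle, y, x):
    """Rotate a feature-region offset from the pixel's local coordinate
    system into image coordinates, making the sampled feature invariant
    to in-plane rotation of the underlying surface."""
    c, s = np.cos(angle), np.sin(angle)
    dy, dx = offset
    return (y + c * dy + s * dx, x - s * dy + c * dx)

depth = np.fromfunction(lambda y, x: 1.0 * x, (5, 5))  # depth ramp along x
theta = local_in_plane_angle(depth, 2, 2)
print(theta)  # 0.0 — the gradient points along +x
print(to_image_coords((1.0, 0.0), theta, 2, 2))  # (3.0, 2.0)
```

If the whole scene rotates in the image plane, the per-pixel angle rotates with it, so each feature region samples the same surface neighborhood as before.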
  • Publication number: 20140313225
    Abstract: An augmented reality submission includes a hologram to virtually augment a world space object and a compensation offer for presenting the hologram to a viewer of the world space object. The augmented reality submission is selected as a winning submission if the submission satisfies a selection criterion.
    Type: Application
    Filed: April 23, 2013
    Publication date: October 23, 2014
    Inventors: Kyungsuk David Lee, Alexandru Balan, Mark J. Finocchio