Patents by Inventor Zsolt Mathe

Zsolt Mathe has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230013680
    Abstract: A neural network system includes an eyewear device. The eyewear device has a movement tracker, such as an accelerometer, gyroscope, or an inertial measurement unit for measuring acceleration and rotation. The neural network system tracks, via the movement tracker, movement of the eyewear device from at least one finger contact inputted from a user on an input surface. The neural network system identifies a finger gesture by detecting at least one detected touch event based on variation of the tracked movement of the eyewear device over a time period. The neural network system adjusts the image presented on the image display of the eyewear device based on the identified finger gesture. The neural network system can also detect whether the user is wearing the eyewear device and identify an activity of the user wearing the eyewear device based on the variation of the tracked movement over the time period.
    Type: Application
    Filed: September 29, 2022
    Publication date: January 19, 2023
    Inventors: John James Robertson, Zsolt Mathe
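This family of filings describes identifying finger taps on the eyewear frame from variation in inertial data over a time period. The patents do not publish a concrete detection rule; the following is a minimal sketch of one plausible approach — flagging touch events where the short-term variance of the acceleration magnitude spikes — with the function name, window size, and threshold all illustrative assumptions rather than values from the filings:

```python
import numpy as np

def detect_tap_events(accel, window=10, threshold=2.0):
    """Sketch: flag candidate touch events wherever the short-term
    variance of the acceleration magnitude exceeds a threshold.
    `accel` is an (N, 3) array of accelerometer samples."""
    mag = np.linalg.norm(accel, axis=1)   # per-sample acceleration magnitude
    events = []
    for i in range(len(mag) - window):
        if np.var(mag[i:i + window]) > threshold:
            events.append(i)              # window starting at i contains a spike
    return events
```

A real system would feed features like these into the neural network the abstract mentions rather than thresholding directly.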
  • Patent number: 11500536
    Abstract: A neural network system includes an eyewear device. The eyewear device has a movement tracker, such as an accelerometer, gyroscope, or an inertial measurement unit for measuring acceleration and rotation. The neural network system tracks, via the movement tracker, movement of the eyewear device from at least one finger contact inputted from a user on an input surface. The neural network system identifies a finger gesture by detecting at least one detected touch event based on variation of the tracked movement of the eyewear device over a time period. The neural network system adjusts the image presented on the image display of the eyewear device based on the identified finger gesture. The neural network system can also detect whether the user is wearing the eyewear device and identify an activity of the user wearing the eyewear device based on the variation of the tracked movement over the time period.
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: November 15, 2022
    Assignee: Snap Inc.
    Inventors: John James Robertson, Zsolt Mathe
  • Publication number: 20210255763
    Abstract: A neural network system includes an eyewear device. The eyewear device has a movement tracker, such as an accelerometer, gyroscope, or an inertial measurement unit for measuring acceleration and rotation. The neural network system tracks, via the movement tracker, movement of the eyewear device from at least one finger contact inputted from a user on an input surface. The neural network system identifies a finger gesture by detecting at least one detected touch event based on variation of the tracked movement of the eyewear device over a time period. The neural network system adjusts the image presented on the image display of the eyewear device based on the identified finger gesture. The neural network system can also detect whether the user is wearing the eyewear device and identify an activity of the user wearing the eyewear device based on the variation of the tracked movement over the time period.
    Type: Application
    Filed: April 30, 2021
    Publication date: August 19, 2021
    Inventors: John James Robertson, Zsolt Mathe
  • Patent number: 10996846
    Abstract: A neural network system includes an eyewear device. The eyewear device has a movement tracker, such as an accelerometer, gyroscope, or an inertial measurement unit for measuring acceleration and rotation. The neural network system tracks, via the movement tracker, movement of the eyewear device from at least one finger contact inputted from a user on an input surface. The neural network system identifies a finger gesture by detecting at least one detected touch event based on variation of the tracked movement of the eyewear device over a time period. The neural network system adjusts the image presented on the image display of the eyewear device based on the identified finger gesture. The neural network system can also detect whether the user is wearing the eyewear device and identify an activity of the user wearing the eyewear device based on the variation of the tracked movement over the time period.
    Type: Grant
    Filed: September 13, 2019
    Date of Patent: May 4, 2021
    Assignee: Snap Inc.
    Inventors: John James Robertson, Zsolt Mathe
  • Publication number: 20200104039
    Abstract: A neural network system includes an eyewear device. The eyewear device has a movement tracker, such as an accelerometer, gyroscope, or an inertial measurement unit for measuring acceleration and rotation. The neural network system tracks, via the movement tracker, movement of the eyewear device from at least one finger contact inputted from a user on an input surface. The neural network system identifies a finger gesture by detecting at least one detected touch event based on variation of the tracked movement of the eyewear device over a time period. The neural network system adjusts the image presented on the image display of the eyewear device based on the identified finger gesture. The neural network system can also detect whether the user is wearing the eyewear device and identify an activity of the user wearing the eyewear device based on the variation of the tracked movement over the time period.
    Type: Application
    Filed: September 13, 2019
    Publication date: April 2, 2020
    Inventors: John James Robertson, Zsolt Mathe
  • Patent number: 10129523
    Abstract: Examples are disclosed that relate to depth-aware late-stage reprojection. One example provides a computing system configured to receive and store image data, receive a depth map for the image data, process the depth map to obtain a blurred depth map, and, based upon motion data, determine a translation to be made to the image data. Further, for each pixel, the computing system is configured to translate an original ray extending from an original virtual camera location to an original frame buffer location to a reprojected ray extending from a translated camera location to a reprojected frame buffer location, determine a location at which the reprojected ray intersects the blurred depth map, and sample a color of a pixel for display based upon a color corresponding to the location at which the reprojected ray intersects the blurred depth map.
    Type: Grant
    Filed: June 22, 2016
    Date of Patent: November 13, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ashraf Ayman Michail, Georg Klein, Andrew Martin Pearson, Zsolt Mathe, Mark S. Grossman, Ning Xu
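The two reprojection entries describe blurring the depth map and re-intersecting per-pixel rays from a translated camera. The sketch below is a simplified stand-in, not the patented method: it box-blurs the depth map and then applies a per-pixel horizontal parallax shift (nearer pixels shift more) instead of the full ray translation and intersection the abstract describes; the focal length and blur width are assumed parameters.

```python
import numpy as np

def reproject(color, depth, t_x, focal=500.0, blur=3):
    """Simplified depth-aware reprojection sketch: blur the depth
    map, then shift each pixel horizontally by the parallax its
    (blurred) depth implies for a small camera translation t_x."""
    h, w = depth.shape
    # box-blur the depth map so depth discontinuities don't tear
    pad = blur // 2
    padded = np.pad(depth, pad, mode='edge')
    blurred = np.zeros_like(depth)
    for dy in range(blur):
        for dx in range(blur):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= blur * blur
    out = np.zeros_like(color)
    xs = np.arange(w)
    for y in range(h):
        # parallax in pixels: shift is proportional to focal / depth
        shift = (t_x * focal / blurred[y]).astype(int)
        out[y] = color[y, np.clip(xs + shift, 0, w - 1)]
    return out
```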
  • Publication number: 20170374341
    Abstract: Examples are disclosed that relate to depth-aware late-stage reprojection. One example provides a computing system configured to receive and store image data, receive a depth map for the image data, processing the depth map to obtain a blurred depth map, and based upon motion data, determine a translation to be made to the image data. Further, for each pixel, the computing system is configured to translate an original ray extending from an original virtual camera location to an original frame buffer location to a reprojected ray extending from a translated camera location to a reprojected frame buffer location, determine a location at which the reprojected ray intersects the blurred depth map, and sample a color of a pixel for display based upon a color corresponding to the location at which the reprojected ray intersects the blurred depth map.
    Type: Application
    Filed: June 22, 2016
    Publication date: December 28, 2017
    Inventors: Ashraf Ayman Michail, Georg Klein, Andrew Martin Pearson, Zsolt Mathe, Mark S. Grossman, Ning Xu
  • Patent number: 9607213
    Abstract: A depth image of a scene may be received, observed, or captured by a device. The depth image may then be analyzed to determine whether the depth image includes a human target. For example, the depth image may include one or more targets including a human target and non-human targets. Each of the targets may be flood filled and compared to a pattern to determine whether the target may be a human target. If one or more of the targets in the depth image includes a human target, the human target may be scanned. A skeletal model of the human target may then be generated based on the scan.
    Type: Grant
    Filed: March 16, 2015
    Date of Patent: March 28, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Zsolt Mathe, Charles Claudius Marais, Ryan Michael Geiss
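This family describes flood filling each target in the depth image before comparing it to a pattern. A minimal sketch of that segmentation step — grouping connected pixels of similar depth into candidate targets, with the tolerance an assumed parameter — might look like this; the pattern match against a human silhouette would follow as a separate step:

```python
from collections import deque
import numpy as np

def flood_fill_targets(depth, tol=50):
    """Label connected regions of similar, nonzero depth as
    candidate targets (a sketch of the flood-fill step)."""
    h, w = depth.shape
    labels = np.zeros((h, w), dtype=int)
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] or depth[sy, sx] == 0:
                continue                       # already labeled, or no depth
            next_label += 1
            q = deque([(sy, sx)])
            labels[sy, sx] = next_label
            while q:
                y, x = q.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and not labels[ny, nx]
                            and depth[ny, nx] != 0
                            and abs(depth[ny, nx] - depth[y, x]) < tol):
                        labels[ny, nx] = next_label
                        q.append((ny, nx))
    return labels, next_label
```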
  • Patent number: 9519970
    Abstract: A depth image of a scene may be received, observed, or captured by a device. A human target in the depth image may then be scanned for one or more body parts such as shoulders, hips, knees, or the like. A tilt angle may then be calculated based on the body parts. For example, a first portion of pixels associated with an upper body part such as the shoulders and a second portion of pixels associated with a lower body part such as a midpoint between the hips and knees may be selected. The tilt angle may then be calculated using the first and second portions of pixels.
    Type: Grant
    Filed: October 9, 2015
    Date of Patent: December 13, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Zsolt Mathe, Charles Claudius Marais
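The tilt-angle family calculates an angle from an upper-body pixel portion (shoulders) and a lower-body pixel portion (midpoint between hips and knees). The geometry reduces to the angle between the line joining the two portions' centroids and the vertical, which can be sketched directly (image coordinates assumed, y increasing downward):

```python
import math

def tilt_angle(shoulder_pixels, lower_pixels):
    """Tilt from vertical, in degrees, using centroids of the
    upper-body and lower-body pixel portions (a sketch of the
    calculation the abstract outlines)."""
    def centroid(pts):
        xs, ys = zip(*pts)
        return sum(xs) / len(xs), sum(ys) / len(ys)
    (ux, uy), (lx, ly) = centroid(shoulder_pixels), centroid(lower_pixels)
    # horizontal offset over vertical extent of the body axis
    return math.degrees(math.atan2(ux - lx, ly - uy))
```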
  • Publication number: 20160035095
    Abstract: A depth image of a scene may be received, observed, or captured by a device. A human target in the depth image may then be scanned for one or more body parts such as shoulders, hips, knees, or the like. A tilt angle may then be calculated based on the body parts. For example, a first portion of pixels associated with an upper body part such as the shoulders and a second portion of pixels associated with a lower body part such as a midpoint between the hips and knees may be selected. The tilt angle may then be calculated using the first and second portions of pixels.
    Type: Application
    Filed: October 9, 2015
    Publication date: February 4, 2016
    Inventors: Zsolt Mathe, Charles Claudius Marais
  • Patent number: 9191570
    Abstract: A depth image of a scene may be received, observed, or captured by a device. A human target in the depth image may then be scanned for one or more body parts such as shoulders, hips, knees, or the like. A tilt angle may then be calculated based on the body parts. For example, a first portion of pixels associated with an upper body part such as the shoulders and a second portion of pixels associated with a lower body part such as a midpoint between the hips and knees may be selected. The tilt angle may then be calculated using the first and second portions of pixels.
    Type: Grant
    Filed: August 5, 2013
    Date of Patent: November 17, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Zsolt Mathe, Charles Claudius Marais
  • Patent number: 9144744
    Abstract: Example apparatus and methods concern an improved immersive experience for a video gamer that is provided by controlling a game based on the three dimensional location and orientation of a control and display device held by or otherwise associated with the gamer. The location is determined from data comprising a three dimensional position and an orientation of a portion of a player in a three dimensional space associated with a computerized game. The facing and rotation of the device is determined as a function of both the location of the device and the orientation of the device. The orientation may be determined by data from motion sensors in or on the device. Example apparatus and methods control the computerized game based, at least in part, on the position of the device, the facing of the device, and the rotation of the device.
    Type: Grant
    Filed: June 10, 2013
    Date of Patent: September 29, 2015
    Inventors: Eric Langlois, Ed Pinto, Marcelo Lopez Ruiz, Todd Manion, Zsolt Mathe
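This entry derives the device's facing from its sensor-reported orientation. As a small illustrative sketch (the yaw/pitch parameterization is an assumption; the patent does not specify a representation), a unit facing vector can be computed from two orientation angles:

```python
import math

def device_facing(yaw, pitch):
    """Unit facing vector from yaw/pitch in radians (y up,
    z forward at zero yaw) -- a sketch of deriving 'facing'
    from motion-sensor orientation."""
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))
```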
  • Publication number: 20150262001
    Abstract: A depth image of a scene may be received, observed, or captured by a device. The depth image may then be analyzed to determine whether the depth image includes a human target. For example, the depth image may include one or more targets including a human target and non-human targets. Each of the targets may be flood filled and compared to a pattern to determine whether the target may be a human target. If one or more of the targets in the depth image includes a human target, the human target may be scanned. A skeletal model of the human target may then be generated based on the scan.
    Type: Application
    Filed: March 16, 2015
    Publication date: September 17, 2015
    Inventors: Zsolt Mathe, Charles Claudius Marais, Ryan Michael Geiss
  • Patent number: 9007417
    Abstract: A depth image of a scene may be received, observed, or captured by a device. The depth image may then be analyzed to determine whether the depth image includes a human target. For example, the depth image may include one or more targets including a human target and non-human targets. Each of the targets may be flood filled and compared to a pattern to determine whether the target may be a human target. If one or more of the targets in the depth image includes a human target, the human target may be scanned. A skeletal model of the human target may then be generated based on the scan.
    Type: Grant
    Filed: July 18, 2012
    Date of Patent: April 14, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Zsolt Mathe, Charles Claudius Marais, Ryan Michael Geiss
  • Patent number: 8988432
    Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. The image may then be processed. For example, the image may be downsampled, a shadow, noise, and/or a missing portion in the image may be determined, pixels in the image that may be outside a range defined by a capture device associated with the image may be determined, and a portion of the image associated with a floor may be detected. Additionally, a target in the image may be determined and scanned. A refined image may then be rendered based on the processed image. The refined image may then be processed to, for example, track a user.
    Type: Grant
    Filed: November 5, 2009
    Date of Patent: March 24, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Zsolt Mathe, Charles Claudius Marais, Craig Peeper, Joe Bertolami, Ryan Michael Geiss
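This entry lists several preprocessing steps applied before rendering a refined image. Two of them — downsampling and filling missing (zero-valued) depth pixels — can be sketched as follows; the nearest-valid-on-row fill strategy is an illustrative assumption, not the patent's disclosed method:

```python
import numpy as np

def refine_depth(depth, factor=2):
    """Sketch of two preprocessing steps: downsample the depth
    image, then fill zero-valued (missing/shadow) pixels with the
    nearest valid value on the same row."""
    small = depth[::factor, ::factor].copy()
    for row in small:
        valid = np.flatnonzero(row)                 # indices with real depth
        if valid.size:
            for i in np.flatnonzero(row == 0):
                row[i] = row[valid[np.argmin(np.abs(valid - i))]]
    return small
```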
  • Publication number: 20140364227
    Abstract: Example apparatus and methods concern an improved immersive experience for a video gamer that is provided by controlling a game based on the three dimensional location and orientation of a control and display device held by or otherwise associated with the gamer. The location is determined from data comprising a three dimensional position and an orientation of a portion of a player in a three dimensional space associated with a computerized game. The facing and rotation of the device is determined as a function of both the location of the device and the orientation of the device. The orientation may be determined by data from motion sensors in or on the device. Example apparatus and methods control the computerized game based, at least in part, on the position of the device, the facing of the device, and the rotation of the device.
    Type: Application
    Filed: June 10, 2013
    Publication date: December 11, 2014
    Inventors: Eric Langlois, Ed Pinto, Marcelo Lopez Ruiz, Todd Manion, Zsolt Mathe
  • Patent number: 8897493
    Abstract: A depth image of a scene may be received, observed, or captured by a device. The depth image may then be analyzed to determine whether the depth image includes a human target. For example, the depth image may include one or more targets including a human target and non-human targets. Each of the targets may be flood filled and compared to a pattern to determine whether the target may be a human target. If one or more of the targets in the depth image includes a human target, the human target may be scanned. A skeletal model of the human target may then be generated based on the scan.
    Type: Grant
    Filed: January 4, 2013
    Date of Patent: November 25, 2014
    Assignee: Microsoft Corporation
    Inventors: Zsolt Mathe, Charles Claudius Marais, Ryan Michael Geiss
  • Patent number: 8896721
    Abstract: A depth image of a scene may be observed or captured by a capture device. The depth image may include a human target and an environment. One or more pixels of the depth image may be analyzed to determine whether the pixels in the depth image are associated with the environment of the depth image. The one or more pixels associated with the environment may then be discarded to isolate the human target and the depth image with the isolated human target may be processed.
    Type: Grant
    Filed: January 11, 2013
    Date of Patent: November 25, 2014
    Assignee: Microsoft Corporation
    Inventors: Zsolt Mathe, Charles Claudius Marais
  • Patent number: 8872823
    Abstract: A method and system are disclosed for automatic instrumentation that modifies a video game's shaders at run-time to collect detailed statistics about texture fetches such as MIP usage. The tracking may be transparent to the game application and therefore not require modifications to the application. In an embodiment, the method may be implemented in a software development kit used to record and provide texture usage data and optionally generate a report.
    Type: Grant
    Filed: October 9, 2009
    Date of Patent: October 28, 2014
    Assignee: Microsoft Corporation
    Inventors: Jason Matthew Gould, Michael Edward Pietraszak, Zsolt Mathe, J. Andrew Goossen, Casey Leon Meekhof
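This entry instruments shaders at run time to collect texture-fetch statistics such as MIP usage. The statistic itself — which MIP level a fetch selects, from the screen-space derivatives of the texture coordinates — follows the standard LOD formula and can be sketched outside any shader; this is the general textbook calculation, not code from the patent:

```python
import math

def mip_level(dudx, dvdx, dudy, dvdy, tex_w, tex_h):
    """MIP level a texture fetch would select, from screen-space
    texture-coordinate derivatives (the kind of statistic an
    instrumented shader could record)."""
    # rho: the larger rate of change of texel position per pixel
    rho = max(math.hypot(dudx * tex_w, dvdx * tex_h),
              math.hypot(dudy * tex_w, dvdy * tex_h))
    return max(0.0, math.log2(rho)) if rho > 0 else 0.0
```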
  • Patent number: 8866889
    Abstract: A system and method are disclosed for calibrating a depth camera in a natural user interface. The system in general obtains an objective measurement of true distance between a capture device and one or more objects in a scene. The system then compares the true depth measurement to the depth measurement provided by the depth camera at one or more points and determines an error function describing an error in the depth camera measurement. The depth camera may then be recalibrated to correct for the error. The objective measurement of distance to one or more objects in a scene may be accomplished by a variety of systems and methods.
    Type: Grant
    Filed: November 3, 2010
    Date of Patent: October 21, 2014
    Assignee: Microsoft Corporation
    Inventors: Prafulla J. Masalkar, Szymon P. Stachniak, Tommer Leyvand, Zhengyou Zhang, Leonardo Del Castillo, Zsolt Mathe
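The calibration entry compares true distances against the depth camera's measurements and determines an error function used for recalibration. As a minimal sketch — assuming, purely for illustration, a linear error model, which the patent does not specify — a least-squares fit of true depth against measured depth yields a correction function:

```python
import numpy as np

def fit_depth_error(measured, true_depth):
    """Fit a linear error model true ~= a*measured + b from paired
    samples, returning a correction function for later readings
    (a sketch of the calibration step's output)."""
    a, b = np.polyfit(measured, true_depth, 1)   # degree-1 least squares
    return lambda d: a * d + b
```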