Patents by Inventor Zsolt Mathe
Zsolt Mathe has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
NEURAL NETWORK SYSTEM FOR GESTURE, WEAR, ACTIVITY, OR CARRY DETECTION ON A WEARABLE OR MOBILE DEVICE
Publication number: 20230013680
Abstract: A neural network system includes an eyewear device. The eyewear device has a movement tracker, such as an accelerometer, gyroscope, or inertial measurement unit, for measuring acceleration and rotation. The neural network system tracks, via the movement tracker, movement of the eyewear device from at least one finger contact inputted by a user on an input surface. The neural network system identifies a finger gesture by detecting at least one touch event based on variation of the tracked movement of the eyewear device over a time period. The neural network system adjusts the image presented on the image display of the eyewear device based on the identified finger gesture. The neural network system can also detect whether the user is wearing the eyewear device and identify an activity of the user wearing the eyewear device based on the variation of the tracked movement over the time period.
Type: Application
Filed: September 29, 2022
Publication date: January 19, 2023
Inventors: John James Robertson, Zsolt Mathe
Neural network system for gesture, wear, activity, or carry detection on a wearable or mobile device
Patent number: 11500536
Abstract: A neural network system includes an eyewear device. The eyewear device has a movement tracker, such as an accelerometer, gyroscope, or inertial measurement unit, for measuring acceleration and rotation. The neural network system tracks, via the movement tracker, movement of the eyewear device from at least one finger contact inputted by a user on an input surface. The neural network system identifies a finger gesture by detecting at least one touch event based on variation of the tracked movement of the eyewear device over a time period. The neural network system adjusts the image presented on the image display of the eyewear device based on the identified finger gesture. The neural network system can also detect whether the user is wearing the eyewear device and identify an activity of the user wearing the eyewear device based on the variation of the tracked movement over the time period.
Type: Grant
Filed: April 30, 2021
Date of Patent: November 15, 2022
Assignee: Snap Inc.
Inventors: John James Robertson, Zsolt Mathe
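The core idea in the abstract above — flagging a touch event when the variation of tracked movement over a time window exceeds normal jitter — can be illustrated with a minimal sketch. This is not Snap's patented implementation; the function name, window size, and threshold are invented for illustration.

```python
# Hypothetical sketch: detect touch events from a stream of IMU samples by
# thresholding the variance of movement over a sliding time window.

def detect_touch_events(samples, window=4, threshold=0.5):
    """Return start indices of windows whose sample variance exceeds threshold."""
    events = []
    for i in range(len(samples) - window + 1):
        w = samples[i:i + window]
        mean = sum(w) / window
        var = sum((s - mean) ** 2 for s in w) / window
        if var > threshold:
            events.append(i)
    return events

# A still device produces near-zero variance; a finger tap spikes it.
still = [0.0, 0.01, -0.01, 0.0, 0.01, 0.0]
tap = [0.0, 0.0, 2.5, -2.0, 0.1, 0.0]
```

In the patented system a trained neural network, rather than a fixed threshold, classifies the movement variation into gesture, wear, activity, or carry states.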
NEURAL NETWORK SYSTEM FOR GESTURE, WEAR, ACTIVITY, OR CARRY DETECTION ON A WEARABLE OR MOBILE DEVICE
Publication number: 20210255763
Abstract: A neural network system includes an eyewear device. The eyewear device has a movement tracker, such as an accelerometer, gyroscope, or inertial measurement unit, for measuring acceleration and rotation. The neural network system tracks, via the movement tracker, movement of the eyewear device from at least one finger contact inputted by a user on an input surface. The neural network system identifies a finger gesture by detecting at least one touch event based on variation of the tracked movement of the eyewear device over a time period. The neural network system adjusts the image presented on the image display of the eyewear device based on the identified finger gesture. The neural network system can also detect whether the user is wearing the eyewear device and identify an activity of the user wearing the eyewear device based on the variation of the tracked movement over the time period.
Type: Application
Filed: April 30, 2021
Publication date: August 19, 2021
Inventors: John James Robertson, Zsolt Mathe
Neural network system for gesture, wear, activity, or carry detection on a wearable or mobile device
Patent number: 10996846
Abstract: A neural network system includes an eyewear device. The eyewear device has a movement tracker, such as an accelerometer, gyroscope, or inertial measurement unit, for measuring acceleration and rotation. The neural network system tracks, via the movement tracker, movement of the eyewear device from at least one finger contact inputted by a user on an input surface. The neural network system identifies a finger gesture by detecting at least one touch event based on variation of the tracked movement of the eyewear device over a time period. The neural network system adjusts the image presented on the image display of the eyewear device based on the identified finger gesture. The neural network system can also detect whether the user is wearing the eyewear device and identify an activity of the user wearing the eyewear device based on the variation of the tracked movement over the time period.
Type: Grant
Filed: September 13, 2019
Date of Patent: May 4, 2021
Assignee: Snap Inc.
Inventors: John James Robertson, Zsolt Mathe
NEURAL NETWORK SYSTEM FOR GESTURE, WEAR, ACTIVITY, OR CARRY DETECTION ON A WEARABLE OR MOBILE DEVICE
Publication number: 20200104039
Abstract: A neural network system includes an eyewear device. The eyewear device has a movement tracker, such as an accelerometer, gyroscope, or inertial measurement unit, for measuring acceleration and rotation. The neural network system tracks, via the movement tracker, movement of the eyewear device from at least one finger contact inputted by a user on an input surface. The neural network system identifies a finger gesture by detecting at least one touch event based on variation of the tracked movement of the eyewear device over a time period. The neural network system adjusts the image presented on the image display of the eyewear device based on the identified finger gesture. The neural network system can also detect whether the user is wearing the eyewear device and identify an activity of the user wearing the eyewear device based on the variation of the tracked movement over the time period.
Type: Application
Filed: September 13, 2019
Publication date: April 2, 2020
Inventors: John James Robertson, Zsolt Mathe
Patent number: 10129523
Abstract: Examples are disclosed that relate to depth-aware late-stage reprojection. One example provides a computing system configured to receive and store image data, receive a depth map for the image data, process the depth map to obtain a blurred depth map, and, based upon motion data, determine a translation to be made to the image data. Further, for each pixel, the computing system is configured to translate an original ray extending from an original virtual camera location to an original frame buffer location to a reprojected ray extending from a translated camera location to a reprojected frame buffer location, determine a location at which the reprojected ray intersects the blurred depth map, and sample a color of a pixel for display based upon a color corresponding to the location at which the reprojected ray intersects the blurred depth map.
Type: Grant
Filed: June 22, 2016
Date of Patent: November 13, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Ashraf Ayman Michail, Georg Klein, Andrew Martin Pearson, Zsolt Mathe, Mark S. Grossman, Ning Xu
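One step from the abstract above — processing the depth map into a blurred depth map so that reprojected rays do not snap across hard depth discontinuities — can be sketched as a simple box filter. This is an illustrative sketch only, not the patented pipeline; the function name and filter radius are assumptions.

```python
# Hypothetical sketch: box-blur a depth map (list of rows of depth values)
# before ray reprojection, softening depth discontinuities at object edges.

def box_blur(depth, radius=1):
    """Return a blurred copy of a 2-D depth map using a box filter."""
    h, w = len(depth), len(depth[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += depth[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out
```

A real implementation would run this (or a separable or GPU variant) over the full depth buffer before intersecting each reprojected ray with the blurred surface.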
Publication number: 20170374341
Abstract: Examples are disclosed that relate to depth-aware late-stage reprojection. One example provides a computing system configured to receive and store image data, receive a depth map for the image data, process the depth map to obtain a blurred depth map, and, based upon motion data, determine a translation to be made to the image data. Further, for each pixel, the computing system is configured to translate an original ray extending from an original virtual camera location to an original frame buffer location to a reprojected ray extending from a translated camera location to a reprojected frame buffer location, determine a location at which the reprojected ray intersects the blurred depth map, and sample a color of a pixel for display based upon a color corresponding to the location at which the reprojected ray intersects the blurred depth map.
Type: Application
Filed: June 22, 2016
Publication date: December 28, 2017
Inventors: Ashraf Ayman Michail, Georg Klein, Andrew Martin Pearson, Zsolt Mathe, Mark S. Grossman, Ning Xu
Patent number: 9607213
Abstract: A depth image of a scene may be received, observed, or captured by a device. The depth image may then be analyzed to determine whether the depth image includes a human target. For example, the depth image may include one or more targets including a human target and non-human targets. Each of the targets may be flood filled and compared to a pattern to determine whether the target may be a human target. If one or more of the targets in the depth image includes a human target, the human target may be scanned. A skeletal model of the human target may then be generated based on the scan.
Type: Grant
Filed: March 16, 2015
Date of Patent: March 28, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Zsolt Mathe, Charles Claudius Marais, Ryan Michael Geiss
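The flood-fill step described in the abstract above — growing a region of depth pixels from a seed so each candidate target can be isolated and compared to a pattern — can be sketched as follows. This is a minimal illustration, not the patented method; the function name and the depth tolerance are assumptions.

```python
# Hypothetical sketch: flood-fill a depth image from a seed pixel, collecting
# 4-connected neighbors whose depth is within a tolerance of the seed's depth,
# to isolate one candidate target for pattern matching.

def flood_fill(depth, seed, tol=0.1):
    """Return the list of (row, col) pixels in the seed's depth region."""
    h, w = len(depth), len(depth[0])
    sy, sx = seed
    target, visited = [], set()
    stack = [seed]
    while stack:
        y, x = stack.pop()
        if (y, x) in visited or not (0 <= y < h and 0 <= x < w):
            continue
        visited.add((y, x))
        if abs(depth[y][x] - depth[sy][sx]) > tol:
            continue  # depth discontinuity: pixel belongs to another target
        target.append((y, x))
        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return target
```

Each filled region would then be compared against a body-shaped pattern to decide whether it is a human target before scanning it for a skeletal model.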
Patent number: 9519970
Abstract: A depth image of a scene may be received, observed, or captured by a device. A human target in the depth image may then be scanned for one or more body parts such as shoulders, hips, knees, or the like. A tilt angle may then be calculated based on the body parts. For example, a first portion of pixels associated with an upper body part such as the shoulders and a second portion of pixels associated with a lower body part such as a midpoint between the hips and knees may be selected. The tilt angle may then be calculated using the first and second portions of pixels.
Type: Grant
Filed: October 9, 2015
Date of Patent: December 13, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Zsolt Mathe, Charles Claudius Marais
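The tilt calculation described above — selecting an upper-body and a lower-body set of pixels and measuring the angle of the line between them — can be sketched with centroids. A minimal illustration, not the patented method; the angle convention (degrees from vertical) is an assumption.

```python
import math

# Hypothetical sketch: estimate body tilt from the centroids of an upper-body
# pixel set (e.g. shoulders) and a lower-body pixel set (e.g. a midpoint
# between hips and knees). Pixels are (x, y) with y increasing upward.

def tilt_angle(upper_pixels, lower_pixels):
    """Angle in degrees between the centroid-to-centroid line and vertical."""
    ux = sum(x for x, _ in upper_pixels) / len(upper_pixels)
    uy = sum(y for _, y in upper_pixels) / len(upper_pixels)
    lx = sum(x for x, _ in lower_pixels) / len(lower_pixels)
    ly = sum(y for _, y in lower_pixels) / len(lower_pixels)
    return math.degrees(math.atan2(ux - lx, uy - ly))
```

An upright target yields 0 degrees; a target leaning sideways yields a proportionally larger angle, which downstream code can use to normalize the scan of body parts.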
Publication number: 20160035095
Abstract: A depth image of a scene may be received, observed, or captured by a device. A human target in the depth image may then be scanned for one or more body parts such as shoulders, hips, knees, or the like. A tilt angle may then be calculated based on the body parts. For example, a first portion of pixels associated with an upper body part such as the shoulders and a second portion of pixels associated with a lower body part such as a midpoint between the hips and knees may be selected. The tilt angle may then be calculated using the first and second portions of pixels.
Type: Application
Filed: October 9, 2015
Publication date: February 4, 2016
Inventors: Zsolt Mathe, Charles Claudius Marais
Patent number: 9191570
Abstract: A depth image of a scene may be received, observed, or captured by a device. A human target in the depth image may then be scanned for one or more body parts such as shoulders, hips, knees, or the like. A tilt angle may then be calculated based on the body parts. For example, a first portion of pixels associated with an upper body part such as the shoulders and a second portion of pixels associated with a lower body part such as a midpoint between the hips and knees may be selected. The tilt angle may then be calculated using the first and second portions of pixels.
Type: Grant
Filed: August 5, 2013
Date of Patent: November 17, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Zsolt Mathe, Charles Claudius Marais
Patent number: 9144744
Abstract: Example apparatus and methods provide an improved immersive experience for a video gamer by controlling a game based on the three-dimensional location and orientation of a control and display device held by or otherwise associated with the gamer. The location is determined from data comprising a three-dimensional position and an orientation of a portion of a player in a three-dimensional space associated with a computerized game. The facing and rotation of the device is determined as a function of both the location of the device and the orientation of the device. The orientation may be determined by data from motion sensors in or on the device. Example apparatus and methods control the computerized game based, at least in part, on the position of the device, the facing of the device, and the rotation of the device.
Type: Grant
Filed: June 10, 2013
Date of Patent: September 29, 2015
Inventors: Eric Langlois, Ed Pinto, Marcelo Lopez Ruiz, Todd Manion, Zsolt Mathe
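The position-plus-orientation control described above can be illustrated with a toy computation: given a device's 3-D position and a yaw reading from its motion sensors, derive the direction it is facing and a point ahead of it in game space. This is an invented sketch, not the patented apparatus; the flat XZ-plane yaw model is an assumption.

```python
import math

# Hypothetical sketch: combine a device's 3-D position with a sensor-derived
# yaw angle to compute its facing direction and a target point in game space.

def facing_vector(yaw_degrees):
    """Unit vector in the XZ plane the device points toward (yaw 0 = +Z)."""
    yaw = math.radians(yaw_degrees)
    return (math.sin(yaw), 0.0, math.cos(yaw))

def point_ahead(position, yaw_degrees, distance):
    """Point `distance` units in front of the device."""
    fx, fy, fz = facing_vector(yaw_degrees)
    x, y, z = position
    return (x + fx * distance, y + fy * distance, z + fz * distance)
```

A game loop could aim a camera or cursor at `point_ahead(...)` each frame, so moving or rotating the handheld device steers the game directly.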
Publication number: 20150262001
Abstract: A depth image of a scene may be received, observed, or captured by a device. The depth image may then be analyzed to determine whether the depth image includes a human target. For example, the depth image may include one or more targets including a human target and non-human targets. Each of the targets may be flood filled and compared to a pattern to determine whether the target may be a human target. If one or more of the targets in the depth image includes a human target, the human target may be scanned. A skeletal model of the human target may then be generated based on the scan.
Type: Application
Filed: March 16, 2015
Publication date: September 17, 2015
Inventors: Zsolt Mathe, Charles Claudius Marais, Ryan Michael Geiss
Patent number: 9007417
Abstract: A depth image of a scene may be received, observed, or captured by a device. The depth image may then be analyzed to determine whether the depth image includes a human target. For example, the depth image may include one or more targets including a human target and non-human targets. Each of the targets may be flood filled and compared to a pattern to determine whether the target may be a human target. If one or more of the targets in the depth image includes a human target, the human target may be scanned. A skeletal model of the human target may then be generated based on the scan.
Type: Grant
Filed: July 18, 2012
Date of Patent: April 14, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Zsolt Mathe, Charles Claudius Marais, Ryan Michael Geiss
Patent number: 8988432
Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. The image may then be processed. For example, the image may be downsampled; a shadow, noise, and/or a missing portion in the image may be determined; pixels in the image that may be outside a range defined by a capture device associated with the image may be determined; and a portion of the image associated with a floor may be detected. Additionally, a target in the image may be determined and scanned. A refined image may then be rendered based on the processed image. The refined image may then be processed to, for example, track a user.
Type: Grant
Filed: November 5, 2009
Date of Patent: March 24, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Zsolt Mathe, Charles Claudius Marais, Craig Peeper, Joe Bertolami, Ryan Michael Geiss
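Two of the refinement steps named in the abstract above — downsampling the depth image and repairing missing pixels — can be sketched simply. An illustrative sketch only, not the patented processing chain; treating depth 0 as "missing" and averaging valid neighbors are assumptions.

```python
# Hypothetical sketch of two depth-image refinement steps: downsampling by
# striding, and filling missing (zero) pixels from valid 4-neighbors.

def downsample(depth, factor=2):
    """Keep every `factor`-th row and column of the depth image."""
    return [row[::factor] for row in depth[::factor]]

def fill_missing(depth):
    """Replace zero-valued pixels with the mean of their nonzero neighbors."""
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for y in range(h):
        for x in range(w):
            if depth[y][x] == 0:
                neigh = [depth[ny][nx]
                         for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                         if 0 <= ny < h and 0 <= nx < w and depth[ny][nx] != 0]
                if neigh:
                    out[y][x] = sum(neigh) / len(neigh)
    return out
```

In a tracking pipeline such steps run before target detection, so the refined image carries fewer holes and less data for the per-frame scan.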
Publication number: 20140364227
Abstract: Example apparatus and methods provide an improved immersive experience for a video gamer by controlling a game based on the three-dimensional location and orientation of a control and display device held by or otherwise associated with the gamer. The location is determined from data comprising a three-dimensional position and an orientation of a portion of a player in a three-dimensional space associated with a computerized game. The facing and rotation of the device is determined as a function of both the location of the device and the orientation of the device. The orientation may be determined by data from motion sensors in or on the device. Example apparatus and methods control the computerized game based, at least in part, on the position of the device, the facing of the device, and the rotation of the device.
Type: Application
Filed: June 10, 2013
Publication date: December 11, 2014
Inventors: Eric Langlois, Ed Pinto, Marcelo Lopez Ruiz, Todd Manion, Zsolt Mathe
Patent number: 8897493
Abstract: A depth image of a scene may be received, observed, or captured by a device. The depth image may then be analyzed to determine whether the depth image includes a human target. For example, the depth image may include one or more targets including a human target and non-human targets. Each of the targets may be flood filled and compared to a pattern to determine whether the target may be a human target. If one or more of the targets in the depth image includes a human target, the human target may be scanned. A skeletal model of the human target may then be generated based on the scan.
Type: Grant
Filed: January 4, 2013
Date of Patent: November 25, 2014
Assignee: Microsoft Corporation
Inventors: Zsolt Mathe, Charles Claudius Marais, Ryan Michael Geiss
Patent number: 8896721
Abstract: A depth image of a scene may be observed or captured by a capture device. The depth image may include a human target and an environment. One or more pixels of the depth image may be analyzed to determine whether the pixels in the depth image are associated with the environment of the depth image. The one or more pixels associated with the environment may then be discarded to isolate the human target, and the depth image with the isolated human target may be processed.
Type: Grant
Filed: January 11, 2013
Date of Patent: November 25, 2014
Assignee: Microsoft Corporation
Inventors: Zsolt Mathe, Charles Claudius Marais
Patent number: 8872823
Abstract: A method and system are disclosed for automatic instrumentation that modifies a video game's shaders at run time to collect detailed statistics about texture fetches, such as MIP usage. The tracking may be transparent to the game application and therefore not require modifications to the application. In an embodiment, the method may be implemented in a software development kit used to record and provide texture usage data and optionally generate a report.
Type: Grant
Filed: October 9, 2009
Date of Patent: October 28, 2014
Assignee: Microsoft Corporation
Inventors: Jason Matthew Gould, Michael Edward Pietraszak, Zsolt Mathe, J. Andrew Goossen, Casey Leon Meekhof
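The transparent-instrumentation idea above — wrapping the texture-fetch path so per-texture MIP statistics accumulate without the calling code changing — can be illustrated with a decorator. This is a toy analogy in Python, not the patented shader-rewriting mechanism; all names are invented.

```python
# Hypothetical sketch: wrap a texture-fetch function so each call records
# which MIP level each texture was sampled at, transparently to callers.

mip_stats = {}

def instrument(fetch):
    def wrapped(texture_id, mip_level):
        # Count this (texture, mip level) fetch before delegating.
        mip_stats.setdefault(texture_id, {}).setdefault(mip_level, 0)
        mip_stats[texture_id][mip_level] += 1
        return fetch(texture_id, mip_level)
    return wrapped

@instrument
def fetch_texel(texture_id, mip_level):
    return (texture_id, mip_level)  # stand-in for the real sampler

# Calling code is unchanged; statistics accumulate as a side effect.
fetch_texel("grass", 0)
fetch_texel("grass", 0)
fetch_texel("grass", 2)
```

The patented system achieves the analogous effect by rewriting shader bytecode at run time, so the game binary itself needs no modification.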
Patent number: 8866889
Abstract: A system and method are disclosed for calibrating a depth camera in a natural user interface. The system in general obtains an objective measurement of true distance between a capture device and one or more objects in a scene. The system then compares the true depth measurement to the depth measurement provided by the depth camera at one or more points and determines an error function describing an error in the depth camera measurement. The depth camera may then be recalibrated to correct for the error. The objective measurement of distance to one or more objects in a scene may be accomplished by a variety of systems and methods.
Type: Grant
Filed: November 3, 2010
Date of Patent: October 21, 2014
Assignee: Microsoft Corporation
Inventors: Prafulla J. Masalkar, Szymon P. Stachniak, Tommer Leyvand, Zhengyou Zhang, Leonardo Del Castillo, Zsolt Mathe
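The error-function step described above — comparing the camera's reported depths against objectively measured true depths and fitting a correction — can be sketched with a least-squares linear model. A minimal sketch under the assumption that the error is linear (scale and offset); the abstract does not specify the form of the error function.

```python
# Hypothetical sketch: fit a linear error model between measured and true
# depths via least squares, then use it to correct new camera readings.

def fit_linear(measured, true):
    """Return (scale, offset) minimizing squared error of scale*m + offset ~ t."""
    n = len(measured)
    mx = sum(measured) / n
    my = sum(true) / n
    cov = sum((m - mx) * (t - my) for m, t in zip(measured, true))
    var = sum((m - mx) ** 2 for m in measured)
    scale = cov / var
    offset = my - scale * mx
    return scale, offset

def correct(reading, scale, offset):
    """Apply the calibration to one raw depth reading."""
    return scale * reading + offset
```

After fitting against a few objectively measured reference points, every subsequent depth reading is passed through `correct` before use by the natural user interface.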