Patents by Inventor Daniel Kurz
Daniel Kurz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11636578
Abstract: Various implementations disclosed herein include devices, systems, and methods that complete content for a missing part of an image of an environment. For example, an example process may include obtaining an image including defined content and missing parts for which content is undefined, determining a spatial image transformation for the image based on the defined content and the missing parts of the image, altering the image by applying the spatial image transformation, and completing the altered image.
Type: Grant
Filed: April 23, 2021
Date of Patent: April 25, 2023
Assignee: Apple Inc.
Inventors: Daniel Kurz, Gowri Somanath, Tobias Holl
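The pipeline the abstract describes (transform, complete, invert) can be sketched on a toy 1-D row of pixels. This is only an illustration of the described steps, not the patented method: the circular shift stands in for the derived spatial transformation, and mean-fill stands in for a learned completion model.

```python
def complete_image(image, mask, shift):
    """Sketch of the described pipeline on a 1-D row of pixels.

    image: list of pixel values; mask: list of bools, True where content
    is defined. `shift` is a toy spatial transformation (circular shift),
    standing in for the transformation the method derives from the image.
    """
    n = len(image)
    # 1. Apply the spatial transformation to image and mask alike.
    t_img = [image[(i - shift) % n] for i in range(n)]
    t_mask = [mask[(i - shift) % n] for i in range(n)]
    # 2. Complete the transformed image: fill undefined pixels with the
    #    mean of the defined content (a learned inpainting model in practice).
    defined = [v for v, m in zip(t_img, t_mask) if m]
    fill = sum(defined) / len(defined)
    filled = [v if m else fill for v, m in zip(t_img, t_mask)]
    # 3. Undo the transformation so the result aligns with the input.
    return [filled[(i + shift) % n] for i in range(n)]
```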
-
Patent number: 11544865
Abstract: Various implementations disclosed herein include devices, systems, and methods for detecting and correcting the posture of users of electronic devices. In some implementations, an image capture device or other sensor is used to estimate or otherwise determine a posture of a user. As a specific example, a head mounted device (HMD) may include a camera that captures an image of the user wearing the device and the image may be analyzed to identify 3D joint locations representing the current posture of the user relative to the HMD. The user's posture is analyzed to assess whether a posture correction or change is desirable, for example, by classifying the posture as good or bad or by scoring the posture on a numerical scale. If a posture correction or change is desirable, appropriate feedback to encourage the user to adopt the posture correction or otherwise change his or her posture is identified and provided.
Type: Grant
Filed: January 2, 2020
Date of Patent: January 3, 2023
Assignee: Apple Inc.
Inventor: Daniel Kurz
-
Publication number: 20220391608
Abstract: Various implementations disclosed herein include devices, systems, and methods that use a marking on a transparent surface (e.g., a prescription lens insert for an HMD) to identify information (e.g., prescription parameters) about the transparent surface. In some implementations, the markings do not interfere with eye tracking through the transparent surface or using the transparent surface to view virtual content or a physical environment. In some implementations, image data is obtained from an image sensor of an electronic device, the image data corresponding to a transparent surface attached to the electronic device. Then, a code is identified in the image data, wherein the code is detectable on the transparent surface by the image sensor without interfering with a function of the electronic device involving the transparent surface. In some implementations, content is provided at the electronic device based on the identified code, wherein the content is viewable through the transparent surface.
Type: Application
Filed: May 25, 2022
Publication date: December 8, 2022
Inventors: Daniel Kurz, Anselm Grundhoefer, Tushar Gupta
-
Patent number: 11507836
Abstract: Various implementations disclosed herein include devices, systems, and methods that involve federated learning techniques that utilize locally-determined ground truth data that may be used in addition to, or in the alternative to, user-provided ground truth data. Some implementations provide an improved federated learning technique that creates ground truth data on the user device using a second prediction technique that differs from a first prediction technique/model that is being trained. The second prediction technique may be better but may be less suited for real time, general use than the first prediction technique.
Type: Grant
Filed: December 15, 2020
Date of Patent: November 22, 2022
Assignee: Apple Inc.
Inventors: Daniel Kurz, Muhammad Ahmed Riaz
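The core idea above, using a second, stronger prediction technique to create ground truth locally for training the first, can be sketched as on-device pseudo-labeling. All names here are illustrative assumptions, and the squared-error loss is a stand-in for whatever update a real federated client would compute and send to the server.

```python
def make_local_ground_truth(samples, fast_model, slow_model):
    """Sketch: label on-device data with a second, stronger (but slower)
    prediction technique, then score the fast model being trained.
    `fast_model` and `slow_model` are illustrative callables.
    """
    # The slow technique produces local pseudo ground truth, with no
    # user-provided labels required.
    ground_truth = [slow_model(x) for x in samples]
    # A real client would backpropagate this error and share only
    # model deltas with the federation server, never the raw samples.
    loss = sum((fast_model(x) - y) ** 2
               for x, y in zip(samples, ground_truth))
    return ground_truth, loss / len(samples)
```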
-
Patent number: 11488352
Abstract: Various implementations disclosed herein include devices, systems, and methods for modeling a geographical space for a computer-generated reality (CGR) experience. In some implementations, a method is performed by a device including a non-transitory memory and one or more processors coupled with the non-transitory memory. In some implementations, the method includes obtaining a set of images. In some implementations, the method includes providing the set of images to an image classifier that determines whether the set of images correspond to a geographical space. In some implementations, the method includes establishing correspondences between at least a subset of the set of images in response to the image classifier determining that the subset of images correspond to the geographical space. In some implementations, the method includes synthesizing a model of the geographical space based on the correspondences between the subset of images.
Type: Grant
Filed: January 20, 2020
Date of Patent: November 1, 2022
Assignee: Apple Inc.
Inventor: Daniel Kurz
-
Patent number: 11474348
Abstract: In one implementation, a method includes emitting light with modulating intensity from a plurality of light sources towards an eye of a user. The method includes receiving light intensity data indicative of an intensity of the emitted light reflected by the eye of the user in the form of a plurality of glints. The method includes determining an eye tracking characteristic of the user based on the light intensity data. In one implementation, a method includes generating, using an event camera comprising a plurality of light sensors at a plurality of respective locations, a plurality of event messages, each of the plurality of event messages being generated in response to a particular light sensor detecting a change in intensity of light and indicating a particular location of the particular light sensor. The method includes determining an eye tracking characteristic of a user based on the plurality of event messages.
Type: Grant
Filed: September 21, 2021
Date of Patent: October 18, 2022
Assignee: Apple Inc.
Inventors: Branko Petljanski, Raffi A. Bedikian, Daniel Kurz, Thomas Gebauer, Li Jia
-
Patent number: 11462000
Abstract: Various implementations disclosed herein include devices, systems, and methods that detect surfaces and reflections in such surfaces. Some implementations involve providing a CGR environment that includes virtual content that replaces the appearance of a user or the user's device in a mirror or other surface providing a reflection. For example, a CGR environment may be modified to include a reflection of the user that does not include the device that the user is holding or wearing. In another example, the CGR environment is modified so that virtual content, such as a newer version of the electronic device or a virtual wand, replaces the electronic device in the reflection. In another example, the CGR environment is modified so that virtual content, such as a user avatar, replaces the user in the reflection.
Type: Grant
Filed: August 25, 2020
Date of Patent: October 4, 2022
Assignee: Apple Inc.
Inventors: Peter Meier, Daniel Kurz, Brian Chris Clark, Mohamed Selim Ben Himane
-
Patent number: 11423308
Abstract: Implementations disclosed herein provide systems and methods that use classification-based machine learning to generate perceptually-plausible content for a missing part (e.g., some or all) of an image. The machine learning model may be trained to generate content for the missing part that appears plausible by learning to generate content that cannot be distinguished from real image content, for example, using adversarial loss-based training. To generate the content, a probabilistic classifier may be used to select color attribute values (e.g., RGB values) for each pixel of the missing part of the image. To do so, a pixel color attribute is segmented into a number of bins (e.g., value ranges) that are used as classes. The classifier determines probabilities for each of the bins of a color attribute for each pixel and generates the content by selecting the bin having the highest probability for each color attribute for each pixel.
Type: Grant
Filed: September 15, 2020
Date of Patent: August 23, 2022
Assignee: Apple Inc.
Inventors: Gowri Somanath, Daniel Kurz
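The bin-selection step the abstract describes can be sketched directly: segment a color attribute's value range into bins, take the classifier's most probable bin, and emit a value from it. The bin count and the choice of the bin centre as the emitted value are assumptions for illustration; the classifier itself is taken as given.

```python
def color_from_bin_probs(bin_probs, max_value=255):
    """Sketch: pick one color attribute value (e.g., the R channel of one
    pixel) from classifier probabilities over equal-width value bins.

    bin_probs: probabilities, one per bin. Returns the centre of the most
    probable bin, matching the argmax-over-bins selection described.
    """
    num_bins = len(bin_probs)
    best = max(range(num_bins), key=lambda b: bin_probs[b])
    bin_width = (max_value + 1) / num_bins        # e.g., 32 values per bin
    return int(best * bin_width + bin_width / 2)  # centre of the chosen bin
```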
-
Publication number: 20220222904
Abstract: A VR system for vehicles that may implement methods that address problems with vehicles in motion that may result in motion sickness for passengers. The VR system may provide virtual views that match visual cues with the physical motions that a passenger experiences. The VR system may provide immersive VR experiences by replacing the view of the real world with virtual environments. Active vehicle systems and/or vehicle control systems may be integrated with the VR system to provide physical effects with the virtual experiences. The virtual environments may be altered to accommodate a passenger upon determining that the passenger is prone to or is exhibiting signs of motion sickness.
Type: Application
Filed: March 30, 2022
Publication date: July 14, 2022
Applicant: Apple Inc.
Inventors: Mark B. Rober, Sawyer I. Cohen, Daniel Kurz, Tobias Holl, Benjamin B. Lyon, Peter Georg Meier, Jeffrey M. Riepling, Holly Gerhard
-
Patent number: 11379996
Abstract: Various implementations disclosed herein include devices, systems, and methods that use event camera data to track deformable objects such as faces, hands, and other body parts. One exemplary implementation involves receiving a stream of pixel events output by an event camera. The device tracks the deformable object using this data. Various implementations do so by generating a dynamic representation of the object and modifying the dynamic representation of the object in response to obtaining additional pixel events output by the event camera. In some implementations, generating the dynamic representation of the object involves identifying features disposed on the deformable surface of the object using the stream of pixel events. The features are determined by identifying patterns of pixel events. As new event stream data is received, the patterns of pixel events are recognized in the new data and used to modify the dynamic representation of the object.
Type: Grant
Filed: November 13, 2018
Date of Patent: July 5, 2022
Assignee: Apple Inc.
Inventors: Peter Kaufmann, Daniel Kurz, Brian Amberg, Yanghai Tsin
-
Publication number: 20220191407
Abstract: A method of generating at least one image of a real environment comprises providing at least one environment property related to at least part of the real environment, providing at least one virtual object property related to a virtual object, determining at least one imaging parameter according to the at least one provided virtual object property and the at least one provided environment property, and generating at least one image of the real environment representing information about light leaving the real environment according to the determined at least one imaging parameter, wherein the light leaving the real environment is measured by at least one camera.
Type: Application
Filed: December 23, 2021
Publication date: June 16, 2022
Inventors: Sebastian Knorr, Daniel Kurz
-
Patent number: 11334156
Abstract: Triggering a state change includes displaying a first version of a series of frames based on a first setup configuration, obtaining a second setup configuration for the series of frames, in response to obtaining a second setup configuration, monitoring for a change in an eye status, and in response to detecting a change in the eye status, displaying a second version of the series of frames based on the second setup configuration.
Type: Grant
Filed: May 24, 2021
Date of Patent: May 17, 2022
Assignee: Apple Inc.
Inventors: Sebastian Knorr, Daniel Kurz
-
Patent number: 11321923
Abstract: A VR system for vehicles that may implement methods that address problems with vehicles in motion that may result in motion sickness for passengers. The VR system may provide virtual views that match visual cues with the physical motions that a passenger experiences. The VR system may provide immersive VR experiences by replacing the view of the real world with virtual environments. Active vehicle systems and/or vehicle control systems may be integrated with the VR system to provide physical effects with the virtual experiences. The virtual environments may be altered to accommodate a passenger upon determining that the passenger is prone to or is exhibiting signs of motion sickness.
Type: Grant
Filed: April 29, 2020
Date of Patent: May 3, 2022
Assignee: Apple Inc.
Inventors: Mark B. Rober, Sawyer I. Cohen, Daniel Kurz, Tobias Holl, Benjamin B. Lyon, Peter Georg Meier, Jeffrey M. Riepling, Holly Gerhard
-
Patent number: 11308652
Abstract: Various implementations disclosed herein render virtual content with noise that is similar to or that otherwise better matches the noise found in the images with which the virtual content is combined. Some implementations involve identifying noise data for an image, creating a parameterized noise model based on the noise data, generating a noise pattern approximating noise of the image or another image using the parameterized noise model, and rendering content that includes the image and virtual content with noise added based on the noise pattern.
Type: Grant
Filed: January 13, 2020
Date of Patent: April 19, 2022
Assignee: Apple Inc.
Inventors: Daniel Kurz, Tobias Holl
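The steps above (fit a parameterized noise model, sample a pattern from it, add it to the rendered virtual content) can be sketched with the simplest possible model, a single Gaussian standard deviation measured from the camera image. A real parameterized model would be richer (e.g., intensity-dependent or spatially correlated); this is only an illustration of the pipeline.

```python
import random

def render_with_matched_noise(virtual_pixels, noise_std, rng=None):
    """Sketch: draw a noise pattern from a parameterized model (here a
    single Gaussian std, assumed estimated from the real image) and add
    it to the rendered virtual content so it blends with the noisy image.
    """
    rng = rng or random.Random(0)  # deterministic for the sketch
    # One noise sample per virtual pixel, from the parameterized model.
    return [v + rng.gauss(0.0, noise_std) for v in virtual_pixels]
```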
-
Patent number: 11282171
Abstract: In some implementations, a method includes obtaining a computer graphic generated based on one or more visual elements within a first video frame. In some implementations, the first video frame is associated with a first time. In some implementations, the method includes obtaining a second video frame associated with a second time. In some implementations, the second time is different from the first time. In some implementations, the method includes applying an intensity transformation to the computer graphic in order to generate a transformed computer graphic. In some implementations, the intensity transformation is based on an intensity difference between the first video frame and the second video frame. In some implementations, the method includes rendering the transformed computer graphic based on one or more visual elements within the second video frame.
Type: Grant
Filed: September 25, 2019
Date of Patent: March 22, 2022
Assignee: Apple Inc.
Inventor: Daniel Kurz
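One way to picture the intensity transformation described above is a global gain: scale the computer graphic by the ratio of mean intensities between the two frames, so a graphic built for the first frame still matches the brightness of the second. The global-gain form is an assumption for illustration; the abstract only requires that the transformation be based on the intensity difference between the frames.

```python
def apply_intensity_transform(graphic, frame1, frame2):
    """Sketch: brighten or darken a computer graphic by the ratio of the
    two frames' mean intensities (a toy stand-in for the intensity
    transformation the method describes).
    """
    mean1 = sum(frame1) / len(frame1)
    mean2 = sum(frame2) / len(frame2)
    gain = mean2 / mean1 if mean1 else 1.0  # guard against a black frame
    return [p * gain for p in graphic]
```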
-
Publication number: 20220003994
Abstract: In one implementation, a method includes emitting light with modulating intensity from a plurality of light sources towards an eye of a user. The method includes receiving light intensity data indicative of an intensity of the emitted light reflected by the eye of the user in the form of a plurality of glints. The method includes determining an eye tracking characteristic of the user based on the light intensity data. In one implementation, a method includes generating, using an event camera comprising a plurality of light sensors at a plurality of respective locations, a plurality of event messages, each of the plurality of event messages being generated in response to a particular light sensor detecting a change in intensity of light and indicating a particular location of the particular light sensor. The method includes determining an eye tracking characteristic of a user based on the plurality of event messages.
Type: Application
Filed: September 21, 2021
Publication date: January 6, 2022
Inventors: Branko Petljanski, Raffi A. Bedikian, Daniel Kurz, Thomas Gebauer, Li Jia
-
Publication number: 20210407160
Abstract: The invention is related to a method of presenting a digital information related to a real object, comprising determining a real object, providing a plurality of presentation modes, wherein the plurality of presentation modes comprises an augmented reality mode, and at least one of a virtual reality mode and an audio mode, providing at least one representation of a digital information related to the real object, determining a spatial relationship between a camera and a reference coordinate system under consideration of an image captured by the camera, selecting a presentation mode from the plurality of presentation modes according to the spatial relationship, and presenting the at least one representation of the digital information using the selected presentation mode.
Type: Application
Filed: May 18, 2021
Publication date: December 30, 2021
Inventors: Daniel Kurz, Anton Fedosov
-
Patent number: 11212464
Abstract: A method of generating at least one image of a real environment comprises providing at least one environment property related to at least part of the real environment, providing at least one virtual object property related to a virtual object, determining at least one imaging parameter according to the at least one provided virtual object property and the at least one provided environment property, and generating at least one image of the real environment representing information about light leaving the real environment according to the determined at least one imaging parameter, wherein the light leaving the real environment is measured by at least one camera.
Type: Grant
Filed: April 1, 2019
Date of Patent: December 28, 2021
Assignee: Apple Inc.
Inventors: Sebastian Knorr, Daniel Kurz
-
Patent number: 11182978
Abstract: Various implementations disclosed herein render virtual content while accounting for air-born particles or lens-based artifacts to improve coherence or to otherwise better match the appearance of real content in the images with which the virtual content is combined.
Type: Grant
Filed: April 17, 2020
Date of Patent: November 23, 2021
Assignee: Apple Inc.
Inventor: Daniel Kurz
-
Patent number: 11150469
Abstract: In one implementation, a method includes emitting light with modulating intensity from a plurality of light sources towards an eye of a user. The method includes receiving light intensity data indicative of an intensity of the emitted light reflected by the eye of the user in the form of a plurality of glints and determining an eye tracking characteristic of the user based on the light intensity data. In one implementation, a method includes generating, using an event camera comprising a plurality of light sensors at a plurality of respective locations, a plurality of event messages, each of the plurality of event messages being generated in response to a particular light sensor detecting a change in intensity of light and indicating a particular location of the particular light sensor. The method includes determining an eye tracking characteristic of a user based on the plurality of event messages.
Type: Grant
Filed: September 27, 2018
Date of Patent: October 19, 2021
Inventors: Branko Petljanski, Raffi A. Bedikian, Daniel Kurz, Thomas Gebauer, Li Jia
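The event-camera variant described in this family of eye-tracking patents can be pictured with a minimal sketch: each event message carries a sensor location and a polarity (intensity rose or fell), and the locations of "intensity up" events approximate where glints appear. The centroid computed here is an illustrative stand-in for the actual eye tracking characteristic; the event tuple layout is an assumption.

```python
def track_glints(events):
    """Sketch: estimate a glint-based eye tracking characteristic from
    event-camera messages. Each event is an assumed (x, y, polarity)
    tuple; polarity > 0 means the sensor at (x, y) saw intensity rise.
    Returns the centroid of positive events, or None if there are none.
    """
    on = [(x, y) for (x, y, polarity) in events if polarity > 0]
    if not on:
        return None
    cx = sum(x for x, _ in on) / len(on)
    cy = sum(y for _, y in on) / len(on)
    return (cx, cy)  # toy stand-in for a gaze/glint estimate
```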