Patents by Inventor Amir Hoffnung
Amir Hoffnung has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240331256
Abstract: The present disclosure generally relates to generating and modifying virtual avatars. An electronic device having a camera and a display apparatus displays a virtual avatar that changes appearance in response to changes in a face in a field of view of the camera. In response to detecting changes in one or more physical features of the face in the field of view of the camera, the electronic device modifies one or more features of the virtual avatar.
Type: Application
Filed: June 7, 2024
Publication date: October 3, 2024
Inventors: Guillaume Pierre André BARLIER, Sebastian BAUER, Jeffrey T. BERNSTEIN, Alan C. DYE, Aurelio GUZMAN, Amir HOFFNUNG, Joseph A. MALIA, Nicolas SCAPEL, Christopher I. WILSON, Giancarlo YERKES
-
Patent number: 12079458
Abstract: The present disclosure generally relates to using avatars and image data for enhanced user interactions. In some examples, user status dependent avatars are generated and displayed with a message associated with the user status. In some examples, a device captures image information to scan an object to create a 3D model of the object. The device determines an algorithm for the 3D model based on the captured image information and provides visual feedback on additional image data that is needed for the algorithm to build the 3D model. In some examples, an application's operation on a device is restricted based on whether an authorized user is identified as using the device based on captured image data.
Type: Grant
Filed: April 20, 2022
Date of Patent: September 3, 2024
Assignee: Apple Inc.
Inventors: Marek Bereza, Adi Berenson, Jeffrey Traer Bernstein, Lukas Robert Tom Girling, Mark Hauenstein, Amir Hoffnung, William D. Lindmeier, Joseph A. Malia, Julian Missig
-
Publication number: 20230252659
Abstract: The present disclosure generally relates to displaying and editing an image with depth information. In response to an input, an object in the image having one or more elements in a first depth range is identified. The identified object is then isolated from the other elements in the image and displayed separately from them. The isolated object may then be utilized in different applications.
Type: Application
Filed: April 20, 2023
Publication date: August 10, 2023
Inventors: Matan STAUBER, Amir HOFFNUNG, Matthaeus KRENN, Jeffrey Traer BERNSTEIN, Joseph A. MALIA, Mark HAUENSTEIN
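The abstract above describes isolating the elements of an image that fall within a first depth range. As an illustrative sketch only, a boolean depth mask can select those pixels; the `isolate_by_depth` helper, the NumPy masking approach, and the depth values are assumptions for illustration, not the patented implementation:

```python
import numpy as np

def isolate_by_depth(image, depth_map, depth_min, depth_max):
    """Return a copy of `image` keeping only pixels whose depth lies in
    [depth_min, depth_max]; everything else is zeroed out."""
    mask = (depth_map >= depth_min) & (depth_map <= depth_max)
    isolated = np.zeros_like(image)
    isolated[mask] = image[mask]
    return isolated, mask

# A 2x2 grayscale image with a per-pixel depth map (depths in meters).
image = np.array([[10, 20], [30, 40]], dtype=np.uint8)
depth = np.array([[0.5, 2.0], [0.6, 3.0]])

# Keep only the foreground object within one meter of the camera.
obj, mask = isolate_by_depth(image, depth, 0.0, 1.0)
```

The isolated object (here the two near pixels) could then be composited or displayed separately, as the abstract suggests.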
-
Patent number: 11669985
Abstract: The present disclosure generally relates to displaying and editing an image with depth information. Image data associated with an image includes depth information associated with a subject. In response to a request to display the image, a first modified image is displayed. Displaying the first modified image includes displaying, based on the depth information, a first level of simulated lighting on a first portion of the subject and a second level of simulated lighting on a second portion of the subject. After displaying the first modified image, a second modified image is displayed. Displaying the second modified image includes displaying, based on the depth information, a third level of simulated lighting on the first portion of the subject and a fourth level of simulated lighting on the second portion of the subject.
Type: Grant
Filed: April 28, 2022
Date of Patent: June 6, 2023
Assignee: Apple Inc.
Inventors: Matan Stauber, Amir Hoffnung, Matthaeus Krenn, Jeffrey Traer Bernstein, Joseph A. Malia, Mark Hauenstein
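The two-portion simulated-lighting idea in the abstract above can be sketched by splitting the subject at a depth threshold and applying a different gain to each portion. The `apply_simulated_lighting` helper, the single-threshold split, and the gain values are all assumptions for illustration, not the claimed method:

```python
import numpy as np

def apply_simulated_lighting(image, depth_map, near_gain, far_gain, split_depth):
    """Scale the brightness of the nearer portion of the subject by
    `near_gain` and the farther portion by `far_gain`, using the depth
    map to decide which portion each pixel belongs to."""
    out = image.astype(np.float64)
    near = depth_map < split_depth
    out[near] *= near_gain
    out[~near] *= far_gain
    return np.clip(out, 0, 255).astype(np.uint8)

image = np.array([[100, 100], [100, 100]], dtype=np.uint8)
depth = np.array([[0.5, 2.0], [0.4, 3.0]])

# First modified image: stronger simulated light on the near portion.
first = apply_simulated_lighting(image, depth, 1.5, 0.8, 1.0)
# Second modified image: the two lighting levels swapped.
second = apply_simulated_lighting(image, depth, 0.8, 1.5, 1.0)
```

Displaying `first` and then `second` mimics the abstract's sequence of two modified images with different per-portion lighting levels.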
-
Publication number: 20220262022
Abstract: The present disclosure generally relates to displaying and editing an image with depth information. Image data associated with an image includes depth information associated with a subject. In response to a request to display the image, a first modified image is displayed. Displaying the first modified image includes displaying, based on the depth information, a first level of simulated lighting on a first portion of the subject and a second level of simulated lighting on a second portion of the subject. After displaying the first modified image, a second modified image is displayed. Displaying the second modified image includes displaying, based on the depth information, a third level of simulated lighting on the first portion of the subject and a fourth level of simulated lighting on the second portion of the subject.
Type: Application
Filed: April 28, 2022
Publication date: August 18, 2022
Inventors: Matan STAUBER, Amir HOFFNUNG, Matthaeus KRENN, Jeffrey Traer BERNSTEIN, Joseph A. MALIA, Mark HAUENSTEIN
-
Publication number: 20220244838
Abstract: The present disclosure generally relates to using avatars and image data for enhanced user interactions. In some examples, user status dependent avatars are generated and displayed with a message associated with the user status. In some examples, a device captures image information to scan an object to create a 3D model of the object. The device determines an algorithm for the 3D model based on the captured image information and provides visual feedback on additional image data that is needed for the algorithm to build the 3D model. In some examples, an application's operation on a device is restricted based on whether an authorized user is identified as using the device based on captured image data.
Type: Application
Filed: April 20, 2022
Publication date: August 4, 2022
Inventors: Marek BEREZA, Adi BERENSON, Jeffrey Traer BERNSTEIN, Lukas Robert Tom GIRLING, Mark HAUENSTEIN, Amir HOFFNUNG, William D. LINDMEIER, Joseph A. MALIA, Julian MISSIG
-
Patent number: 11321857
Abstract: The present disclosure generally relates to displaying and editing an image with depth information. Image data associated with an image includes depth information associated with a subject. In response to a request to display the image, a first modified image is displayed. Displaying the first modified image includes displaying, based on the depth information, a first level of simulated lighting on a first portion of the subject and a second level of simulated lighting on a second portion of the subject. After displaying the first modified image, a second modified image is displayed. Displaying the second modified image includes displaying, based on the depth information, a third level of simulated lighting on the first portion of the subject and a fourth level of simulated lighting on the second portion of the subject.
Type: Grant
Filed: July 31, 2019
Date of Patent: May 3, 2022
Assignee: Apple Inc.
Inventors: Matan Stauber, Amir Hoffnung, Matthaeus Krenn, Jeffrey Traer Bernstein, Joseph A. Malia, Mark Hauenstein
-
Patent number: 11262840
Abstract: A method, including receiving a three-dimensional (3D) map of at least a part of a body of a user (22) of a computerized system, and receiving a two-dimensional (2D) image of the user, the image including an eye (34) of the user. 3D coordinates of a head (32) of the user are extracted from the 3D map and the 2D image, and a direction of a gaze performed by the user is identified based on the 3D coordinates of the head and the image of the eye.
Type: Grant
Filed: June 20, 2018
Date of Patent: March 1, 2022
Inventors: Eyal Bychkov, Oren Brezner, Micha Galor, Ofir Or, Jonathan Pokrass, Amir Hoffnung, Tamir Berliner
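One plausible reading of the abstract above is that head orientation (derived from the 3D head coordinates) is blended with the pupil's offset from the eye center in the 2D image to get a gaze direction. The linear `gain` blend, the yaw/pitch model, and the `estimate_gaze` helper are illustrative assumptions, not the patented method:

```python
import math

def estimate_gaze(head_yaw, head_pitch, pupil_dx, pupil_dy, gain=0.3):
    """Blend head orientation angles (radians, derived from 3D head
    coordinates) with the pupil's normalized offset from the eye center
    in the 2D image, then convert the combined yaw/pitch angles into a
    unit gaze direction vector (x right, y up, z forward)."""
    yaw = head_yaw + gain * pupil_dx
    pitch = head_pitch + gain * pupil_dy
    return (math.cos(pitch) * math.sin(yaw),
            math.sin(pitch),
            math.cos(pitch) * math.cos(yaw))

# Head facing straight ahead and pupil centered: gaze along the +z axis.
direction = estimate_gaze(0.0, 0.0, 0.0, 0.0)
```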
-
Patent number: 10928921
Abstract: A gesture based user interface includes a movement monitor configured to monitor a user's hand and to provide a signal based on movements of the hand. A processor is configured to provide at least one interface state in which a cursor is confined to movement within a single dimension region responsive to the signal from the movement monitor, and to actuate different commands responsive to the signal from the movement monitor and the location of the cursor in the single dimension region.
Type: Grant
Filed: August 26, 2019
Date of Patent: February 23, 2021
Assignee: APPLE INC.
Inventors: Amir Hoffnung, Micha Galor, Jonathan Pokrass, Roee Shenberg, Shlomo Zippel
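The single-dimension-region idea in the abstract above can be sketched by projecting the hand's movement onto one axis, clamping the cursor to the region, and mapping cursor position to commands. The helper names, the horizontal axis choice, and the clamping scheme are assumptions for illustration, not the patented interface:

```python
def update_cursor(cursor_x, hand_dx, hand_dy, track_min=0.0, track_max=1.0):
    """Confine the cursor to a horizontal one-dimensional region: only the
    horizontal component of the hand movement moves the cursor, and the
    resulting position is clamped to the region's endpoints."""
    del hand_dy  # vertical hand movement is ignored in this interface state
    return max(track_min, min(track_max, cursor_x + hand_dx))

def command_for(cursor_x, commands):
    """Actuate a different command depending on where along the region
    (normalized 0..1) the cursor currently sits."""
    idx = min(int(cursor_x * len(commands)), len(commands) - 1)
    return commands[idx]
```

For example, a hand moving right past the end of the region leaves the cursor pinned at the endpoint, and each third of the region could map to a different command.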
-
Patent number: 10642371
Abstract: A method, including receiving, by a computer, a sequence of three-dimensional maps containing at least a hand of a user of the computer, and identifying, in the maps, a device coupled to the computer. The maps are analyzed to detect a gesture performed by the user toward the device, and the device is actuated responsively to the gesture.
Type: Grant
Filed: August 21, 2018
Date of Patent: May 5, 2020
Assignee: APPLE INC.
Inventors: Micha Galor, Jonathan Pokrass, Amir Hoffnung, Ofir Or
-
Publication number: 20200105003
Abstract: The present disclosure generally relates to displaying and editing an image with depth information. Image data associated with an image includes depth information associated with a subject. In response to a request to display the image, a first modified image is displayed. Displaying the first modified image includes displaying, based on the depth information, a first level of simulated lighting on a first portion of the subject and a second level of simulated lighting on a second portion of the subject. After displaying the first modified image, a second modified image is displayed. Displaying the second modified image includes displaying, based on the depth information, a third level of simulated lighting on the first portion of the subject and a fourth level of simulated lighting on the second portion of the subject.
Type: Application
Filed: July 31, 2019
Publication date: April 2, 2020
Inventors: Matan STAUBER, Amir HOFFNUNG, Matthaeus KRENN, Jeffrey Traer BERNSTEIN, Joseph A. MALIA, Mark HAUENSTEIN
-
Publication number: 20190377420
Abstract: A gesture based user interface includes a movement monitor configured to monitor a user's hand and to provide a signal based on movements of the hand. A processor is configured to provide at least one interface state in which a cursor is confined to movement within a single dimension region responsive to the signal from the movement monitor, and to actuate different commands responsive to the signal from the movement monitor and the location of the cursor in the single dimension region.
Type: Application
Filed: August 26, 2019
Publication date: December 12, 2019
Inventors: Amir Hoffnung, Micha Galor, Jonathan Pokrass, Roee Shenberg, Shlomo Zippel
-
Patent number: 10444963
Abstract: The present disclosure generally relates to using avatars and image data for enhanced user interactions. In some examples, user status dependent avatars are generated and displayed with a message associated with the user status. In some examples, a device captures image information to scan an object to create a 3D model of the object. The device determines an algorithm for the 3D model based on the captured image information and provides visual feedback on additional image data that is needed for the algorithm to build the 3D model. In some examples, an application's operation on a device is restricted based on whether an authorized user is identified as using the device based on captured image data. In some examples, depth data is used to combine two sets of image data.
Type: Grant
Filed: July 13, 2018
Date of Patent: October 15, 2019
Assignee: Apple Inc.
Inventors: Marek Bereza, Adi Berenson, Jeffrey Traer Bernstein, Lukas Robert Tom Girling, Mark Hauenstein, Amir Hoffnung, William D. Lindmeier, Joseph A. Malia, Julian Missig
-
Patent number: 10429937
Abstract: A gesture based user interface includes a movement monitor configured to monitor a user's hand and to provide a signal based on movements of the hand. A processor is configured to provide at least one interface state in which a cursor is confined to movement within a single dimension region responsive to the signal from the movement monitor, and to actuate different commands responsive to the signal from the movement monitor and the location of the cursor in the single dimension region.
Type: Grant
Filed: February 16, 2017
Date of Patent: October 1, 2019
Assignee: APPLE INC.
Inventors: Amir Hoffnung, Micha Galor, Jonathan Pokrass, Roee Shenberg, Shlomo Zippel
-
Patent number: 10176845
Abstract: Techniques and devices for creating a Forward-Reverse Loop output video and other output video variations. A pipeline may include obtaining input video and determining a start frame within the input video and a frame length parameter based on a temporal discontinuity minimization. The selected start frame and the frame length parameter may provide a reversal point within the Forward-Reverse Loop output video. The Forward-Reverse Loop output video may include a forward segment that begins at the start frame and ends at the reversal point and a reverse segment that starts after the reversal point and plays back one or more frames in the forward segment in a reverse order. The pipeline for generating the Forward-Reverse Loop output video may be part of a shared resource architecture that generates other types of output video variations, such as AutoLoop output videos and Long Exposure output videos.
Type: Grant
Filed: August 16, 2017
Date of Patent: January 8, 2019
Assignee: Apple Inc.
Inventors: Arwen V. Bradley, Jason Klivington, Rudolph van der Merwe, Douglas P. Mitchell, Amir Hoffnung, Behkish J. Manzari, Charles A. Mezak, Matan Stauber, Ran Margolin, Etienne Guerard, Piotr Stanczyk
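The segment selection and ping-pong playback described above can be sketched as follows. Using mean absolute frame difference as the "temporal discontinuity" cost, and the `reversal_cost`, `pick_segment`, and `forward_reverse_loop` helpers, are assumptions for illustration, not the patented pipeline:

```python
import numpy as np

def reversal_cost(frames, start, length):
    """Motion magnitude around the reversal point of a candidate segment:
    a low value suggests the direction reversal will look less jarring."""
    end = start + length - 1
    a = frames[end - 1].astype(np.float64)
    b = frames[end].astype(np.float64)
    return float(np.mean(np.abs(a - b)))

def pick_segment(frames, length):
    """Pick the start frame whose segment of the given length has the
    smallest motion at its reversal point."""
    starts = range(len(frames) - length + 1)
    return min(starts, key=lambda s: reversal_cost(frames, s, length))

def forward_reverse_loop(frames, start, length):
    """Forward segment followed by its frames in reverse order; the
    reversal frame itself is not duplicated."""
    forward = list(frames[start:start + length])
    return forward + forward[-2::-1]

# Four tiny frames; motion nearly stops between frames 1 and 2.
frames = [np.full((2, 2), v, dtype=np.uint8) for v in (0, 10, 11, 30)]
best_start = pick_segment(frames, 2)

# Playback order for a forward-reverse loop over frame indices 1..3.
loop = forward_reverse_loop(list(range(5)), 1, 3)  # → [1, 2, 3, 2, 1]
```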
-
Publication number: 20180356898
Abstract: A method, including receiving, by a computer, a sequence of three-dimensional maps containing at least a hand of a user of the computer, and identifying, in the maps, a device coupled to the computer. The maps are analyzed to detect a gesture performed by the user toward the device, and the device is actuated responsively to the gesture.
Type: Application
Filed: August 21, 2018
Publication date: December 13, 2018
Inventors: Micha Galor, Jonathan Pokrass, Amir Hoffnung, Ofir Or
-
Publication number: 20180321826
Abstract: The present disclosure generally relates to using avatars and image data for enhanced user interactions. In some examples, user status dependent avatars are generated and displayed with a message associated with the user status. In some examples, a device captures image information to scan an object to create a 3D model of the object. The device determines an algorithm for the 3D model based on the captured image information and provides visual feedback on additional image data that is needed for the algorithm to build the 3D model. In some examples, an application's operation on a device is restricted based on whether an authorized user is identified as using the device based on captured image data. In some examples, depth data is used to combine two sets of image data.
Type: Application
Filed: July 13, 2018
Publication date: November 8, 2018
Inventors: Marek BEREZA, Adi BERENSON, Jeffrey Traer BERNSTEIN, Lukas Robert Tom GIRLING, Mark HAUENSTEIN, Amir HOFFNUNG, William D. LINDMEIER, Joseph A. MALIA, Julian MISSIG
-
Publication number: 20180314329
Abstract: A method, including receiving a three-dimensional (3D) map of at least a part of a body of a user (22) of a computerized system, and receiving a two-dimensional (2D) image of the user, the image including an eye (34) of the user. 3D coordinates of a head (32) of the user are extracted from the 3D map and the 2D image, and a direction of a gaze performed by the user is identified based on the 3D coordinates of the head and the image of the eye.
Type: Application
Filed: June 20, 2018
Publication date: November 1, 2018
Inventors: Eyal Bychkov, Oren Brezner, Micha Galor, Ofir Or, Jonathan Pokrass, Amir Hoffnung, Tamir Berliner
-
Patent number: 10088909
Abstract: A method, including receiving, by a computer, a sequence of three-dimensional maps containing at least a hand of a user of the computer, and identifying, in the maps, a device coupled to the computer. The maps are analyzed to detect a gesture performed by the user toward the device, and the device is actuated responsively to the gesture.
Type: Grant
Filed: October 22, 2015
Date of Patent: October 2, 2018
Assignee: APPLE INC.
Inventors: Micha Galor, Jonathan Pokrass, Amir Hoffnung, Ofir Or
-
Patent number: 10031578
Abstract: A method includes receiving a sequence of three-dimensional (3D) maps of at least a part of a body of a user of a computerized system and extracting, from the 3D maps, 3D coordinates of a head of the user. Based on the 3D coordinates of the head, a direction of a gaze performed by the user and an interactive item presented in the direction of the gaze on a display coupled to the computerized system are identified. An indication that the user is moving a limb of the body in a specific direction is extracted from the 3D maps, and the identified interactive item is repositioned on the display responsively to the indication.
Type: Grant
Filed: September 5, 2016
Date of Patent: July 24, 2018
Assignee: APPLE INC.
Inventors: Eyal Bychkov, Oren Brezner, Micha Galor, Ofir Or, Jonathan Pokrass, Amir Hoffnung, Tamir Berliner