Patents by Inventor Daniel John Wigdor
Daniel John Wigdor has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11656693
Abstract: An electronic device tracks, for a user performing a target acquisition movement within a 3D space, movement parameters of a plurality of input devices of the user. The electronic device predicts, for the user, a region of interest within the 3D space, based on the movement parameters. The region of interest includes a plurality of targets in close proximity. The electronic device predicts an endpoint of the target acquisition movement, within the region of interest. In some embodiments, the plurality of input devices includes an eye tracking input device, each input device corresponds to a predefined input device type, and the movement parameters include gaze data from the eye tracking input device. In some embodiments, the input devices include an eye tracking input device, a head-mounted display, and a hand-held controller, and the user's eye, hand, and head movements are coordinated.
Type: Grant
Filed: January 5, 2022
Date of Patent: May 23, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Rorik Henrikson, Tovi Samuel Grossman, Sean Edwin Trowbridge, Hrvoje Benko, Daniel John Wigdor, Marcello Giordano, Michael Glueck, Tanya Renee Jonker, Aakar Gupta, Stephanie Santosa, Carolina Brum Medeiros, Daniel Clarke
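The general shape of the technique this abstract describes can be sketched as follows. This is an illustrative toy, not the patented method: it extrapolates recent tracked positions with a per-axis least-squares fit and snaps the prediction to the nearest candidate target, whereas the patent family covers richer models (e.g. regression over coordinated eye, hand, and head movement). All function names and parameters here are invented.

```python
# Toy endpoint prediction: linearly extrapolate recent 3D samples,
# then snap to the nearest candidate target in the region of interest.

def fit_line(ts, xs):
    """Least-squares slope and intercept for one coordinate axis."""
    n = len(ts)
    mt = sum(ts) / n
    mx = sum(xs) / n
    denom = sum((t - mt) ** 2 for t in ts) or 1e-9
    slope = sum((t - mt) * (x - mx) for t, x in zip(ts, xs)) / denom
    return slope, mx - slope * mt

def predict_endpoint(samples, horizon):
    """samples: list of (t, (x, y, z)); extrapolate to last t + horizon."""
    ts = [t for t, _ in samples]
    target_t = ts[-1] + horizon
    point = []
    for axis in range(3):
        xs = [p[axis] for _, p in samples]
        slope, intercept = fit_line(ts, xs)
        point.append(slope * target_t + intercept)
    return tuple(point)

def snap_to_target(point, targets):
    """Pick the candidate target closest to the predicted endpoint."""
    return min(targets,
               key=lambda c: sum((a - b) ** 2 for a, b in zip(point, c)))
```

For a hand moving steadily along one axis, `predict_endpoint` continues the motion forward in time, and `snap_to_target` resolves it to a concrete target even when several targets are close together.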
-
Patent number: 11579704
Abstract: The disclosed computer-implemented method may include detecting, by a computing system, a gesture that appears to be intended to trigger a response by the computing system, identifying, by the computing system, a context in which the gesture was performed, and adjusting, based at least on the context in which the gesture was performed, a threshold for determining whether to trigger the response to the gesture in a manner that causes the computing system to perform an action that is based on the detected gesture.
Type: Grant
Filed: March 24, 2021
Date of Patent: February 14, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Benjamin Lafreniere, Tanya Renee Jonker, Stephanie Santosa, Hrvoje Benko, Daniel John Wigdor
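The core idea, a recognition threshold that shifts with context, can be sketched in a few lines. The contexts and offsets below are invented for illustration only; the patent does not specify these values.

```python
# Hypothetical sketch: a gesture recognizer's confidence threshold is
# adjusted by the context the gesture was performed in, so ambiguous
# input triggers more readily where a gesture is expected and less
# readily where input is noisy.

BASE_THRESHOLD = 0.7

CONTEXT_OFFSETS = {
    "walking": +0.15,    # noisy input: demand higher confidence
    "seated": 0.0,
    "menu_open": -0.10,  # gesture strongly expected: relax threshold
}

def should_trigger(confidence, context):
    """Return True if the recognizer's confidence clears the
    context-adjusted threshold."""
    threshold = BASE_THRESHOLD + CONTEXT_OFFSETS.get(context, 0.0)
    return confidence >= threshold
```

The same confidence score can thus trigger an action in one context and be ignored in another, which is the adjustment the claim describes.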
-
Publication number: 20220308675
Abstract: The disclosed computer-implemented method may include detecting, by a computing system, a gesture that appears to be intended to trigger a response by the computing system, identifying, by the computing system, a context in which the gesture was performed, and adjusting, based at least on the context in which the gesture was performed, a threshold for determining whether to trigger the response to the gesture in a manner that causes the computing system to perform an action that is based on the detected gesture.
Type: Application
Filed: March 24, 2021
Publication date: September 29, 2022
Inventors: Benjamin Lafreniere, Tanya Renee Jonker, Stephanie Santosa, Hrvoje Benko, Daniel John Wigdor
-
Patent number: 11449138
Abstract: A method of saccade-based positioning of a radial user interface includes performing operations while a first radial user interface is displayed on a display. The operations include (a) detecting a saccade movement based on eye-tracking data received from an eye-tracking system; (b) determining whether the saccade movement crosses a first region border corresponding to a first region of the first radial user interface; (c) determining a velocity of the saccade movement based on the eye-tracking data; and (d) dynamically determining a location for a subsequent radial user interface in response to the velocity and in response to determining that the saccade movement crosses the first region border. The method also includes displaying, on the display, the subsequent radial user interface at the location.
Type: Grant
Filed: May 25, 2021
Date of Patent: September 20, 2022
Assignee: Meta Platforms Technologies, LLC
Inventors: Marcello Giordano, Mark Parent, Daniel John Wigdor, Stephanie Santosa, Tovi Samuel Grossman, Sunggeun Ahn
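Steps (a) through (d) can be sketched as three small checks. Everything here is an invented illustration: the velocity threshold, the circular border test, and the placement gain are hypothetical stand-ins for the patent's eye-tracking pipeline, and positions are in display units.

```python
import math

# Toy saccade-based menu placement: classify a gaze movement as a
# saccade by its velocity, test whether it crossed the current radial
# menu's region border, and place the next menu along the saccade
# direction, offset in proportion to its velocity.

SACCADE_VELOCITY = 300.0  # display units per second; hypothetical cutoff

def is_saccade(p0, p1, dt):
    """Step (a)/(c): velocity-based saccade detection."""
    dist = math.hypot(p1[0] - p0[0], p1[1] - p0[1])
    return dist / dt >= SACCADE_VELOCITY

def crosses_border(center, radius, p1):
    """Step (b): did the gaze leave the current radial region?"""
    return math.hypot(p1[0] - center[0], p1[1] - center[1]) > radius

def next_menu_location(p0, p1, dt, gain=0.1):
    """Step (d): place the subsequent menu further along the saccade
    direction, scaled by the saccade's velocity."""
    vx = (p1[0] - p0[0]) / dt
    vy = (p1[1] - p0[1]) / dt
    return (p1[0] + gain * vx, p1[1] + gain * vy)
```

A fast saccade thus pushes the subsequent menu further out than a slow one, which is the velocity-dependent placement the claim describes.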
-
Publication number: 20220199079
Abstract: In one embodiment, a system includes an automatic speech recognition (ASR) module, a natural-language understanding (NLU) module, a dialog manager, one or more agents, an arbitrator, a delivery system, one or more processors, and a non-transitory memory coupled to the processors comprising instructions executable by the processors, the processors operable when executing the instructions to receive a user input, process the user input using the ASR module, the NLU module, the dialog manager, one or more of the agents, the arbitrator, and the delivery system, and provide a response to the user input.
Type: Application
Filed: November 11, 2021
Publication date: June 23, 2022
Inventors: Michael Robert Hanson, Swati Goel, Leif Haven Martinson, Megha Tiwari, Megha Jhunjhunwala, Ilana Orly Shalowitz, Nicholas Jorge Flores, Kyle Archie, Piyush Khemka, Seungwhan Moon, Kai Sun, Mark Parent, Michael Glueck, Jackson Rushing, Daniel John Wigdor, Stephanie Santosa, Christopher De Paoli
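The component chain named in this abstract (ASR, NLU, dialog manager, agents, arbitrator, delivery) has a simple pipeline shape. The sketch below is a toy with stub components invented for illustration; it shows only the data flow, not Meta's implementation.

```python
# Toy assistant pipeline: user input flows through ASR -> NLU ->
# dialog manager -> agents -> arbitrator -> delivery. Each component
# is injected as a plain callable stub.

class AssistantPipeline:
    def __init__(self, asr, nlu, dialog, agents, arbitrator, delivery):
        self.asr = asr
        self.nlu = nlu
        self.dialog = dialog
        self.agents = agents
        self.arbitrator = arbitrator
        self.delivery = delivery

    def respond(self, user_input):
        text = self.asr(user_input)            # speech -> text
        frame = self.nlu(text)                 # text -> intent frame
        task = self.dialog(frame)              # intent -> task for agents
        candidates = [agent(task) for agent in self.agents]
        best = self.arbitrator(candidates)     # pick best agent response
        return self.delivery(best)             # render/deliver the response
```

Wiring in trivial lambdas is enough to exercise the flow; real systems would replace each stub with a full module, but the arbitration-over-agent-candidates structure stays the same.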
-
Publication number: 20220129088
Abstract: An electronic device tracks, for a user performing a target acquisition movement within a 3D space, movement parameters of a plurality of input devices of the user. The electronic device predicts, for the user, a region of interest within the 3D space, based on the movement parameters. The region of interest includes a plurality of targets in close proximity. The electronic device predicts an endpoint of the target acquisition movement, within the region of interest. In some embodiments, the plurality of input devices includes an eye tracking input device, each input device corresponds to a predefined input device type, and the movement parameters include gaze data from the eye tracking input device. In some embodiments, the input devices include an eye tracking input device, a head-mounted display, and a hand-held controller, and the user's eye, hand, and head movements are coordinated.
Type: Application
Filed: January 5, 2022
Publication date: April 28, 2022
Inventors: Rorik Henrikson, Tovi Samuel Grossman, Sean Edwin Trowbridge, Hrvoje Benko, Daniel John Wigdor, Marcello Giordano, Michael Glueck, Tanya Renee Jonker, Aakar Gupta, Stephanie Santosa, Carolina Brum Medeiros, Daniel Clarke
-
Patent number: 11256342
Abstract: An electronic device tracks, for a user performing a target acquisition movement within a 3D space, movement parameters of a plurality of input devices of the user. The electronic device predicts, for the user, a region of interest within the 3D space, using a regression model, based on the movement parameters. The region of interest includes a plurality of targets in close proximity. The electronic device predicts an endpoint of the target acquisition movement, within the region of interest, using a pointer facilitation technique. In some embodiments, the plurality of input devices includes an eye tracking input device, each input device corresponds to a predefined input device type, and the movement parameters include gaze data from the eye tracking input device. In some embodiments, the input devices include an eye tracking input device, a head-mounted display, and a hand-held controller, and the user's eye, hand, and head movements are coordinated.
Type: Grant
Filed: September 15, 2020
Date of Patent: February 22, 2022
Assignee: Facebook Technologies, LLC
Inventors: Rorik Henrikson, Tovi Samuel Grossman, Sean Edwin Trowbridge, Hrvoje Benko, Daniel John Wigdor, Marcello Giordano, Michael Glueck, Tanya Renee Jonker, Aakar Gupta, Stephanie Santosa, Carolina Brum Medeiros, Daniel Clarke
-
Patent number: 9880386
Abstract: A head-mounted display (HMD) provides an augmented view of advertisements to an HMD wearer. In some embodiments, when an advertisement is within an HMD wearer's field of view, the HMD may augment the HMD wearer's view of the advertisement to provide additional information and/or to personalize the advertisement to the HMD wearer. In other embodiments, when an advertisement is within an HMD wearer's field of view, the HMD may augment the HMD wearer's view of the advertisement to remove the advertisement from the HMD wearer's view or to replace the content of the advertisement with non-advertising content.
Type: Grant
Filed: January 9, 2014
Date of Patent: January 30, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: John Clavin, Megan Lesley Tedesco, Daniel John Wigdor
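The decision this abstract describes, augment, replace, or pass through an advertisement once it enters the wearer's field of view, reduces to a small dispatch on the wearer's settings. The function, preference keys, and return values below are invented for illustration and are not drawn from the patent.

```python
# Hypothetical sketch: once an ad is detected in the wearer's field of
# view, choose whether to replace it with non-advertising content,
# augment/personalize it, or leave it unchanged.

def handle_ad_in_view(ad, prefs):
    """Return an (action, content) pair for a detected advertisement."""
    if prefs.get("block_ads"):
        # Remove the ad by overlaying replacement content.
        return ("replace", prefs.get("replacement_content", "blank"))
    if prefs.get("personalize"):
        # Augment the ad with wearer-specific information.
        return ("augment", ad["name"] + " offer for " + prefs["interests"][0])
    return ("passthrough", ad["name"])
```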
-
Publication number: 20140126066
Abstract: A head-mounted display (HMD) provides an augmented view of advertisements to an HMD wearer. In some embodiments, when an advertisement is within an HMD wearer's field of view, the HMD may augment the HMD wearer's view of the advertisement to provide additional information and/or to personalize the advertisement to the HMD wearer. In other embodiments, when an advertisement is within an HMD wearer's field of view, the HMD may augment the HMD wearer's view of the advertisement to remove the advertisement from the HMD wearer's view or to replace the content of the advertisement with non-advertising content.
Type: Application
Filed: January 9, 2014
Publication date: May 8, 2014
Inventors: John Clavin, Megan Lesley Tedesco, Daniel John Wigdor
-
Patent number: 8670183
Abstract: A head-mounted display (HMD) provides an augmented view of advertisements to an HMD wearer. In some embodiments, when an advertisement is within an HMD wearer's field of view, the HMD may augment the HMD wearer's view of the advertisement to provide additional information and/or to personalize the advertisement to the HMD wearer. In other embodiments, when an advertisement is within an HMD wearer's field of view, the HMD may augment the HMD wearer's view of the advertisement to remove the advertisement from the HMD wearer's view or to replace the content of the advertisement with non-advertising content.
Type: Grant
Filed: March 7, 2011
Date of Patent: March 11, 2014
Assignee: Microsoft Corporation
Inventors: John Clavin, Megan Lesley Tedesco, Daniel John Wigdor
-
Publication number: 20120229909
Abstract: A head-mounted display (HMD) provides an augmented view of advertisements to an HMD wearer. In some embodiments, when an advertisement is within an HMD wearer's field of view, the HMD may augment the HMD wearer's view of the advertisement to provide additional information and/or to personalize the advertisement to the HMD wearer. In other embodiments, when an advertisement is within an HMD wearer's field of view, the HMD may augment the HMD wearer's view of the advertisement to remove the advertisement from the HMD wearer's view or to replace the content of the advertisement with non-advertising content.
Type: Application
Filed: March 7, 2011
Publication date: September 13, 2012
Applicant: Microsoft Corporation
Inventors: John Clavin, Megan Lesley Tedesco, Daniel John Wigdor
-
Publication number: 20120092381
Abstract: An invention is disclosed for using touch gestures to zoom a video to full-screen. As the user reverse-pinches on a touch-sensitive surface to zoom in on a video, the invention tracks the amount of a zoom. When the user has zoomed to the point where one of the dimensions (height or width) of the video reaches a threshold (such as some percentage of a dimension of the display device, e.g. the width of the video reaches 80% of the width of the display device), the invention determines to display the video in full-screen, and "snaps" the video to full-screen. The invention may do this by way of an animation, such as expanding the video to fill the screen.
Type: Application
Filed: October 19, 2010
Publication date: April 19, 2012
Applicant: Microsoft Corporation
Inventors: Paul Armistead Hoover, Vishnu Sivaji, Jarrod Lombardo, Daniel John Wigdor
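The snap rule in this abstract is concrete enough to state directly: snap to full-screen once either video dimension reaches a threshold fraction of the corresponding display dimension (the abstract's example uses 80% of the width). A minimal sketch of just that test, with the constant name invented:

```python
# Sketch of the snap-to-fullscreen rule: while the user reverse-pinches
# to zoom a video, snap to full-screen once either the video's width or
# height reaches the threshold fraction of the display's dimension.

SNAP_FRACTION = 0.8  # the 80% example given in the abstract

def should_snap(video_w, video_h, display_w, display_h):
    """True once either video dimension crosses the snap threshold."""
    return (video_w >= SNAP_FRACTION * display_w or
            video_h >= SNAP_FRACTION * display_h)
```

Once `should_snap` returns True, the UI would stop tracking the pinch and animate the video out to fill the screen, per the abstract.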
-
Publication number: 20110138284
Abstract: A touch screen input device is provided which simulates a 3-state input device such as a mouse. One of these states is used to preview the effect of activating a graphical user interface element when the screen is touched. In this preview state, touching a graphical user interface element on the screen with a finger or stylus does not cause the action associated with that element to be performed. Rather, when the screen is touched while in the preview state, audio cues are provided to the user indicating what action would arise if the action associated with the touched element were to be performed.
Type: Application
Filed: December 3, 2009
Publication date: June 9, 2011
Applicant: Microsoft Corporation
Inventors: Daniel John Wigdor, Jarrod Lombardo, Annuska Zolyomi Perkins, Sean Hayes
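The preview-versus-activate distinction in this abstract can be sketched as a tiny dispatch: in the preview state a touch produces an audio description of what the element would do, and only in the activate state does the touch perform the action. The mode names and the `speak`/`activate` callbacks are hypothetical stand-ins, not the patent's terminology.

```python
# Toy sketch of the 3-state touch model: "preview" speaks the effect of
# a touched element without performing it; "activate" performs it.
# (The third state corresponds to no touch at all, so no handler runs.)

def handle_touch(element, mode, speak, activate):
    """Dispatch a touch on a UI element according to the input state."""
    if mode == "preview":
        # Audio cue only: describe what activation would do.
        speak("Activates: " + element["label"])
    elif mode == "activate":
        # Actually perform the element's associated action.
        activate(element)
```

This mirrors how hover works with a mouse: the preview state plays the role of hovering, which bare touch screens otherwise cannot express.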