Patents by Inventor Tawfik Sharkasi
Tawfik Sharkasi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11703994
Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
Type: Grant
Filed: April 28, 2022
Date of Patent: July 18, 2023
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Joshua Kyle Neff, Alton Kwok
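The far/near interaction logic in this abstract can be sketched as a small Python example. All names, the threshold value, and the object-placement rule are hypothetical illustrations of the described behavior, not the patented implementation:

```python
import math

THRESHOLD = 1.5  # hypothetical near-interaction radius in meters


def choose_interaction_mode(control_points, user_pos, threshold=THRESHOLD):
    """Invoke the far interaction mode when any of the virtual object's
    control points lies beyond the threshold distance from the user."""
    if any(math.dist(p, user_pos) > threshold for p in control_points):
        return "far"
    return "near"


def on_trigger(mode, user_pos, threshold=THRESHOLD):
    """In far mode, a trigger input invokes near interaction by placing a
    virtual interaction object within the threshold distance of the user."""
    if mode == "far":
        # Place the proxy object directly ahead, inside the near zone.
        return (user_pos[0], user_pos[1], user_pos[2] + 0.5 * threshold)
    return None
```

A control point three meters away would select far mode; the trigger then summons an interaction proxy well inside the near zone.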
-
Patent number: 11461955
Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
Type: Grant
Filed: August 23, 2021
Date of Patent: October 4, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai Tang, Julia Schwarz, Jason Michael Ray, Sophie Stellmach, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Kevin John Appel, Jamie Bryant Kirschenbaum
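The ray construction this abstract describes — origin at the hand, direction defined by the inferred arm joint and the hand position — can be sketched as follows. Function names, the point-hit test, and the hit radius are hypothetical, chosen only to illustrate the geometry:

```python
import math


def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)


def cast_ray(joint_pos, hand_pos):
    """Cast a ray from the hand: origin at the hand, direction running
    from the inferred arm joint through the hand."""
    direction = normalize(tuple(h - j for j, h in zip(joint_pos, hand_pos)))
    return hand_pos, direction


def ray_hits_point(origin, direction, point, radius=0.05):
    """True when the ray passes within `radius` of a control point,
    signaling that the virtual object is being targeted."""
    to_point = tuple(p - o for o, p in zip(origin, point))
    t = sum(a * b for a, b in zip(to_point, direction))
    if t < 0:
        return False  # control point is behind the ray origin
    closest = tuple(o + t * d for o, d in zip(origin, direction))
    return math.dist(closest, point) <= radius
```

Anchoring the direction at a body joint rather than at the camera is what lets the ray track natural arm pointing.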
-
Publication number: 20220300071
Abstract: Systems and methods are provided for selectively enabling or disabling control rays in mixed-reality environments. A system presents a mixed-reality environment to a user with a mixed-reality display device, displays a control ray as a hologram of a line extending away from the user control within the mixed-reality environment, and obtains a control ray activation variable associated with a user control. The control ray activation variable includes a velocity or acceleration of the user control. After displaying the control ray within the mixed-reality environment, and in response to determining that the control ray activation variable exceeds a predetermined threshold, the system selectively disables display of the control ray within the mixed-reality environment.
Type: Application
Filed: June 8, 2022
Publication date: September 22, 2022
Inventors: Julia SCHWARZ, Sheng Kai TANG, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Sophie STELLMACH
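The activation-variable check in this abstract reduces to a simple threshold rule. This is a minimal sketch with hypothetical names and a hypothetical threshold value, not the claimed system:

```python
def update_control_ray(ray_visible, control_velocity, v_threshold=2.0):
    """Selectively disable a displayed control ray once the user
    control's velocity (the activation variable) exceeds the threshold."""
    if ray_visible and control_velocity > v_threshold:
        return False  # hide the ray during fast hand motion
    return ray_visible
```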
-
Publication number: 20220253199
Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
Type: Application
Filed: April 28, 2022
Publication date: August 11, 2022
Applicant: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai TANG, Julia SCHWARZ, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Joshua Kyle NEFF, Alton KWOK
-
Patent number: 11397463
Abstract: Systems and methods are provided for selectively enabling or disabling control rays in mixed-reality environments. In some instances, a mixed-reality display device presents a mixed-reality environment to a user which includes one or more holograms. The display device then detects a user gesture input associated with a user control (which may include a part of the user's body) during presentation of the mixed-reality environment. In response to detecting the user gesture, the display device selectively generates and displays a corresponding control ray as a hologram rendered by the display device extending away from the user control within the mixed-reality environment. Gestures may also be detected for selectively disabling control rays so that they are no longer rendered.
Type: Grant
Filed: March 8, 2019
Date of Patent: July 26, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Julia Schwarz, Sheng Kai Tang, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Sophie Stellmach
-
Patent number: 11320957
Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
Type: Grant
Filed: March 25, 2019
Date of Patent: May 3, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Joshua Kyle Neff, Alton Kwok
-
Patent number: 11294472
Abstract: A method for augmenting a two-stage hand gesture input comprises receiving hand tracking data for a hand of a user. A gesture recognition machine recognizes that the user has performed a first-stage gesture based on one or more parameters derived from the received hand tracking data satisfying first-stage gesture criteria. An affordance cueing a second-stage gesture is provided to the user responsive to recognizing the first-stage gesture. The gesture recognition machine recognizes that the user has performed the second-stage gesture based on one or more parameters derived from the received hand tracking data satisfying second-stage gesture criteria. A graphical user interface element is displayed responsive to recognizing the second-stage gesture.
Type: Grant
Filed: March 26, 2019
Date of Patent: April 5, 2022
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai Tang, Julia Schwarz, Thomas Matthew Gable, Casey Leon Meekhof, Chuan Qin, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Joshua Kyle Neff, Jamie Bryant Kirschenbaum, Neil Richard Kronlage
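The two-stage flow in this abstract is naturally a small state machine: first-stage criteria reveal an affordance, and only then can second-stage criteria reveal the UI element. The class below is a hypothetical sketch; the criteria are arbitrary predicates over hand-tracking parameters, not the patented recognizers:

```python
class TwoStageGestureRecognizer:
    """Sketch of two-stage gesture recognition: a recognized first-stage
    gesture shows an affordance cueing the second-stage gesture, which
    in turn shows a graphical user interface element."""

    def __init__(self, first_stage, second_stage):
        self.first_stage = first_stage    # params -> bool
        self.second_stage = second_stage  # params -> bool
        self.state = "idle"

    def update(self, params):
        """Advance the state machine with one frame of hand-tracking
        parameters and return the current state."""
        if self.state == "idle" and self.first_stage(params):
            self.state = "affordance_shown"
        elif self.state == "affordance_shown" and self.second_stage(params):
            self.state = "ui_shown"
        return self.state
```

Gating the second stage behind the first keeps an isolated second-stage pose from triggering the UI accidentally.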
-
Publication number: 20210383594
Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
Type: Application
Filed: August 23, 2021
Publication date: December 9, 2021
Applicant: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai TANG, Julia SCHWARZ, Jason Michael RAY, Sophie STELLMACH, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Ramiro S. TORRES, Kevin John APPEL, Jamie Bryant KIRSCHENBAUM
-
Patent number: 11107265
Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
Type: Grant
Filed: March 11, 2019
Date of Patent: August 31, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai Tang, Julia Schwarz, Jason Michael Ray, Sophie Stellmach, Thomas Matthew Gable, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Nicholas Ferianc Kamuda, Ramiro S. Torres, Kevin John Appel, Jamie Bryant Kirschenbaum
-
Patent number: 10890967
Abstract: A method for improving user interaction with a virtual environment includes presenting the virtual environment to a user on a display, measuring a gaze location of a user's gaze relative to the virtual environment, casting an input ray from an input device, measuring an input ray location at a distal point of the input ray, and snapping a presented ray location to the gaze location when the input ray location is within a snap threshold distance of the gaze location.
Type: Grant
Filed: July 9, 2018
Date of Patent: January 12, 2021
Assignee: Microsoft Technology Licensing, LLC
Inventors: Sophie Stellmach, Sheng Kai Tang, Casey Leon Meekhof, Julia Schwarz, Nahil Tawfik Sharkasi, Thomas Matthew Gable
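The snapping rule this abstract describes is a single distance test. A minimal sketch, with a hypothetical function name and threshold value:

```python
import math


def presented_ray_location(input_loc, gaze_loc, snap_threshold=0.2):
    """Snap the presented ray location to the gaze location when the
    input ray's distal point falls within the snap threshold distance;
    otherwise present the raw input ray location."""
    if math.dist(input_loc, gaze_loc) <= snap_threshold:
        return gaze_loc
    return input_loc
```

Snapping toward gaze compensates for hand jitter at a distance while leaving deliberate, large pointing motions untouched.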
-
Publication number: 20200225736
Abstract: Systems and methods are provided for selectively enabling or disabling control rays in mixed-reality environments. In some instances, a mixed-reality display device presents a mixed-reality environment to a user which includes one or more holograms. The display device then detects a user gesture input associated with a user control (which may include a part of the user's body) during presentation of the mixed-reality environment. In response to detecting the user gesture, the display device selectively generates and displays a corresponding control ray as a hologram rendered by the display device extending away from the user control within the mixed-reality environment. Gestures may also be detected for selectively disabling control rays so that they are no longer rendered.
Type: Application
Filed: March 8, 2019
Publication date: July 16, 2020
Inventors: Julia Schwarz, Sheng Kai Tang, Casey Leon Meekhof, Nahil Tawfik Sharkasi, Sophie Stellmach
-
Publication number: 20200226814
Abstract: A head-mounted display comprises a display device and an outward-facing depth camera. A storage machine comprises instructions executable by a logic machine to present one or more virtual objects on the display device, to receive information from the depth camera about an environment, and to determine a position of the head-mounted display within the environment. Based on the position of the head-mounted display, a position of a joint of a user's arm is inferred. Based on the information received from the depth camera, a position of a user's hand is determined. A ray is cast from a portion of the user's hand based on the position of the joint of the user's arm and the position of the user's hand. Responsive to the ray intersecting with one or more control points of a virtual object, the user is provided with an indication that the virtual object is being targeted.
Type: Application
Filed: March 11, 2019
Publication date: July 16, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai TANG, Julia SCHWARZ, Jason Michael RAY, Sophie STELLMACH, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Ramiro S. TORRES, Kevin John APPEL, Jamie Bryant KIRSCHENBAUM
-
Publication number: 20200225830
Abstract: A computing system is provided. The computing system includes a head mounted display (HMD) device including a display, a processor configured to execute one or more programs, and associated memory. The processor is configured to display a virtual object at least partially within a field of view of a user on the display, identify a plurality of control points associated with the virtual object, and determine that one or more of the control points associated with the virtual object are further than a predetermined threshold distance from the user. The processor is configured to, based on the determination, invoke a far interaction mode for the virtual object and receive a trigger input from the user. In response to the trigger input in the far interaction mode, the processor is configured to invoke a near interaction mode and display a virtual interaction object within the predetermined threshold distance from the user.
Type: Application
Filed: March 25, 2019
Publication date: July 16, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai TANG, Julia SCHWARZ, Thomas Matthew GABLE, Casey Leon MEEKHOF, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Joshua Kyle NEFF, Alton KWOK
-
Publication number: 20200225758
Abstract: A method for augmenting a two-stage hand gesture input comprises receiving hand tracking data for a hand of a user. A gesture recognition machine recognizes that the user has performed a first-stage gesture based on one or more parameters derived from the received hand tracking data satisfying first-stage gesture criteria. An affordance cueing a second-stage gesture is provided to the user responsive to recognizing the first-stage gesture. The gesture recognition machine recognizes that the user has performed the second-stage gesture based on one or more parameters derived from the received hand tracking data satisfying second-stage gesture criteria. A graphical user interface element is displayed responsive to recognizing the second-stage gesture.
Type: Application
Filed: March 26, 2019
Publication date: July 16, 2020
Applicant: Microsoft Technology Licensing, LLC
Inventors: Sheng Kai TANG, Julia SCHWARZ, Thomas Matthew GABLE, Casey Leon MEEKHOF, Chuan QIN, Nahil Tawfik SHARKASI, Nicholas Ferianc KAMUDA, Ramiro S. TORRES, Joshua Kyle NEFF, Jamie Bryant KIRSCHENBAUM, Neil Richard KRONLAGE
-
Publication number: 20200012341
Abstract: A method for improving user interaction with a virtual environment includes presenting the virtual environment to a user on a display, measuring a gaze location of a user's gaze relative to the virtual environment, casting an input ray from an input device, measuring an input ray location at a distal point of the input ray, and snapping a presented ray location to the gaze location when the input ray location is within a snap threshold distance of the gaze location.
Type: Application
Filed: July 9, 2018
Publication date: January 9, 2020
Inventors: Sophie STELLMACH, Sheng Kai TANG, Casey Leon MEEKHOF, Julia SCHWARZ, Nahil Tawfik SHARKASI, Thomas Matthew GABLE
-
Publication number: 20170238576
Abstract: A gum base comprises nanoparticles and/or non-uniform microparticles containing at least one crosslinked polymer. Non-uniform microparticles may be in the form of polymer composite microparticles, hollow shell microparticles and/or core-shell microparticles. It has been found that such gum bases exhibit desirable chewing properties similar to conventional gum bases. Cuds formed by chewing gum bases containing crosslinked polymer microparticles are easily removable from environmental surfaces such as concrete, fabrics and flooring materials.
Type: Application
Filed: November 22, 2010
Publication date: August 24, 2017
Inventors: Xiaohu Xia, Tawfik Sharkasi, Philip Shepherd
-
Patent number: 9185854
Abstract: A configurable fluid retention apparatus includes a plurality of detachable panels configured to interlock with each other to form an outer structure of the configurable fluid retention apparatus, an internal bladder secured to at least one of the plurality of detachable panels, and comprising an intake connection, an outlet connection, and an air relief device. An intake valve is fluidly coupled to the internal bladder at the intake connection and configured to allow unidirectional fluid flow. Each of the plurality of detachable panels includes a plurality of interlock members and interlock gaps of substantially equal widths positioned along a perimeter of each detachable panel, a plurality of variable connection openings extending through the body of each detachable panel, and the orientation panel comprises an orientation portion comprised of two interlock members extending substantially perpendicular to each other.
Type: Grant
Filed: August 19, 2013
Date of Patent: November 17, 2015
Inventors: Omar Galal, Tawfik Sharkasi
-
Publication number: 20150048082
Abstract: A configurable fluid retention apparatus includes a plurality of detachable panels configured to interlock with each other to form an outer structure of the configurable fluid retention apparatus, an internal bladder secured to at least one of the plurality of detachable panels, and comprising an intake connection, an outlet connection, and an air relief device. An intake valve is fluidly coupled to the internal bladder at the intake connection and configured to allow unidirectional fluid flow. Each of the plurality of detachable panels includes a plurality of interlock members and interlock gaps of substantially equal widths positioned along a perimeter of each detachable panel, a plurality of variable connection openings extending through the body of each detachable panel, and the orientation panel comprises an orientation portion comprised of two interlock members extending substantially perpendicular to each other.
Type: Application
Filed: August 19, 2013
Publication date: February 19, 2015
Inventors: Omar Galal, Tawfik Sharkasi
-
Patent number: D729056
Type: Grant
Filed: February 27, 2014
Date of Patent: May 12, 2015
Assignee: WM. Wrigley Jr. Company
Inventors: Sarah She, Lou Massari, Tawfik Sharkasi
-
Patent number: D729057
Type: Grant
Filed: February 27, 2014
Date of Patent: May 12, 2015
Assignee: WM. Wrigley Jr. Company
Inventors: Sarah She, Lou Massari, Tawfik Sharkasi