Patents by Inventor Sachi Mizobuchi
Sachi Mizobuchi has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11921931
Abstract: Methods and systems for discrete control of user interface elements such as graphical widgets are described. A first plurality of video frames is captured and processed to recognize hand gestures therein. A dragging mode is activated in response to a dragging mode activation hand gesture in the first plurality of frames. A second plurality of video frames is captured and processed to allow recognition of hand gestures therein. An element of a user interface control is dragged in response to recognition of a discrete control hand gesture in the second plurality of frames. The discrete control of a user interface control using mid-air hand gestures allows precise setting of system parameters associated with the user interface control.
Type: Grant
Filed: December 17, 2020
Date of Patent: March 5, 2024
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Wei Zhou, Sachi Mizobuchi, Rafael Veras Guimaraes, Ghazaleh Saniee-Monfared, Wei Li
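The two-phase flow this abstract describes (an activation gesture enters a dragging mode; discrete control gestures then step a widget's value) can be sketched as a small state machine. The gesture labels, step logic, and value range below are illustrative assumptions, not the patented implementation:

```python
# Minimal sketch of the two-phase discrete-control flow: an activation
# gesture switches into dragging mode, after which each discrete-control
# gesture nudges a UI control's value by one step.
# Gesture names ("pinch", "swipe_right", ...) are hypothetical labels.

class DiscreteSliderController:
    def __init__(self, value=0, step=1, lo=0, hi=100):
        self.dragging = False
        self.value = value
        self.step, self.lo, self.hi = step, lo, hi

    def on_gesture(self, gesture):
        """Feed one recognized gesture per processed batch of video frames."""
        if not self.dragging:
            if gesture == "pinch":          # dragging-mode activation gesture
                self.dragging = True
        elif gesture == "open_palm":        # leave dragging mode
            self.dragging = False
        elif gesture == "swipe_right":      # discrete control: one step up
            self.value = min(self.hi, self.value + self.step)
        elif gesture == "swipe_left":       # discrete control: one step down
            self.value = max(self.lo, self.value - self.step)
        return self.value

ctrl = DiscreteSliderController(value=50)
for g in ["swipe_right", "pinch", "swipe_right", "swipe_right", "open_palm", "swipe_right"]:
    ctrl.on_gesture(g)
print(ctrl.value)  # swipes outside dragging mode are ignored → 52
```

Because each gesture moves the value by exactly one step, the mapping is discrete rather than proportional, which is what enables the precise parameter setting the abstract highlights.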
-
Patent number: 11809637
Abstract: Methods, devices, and processor-readable media for adjusting the control-display gain of a gesture-controlled device are described. Adjusting the control-display gain may facilitate user interaction with content or UI elements rendered on a display screen of the gesture-controlled device. The control-display gain may be adjusted based on a property of how a mid-air dragging gesture is being performed by a user's hand. The property may be the location of the gesture, the orientation of the hand performing the gesture, or the velocity of the gesture. A hand that becomes stationary for a threshold time period while performing the dragging gesture may adjust the control-display gain to a different level. Control-display gain may be set to a different value based on the current velocity of the hand performing the gesture. The control-display gain levels may be selected from a continuous range of values or a set of discrete values. Devices for performing the methods are described.
Type: Grant
Filed: September 13, 2022
Date of Patent: November 7, 2023
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Wei Li, Wei Zhou, Sachi Mizobuchi, Ghazaleh Saniee-Monfared, Juwei Lu, Taslim Arefin Khan, Rafael Veras Guimaraes
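The velocity- and dwell-based gain selection this abstract describes can be sketched roughly as follows; the thresholds, gain values, and the linear speed-to-gain mapping are illustrative assumptions:

```python
# Sketch of velocity-dependent control-display (CD) gain: fast hand motion
# maps to a high gain (coarse movement), slow motion to a low gain (fine
# movement), and a hand held stationary past a dwell threshold switches
# to an extra-fine precision level. All numbers are illustrative.

STATIONARY_TIME = 0.5     # seconds of stillness required for precision mode

def cd_gain(speed, stationary_since=None, now=0.0):
    """Return a CD gain for the current hand speed (m/s).

    stationary_since: timestamp when the hand first became stationary,
    or None if the hand is moving.
    """
    if stationary_since is not None and now - stationary_since >= STATIONARY_TIME:
        return 0.25                      # precision level after dwell
    # continuous mapping: gain grows with speed, clamped to [0.5, 4.0]
    return min(4.0, max(0.5, 2.0 * speed))

def display_delta(hand_delta, speed, **kw):
    """Cursor displacement = hand displacement * CD gain."""
    return hand_delta * cd_gain(speed, **kw)

print(cd_gain(0.1))                                 # slow hand → low gain 0.5
print(cd_gain(3.0))                                 # fast hand → clamped at 4.0
print(cd_gain(0.0, stationary_since=0.0, now=0.6))  # dwell reached → 0.25
```

The clamped linear mapping demonstrates the "continuous range of values" variant; replacing it with a lookup over a few speed bands would give the "set of discrete values" variant the abstract also mentions.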
-
Patent number: 11797081
Abstract: Methods, devices, and media are disclosed for mapping of input and output spaces in head-based human-computer interactions. In some embodiments, an end-to-end method is described for designing a head-based user interface, calibrating the interface to individual users, and interacting with a user in real time by mapping head-based user inputs to an output space in a way that optimizes the target selection efficiency of the interaction. Head orientation may be leveraged to define the mapping between the user input and the output space.
Type: Grant
Filed: August 20, 2021
Date of Patent: October 24, 2023
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Wei Zhou, Mona Hosseinkhani Loorak, Ghazaleh Saniee-Monfared, Sachi Mizobuchi, Xiu Yi, Pourang Polad Irani, Jian Zhao, Wei Li
-
Publication number: 20230222932
Abstract: Methods, systems and media for context-aware estimation of student attention in online learning are described. An attention monitoring system filters or restricts the time periods in which student attention is monitored or assessed to those time periods in which student attention is important. These time periods of high attention importance may be determined by processing data from the teacher, such as audio data representing the teacher's voice and/or visual presentation data representing slides or other visual material being presented to the students. Various types of presenter data from the teacher and attendee data from the students may be used in assessing the importance of attention and each student's attention during each time period. The presenter may be provided with feedback in various forms showing student attention performance aggregated or segmented according to various criteria.
Type: Application
Filed: March 14, 2023
Publication date: July 13, 2023
Inventors: Wei ZHOU, Mona HOSSEINKHANI LOORAK, Sachi MIZOBUCHI, XIU YI, Juntao YE, Liang HU, Wei LI
-
Patent number: 11693551
Abstract: Devices and methods for providing feedback for a control display (CD) gain of a slider control on a gesture-controlled device are described. The method includes detecting a speed of a dynamic dragging gesture, determining the CD gain for the slider control based on the speed, and generating an auditory feedback or a visual feedback for the CD gain. A gesture-controlled device, which carries out the method, may have an image-capturing device, a processor, and a memory coupled to the processor, the memory storing machine-executable instructions for carrying out the method. Advantageously, providing feedback for the CD gain of a slider control facilitates accurate adjustment of a system parameter associated with the slider control. The devices and methods may be used in industrial applications and vehicle control, among other applications.
Type: Grant
Filed: May 21, 2021
Date of Patent: July 4, 2023
Assignee: Huawei Technologies Co., Ltd.
Inventors: Wei Zhou, Sachi Mizobuchi, Gaganpreet Singh, Ghazaleh Saniee-Monfared, Sitao Wang, Futian Zhang, Wei Li
-
Publication number: 20230116341
Abstract: Methods and apparatuses for controlling a selection focus of a user interface using gestures, in particular mid-air hand gestures, are described. A hand is detected within a defined activation region in a first frame of video data. The detected hand is tracked to determine a tracked location of the detected hand in at least a second frame of video data. A control signal is outputted to control the selection focus to focus on a target in the user interface, where movement of the selection focus is controlled based on a displacement between the tracked location and a reference location in the activation region.
Type: Application
Filed: September 30, 2022
Publication date: April 13, 2023
Inventors: Futian ZHANG, Edward LANK, Sachi MIZOBUCHI, Gaganpreet SINGH, Wei ZHOU, Wei LI
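The displacement mapping described here (selection focus driven by the offset between the tracked hand location and a reference point in the activation region) resembles joystick-style rate control. A minimal sketch, with an assumed dead zone and gain, neither of which is specified in the abstract:

```python
# Sketch of displacement-based selection-focus control: the hand's offset
# from a reference point (e.g. the activation region's center) sets the
# focus movement rate, like a joystick. Dead-zone size and gain are
# illustrative assumptions.

def focus_velocity(tracked, reference, dead_zone=0.05, gain=10.0):
    """Map hand displacement (normalized image coordinates) to focus
    movement in UI items per second; offsets inside the dead zone are
    ignored so a resting hand does not drift the focus."""
    dx = tracked[0] - reference[0]
    dy = tracked[1] - reference[1]
    vx = 0.0 if abs(dx) < dead_zone else dx * gain
    vy = 0.0 if abs(dy) < dead_zone else dy * gain
    return vx, vy

print(focus_velocity((0.52, 0.50), (0.50, 0.50)))  # inside dead zone → (0.0, 0.0)
print(focus_velocity((0.70, 0.50), (0.50, 0.50)))  # offset right → (~2.0, 0.0)
```

Rate control (offset drives velocity) rather than position control is one plausible reading of "movement of the selection focus is controlled based on a displacement"; the abstract does not commit to either.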
-
Publication number: 20230078074
Abstract: Methods, devices, and processor-readable media for hand-on-wheel gesture controls are described. A steering wheel is virtually segmented into a plurality of radial segments. Each segment is assigned a semantic meaning. Control commands related to control functions are mapped to different virtual segments of the steering wheel. When a user performs a gesture on the steering wheel, the system recognizes the gesture and selects a control function based on the location of the hand relative to the steering wheel. In various embodiments, on-wheel hand gestures, on-wheel hand location, and voice commands may be used in various combinations to enable a user to perform a wide selection of functions using a small number of unique commands.
Type: Application
Filed: November 21, 2022
Publication date: March 16, 2023
Inventors: Wenshu LUO, Sachi MIZOBUCHI, Wei ZHOU, Wei LI
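The radial segmentation described here amounts to binning the hand's angular position around the wheel center. A rough sketch with an assumed four-segment layout and hypothetical command names:

```python
import math

# Sketch of virtual steering-wheel segmentation: the wheel rim is divided
# into equal radial segments, each mapped to a control command, and the
# hand's angular position selects which command a gesture triggers.
# Segment count and command names are illustrative assumptions.

SEGMENT_COMMANDS = ["volume", "media", "navigation", "climate"]  # 4 segments

def segment_for_hand(hand_xy, wheel_center):
    """Return the command mapped to the radial segment under the hand."""
    angle = math.atan2(hand_xy[1] - wheel_center[1],
                       hand_xy[0] - wheel_center[0]) % (2 * math.pi)
    n = len(SEGMENT_COMMANDS)
    index = int(angle / (2 * math.pi / n))   # which equal-width wedge?
    return SEGMENT_COMMANDS[index]

center = (0.0, 0.0)
print(segment_for_hand((1.0, 0.1), center))    # near 0 rad → "volume"
print(segment_for_hand((-1.0, -0.1), center))  # just past pi rad → "navigation"
```

Combining the segment lookup with a small recognized-gesture vocabulary yields the multiplication of functions the abstract describes: N gestures times M segments gives N×M distinct commands.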
-
Publication number: 20230059153
Abstract: Methods, devices, and media are disclosed for mapping of input and output spaces in head-based human-computer interactions. In some embodiments, an end-to-end method is described for designing a head-based user interface, calibrating the interface to individual users, and interacting with a user in real time by mapping head-based user inputs to an output space in a way that optimizes the target selection efficiency of the interaction. Head orientation may be leveraged to define the mapping between the user input and the output space.
Type: Application
Filed: August 20, 2021
Publication date: February 23, 2023
Inventors: Wei ZHOU, Mona HOSSEINKHANI LOORAK, Ghazaleh SANIEE-MONFARED, Sachi MIZOBUCHI, Xiu YI, Pourang Polad IRANI, Jian ZHAO, Wei LI
-
Publication number: 20230013169
Abstract: Methods, devices, and processor-readable media for adjusting the control-display gain of a gesture-controlled device are described. Adjusting the control-display gain may facilitate user interaction with content or UI elements rendered on a display screen of the gesture-controlled device. The control-display gain may be adjusted based on a property of how a mid-air dragging gesture is being performed by a user's hand. The property may be the location of the gesture, the orientation of the hand performing the gesture, or the velocity of the gesture. A hand that becomes stationary for a threshold time period while performing the dragging gesture may adjust the control-display gain to a different level. Control-display gain may be set to a different value based on the current velocity of the hand performing the gesture. The control-display gain levels may be selected from a continuous range of values or a set of discrete values. Devices for performing the methods are described.
Type: Application
Filed: September 13, 2022
Publication date: January 19, 2023
Inventors: Wei LI, Wei ZHOU, Sachi MIZOBUCHI, Ghazaleh SANIEE-MONFARED, Juwei LU, Taslim Arefin KHAN, Rafael VERAS GUIMARAES
-
Publication number: 20220374138
Abstract: Devices and methods for providing feedback for a control display (CD) gain of a slider control on a gesture-controlled device are described. The method includes detecting a speed of a dynamic dragging gesture, determining the CD gain for the slider control based on the speed, and generating an auditory feedback or a visual feedback for the CD gain. A gesture-controlled device, which carries out the method, may have an image-capturing device, a processor, and a memory coupled to the processor, the memory storing machine-executable instructions for carrying out the method. Advantageously, providing feedback for the CD gain of a slider control facilitates accurate adjustment of a system parameter associated with the slider control. The devices and methods may be used in industrial applications and vehicle control, among other applications.
Type: Application
Filed: May 21, 2021
Publication date: November 24, 2022
Inventors: Wei ZHOU, Sachi MIZOBUCHI, Gaganpreet SINGH, Ghazaleh SANIEE-MONFARED, Sitao WANG, Futian ZHANG, Wei LI
-
Patent number: 11507194
Abstract: Methods, devices, and processor-readable media for hand-on-wheel gesture controls are described. A steering wheel is virtually segmented into a plurality of radial segments. Each segment is assigned a semantic meaning. Control commands related to control functions are mapped to different virtual segments of the steering wheel. When a user performs a gesture on the steering wheel, the system recognizes the gesture and selects a control function based on the location of the hand relative to the steering wheel. In various embodiments, on-wheel hand gestures, on-wheel hand location, and voice commands may be used in various combinations to enable a user to perform a wide selection of functions using a small number of unique commands.
Type: Grant
Filed: December 2, 2020
Date of Patent: November 22, 2022
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Wenshu Luo, Sachi Mizobuchi, Wei Zhou, Wei Li
-
Patent number: 11474614
Abstract: Methods, devices, and processor-readable media for adjusting the control-display gain of a gesture-controlled device are described. Adjusting the control-display gain may facilitate user interaction with content or UI elements rendered on a display screen of the gesture-controlled device. The control-display gain may be adjusted based on a property of how a mid-air dragging gesture is being performed by a user's hand. The property may be the location of the gesture, the orientation of the hand performing the gesture, or the velocity of the gesture. A hand that becomes stationary for a threshold time period while performing the dragging gesture may adjust the control-display gain to a different level. Control-display gain may be set to a different value based on the current velocity of the hand performing the gesture. The control-display gain levels may be selected from a continuous range of values or a set of discrete values. Devices for performing the methods are described.
Type: Grant
Filed: October 30, 2020
Date of Patent: October 18, 2022
Assignee: HUAWEI TECHNOLOGIES CO., LTD.
Inventors: Wei Li, Wei Zhou, Sachi Mizobuchi, Ghazaleh Saniee-Monfared, Juwei Lu, Taslim Arefin Khan, Rafael Veras Guimaraes
-
Publication number: 20220319063
Abstract: The present disclosure relates to a video conferencing system which provides non-obtrusive feedback to a presenter during a video conference to improve interactivity among participants of the video conference. An image sensor, such as a camera, captures a sequence of images (i.e., a video) of a participant of the video conference while the participant is moving a part of their body, and sends the images to the video conferencing server. The video conferencing server processes the images to recognize a type of gesture performed by the participant and selects an ambient graphic that corresponds to the recognized gesture. The video conferencing server sends the ambient graphic to a client device associated with the presenter. The client device associated with the presenter renders or displays the ambient graphic on a display screen of the client device without obscuring information displayed on the display screen of the client device.
Type: Application
Filed: June 23, 2022
Publication date: October 6, 2022
Inventors: Sachi MIZOBUCHI, Xuan ZHOU
-
Publication number: 20220197494
Abstract: Devices and methods for controlling an electronic device are provided. The methods include recognizing a two finger input gesture on at least two touch sensitive surfaces of the electronic device and manipulating a slider control rendered on a display in response to recognizing the two finger input gesture. Recognizing the two finger input gesture includes detecting a first input gesture, including a swipe gesture on a first touch sensitive surface, and detecting a second input gesture, including a static touch at a location on a second touch sensitive surface. The first input gesture and the second input gesture occur in the same time period. The static touch location may be a soft button of a plurality of buttons, and the slider control may be rendered in response to the static touch on the soft button. The method may be used in an electronic device having multiple touchscreen displays.
Type: Application
Filed: December 18, 2020
Publication date: June 23, 2022
Inventors: Wei LI, Sachi MIZOBUCHI
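The recognition rule in this abstract, a swipe on one surface combined with a static touch held on another during the same time period, can be approximated with a simple interval-overlap check. The event fields and the button-to-slider mapping below are assumptions for illustration:

```python
# Sketch of two-surface gesture recognition: a swipe on one touch-sensitive
# surface counts as the combined two-finger input only if a static touch is
# held on the second surface during an overlapping time period. Event
# dictionaries and the soft-button mapping are illustrative assumptions.

def overlaps(a_start, a_end, b_start, b_end):
    """True if the two time intervals share any duration."""
    return a_start < b_end and b_start < a_end

def recognize_two_finger(swipe, static_touch, soft_buttons):
    """swipe and static_touch: dicts with 'start'/'end' timestamps (s);
    static_touch also carries 'pos', the touched soft-button location.
    Returns the slider to manipulate, or None if no combined gesture."""
    if not overlaps(swipe["start"], swipe["end"],
                    static_touch["start"], static_touch["end"]):
        return None  # the two gestures did not happen in the same time period
    # the static touch's soft button selects which slider is rendered
    return soft_buttons.get(static_touch["pos"])

buttons = {"top_left": "brightness_slider", "bottom_left": "volume_slider"}
swipe = {"start": 1.0, "end": 1.4}
hold = {"start": 0.9, "end": 1.6, "pos": "bottom_left"}
print(recognize_two_finger(swipe, hold, buttons))  # → volume_slider
```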
-
Publication number: 20220197392
Abstract: Methods and systems for discrete control of user interface elements such as graphical widgets are described. A first plurality of video frames is captured and processed to recognize hand gestures therein. A dragging mode is activated in response to a dragging mode activation hand gesture in the first plurality of frames. A second plurality of video frames is captured and processed to allow recognition of hand gestures therein. An element of a user interface control is dragged in response to recognition of a discrete control hand gesture in the second plurality of frames. The discrete control of a user interface control using mid-air hand gestures allows precise setting of system parameters associated with the user interface control.
Type: Application
Filed: December 17, 2020
Publication date: June 23, 2022
Inventors: Wei ZHOU, Sachi MIZOBUCHI, Rafael VERAS GUIMARAES, Ghazaleh SANIEE-MONFARED, Wei LI
-
Publication number: 20220171465
Abstract: Methods, devices, and processor-readable media for hand-on-wheel gesture controls are described. A steering wheel is virtually segmented into a plurality of radial segments. Each segment is assigned a semantic meaning. Control commands related to control functions are mapped to different virtual segments of the steering wheel. When a user performs a gesture on the steering wheel, the system recognizes the gesture and selects a control function based on the location of the hand relative to the steering wheel. In various embodiments, on-wheel hand gestures, on-wheel hand location, and voice commands may be used in various combinations to enable a user to perform a wide selection of functions using a small number of unique commands.
Type: Application
Filed: December 2, 2020
Publication date: June 2, 2022
Inventors: Wenshu LUO, Sachi MIZOBUCHI, Wei ZHOU, Wei LI
-
Publication number: 20210333884
Abstract: Methods, devices, and processor-readable media for adjusting the control-display gain of a gesture-controlled device are described. Adjusting the control-display gain may facilitate user interaction with content or UI elements rendered on a display screen of the gesture-controlled device. The control-display gain may be adjusted based on a property of how a mid-air dragging gesture is being performed by a user's hand. The property may be the location of the gesture, the orientation of the hand performing the gesture, or the velocity of the gesture. A hand that becomes stationary for a threshold time period while performing the dragging gesture may adjust the control-display gain to a different level. Control-display gain may be set to a different value based on the current velocity of the hand performing the gesture. The control-display gain levels may be selected from a continuous range of values or a set of discrete values. Devices for performing the methods are described.
Type: Application
Filed: October 30, 2020
Publication date: October 28, 2021
Inventors: Wei LI, Wei ZHOU, Sachi MIZOBUCHI, Ghazaleh SANIEE-MONFARED, Juwei LU, Taslim Arefin KHAN, Rafael VERAS GUIMARAES
-
Publication number: 20210294485
Abstract: A user timeline is generated for an electronic device (ED) user. Data is collected that includes: location data; application data; and activity data. Occurrences of predetermined types of observed events are detected based on the collected data. For each detected occurrence, a respective observed event record is stored that includes information about a time and type of the observed event. Planned event records, each including information about a time and type of a respective planned event, are stored for planned events that the user is scheduled to participate in. Events are predicted based on the observed event records and the planned event records. Information about observed, planned, and predicted events is output on a timeline user interface based on the observed event records, the planned event records, and predicted event records stored for the predicted events, respectively.
Type: Application
Filed: April 15, 2021
Publication date: September 23, 2021
Inventors: Wei LI, Sachi MIZOBUCHI, Qiang XU, Wei ZHOU, Jianpeng XU
-
Touch screen user interface featuring stroke-based object selection and functional object activation
Patent number: 7554530
Abstract: A method is disclosed to operate a touch screen user interface. The method includes forming a stroke that encloses an area that contains at least a portion of at least one displayed object; and selecting the at least one displayed object. Forming the stroke may further include extending the stroke to a functional object, and activating the functional object with the at least one selected displayed object. If the stroke does not define an area that is totally enclosed by the stroke, the method may further include automatically continuing the stroke such that the area is totally enclosed by the stroke. In this case the stroke may be automatically continued by drawing a line that connects a stroke starting point to a stroke ending point, and by adding touch screen coordinates covered by the line to a list of touch screen coordinates that describe the stroke.
Type: Grant
Filed: December 23, 2002
Date of Patent: June 30, 2009
Assignee: Nokia Corporation
Inventors: Sachi Mizobuchi, Eigo Mori
-
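The automatic stroke completion described here (joining the stroke's end point back to its start, then selecting objects inside the enclosed area) can be sketched with a standard even-odd point-in-polygon test; the ray-casting routine is a common technique used for illustration, not necessarily the patented implementation:

```python
# Sketch of stroke-based selection: an open stroke is closed automatically
# by a straight segment from its end point back to its start point, and
# displayed objects whose positions fall inside the resulting polygon are
# selected. Object names and coordinates are illustrative.

def close_stroke(points):
    """Append the start point if needed so the stroke forms a closed polygon."""
    return points if points[0] == points[-1] else points + [points[0]]

def inside(point, polygon):
    """Even-odd ray-casting point-in-polygon test over consecutive edges."""
    x, y = point
    hit = False
    for (x1, y1), (x2, y2) in zip(polygon, polygon[1:]):
        # does a ray cast to the right from `point` cross this edge?
        if (y1 > y) != (y2 > y) and x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
            hit = not hit
    return hit

stroke = close_stroke([(0, 0), (10, 0), (10, 10), (0, 10)])  # open square stroke
objects = {"icon_a": (5, 5), "icon_b": (20, 5)}
selected = [name for name, pos in objects.items() if inside(pos, stroke)]
print(selected)  # → ['icon_a']
```

Dragging the closed stroke onto a functional object (e.g. a trash-can icon) and applying that function to the selected set would correspond to the "activating the functional object" step in the abstract.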
Patent number: 7142231
Abstract: This invention provides a handset (10) and a method of operating a handset. The handset includes a user interface that contains a data entry device (20) and a visual display device (14), a camera (16) and a controller (12) coupled to the visual display device and to the camera. The controller operates under the control of a stored program (22A) for displaying to a user an image representative of at least a portion of an environment of the user as seen through the camera during a time that the user is interacting with the user interface. The controller may further operate under control of the stored program to process images generated by the camera to detect a potential for a collision with an object that is present in the environment of the user, and to warn the user of the potential for a collision.
Type: Grant
Filed: December 29, 2003
Date of Patent: November 28, 2006
Assignee: Nokia Corporation
Inventors: Jan Chipchase, Sachi Mizobuchi, Makoto Sugano, Heikki Waris