TOUCHLESS USER INTERFACE NAVIGATION USING GESTURES
An example method includes displaying, by a display (104) of a wearable device (100), a content card (114B); receiving, by the wearable device, motion data generated by a motion sensor (102) of the wearable device that represents motion of a forearm of a user of the wearable device; responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is less than an acceleration of the supination, displaying, by the display, a next content card (114C); and responsive to determining, based on the motion data, that the user has performed a movement that includes a supination of the forearm followed by a pronation of the forearm at an acceleration that is greater than an acceleration of the supination, displaying, by the display, a previous content card (114A).
This application is a continuation of U.S. application Ser. No. 14/791,291, filed Jul. 3, 2015, the entire contents of which are hereby incorporated by reference.
BACKGROUND

Some wearable computing devices (e.g., smart watches, activity trackers, heads-up display devices, etc.) output graphical content for display. For example, a wearable computing device may present a graphical user interface (GUI) including one or more graphical elements that contain information. As a user interacts with a GUI that contains visual indications of content, the wearable computing device may receive input (e.g., speech input, touch input, etc.). However, when interacting with the GUI, it may be difficult for a user to provide speech input, touch input, or other conventional types of input that may require a user to focus and/or exhibit precise control. For example, the user may be immersed in activity (e.g., having a face-to-face conversation, riding a bicycle, etc.) or attending an event (e.g., a concert, a movie, a meeting, an educational class, etc.) that prevents a user from speaking voice-commands into a microphone or providing specific touch inputs at a screen.
The details of one or more examples of the disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages will be apparent from the description and drawings, and from the claims.
In general, techniques of this disclosure may enable a wearable computing device (e.g., smart watches, activity trackers, heads-up display devices, etc.) to detect movement associated with the wearable computing device, and, in response to detecting a particular movement that approximates a predefined movement, output an altered presentation and/or arrangement of content cards displayed at a display component of the wearable computing device. For example, a wearable computing device (referred to herein simply as a “wearable”) may output a graphical user interface (GUI) for presentation at a display (e.g., a display of the wearable). The GUI may include a list of content cards and each of the content cards may contain information (e.g., text, graphics, etc.) that is viewable at the display. In some implementations, only information associated with a current content card from the list may be visible at a given time, while information associated with the other content cards from the list may not be visible at the given time.
Rather than requiring the user to provide a voice-command (e.g., by speaking the word “next” into a microphone of the wearable) or provide touch inputs (e.g., by tapping or sliding on a screen of the wearable) to instruct the wearable to update the GUI such that information associated with one or more of the other content cards is visible to the user, the wearable may enable the user to provide specific movements to cause the wearable to update the GUI, thereby enabling the user to navigate through the list of content cards. A motion sensor of the wearable may detect movement associated with the wearable itself (e.g., as the user moves and twists the body part or piece of clothing to which the wearable is attached). After detecting movement that corresponds to a predefined movement associated with a particular user interface navigation direction through the list, the wearable may select a card in the particular user interface navigation direction, and output the selected card for display. For example, if the user causes the wearable to move with a specific change in direction, speed, acceleration, rotation, etc., over a certain period of time (e.g., one second), the wearable may cause the display to replace a current content card with a different content card from the list.
In this manner, techniques of this disclosure may enable a user to more quickly and easily view different content cards in a list by providing certain, easy-to-perform movements that may require less user focus or control than other types of inputs. Unlike other types of wearable devices that rely primarily on speech, touch, or other types of input, a wearable configured according to techniques of this disclosure can enable a user to more quickly and intuitively navigate through a list of content cards, even if the user is immersed in other activities. For example, even if a user is using his or her hands to cook, is standing in line at an airport, or is otherwise performing an activity that makes providing voice commands or touch inputs difficult, the user can easily navigate through a list of content cards displayed at a wearable device simply by moving himself or herself (and thus the wearable) according to a predetermined movement pattern.
As shown in
Attachment component 116 may include a physical portion of a wearable computing device that comes in contact with a body (e.g., tissue, muscle, skin, hair, clothing, etc.) of a user when the user is wearing wearable 100 (though, in some examples, portions of housing 118 may also come in contact with the body of the user). For example, in cases where wearable 100 is a watch, attachment component 116 may be a watch band that fits around a user's wrist and comes in contact with the skin of the user. In examples where wearable 100 is eyewear or headwear, attachment component 116 may be a portion of the frame of the eyewear or headwear that fits around a user's head, and when wearable 100 is a glove, attachment component 116 may be the material of the glove that conforms to the fingers and hand of the user. In some examples, wearable 100 can be grasped and held from housing 118 and/or attachment component 116.
Modules 106 and 108 may perform operations described herein using software, hardware, firmware, or a mixture of hardware, software, and/or firmware residing in and/or executing at wearable 100. Wearable 100 may execute modules 106 and 108 with one or more processors located within housing 118. In some examples, wearable 100 may execute modules 106 and 108 as one or more virtual machines executing on underlying hardware of wearable 100 located within housing 118. Modules 106 and 108 may execute as one or more services or components of operating systems or computing platforms of wearable 100. Modules 106 and 108 may execute as one or more executable programs at application layers of computing platforms of wearable 100. In other examples, motion sensors 102, display 104, and/or modules 106 and 108 may be arranged remotely to housing 118 and be remotely accessible to wearable 100, for instance, via interaction by wearable 100 with one or more network services operating at a network or in a network cloud.
Motion sensors 102 represent one or more motion sensors or input devices configured to detect indications of movement (e.g., data representing movement) associated with wearable 100. Examples of motion sensors 102 include accelerometers, speed sensors, gyroscopes, tilt sensors, barometers, proximity sensors, ambient light sensors, cameras, microphones, or any and all other types of input devices or sensors that can generate data from which wearable device 100 can determine movement.
Motion sensors 102 may generate “raw” motion data when a user of wearable 100 causes attachment component 116 and/or housing 118 to move. For example, as a user twists his or her wrist or moves his or her arm while wearing attachment component 116, motion sensors 102 may output raw motion data (e.g., indicating an amount of movement and a time at which the movement was detected) being generated during the movement to movement detection module 106. The motion data may indicate one or more characteristics of movement including at least one of an acceleration, a level of tilt, a direction, a speed, a degree of rotation, a degree of orientation, or a level of luminance.
In some examples, the motion data generated by motion sensors 102 may be a series of motion vectors. For instance, at time t, a three-axis accelerometer of motion sensors 102 may generate a motion vector (Vx, Vy, Vz), where the Vx value indicates the acceleration of wearable 100 along an X-axis, the Vy value indicates the acceleration of wearable 100 along a Y-axis, and the Vz value indicates the acceleration of wearable 100 along a Z-axis. In some examples, the X-axis and the Y-axis may define a plane substantially parallel to display 104, and the Z-axis may be perpendicular to both the X-axis and the Y-axis. As illustrated in
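For purposes of illustration only, the following Python sketch (using hypothetical names not drawn from this disclosure) shows one way such a series of timestamped motion vectors might be represented:

    from dataclasses import dataclass

    @dataclass
    class MotionVector:
        t: float   # timestamp of the sample, in seconds
        vx: float  # acceleration along the X-axis (parallel to the display)
        vy: float  # acceleration along the Y-axis (parallel to the display)
        vz: float  # acceleration along the Z-axis (perpendicular to the display)

    # Example: a short series of samples as a three-axis accelerometer might report them.
    samples = [
        MotionVector(t=0.00, vx=0.1, vy=-0.2, vz=9.8),
        MotionVector(t=0.02, vx=0.3, vy=-0.1, vz=9.7),
    ]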
Movement detection module 106 obtains motion sensor data generated by motion sensors 102 and processes the motion sensor data to identify or otherwise determine what specific types and characteristics of movement are being detected by motion sensors 102. Said differently, movement detection module 106 determines, based on motion sensor data, when, how, and in what direction wearable 100 is moving. Movement detection module 106 may provide, based on motion data obtained from motion sensors 102, an indication (e.g., data) of when wearable 100 is detected moving in a recognizable, predefined pattern or profile of movement. For example, movement detection module 106 may alert (e.g., trigger an interrupt, send a message, etc.) UI module 108 when movement detection module 106 identifies motion data obtained from motion sensors 102 that at least approximately corresponds to one or more predefined movements. Movement detection module 106 may provide, to UI module 108, data about the detected movement, for instance, data that defines the particular predefined movement indicated by the motion data.
As described below, UI module 108 may cause wearable 100 to perform one or more operations based on movement detected by movement detection module 106. For example, UI module 108 may alter the presentation of a user interface (e.g., user interfaces 110A and 110B) depending on the predefined movement identified by movement detection module 106. For instance, at any particular time, movement detection module 106 may obtain motion sensor data, check the motion sensor data against one or more expected sensor data patterns or profiles that are normally observed by motion sensors 102 when wearable 100 moves in a certain direction, speed, acceleration, etc., and output data to UI module 108 that defines the predefined movement of wearable 100 being recognized from the motion sensor data.
Display 104 of wearable 100 may provide output functionality for wearable 100. Display 104 may be implemented using one or more various technologies. For instance, display 104 may function as an output device using any one or more display devices, such as a liquid crystal display (LCD), a dot matrix display, a light emitting diode (LED) display, an organic light-emitting diode (OLED) display, e-ink, or similar monochrome or color displays capable of outputting visible information to a user of wearable 100. In some examples, display 104 may function as an input device using a presence-sensitive input screen, such as a resistive touchscreen, a surface acoustic wave touchscreen, a capacitive touchscreen, a projective capacitance touchscreen, a pressure sensitive screen, an acoustic pulse recognition touchscreen, or another presence-sensitive display technology.
Display 104 may present the output as a graphical user interface, which may be associated with functionality provided by wearable 100. For example, display 104 may present user interfaces 110A and 110B (collectively, “user interfaces 110”). Each of user interfaces 110 may include a current content card of a list of content cards. For instance, in the example of
Each of content cards 114 may be associated with functionality of computing platforms, operating systems, applications, and/or services executing at or accessible by wearable 100 (e.g., notification services, electronic message applications, Internet browser applications, mobile or desktop operating systems, etc.). A user may interact with user interfaces 110 while being presented at display 104 to cause wearable 100 to perform operations relating to the functions.
Content card 114A represents a content card that includes an image of a clock associated with a time or calendar application. Content card 114B may include a photo, video, or other image data associated with a photo or imaging application (e.g., a viewfinder of a camera, a picture or video playback, etc.). Content card 114D represents a content card that includes weather information directed to a weather information services application (e.g., for viewing a forecast, receiving emergency weather alerts, etc.). Content card 114C represents a content card that includes information associated with a text-based messaging service application executing at wearable 100. Content card 114C may include text-based information related to a conversation between a user of wearable 100 and another user of the messaging service. For example, a message account associated with wearable 100 may receive a notification or alert to a message received from a messaging service. Wearable 100 may present the information associated with content card 114C in response to the receipt of the notification. From content card 114C, the user of wearable 100 can view the content associated with the message and compose a reply message. Still many other examples of content cards 114 exist, including media player related content cards, Internet search (e.g., text-based, voice-based, etc.) related content cards, navigation related content cards, and the like.
In some examples, lists of content cards may be at different hierarchical levels and content cards at a particular hierarchical level may correspond to lists of content cards at different hierarchical levels. For instance, list 112 of content cards 114 may be at a first hierarchical level and content card 114C may correspond to a different list of content cards at a lower hierarchical level than list 112. In some examples, the lists of content cards may be referred to as bundles of content cards.
UI module 108 may receive and interpret movements identified by movement detection module 106 (e.g., from motion sensors 102). UI module 108 may cause wearable 100 to perform functions by relaying information about the detected inputs and identified movements to one or more associated platforms, operating systems, applications, and/or services executing at wearable 100.
Responsive to obtaining and relaying information about the identified movements, UI module 108 may receive information and instructions from the one or more associated platforms, operating systems, applications, and/or services executing at wearable 100 for generating and altering a user interface associated with wearable 100 (e.g., user interfaces 110A and 110B). In this way, UI module 108 may act as an intermediary between the one or more associated platforms, operating systems, applications, and/or services executing at wearable 100 and various input and output devices of wearable 100 (e.g., display 104, motion sensors 102, a speaker, an LED indicator, other output devices, etc.) to produce output (e.g., a graphic, a flash of light, a sound, a haptic response, etc.) with wearable 100.
In some examples, UI module 108 may interpret movement data detected by movement detection module 106, and in response to the inputs and/or movement data, cause display 104 to alter the presented user interface. For instance, in one example, a user may cause housing 118 and/or attachment component 116 of wearable 100 to move. UI module 108 may alter the user interface presented at display 104 in response to detecting the movement. For example, UI module 108 may cause display 104 to present user interface 110A prior to the movement (i.e., cause display 104 to display content card 114B prior to the movement), and may cause display 104 to present user interface 110B after the movement (i.e., cause display 104 to display content card 114C after the movement).
UI module 108 may maintain a data store that stores an association between one or more predefined movements and one or more respective user interface navigation commands for navigating through content cards 114. Some example user interface navigation commands which may be associated with predefined movements include, but are not limited to, a next navigation command to move to a next content card in a current list of content cards, a previous navigation command to move to a previous content card in a current list of content cards, an into navigation command to move into a list of content cards at a lower hierarchical level that corresponds to the current content card, an out navigation command to move out to a list of content cards at a higher hierarchical level, and a reset navigation command. In some examples, the next navigation command may be associated with a movement that includes a supination of the forearm of the user followed by a pronation of the forearm of the user at an acceleration that is less than an acceleration of the supination. In some examples, the previous navigation command may be associated with a movement that includes a supination of the forearm of the user followed by a pronation of the forearm of the user at an acceleration that is greater than an acceleration of the supination. In some examples, the into navigation command may be associated with a movement that includes a lowering of the forearm of the user away from a head of the user followed by a raising of the forearm of the user toward the head of the user. In some examples, the out navigation command may be associated with a movement that includes a raising of the forearm of the user towards the head of the user followed by a lowering of the forearm of the user away from the head of the user. In some examples, the reset navigation command may be associated with a movement that includes a repeated pronation and supination of the forearm of the user (e.g., two or three cycles of pronation and supination) within a period of time.
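As an illustrative sketch only (the enumeration names and movement labels below are hypothetical, not part of this disclosure), such an association between predefined movements and navigation commands might be represented as follows:

    from enum import Enum, auto

    class NavigationCommand(Enum):
        NEXT = auto()      # advance to the next content card in the current list
        PREVIOUS = auto()  # return to the previous content card
        INTO = auto()      # descend into the list at a lower hierarchical level
        OUT = auto()       # ascend to the list at a higher hierarchical level
        RESET = auto()     # reset navigation command

    # Keys describe the detected movement; the relative acceleration of the
    # pronation distinguishes the next command from the previous command.
    MOVEMENT_TO_COMMAND = {
        "supinate_then_pronate_slower": NavigationCommand.NEXT,
        "supinate_then_pronate_faster": NavigationCommand.PREVIOUS,
        "lower_then_raise_forearm": NavigationCommand.INTO,
        "raise_then_lower_forearm": NavigationCommand.OUT,
        "repeated_pronation_supination": NavigationCommand.RESET,
    }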
When UI module 108 determines that one of the predefined movements of wearable 100 has been identified by movement detection module 106, UI module 108 may select the content card of content cards 114 in the corresponding navigation direction. UI module 108 may cause display 104 to present the selected content card of content cards 114. In this way, UI module 108 may enable navigation through content cards in response to, and based on, movement that corresponds to a predefined movement.
In operation, wearable 100 may display a current content card of a list of content cards. For example, UI module 108 may cause display 104 to present user interface 110A which includes content card 114B of list 112 of content cards 114.
In the example of
A motion sensor of wearable 100 may detect movement of wearable 100. For example, one or more motion sensors 102 (e.g., tilt sensors, gyros, accelerometers, etc.) may detect movement of wearable 100 as a user moves (e.g., twists) the part of his or her body that attachment component 116 is attached to, and causes the direction, acceleration, orientation, etc. of housing 118 and/or attachment component 116 to change. Based on the detected movement, motion sensors 102 may generate motion data that defines the detected movement. Movement detection module 106 may obtain the motion data generated by motion sensors 102 while wearable 100 moves.
Movement detection module 106 may compare the movement data obtained from motion sensors 102 to a database or data store of one or more predefined movements. Movement detection module 106 may determine that the motion sensor data matches or otherwise correlates to a particular movement of wearable 100 when a user of wearable 100 waves, twists, shakes, or otherwise moves the arm or wrist that attachment component 116 is fastened to. For instance, movement detection module 106 may determine that the motion sensor data indicates a change in speed, acceleration, direction, rotation, or other characteristic of movement that corresponds to the movement of wearable 100 when a person twists his or her arm or wrist in a certain way. Movement detection module 106 may output an indication (e.g., data) to UI module 108 that alerts UI module 108 as to which of the predefined movements the motion sensor data corresponds. In the example of
Responsive to determining that the movement of wearable 100 corresponds to a predefined movement, UI module 108 may alter the presented user interface based on the predefined movement. For instance, UI module 108 may determine which navigation command is associated with the predefined movement, select a content card based on the determined navigation command, and cause display 104 to present the selected content card. In the example of
In this manner, wearable 100 may enable a user to more quickly and easily view different content cards 114 by moving wearable 100 in a certain way. By providing certain easy-to-perform movements while wearing wearable 100 that require less focus or control than other types of inputs, a wearable such as wearable 100 may enable a user to more quickly and intuitively navigate through a visual stack of content cards, even if the user is immersed in other activities that demand much of the user's attention or focus.
In some examples, the techniques of this disclosure may enable a user to perform operations other than navigating through content cards. As one example, where wearable 100 is configured to perform media (e.g., music, video, etc.) playback, the next navigation command may cause wearable 100 to advance to a next media element (e.g., a next song) and the previous navigation command may cause wearable 100 to return to a previous media element (e.g., a previous song). In some of such examples, the into and out navigation commands may cause wearable 100 to adjust the functions of the next and previous navigation commands. For instance, a first into navigation command may cause wearable 100 to adjust the functions of the next and previous navigation commands such that the next navigation command fast-forwards a currently playing media element and the previous navigation command rewinds the currently playing media element. Similarly, a second into navigation command may cause wearable 100 to adjust the functions of the next and previous navigation commands such that the next navigation command increases the playback volume of a currently playing media element and the previous navigation command decreases the playback volume of the currently playing media element.
Unlike other types of wearable devices that rely primarily on speech, touch, or other types of input, a wearable configured in accordance with the techniques of this disclosure may enable a user to easily navigate through content cards, even if the user is using his or her hands to perform some other action that is unrelated to the navigation of the content cards (e.g., cooking, bicycling, standing in line at an airport, etc.) or otherwise makes providing voice commands or touch inputs difficult. Because the wearable may enable a user to more easily navigate through content cards through simple movements, the wearable according to these techniques may receive fewer false or incorrect touch or spoken inputs. By processing fewer false or incorrect inputs, the techniques may enable a wearable to perform fewer operations and conserve electrical (e.g., battery) power.
As shown in the example of
Application processors 222, in one example, are configured to implement functionality and/or process instructions for execution within computing device 200. For example, application processors 222 may be capable of processing instructions stored in storage device 240. Examples of application processors 222 may include any one or more of a microprocessor, a controller, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field-programmable gate array (FPGA), or equivalent discrete or integrated logic circuitry.
One or more storage devices 240 may be configured to store information within computing device 200 during operation. Storage device 240, in some examples, is described as a computer-readable storage medium. In some examples, storage device 240 is a temporary memory, meaning that a primary purpose of storage device 240 is not long-term storage. Storage device 240, in some examples, is described as a volatile memory, meaning that storage device 240 does not maintain stored contents when the computing device is turned off. Examples of volatile memories include random access memories (RAM), dynamic random access memories (DRAM), static random access memories (SRAM), and other forms of volatile memories known in the art. In some examples, storage device 240 is used to store program instructions for execution by processors 222. Storage device 240, in one example, is used by software or applications running on computing device 200 (e.g., application modules 244) to temporarily store information during program execution.
Storage devices 240, in some examples, also include one or more computer-readable storage media. Storage devices 240 may be configured to store larger amounts of information than volatile memory. Storage devices 240 may further be configured for long-term storage of information. In some examples, storage devices 240 include non-volatile storage elements. Examples of such non-volatile storage elements include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable memories (EPROM) or electrically erasable and programmable (EEPROM) memories.
One or more input components 238 of computing device 200 may receive input. Examples of input are tactile, audio, and video input. Input components 238 of computing device 200, in one example, include a presence-sensitive display, touch-sensitive screen, mouse, keyboard, joystick, physical button/switch, voice responsive system, camera, microphone, or any other type of device for detecting input from a human or machine.
As illustrated in
In some examples, in addition to motion sensors 202, input components 238 may include one or more other sensors, such as one or more location sensors (e.g., a global positioning system (GPS) sensor, an indoor positioning sensor, or the like), one or more light sensors, one or more temperature sensors, one or more pressure (or grip) sensors, one or more physical switches, one or more proximity sensors, and one or more bio-sensors that can measure properties of the skin/blood, such as oxygen saturation, pulse, alcohol, blood sugar etc.
One or more output components 226 of computing device 200 may generate output. Examples of output are tactile, audio, and video output. Output components 226 of computing device 200, in one example, include a presence-sensitive display, sound card, video graphics adapter card, speaker, electronic display, or any other type of device for generating output to a human or machine. The electronic display may be an LCD or OLED that is part of a touch screen, or may be a non-touchscreen direct-view display component such as a CRT, LED, LCD, or OLED display. The display component may also be a projector instead of a direct view display.
Presence-sensitive display 228 of computing device 200 includes display component 204 and presence-sensitive input component 230. Display component 204 may be a screen at which information is displayed by presence-sensitive display 228 and presence-sensitive input component 230 may detect an object at and/or near display component 204. As one example range, a presence-sensitive input component 230 may detect an object, such as a finger or stylus that is within 2 inches (˜5.08 centimeters) or less from display component 204. Presence-sensitive input component 230 may determine a location (e.g., an (x,y) coordinate) of display component 204 at which the object was detected. In another example range, presence-sensitive input component 230 may detect an object 6 inches (˜15.24 centimeters) or less from display component 204 and other exemplary ranges are also possible. Presence-sensitive input component 230 may determine the location of display component 204 selected by a user's finger using capacitive, inductive, and/or optical recognition techniques. In some examples, presence sensitive input component 230 also provides output to a user using tactile, audio, or video stimuli as described with respect to display component 204. In the example of
While illustrated as an internal component of computing device 200, presence-sensitive display 228 may also represent an external component that shares a data path with computing device 200 for transmitting and/or receiving input and output. For instance, in one example, presence-sensitive display 228 represents a built-in component of computing device 200 located within and physically connected to the external packaging of computing device 200 (e.g., a screen on a mobile phone). In another example, presence-sensitive display 228 represents an external component of computing device 200 located outside and physically separated from the packaging of computing device 200 (e.g., a monitor, a projector, etc. that shares a wired and/or wireless data path with a tablet computer).
Battery 232 may provide power to one or more components of wearable computing device 200. Examples of battery 232 may include, but are not necessarily limited to, batteries having zinc-carbon, lead-acid, nickel cadmium (NiCd), nickel metal hydride (NiMH), lithium ion (Li-ion), and/or lithium ion polymer (Li-ion polymer) chemistries. Battery 232 may have a limited capacity (e.g., 1000-3000 mAh).
In some examples, wearable 200 may include SCC 234. SCC 234 may communicate with one or more of input components 238, such as motion sensors 202. In some examples, SCC 234 may be referred to as a “sensor hub” that operates as an input/output controller for one or more of input components 238. For example, SCC 234 may exchange data with one or more of input components 238, such as motion data corresponding to wearable 200. SCC 234 may also communicate with application processors 222. In some examples, SCC 234 may use less power than application processors 222. As one example, in operation, SCC 234 may use power in a range of 20-200 mW. In some examples, SCC 234 may be referred to as a digital signal processor (DSP) or advanced DSP (ADSP) that operates as an input/output controller for one or more of input components 238. As illustrated in the example of
Computing device 200 may include operating system 246. Operating system 246, in some examples, controls the operation of components of computing device 200. For example, operating system 246, in one example, facilitates the communication of movement detection module 206, UI module 208, application modules 244, and gesture library 248 with processors 222, output components 226, presence-sensitive display 228, SCC 234, and input components 238. One or more components of storage devices 240 may include program instructions and/or data that are executable by computing device 200. As one example, movement detection module 206 and UI module 208 may include instructions that cause computing device 200 to perform one or more of the operations and actions described in the present disclosure. In some examples, one or more of the components illustrated in storage device 240 may be implemented in hardware and/or a combination of software and hardware.
One or more application modules 244 may provide graphical information and instructions to UI module 208 that UI module 208 includes as content or information contained in a graphical representation of content cards, such as content cards 114 of
Movement detection module 206 may be executable to perform functionality similar to movement detection module 106 of
Data ingestion module 249 may be executable to read and process motion data generated by motion sensors 202. In some examples, data ingestion module 249 may utilize a synchronized circular buffer to store the motion data. Further details of examples of data ingestion module 249 are discussed below with reference to
Segmentation module 250 may be executable to determine one or more segments of motion data for further analysis. Segmentation module 250 may determine a segment of motion data as a series of values of motion data that have one or more properties. Details of an example segmentation process that may be performed by segmentation module 250 are discussed below with reference to
Transform module 252 may be executable to transform motion data between different coordinate systems. For instance, transform module 252 may convert motion data from a first coordinate system to a second coordinate system. In some examples, the first coordinate system may define the orientation of wearable 200 relative to the gravity vector and the second coordinate system may define the orientation of wearable 200 relative to a task-specific orientation. For instance, the second coordinate system may utilize the tilt orientation of wearable 200 (i.e., the orientation of wearable 200 during user interactions) as the task-specific orientation. In any case, transform module 252 may output the converted motion vectors to one or more other components of wearable 200, such as feature module 254. Details of an example transformation process that may be performed by transform module 252 are discussed below with reference to
Feature module 254 may be executable to determine one or more features of a segment of motion data. For instance, feature module 254 may determine one or more features of a segment of motion data determined by segmentation module 250. In some examples, the features determined by feature module 254 may be different types of features. For instance, feature module 254 may determine critical-point features, temporal histograms, cross-channel statistics, per-channel statistics, and basic signal properties. In some examples, feature module 254 may determine the features of a segment using untransformed motion data (i.e., motion data in the first coordinate system). In some examples, feature module 254 may determine the features of a segment using transformed motion data (i.e., motion data in the second coordinate system). In some examples, feature module 254 may determine the features of a segment using a combination of untransformed and transformed motion data. Feature module 254 may output an indication of the determined features to one or more other components of wearable 200, such as classification module 256.
As discussed above, in some examples, feature module 254 may determine critical point features for a segment of motion data (i.e., a sequence of motion vectors [m1, m2, . . . , mn], referred to below as the signal). In some examples, feature module 254 may convolve the signal with a low-pass filter of small kernel size (e.g., with a width of four to five measurements) to generate a filtered signal. This convolution may eliminate or reduce the amount of high frequency noise in the signal. Feature module 254 may determine, in the filtered signal, one or more critical points, which may include one or more prominent maximums and/or one or more prominent minimums, and may determine one or more properties based on the determined prominent maximums and prominent minimums.
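As a minimal illustrative sketch of the smoothing step described above (the moving-average kernel and the kernel width of five samples are assumptions), each channel of the segment might be convolved with a small low-pass kernel as follows:

    import numpy as np

    def low_pass_filter(signal: np.ndarray, kernel_size: int = 5) -> np.ndarray:
        # signal: array of shape (n_samples, n_channels); a small moving-average
        # kernel suppresses high-frequency noise in each channel.
        kernel = np.ones(kernel_size) / kernel_size
        return np.stack(
            [np.convolve(signal[:, c], kernel, mode="same") for c in range(signal.shape[1])],
            axis=1,
        )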
To determine the one or more prominent maximums, feature module 254 may determine all points in the filtered signal that satisfy the following definition: (Prominent maximum) M is a prominent maximum in the signal for a prominence threshold T if and only if two conditions are satisfied. The first condition that must be satisfied in order for M to be a prominent maximum is that M is a local maximum of the filtered signal. The second condition that must be satisfied in order for M to be a prominent maximum is that there is no other local maximum M_alt in the filtered signal such that: (i) value(M_alt) is greater than value(M) (i.e., value(M_alt)>value(M)) and (ii) there is no local minimum m in the signal between M_alt and M such that value(M) minus value(m) is greater than or equal to T (i.e., value(M)−value(m)>=T).
To determine the one or more prominent minimums, feature module 254 may determine all points in the filtered signal that satisfy the following definition: (Prominent minimum) m is a prominent minimum in the signal for the prominence threshold T if and only if two conditions are satisfied. The first condition that must be satisfied in order for m to be a prominent minimum is that m is a local minimum of the signal. The second condition that must be satisfied in order for m to be a prominent minimum is that there is no other local minimum m_alt in the filtered signal such that: (i) value(m_alt) is less than value(m) (i.e., value(m_alt)<value(m)) and (ii) there is no local maximum M in the signal between m_alt and m such that value(M) minus value(m) is greater than or equal to T (i.e., value(M)−value(m)>=T).
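The following sketch implements the prominent-maximum test defined above for a single filtered channel; the prominent-minimum case is symmetric. It is illustrative only, and the strictness of the local-extremum comparisons is an assumption:

    import numpy as np

    def prominent_maxima(x: np.ndarray, T: float) -> list:
        # indices of local maxima and minima of the filtered signal
        maxima = [i for i in range(1, len(x) - 1) if x[i - 1] < x[i] >= x[i + 1]]
        minima = [i for i in range(1, len(x) - 1) if x[i - 1] > x[i] <= x[i + 1]]
        prominent = []
        for M in maxima:
            disqualified = False
            for M_alt in maxima:
                if x[M_alt] <= x[M]:
                    continue  # only higher local maxima can disqualify M
                lo, hi = sorted((M, M_alt))
                between = [m for m in minima if lo < m < hi]
                # M survives this M_alt only if some valley drops at least T below M
                if not any(x[M] - x[m] >= T for m in between):
                    disqualified = True
                    break
            if not disqualified:
                prominent.append(M)
        return prominent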
Feature module 254 may determine one or more properties based on the determined prominent maximums and prominent minimums. As one example, feature module 254 may determine a number of prominent maxima in the A-axis of the transformed motion data (i.e., the A,U,V signal). As another example, feature module 254 may determine a number of prominent maxima in the magnitude of the untransformed motion data (i.e., the X,Y,Z signal). As another example, feature module 254 may determine a number of prominent maxima in each channel of the untransformed motion data (i.e., each one of the X, Y, and Z channels). As another example, feature module 254 may determine a number of prominent minima in each channel of the untransformed motion data (i.e., each one of the X, Y, and Z channels). As another example, feature module 254 may determine a four-bin histogram of orientations of prominent maxima in the A-axis of the transformed motion data, where each orientation is the angle of the transformed motion data in the U-V plane, and each “vote” on the histogram is weighted by the value of the A coordinate. As another example, feature module 254 may determine a four-bin histogram of values of prominent maxima in the magnitude of the untransformed motion data (i.e., the X,Y,Z signal). As another example, feature module 254 may determine a four-bin histogram of differences between consecutive prominent maxima in the magnitude of the untransformed motion data (i.e., the X,Y,Z signal). Feature module 254 may concatenate the resulting values for the one or more properties into a multidimensional feature vector (e.g., a 20-dimensional feature vector). In this way, feature module 254 may determine critical-point features of a segment of motion data.
As discussed above, in some examples, feature module 254 may determine temporal histograms for a segment of motion data. In some examples, feature module 254 may determine the temporal histograms based on unfiltered transformed motion data (i.e., the A,U,V signal). Each bin of each temporal histogram may cover one-fifth of the temporal interval of a candidate segment (i.e., there is a bin for the first fifth, another bin for the second fifth, and so on) and each of these bins may accumulate the values of all measurements that are contained in its temporal interval. For instance, feature module 254 may compute the following 5-bin histograms from the A,U,V signal: values on the A channel, values on the U channel, values on the V channel, first-order (temporal) derivatives of values on the A channel, first-order (temporal) derivatives of values on the U channel, and first-order (temporal) derivatives of values on the V channel. Feature module 254 may accumulate the resulting values on the bins of these histograms and concatenate the accumulated values into a feature vector (e.g., a 30-dimensional feature vector). In this way, feature module 254 may determine temporal histograms for a segment of motion data.
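A minimal sketch of this accumulation, assuming the segment is split into five equal temporal intervals as described above, might look like the following (six histograms of five bins each yield the 30-dimensional vector):

    import numpy as np

    def temporal_histogram(values: np.ndarray, n_bins: int = 5) -> np.ndarray:
        # values: one channel of a segment, ordered in time; each bin accumulates
        # the values falling in its fifth of the segment.
        bins = np.array_split(values, n_bins)
        return np.array([b.sum() for b in bins])

    def temporal_features(a: np.ndarray, u: np.ndarray, v: np.ndarray) -> np.ndarray:
        channels = [a, u, v]
        derivatives = [np.diff(c, prepend=c[0]) for c in channels]  # first-order temporal derivatives
        parts = [temporal_histogram(c) for c in channels + derivatives]
        return np.concatenate(parts)  # 6 histograms x 5 bins = 30-dimensional vector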
As discussed above, in some examples, feature module 254 may determine the cross-channel statistics for a segment of motion data. In some examples, feature module 254 may determine cross-channel statistics based on unfiltered untransformed motion data (i.e., the X,Y,Z signal). For instance, for each pair of distinct channels C1 and C2 (i.e., C1=X, C2=Y; C1=Y, C2=Z; and C1=Z, C2=X), feature module 254 may determine the cross-channel statistics by computing the correlation between the time series of C1 and C2 measurements, and the Euclidean (RMS) distance between the vectors of C1 and C2 measurements. Feature module 254 may concatenate the resulting values of these properties into a feature vector (e.g., a 6-dimensional feature vector). In this way, feature module 254 may determine cross-channel statistics of a segment of motion data.
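For illustration, the cross-channel computation described above might be sketched as follows (correlation and RMS distance for each of the three distinct channel pairs, concatenated into a 6-dimensional vector):

    import numpy as np

    def cross_channel_features(x: np.ndarray, y: np.ndarray, z: np.ndarray) -> np.ndarray:
        features = []
        for c1, c2 in ((x, y), (y, z), (z, x)):
            corr = np.corrcoef(c1, c2)[0, 1]          # correlation between the two time series
            rms = np.sqrt(np.mean((c1 - c2) ** 2))    # Euclidean (RMS) distance between the vectors
            features.extend([corr, rms])
        return np.array(features)  # 3 pairs x 2 statistics = 6-dimensional vector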
As discussed above, in some examples, feature module 254 may determine per-channel statistics for a segment of motion data. In some examples, feature module 254 may determine the per-channel statistics based on unfiltered untransformed motion data (i.e., the X,Y,Z signal). For instance, for each channel (X, Y, and Z), feature module 254 may compute one or more properties within the segment. As one example, feature module 254 may compute the maximum value of the signal within the segment. As another example, feature module 254 may compute the minimum value of the signal within the segment. Feature module 254 may concatenate the resulting values of these properties into a feature vector (e.g., a 6-dimensional feature vector). In this way, feature module 254 may determine per-channel statistics of a segment of motion data.
As discussed above, in some examples, feature module 254 may determine basic signal properties for a segment of motion data. As one example, feature module 254 may determine the near orientation of a segment (i.e., a coordinate and normalized time of measurement closest to z_t). As another example, feature module 254 may determine the far orientation of a segment (i.e., a coordinate and normalized time of measurement furthest from z_t). As another example, feature module 254 may determine the polarity of a segment (i.e., +1 if movement is mostly from Near to Far orientation, −1 otherwise). As another example, feature module 254 may determine the azimuth of a segment (i.e., direction of segment's temporal derivative in its Near endpoint, with segment traced from Near point (regardless of actual polarity)). In some examples, feature module 254 may base the determination of the azimuth of a segment on a pre-defined linear combination of the temporal derivative directions along the entire segment, with a possible bias toward the Near point. As another example, feature module 254 may determine the amplitude of a segment (i.e., geodesic distance between first and last measurements in a segment). As another example, feature module 254 may determine the duration of a segment (i.e., temporal distance between first and last measurements in a segment). Feature module 254 may concatenate the resulting values of these properties into a feature vector (e.g., a 10-dimensional feature vector). In this way, feature module 254 may determine basic signal properties of a segment of motion data.
Classification module 256 may be executable to classify segments of motion data into a category (e.g., a predefined movement). For instance, classification module 256 may use an inference model to classify a segment of motion data into a category based on respective corresponding feature vectors received from feature module 254. Classification module 256 may use any type of classifier to classify segments of motion data. Some example classifiers that classification module 256 may use include, but are not limited to, SimpleLogistic and Support Vector Machines (SVM).
The SimpleLogistic method is built upon multinomial logistic regression. Multinomial logistic regression models the posterior probability of classes with linear functions of features through a softmax normalization. Some logistic regression training methods utilize the entire feature set to get the optimal parameters, but the SimpleLogistic method may add one feature at a time. In each iteration, the model built with previously selected features is used to get the current error in estimation of the posterior probability of the classes. The next feature to add to the model may be the one that best predicts this error through a linear regression model. Likewise, the residual error may be minimized by adding another feature. The optimal number of features is obtained based on cross-validation. Since not all features are selected in the final model, SimpleLogistic may result in a sparse model (similar to a regularization effect) and yield a more robust model given a large feature set. In some examples, the model used for SimpleLogistic may be stored in gesture library 248.
SVMs are powerful linear classifiers that maximize the margin between two different classes. SVMs can be extended to nonlinear cases using the kernel trick, which is an implicit mapping of data to higher dimensional spaces where the classes can be linearly separated. In some examples, the RBF kernel for nonlinear SVMs may be used. Since there are multiple classes, a one-vs-one strategy may be employed to train the SVM. In this strategy, C*(C−1)/2 SVM classifiers may be trained for every possible pair of classes and, at test time, the class with the majority of votes is selected. The SVM is tested on the dataset collected from wearables worn by a set of subjects. The ground-truth labels were obtained by a set of experts who labeled the data by looking at the accelerometer signal. In some examples, SVMs may outperform SimpleLogistic by 2% at the cost of adding 50 ms to the latency. In some examples, the trained SVM data may be stored in gesture library 248.
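As an illustrative sketch only (the use of the scikit-learn library is an assumption, not named in this disclosure), an RBF-kernel SVM trained with a one-vs-one strategy over the feature vectors described above might look like the following:

    import numpy as np
    from sklearn.svm import SVC

    def train_gesture_classifier(feature_vectors: np.ndarray, labels: np.ndarray) -> SVC:
        # feature_vectors: (n_segments, n_features) built by the feature module;
        # labels: expert-assigned gesture categories for the training segments.
        clf = SVC(kernel="rbf", decision_function_shape="ovo")
        clf.fit(feature_vectors, labels)
        return clf

    def classify_segment(clf: SVC, feature_vector: np.ndarray):
        # returns the predicted gesture category for one segment's feature vector
        return clf.predict(feature_vector.reshape(1, -1))[0]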
Regardless of the classifier used, classification module 256 may output the category for the segment to one or more other components of wearable 200, such as UI module 208. In this way, classification module 256 may classify segments of motion data into a category.
UI module 208 may perform operations similar to UI module 108 of
In some examples, movement detection module 206 may be executed by application processors 222. However, as discussed above, in some examples, it may be advantageous for SCC 234 to perform one or more operations described above as being performed by movement detection module 206. For instance, movement detection module 206 may have a significant impact on battery life when executing on application processors 222. As such, in some examples where movement detection module 206 is executed by application processors 222 (V1), gesture/movement recognition may be enabled for applications running in the foreground or in AmbiActive mode. By contrast, in some examples where one or more operations described above as being performed by movement detection module 206 are performed by SCC 234 (V2), gesture/movement recognition may be enabled for applications running in the foreground or in AmbiActive mode and applications not running in the foreground or in AmbiActive mode.
In some examples, it may be desirable to selectively control which applications have the ability to perform gesture detection in the background (e.g., to prevent accidental battery draining). For instance, in some wearables that do not support performing gesture detection operations on SCC 234, it may be desirable to prevent applications from performing gesture detection in the background. A proposed way to achieve that balance is as follows: a WristGestureManager may accept subscriptions from multiple applications. By default, applications may be notified about gestures only when they are running on foreground. On the subscription call, each of the applications may (optionally) specify if it wishes to receive gesture notifications in each one of a set of special cases. One example special case is when the application is running on AmbiActive mode. Another example special case is when the application is running on background, regardless of whether there is another application on foreground or on AmbiActive mode, or the screen is off. In any case, on the subscription reply, the WristGestureManager may grant or deny these special case requests depending on power characteristics of the current gesture detection implementation on the device.
In some examples, in order to implement both the mechanisms for V1 and for V2, the WristGestureManager may monitor the state of each registered app through the ActivityManagerService and automatically disable gesture detection as soon as none of the registered apps is in a state where it needs to be notified about wrist gestures. In cases where apps only use gestures when they are running on foreground or on AmbiActive modes (V1), there may not be a need for arbitration since at any instant there is at most one application that must be notified about gestures. However, arbitration may become an issue when applications running on background can be controlled by wrist gestures (V2). In such cases, one or more arbitration rules may be used to arbitrate between applications. If an application that currently subscribes to gestures is running in foreground or AmbiActive, then only that application receives gesture notifications. Otherwise, only the application among those subscribing to on-background gestures that was most recently on active or AmbiActive modes may receive gesture notifications.
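For illustration only, the arbitration rule described above might be sketched as follows; the object fields and function name are hypothetical and are not part of any actual API:

    def select_gesture_recipient(subscribed_apps):
        # subscribed_apps: objects with .state ('foreground', 'ambiactive', or
        # 'background'), .wants_background, and .last_active timestamp.
        active = [a for a in subscribed_apps if a.state in ("foreground", "ambiactive")]
        if active:
            # at most one application is foreground/AmbiActive at any instant,
            # so only that application receives gesture notifications
            return active[0]
        background = [a for a in subscribed_apps if a.wants_background]
        if not background:
            return None
        # otherwise, the background subscriber most recently in an active mode wins
        return max(background, key=lambda a: a.last_active)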
While the relative motion of the movement in
As such, in the example of
UI module 108/208 may enable the user to navigate through the content cards based on the determined movement. For instance, in response to determining that one of the predefined movements of wearable 400/500 has been identified by movement detection module 106/206, UI module 108/208 may select the content card in the corresponding navigation direction. In the example of
In the example of
UI module 108/208 may enable the user to navigate through the content cards based on the determined movement. For instance, in response to determining that one of the predefined movements of wearable 600/700 has been identified by movement detection module 106/206, UI module 108/208 may select the content card in the corresponding navigation direction. In the example of
When called (e.g., by UI module 208), data ingestion module 249 may begin reading motion data 802 from motion sensors 202. Data ingestion module 249 may execute as a part of a main thread of movement detection module 206 and a background thread of movement detection module 206. The portions of data ingestion module 249 that execute as part of the main thread may write motion data 802 to synchronized circular buffer 804 and the portions of data ingestion module 249 that execute as part of the background thread may read the data from circular buffer 804.
In accordance with one or more techniques of this disclosure, one or more optimizations may be made to reduce the amount of power consumed by data ingestion module 249. For example, data ingestion module 249 may read the motion data in the batching mode. As another example, the background thread may not be run constantly. After the background thread is done processing one buffer read, the background thread may go to “sleep” (i.e., to reduce the amount of power consumed). The background thread may wake up only when new data arrives that is fresher than the already processed data. However, further optimization may be possible. In particular, in examples where the background thread reads the whole circular buffer and processes all the data, such techniques may result in a repeated calculation on almost 90% of the data since only 10% of the data is new for every batch of sensor measurements coming in. Thus, there may be opportunities to process a sub-set of the circular buffer and/or process the entire circular buffer only at certain time periods or after a certain amount of new sensor data has arrived.
In accordance with one or more techniques of this disclosure, data ingestion module 249 may separate the writing and reading circular buffers such that the gesture detection is run only on new data. For instance, as opposed to using single synchronized circular buffer 804 of
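A simplified illustrative sketch of this idea (not the disclosed buffer arrangement itself) is shown below: the main thread appends samples to a write buffer while the background thread drains only the samples it has not yet processed, so gesture detection runs only on new data:

    import collections
    import threading

    class MotionDataIngestion:
        def __init__(self):
            self._lock = threading.Lock()
            self._write_buffer = collections.deque()

        def write(self, sample):
            # called on the main thread for each new motion sample
            with self._lock:
                self._write_buffer.append(sample)

        def read_new(self):
            # called on the background thread; returns only unprocessed samples
            with self._lock:
                new_samples = list(self._write_buffer)
                self._write_buffer.clear()
            return new_samples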
As discussed above, segmentation module 250 of wearable 200 may determine a segment of motion data as a series of values of motion data that have one or more properties. A first example property of a segment is that the amount of variation in measured values of raw motion data (e.g., raw accelerometer data) on y-axis is high. A second example property is that a segment starts in tilt orientation (i.e., the range of values that indicate the user is viewing display component 204) and ends in tilt orientation. A third example property is that each segment has a temporal duration that is between a predefined minimum duration and a predefined maximum duration. Based on one or more of the above identified properties, in some examples, segmentation module 250 may determine one or more segments of motion data by searching for a point within the motion data that has a high standard deviation on the y-axis (i.e., to satisfy the first example property). If the point that has the high standard deviation on the y-axis is within a certain range of the value at tilt orientation (i.e., to satisfy the second example property), segmentation module 250 may assign the point as a possible segment start index and may search for a segment end index. In some examples, the end index may be a point on the motion data (temporally after the start index) with low standard deviation (i.e., to satisfy the first example property). A point is assigned to be the segment end point if the point is in tilt orientation (i.e., to satisfy the second example property).
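The following sketch illustrates that segmentation heuristic; the window size, standard-deviation thresholds, tilt tolerance, and duration limits are assumptions for illustration, not values from this disclosure:

    import numpy as np

    def find_segments(y: np.ndarray, tilt_value, window=10, high_std=1.5, low_std=0.3,
                      tilt_tol=1.0, min_len=20, max_len=200):
        # y: raw accelerometer y-axis values; returns (start, end) index pairs
        segments = []
        start = None
        for i in range(window, len(y)):
            std = np.std(y[i - window:i])
            in_tilt = abs(y[i] - tilt_value) <= tilt_tol
            if start is None:
                # high variation while near the tilt orientation marks a possible start
                if std >= high_std and in_tilt:
                    start = i
            else:
                # low variation back in tilt orientation marks a possible end
                if std <= low_std and in_tilt:
                    if min_len <= i - start <= max_len:
                        segments.append((start, i))
                    start = None
        return segments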
In the example of
In some examples, the data points (motion vectors) near the end of the segments had little impact on feature detection, and therefore gesture detection. As such, in accordance with one or more techniques of this disclosure, segmentation module 250 may determine segments that end before the true segment ending. For instance, if segmentation module 250 ends the segments 20% to 25% before what was labelled as the true segment ending, a gain on latency may be achieved without any compromise on quality. For instance, segmentation module 250 may determine the same start points for the segments but determine end points that are 20% to 25% earlier. In this way, the techniques of this disclosure may reduce the amount of time needed to detect gestures/movements.
In accordance with one or more techniques of this disclosure, a wearable computing device, such as wearable 200, may convert motion data from a first coordinate system into a second, task-specific, coordinate system. As one example, transform module 252 may convert motion data generated by motion sensors 202 into a gaze-centric coordinate system. The vector z_t may be defined as the typical orientation of gravity vector G while a user is interacting with wearable computing device 200 (i.e., while the user is “gazing” at a display of wearable computing device 200). Based on z_t, the vectors x_t and y_t may be defined. For instance, the vector x_t may be defined by projecting the X axis onto a plane orthogonal to z_t (circle 1166 may be a circle of unit length on the plane centered at x_t=y_t=z_t=0), and the vector y_t may be selected to be a vector orthogonal to z_t and x_t (e.g., such that x_t, y_t, and z_t form a right-handed orthonormal system).
In operation, transform module 252 may convert motion vectors including x,y,z values (corresponding to the X, Y, and Z axes) into u,v coordinates. Transform module 252 may normalize the x,y,z values of a motion vector into unit length to determine motion vector m. Transform module 252 may determine motion vector m_p by projecting motion vector m onto plane 1165 and extending the result to unit length (i.e., to intersect with circle 1166). Transform module 252 may determine u′, an intermediate value for the u coordinate, by projecting motion vector m_p onto x_t (i.e., u′=m_p·x_t), and v′, an intermediate value for the v coordinate, by projecting motion vector m_p onto y_t (i.e., v′=m_p·y_t). As illustrated in
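A partial sketch of these steps, under the definitions above, is shown below; it assumes the motion vector is not parallel to z_t, and it stops at the intermediate values u′ and v′ (the final scaling and the A coordinate are not reproduced here):

    import numpy as np

    def to_gaze_coordinates(m_xyz, x_t, y_t, z_t):
        m = m_xyz / np.linalg.norm(m_xyz)           # normalize the motion vector to unit length
        m_plane = m - np.dot(m, z_t) * z_t          # project onto the plane orthogonal to z_t
        m_p = m_plane / np.linalg.norm(m_plane)     # extend the projection to unit length
        u_prime = np.dot(m_p, x_t)                  # intermediate value for the u coordinate
        v_prime = np.dot(m_p, y_t)                  # intermediate value for the v coordinate
        return u_prime, v_prime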
Presence-sensitive display 1228 may be similar to presence-sensitive display 228 described above.
Projector screen 1270, in some examples, may include a presence-sensitive display 1273. Presence-sensitive display 1273 may include a subset of functionality or all of the functionality of presence-sensitive display 1228 as described in this disclosure. In some examples, presence-sensitive display 1273 may include additional functionality. Projector screen 1270 (e.g., an electronic whiteboard) may receive data from wearable 1200 and display the graphical content. In some examples, presence-sensitive display 1273 may determine one or more user inputs (e.g., continuous gestures, multi-touch gestures, single-touch gestures, etc.) at projector screen 1270 using capacitive, inductive, and/or optical recognition techniques and send indications of such user input using one or more communication units to wearable 1200.
As described above, in some examples, wearable 1200 may output graphical content for display at presence-sensitive display 1228 that is coupled to wearable 1200 by a system bus or other suitable communication channel. Wearable 1200 may also output graphical content for display at one or more remote devices, such as projector 1269, projector screen 1270, mobile device 1271, and visual display device 1272. For instance, wearable 1200 may execute one or more instructions to generate and/or modify graphical content in accordance with techniques of the present disclosure. Wearable 1200 may output the data that includes the graphical content to a communication unit of wearable 1200, such as communication unit 1258. Communication unit 1258 may send the data to one or more of the remote devices, such as projector 1269, projector screen 1270, mobile device 1271, and/or visual display device 1272. In this way, wearable 1200 may output the graphical content for display at one or more of the remote devices. In some examples, one or more of the remote devices may output the graphical content at a presence-sensitive display that is included in and/or operatively coupled to the respective remote devices.
In some examples, wearable 1200 may not output graphical content at presence-sensitive display 1228 that is operatively coupled to wearable 1200. In other examples, wearable 1200 may output graphical content for display at both presence-sensitive display 1228, which is coupled to wearable 1200 by communication channel 1268A, and at one or more remote devices. In such examples, the graphical content may be displayed substantially contemporaneously at each respective device, although some delay may be introduced by the communication latency required to send the data that includes the graphical content to the remote device. In some examples, graphical content generated by wearable 1200 and output for display at presence-sensitive display 1228 may be different than graphical content output for display at the one or more remote devices.
Wearable 1200 may send and receive data using any suitable communication techniques. For example, wearable 1200 may be operatively coupled to external network 1276 using network link 1277A. Each of the remote devices may likewise be operatively coupled to external network 1276 by a respective network link.
In some examples, wearable 1200 may instead be operatively coupled to one or more of the remote devices using direct device communication rather than external network 1276.
In accordance with techniques of the disclosure, wearable 1200 may be operatively coupled to mobile device 1271 using external network 1276. Wearable 1200 may output, for display at presence-sensitive display 1274, a content card of a list of content cards. For instance, wearable 1200 may send data that includes a representation of the content card to communication unit 1258. Communication unit 1258 may send the data that includes the representation of the content card to mobile device 1271 using external network 1276. Mobile device 1271, in response to receiving the data using external network 1276, may cause presence-sensitive display 1274 to output the content card.
As discussed above, wearable 1200 may enable a user to navigate through content cards by performing one or more gestures. In response to determining that the user of wearable 1200 has performed a gesture to move to a next content card, wearable 1200 may output, for display at presence-sensitive display 1274, a next content card of the list of content cards. For instance, wearable 1200 may send data that includes a representation of the next content card to communication unit 1258. Communication unit 1258 may send the data that includes the representation of the next content card to mobile device 1271 using external network 1276. Mobile device 1271, in response to receiving the data using external network 1276, may cause presence-sensitive display 1274 to output the next content card.
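The disclosure does not prescribe a wire format for this exchange. Purely as a hypothetical illustration of the pattern (send a representation of the card to a communication unit, which forwards it to the remote device), a communication unit might serialize the card as JSON and push it over a socket; the field names, host, and port below are assumptions introduced only for this sketch.

import json
import socket

def send_content_card(card, host, port):
    # `card` is an assumed dict such as {"id": "114C", "title": ..., "body": ...}.
    payload = json.dumps(card).encode("utf-8")
    with socket.create_connection((host, port)) as conn:
        conn.sendall(len(payload).to_bytes(4, "big"))  # simple length prefix
        conn.sendall(payload)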
In accordance with one or more techniques of the disclosure, a display of wearable 100 may display (1302) a content card of a list of content cards. For instance, display 104 may present user interface 110A that includes content card 114B of list 112 of content cards 114.
Wearable 100 may receive (1304) motion data that represents motion of a forearm of a user of wearable 100. For instance, one or more of motion sensors 102 (e.g., an accelerometer) may generate, and movement detection module 106 may receive, a plurality of motion vectors that each indicate a respective acceleration value for an X-axis, a Y-axis, and a Z-axis.
Wearable 100 may analyze (1306) the received motion data. Wearable 100 may determine whether (1308) the user has performed a movement that includes a supination of the forearm of the user followed by a pronation of the forearm of the user at an acceleration that is less than an acceleration of the supination. In response to determining that the user has performed a movement that includes a supination of the forearm of the user followed by a pronation of the forearm of the user at an acceleration that is less than an acceleration of the supination (“Yes” branch of 1308), wearable 100 may display a next content card of the list of content cards. For instance, display 104 may present user interface 110B that includes content card 114C of list 112 of content cards 114.
Wearable 100 may determine whether (1312) the user has performed a movement that includes a supination of the forearm of the user followed by a pronation of the forearm of the user at an acceleration that is greater than an acceleration of the supination. In response to determining that the user has performed a movement that includes a supination of the forearm of the user followed by a pronation of the forearm of the user at an acceleration that is greater than an acceleration of the supination (“Yes” branch of 1312), wearable 100 may display a previous content card of the list of content cards.
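As a compact illustration of the decisions at (1308) and (1312), the following sketch compares the peak accelerations of the two phases of a detected supination-then-pronation movement; it assumes an upstream detector has already labelled the phases and measured their peak accelerations, which the disclosure describes only at a higher level.

def classify_roll_gesture(supination_peak_accel, pronation_peak_accel):
    # Slower return (pronation) than the initial supination -> next card;
    # faster return than the initial supination -> previous card.
    if pronation_peak_accel < supination_peak_accel:
        return "NEXT"
    if pronation_peak_accel > supination_peak_accel:
        return "PREVIOUS"
    return None  # equal peaks: no navigation decision in this sketch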
The following numbered examples may illustrate one or more aspects of the present disclosure.
Example 1A method comprising: displaying, by a display of a wearable computing device, a content card of a list of content cards; receiving, by the wearable computing device, motion data generated by a motion sensor of the wearable computing device that represents motion of a forearm of a user of the wearable computing device; in response to determining, by the wearable computing device and based on the motion data, that the user of the wearable computing device has performed a first movement that includes a supination of the forearm of the user followed by a pronation of the forearm of the user at an acceleration that is less than an acceleration of the supination, displaying, by the display, a next content card of the list of content cards; and in response to determining, by the wearable computing device and based on the motion data, that the user of the wearable computing device has performed a second movement that includes a supination of the forearm of the user followed by a pronation of the forearm of the user at an acceleration that is greater than an acceleration of the supination, displaying, by the display, a previous content card of the list of content cards.
Example 2The method of example 1, wherein the list of content cards is at a current hierarchical level of a plurality of hierarchical levels, and wherein the current content card corresponds to a list of content cards at a lower hierarchical level of the plurality of hierarchical levels than the current hierarchical level, the method further comprising: in response to determining, by the wearable computing device and based on the motion data, that the user of the wearable computing device has performed a third movement that includes a lowering of at least a distal end of the forearm of the user away from a head of the user followed by a raising of at least the distal end of the forearm of the user toward the head of the user, displaying, by the display, a content card of the list of content cards at the lower hierarchical level.
Example 3The method of any combination of examples 1-2, further comprising: in response to determining, by the wearable computing device and based on the motion data, that the user of the wearable computing device has performed a fourth movement that includes a raising of at least a distal end of the forearm of the user towards the head of the user followed by a lowering of at least the distal end of the forearm of the user away from the head of the user, displaying, by the display, a content card of a list of content cards at a higher hierarchical level of the plurality of hierarchical levels than the current hierarchical level.
Example 4The method of any combination of examples 1-3, further comprising: in response to determining, by the wearable computing device and based on the motion data, that the user of the wearable computing device has performed a fifth movement that includes a repeated pronation and supination of the forearm of the user within a period of time, displaying, by the display, a home screen.
Example 5The method of any combination of examples 1-4, wherein the home screen is a content card of the list of content cards that is not the next content card, the previous content card, or a currently displayed content card.
Example 6A wearable computing device configured to be worn on a forearm of a user, the wearable computing device comprising: a display component that displays content cards; at least one motion sensor that detects movement of the wearable computing device and generates, based on the movement, motion data that represents motion of the forearm of the user of the wearable computing device; one or more processors; and at least one module operable by the one or more processors to: cause the display component to display a first content card of a list of content cards; responsive to determining, based on the motion data, that the user of the wearable computing device has performed a first gesture that includes a supination of the forearm of the user followed by a pronation of the forearm of the user at an acceleration that is less than an acceleration of the supination, output, for display by the display component, a second content card of the list of content cards; and responsive to determining, based on the motion data, that the user of the wearable computing device has performed a second gesture that includes a supination of the forearm of the user followed by a pronation of the forearm of the user at an acceleration that is greater than an acceleration of the supination, output, for display by the display component, the first content card.
Example 7The wearable computing device of example 6, wherein the first content card corresponds to a current hierarchical level of a plurality of hierarchical levels, and wherein, responsive to determining, based on the motion data, that the user of the wearable computing device has performed a third movement that includes a lowering of at least a distal end of the forearm of the user away from a head of the user followed by a raising of at least the distal end of the forearm of the user toward the head of the user, the at least one module is further operable to output, for display by the display component, a third content card from a lower hierarchical level than the current hierarchical level.
Example 8The wearable computing device of any combination of examples 6-7, wherein, in response to determining, based on the motion data, that the user of the wearable computing device has performed a fourth movement that includes a raising of at least a distal end of the forearm of the user towards the head of the user followed by a lowering of at least the distal end of the forearm of the user away from the head of the user, the at least one module is further operable to output, for display at the display component, a fourth content card from a higher hierarchical level than the current hierarchical level.
Example 9The wearable computing device of any combination of examples 6-8, wherein, in response to determining, based on the motion data, that the user of the wearable computing device has performed a fifth movement that includes a repeated pronation and supination of the forearm of the user within a period of time, the at least one module is further operable to output, for display at the display component, a home screen.
Example 10The wearable computing device of any combination of examples 6-9, wherein the home screen is a content card of the list of content cards that is not the next content card, the previous content card, or a currently displayed content card.
Example 11A computer-readable storage medium storing instructions that, when executed, cause one or more processors of a wearable computing device to: output for display, by a display of a wearable computing device, a content card of a list of content cards; receive motion data generated by a motion sensor of the wearable computing device that represents motion of a forearm of a user of the wearable computing device; responsive to determining, based on the motion data, that the user of the wearable computing device has performed a first movement that includes a supination of the forearm of the user followed by a pronation of the forearm of the user at an acceleration that is less than an acceleration of the supination, output for display, by the display component, a next content card of the list of content cards; and responsive to determining, based on the motion data, that the user of the wearable computing device has performed a second movement that includes a supination of the forearm of the user followed by a pronation of the forearm of the user at an acceleration that is greater than an acceleration of the supination, output for display, by the display component, a previous content card of the list of content cards.
Example 12The computer-readable storage medium of example 11, wherein the list of content cards is at a current hierarchical level of a plurality of hierarchical levels, the computer-readable storage medium further comprising instructions that cause the one or more processors to: responsive to determining, based on the motion data, that the user of the wearable computing device has performed a third movement that includes a lowering of at least a distal end of the forearm of the user away from a head of the user followed by a raising of at least the distal end of the forearm of the user toward the head of the user, output for display, by the display component, a content card of the list of content cards at a lower hierarchical level of the plurality of hierarchical levels than the current hierarchical level.
Example 13The computer-readable storage medium of any combination of examples 11-12, further comprising instructions that cause the one or more processors to: responsive to determining, based on the motion data, that the user of the wearable computing device has performed a fourth movement that includes a raising of at least the distal end of the forearm of the user towards the head of the user followed by a lowering of at least the distal end of the forearm of the user away from the head of the user, output for display, by the display component, a content card of a list of content cards at a higher hierarchical level of the plurality of hierarchical levels than the current hierarchical level.
Example 14The computer-readable storage medium of any combination of examples 11-13, further comprising instructions that cause the one or more processors to: responsive to determining, based on the motion data, that the user of the wearable computing device has performed a fifth movement that includes a repeated pronation and supination of the forearm of the user within a period of time, output for display, by the display component, a home screen.
Example 15The computer-readable storage medium of any combination of examples 11-14, wherein the home screen is a content card of the list of content cards that is not the next content card, the previous content card, or a currently displayed content card.
Example 16A method comprising: displaying, by a display of a wearable computing device, a content card of a list of content cards at a current hierarchical level of a plurality of hierarchical levels; receiving, by the wearable computing device, motion data generated by a motion sensor of the wearable computing device that represents motion of a forearm of a user of the wearable computing device; in response to determining, by the wearable computing device and based on the motion data, that the user of the wearable computing device has performed a first movement that includes a lowering of at least a distal end of the forearm of the user away from a head of the user followed by a raising of at least the distal end of the forearm of the user toward the head of the user, displaying, by the display, a content card of the list of content cards at a lower hierarchical level of the plurality of hierarchical levels than the current hierarchical level.
Example 17The method of example 16, further comprising: in response to determining, by the wearable computing device and based on the motion data, that the user of the wearable computing device has performed a second movement that includes a raising of at least the distal end of the forearm of the user towards the head of the user followed by a lowering of at least the distal end of the forearm of the user away from the head of the user, displaying, by the display, a content card of a list of content cards at a higher hierarchical level of the plurality of hierarchical levels than the current hierarchical level.
Example 18The method of any combination of examples 16-17, further comprising: in response to determining, by the wearable computing device and based on the motion data, that the user of the wearable computing device has performed a third movement that includes a supination of the forearm of the user followed by a pronation of the forearm of the user at an acceleration that is less than an acceleration of the supination, displaying, by the display, a next content card of the list of content cards.
Example 19The method of any combination of examples 16-18, further comprising: in response to determining, by the wearable computing device and based on the motion data, that the user of the wearable computing device has performed a fourth movement that includes a supination of the forearm of the user followed by a pronation of the forearm of the user at an acceleration that is greater than an acceleration of the supination, displaying, by the display, a previous content card of the list of content cards.
Example 20The method of any combination of examples 16-19, further comprising: in response to determining, by the wearable computing device and based on the motion data, that the user of the wearable computing device has performed a fifth movement that includes a repeated pronation and supination of the forearm of the user within a period of time, displaying, by the display, a home screen.
Example 21A wearable computing device comprising means for performing any combination of the method of examples 1-5 or examples 16-20.
Example 22A wearable computing device configured to be worn on a forearm of a user, the wearable computing device comprising: a display component that displays content cards; at least one motion sensor that detects movement of the wearable computing device and generates, based on the movement, motion data that represents motion of the forearm of the user of the wearable computing device; and one or more processors configured to perform any combination of the method of examples 1-5 or examples 16-20.
Example 23A computer-readable storage medium comprising instructions that, when executed, cause one or more processors of a wearable computing device to perform any combination of the method of examples 1-5 or examples 16-20.
The techniques described in this disclosure may be implemented, at least in part, in hardware, software, firmware, or any combination thereof. For example, various aspects of the described techniques may be implemented within one or more processors, including one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or any other equivalent integrated or discrete logic circuitry, as well as any combinations of such components. The term “processor” or “processing circuitry” may generally refer to any of the foregoing logic circuitry, alone or in combination with other logic circuitry, or any other equivalent circuitry. A control unit including hardware may also perform one or more of the techniques of this disclosure.
Such hardware, software, and firmware may be implemented within the same device or within separate devices to support the various techniques described in this disclosure. In addition, any of the described units, modules or components may be implemented together or separately as discrete but interoperable logic devices. Depiction of different features as modules or units is intended to highlight different functional aspects and does not necessarily imply that such modules or units must be realized by separate hardware, firmware, or software components. Rather, functionality associated with one or more modules or units may be performed by separate hardware, firmware, or software components, or integrated within common or separate hardware, firmware, or software components.
The techniques described in this disclosure may also be embodied or encoded in an article of manufacture including a computer-readable storage medium encoded with instructions. Instructions embedded or encoded in an article of manufacture, including an encoded computer-readable storage medium, may cause one or more programmable processors, or other processors, to implement one or more of the techniques described herein, such as when the instructions included or encoded in the computer-readable storage medium are executed by the one or more processors. Computer-readable storage media may include random access memory (RAM), read only memory (ROM), programmable read only memory (PROM), erasable programmable read only memory (EPROM), electronically erasable programmable read only memory (EEPROM), flash memory, a hard disk, a compact disc ROM (CD-ROM), a floppy disk, a cassette, magnetic media, optical media, or other computer-readable media. In some examples, an article of manufacture may include one or more computer-readable storage media.
In some examples, a computer-readable storage medium may include a non-transitory medium. The term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in RAM or cache).
Various examples have been described. These and other examples are within the scope of the following claims.
Claims
1. A method comprising:
- displaying, by a display of a wearable computing device, a first content card of a list of content cards;
- receiving, by the wearable computing device, motion data generated by a motion sensor of the wearable computing device that represents motion of a forearm of a user of the wearable computing device; and
- in response to determining, by the wearable computing device and based on the motion data, that the user of the wearable computing device has performed a first movement that includes a repeated pronation and supination of the forearm of the user within a period of time, displaying, by the display, a home screen.
2. The method of claim 1, wherein the first content card corresponds to a first hierarchical level of a plurality of hierarchical levels, the method further comprising:
- in response to determining, by the wearable computing device and based on the motion data, that the user of the wearable computing device has performed a second movement that includes a lowering of at least a distal end of the forearm of the user away from a head of the user followed by a raising of at least the distal end of the forearm of the user toward the head of the user, displaying, by the display, a content card of the list of content cards at a second hierarchical level of the plurality of hierarchical levels that is lower than the first hierarchical level.
3. The method of claim 1, wherein the first content card corresponds to a first hierarchical level of a plurality of hierarchical levels, the method further comprising:
- in response to determining, by the wearable computing device and based on the motion data, that the user of the wearable computing device has performed a third movement that includes a raising of at least a distal end of the forearm of the user towards a head of the user followed by a lowering of at least the distal end of the forearm of the user away from the head of the user, displaying, by the display, a content card of a list of content cards at a third hierarchical level of the plurality of hierarchical levels that is higher than the first hierarchical level.
4. The method of claim 1, wherein the first content card corresponds to a first hierarchical level of a plurality of hierarchical levels, wherein the home screen is a content card of the list of content cards that is not a content card of the list of content cards at a lower hierarchical level than the first content card, a content card of the list of content cards at a higher hierarchical level than the first content card, or the first content card.
5. The method of claim 1, wherein the wearable computing device comprises a smartwatch.
6. The method of claim 1, wherein the wearable computing device comprises an activity tracker.
7. A wearable computing device configured to be worn on a forearm of a user, the wearable computing device comprising:
- a display component that displays content cards;
- at least one motion sensor that detects movement of the wearable computing device and generates, based on the movement, motion data that represents motion of the forearm of the user of the wearable computing device;
- one or more processors; and
- at least one module operable by the one or more processors to: cause the display component to display a first content card of a list of content cards; and responsive to determining, based on the motion data, that the user of the wearable computing device has performed a first movement that includes a repeated pronation and supination of the forearm of the user within a period of time, cause the display component to display a home screen.
8. The wearable computing device of claim 7, wherein the first content card corresponds to a first hierarchical level of a plurality of hierarchical levels, and wherein, responsive to determining, based on the motion data, that the user of the wearable computing device has performed a second movement that includes a lowering of at least a distal end of the forearm of the user away from a head of the user followed by a raising of at least the distal end of the forearm of the user toward the head of the user, the at least one module is further operable to cause the display component to display a content card of the list of content cards at a second hierarchical level of the plurality of hierarchical levels that is lower than the first hierarchical level.
9. The wearable computing device of claim 7, wherein the first content card corresponds to a first hierarchical level of a plurality of hierarchical levels, and wherein, in response to determining, based on the motion data, that the user of the wearable computing device has performed a third movement that includes a raising of at least a distal end of the forearm of the user towards a head of the user followed by a lowering of at least the distal end of the forearm of the user away from the head of the user, the at least one module is further operable to cause the display component to display a content card of a list of content cards at a third hierarchical level of the plurality of hierarchical levels that is higher than the first hierarchical level.
10. The wearable computing device of claim 7, wherein the first content card corresponds to a first hierarchical level of a plurality of hierarchical levels, wherein the home screen is a content card of the list of content cards that is not a content card of the list of content cards at a lower hierarchical level than the first content card, a content card of the list of content cards at a higher hierarchical level than the first content card, or the first content card.
11. The wearable computing device of claim 7, wherein the wearable computing device comprises a smartwatch.
12. The wearable computing device of claim 7, wherein the wearable computing device comprises an activity tracker.
13. A computer-readable storage medium storing instructions that, when executed, cause one or more processors of a wearable computing device to:
- output for display, by a display component of a wearable computing device, a first content card of a list of content cards;
- receive motion data generated by a motion sensor of the wearable computing device that represents motion of a forearm of a user of the wearable computing device; and
- responsive to determining, based on the motion data, that the user of the wearable computing device has performed a first movement that includes a repeated pronation and supination of the forearm of the user within a period of time, output for display, by the display component, a home screen.
14. The computer-readable storage medium of claim 13, wherein the first content card corresponds to a first hierarchical level of a plurality of hierarchical levels, and wherein the computer-readable storage medium further stores instructions that cause the one or more processors to:
- responsive to determining, based on the motion data, that the user of the wearable computing device has performed a second movement that includes a lowering of at least a distal end of the forearm of the user away from a head of the user followed by a raising of at least the distal end of the forearm of the user toward the head of the user, output for display, by the display component, a content card of the list of content cards at a second hierarchical level of the plurality of hierarchical levels that is lower than the first hierarchical level.
15. The computer-readable storage medium of claim 13, wherein the first content card corresponds to a first hierarchical level of a plurality of hierarchical levels, and wherein the computer-readable storage medium further stores instructions that cause the one or more processors to:
- responsive to determining, based on the motion data, that the user of the wearable computing device has performed a third movement that includes a raising of at least a distal end of the forearm of the user towards a head of the user followed by a lowering of at least the distal end of the forearm of the user away from the head of the user, output for display, by the display component, a content card of a list of content cards at a third hierarchical level of the plurality of hierarchical levels that is higher than the first hierarchical level.
16. The computer-readable storage medium of claim 13, wherein the first content card corresponds to a first hierarchical level of a plurality of hierarchical levels, wherein the home screen is a content card of the list of content cards that is not a content card of the list of content cards at a lower hierarchical level than the first content card, a content card of the list of content cards at a higher hierarchical level than the first content card, or the first content card.
17. The computer-readable storage medium of claim 13, wherein the wearable computing device comprises a smartwatch.
18. The computer-readable storage medium of claim 13, wherein the wearable computing device comprises an activity tracker.
Type: Application
Filed: Oct 13, 2017
Publication Date: Apr 12, 2018
Inventors: Rodrigo Lima Carceroni (Mountain View, CA), Pannag R. Sanketi (Fremont, CA), Suril Shah (Mountain View, CA), Derya Ozkan (Mountain View, CA), Soroosh Mariooryad (San Jose, CA), Seyed Mojtaba Seyedhosseini Tarzjani (Sunnyvale, CA), Brett Lider (San Francisco, CA), Peter Wilhelm Ludwig (San Francisco, CA)
Application Number: 15/783,135