Dynamic expansion of data visualizations

- DOMO, INC.

A user can dynamically invoke and control the display of secondary data visualizations based on a selected element of a primary data visualization. Previews of the secondary data visualizations are presented as the user interacts with the primary visualization. In response to user input, previews can be dynamically expanded, allowing a user to dynamically “drill down” into selected elements of the primary data visualization. Any suitable input mechanism can be used, including, for example, a gesture such as a two-finger spreading motion to invoke previews of available secondary visualizations, wherein the axis defined by two points of contact determines which of the displayed previews of secondary visualizations is highlighted and/or expanded. In various embodiments, a hierarchy of visualizations can be established, and the user can navigate among two or more levels of visualizations in the same interactive manner.

Description
CROSS-REFERENCE TO RELATED APPLICATION

This application is a continuation of U.S. application Ser. No. 13/535,019, entitled “Dynamic Expansion of Data Visualizations,” filed Jun. 27, 2012, which claims the benefit under 35 U.S.C. § 119(e) of U.S. Provisional Application Ser. No. 61/506,912 for “Drill by Expansion,” filed Jul. 12, 2011, the entire contents of each of which are incorporated herein by reference.

FIELD OF THE INVENTION

The present invention relates to interactive graphical displays of data visualizations representing quantitative data.

DESCRIPTION OF THE RELATED ART

Electronic devices—such as desktop, laptop, and tablet computers, as well as mobile devices such as smartphones—are often capable of storing and displaying various forms of data visualizations (or “reports”) representing quantitative data. Data visualizations can represent any information, such as financial information, marketing information, and/or the like, in any suitable tabular, text-based, and/or graphical format.

It is known in the art to provide interactivity for data visualizations. In many computing environments, including web-based applications such as browsers for presenting web pages, a user can interact with a data visualization so as to change the format and/or nature of the displayed data, to highlight certain elements and/or obscure others, and/or to zoom into and out of a displayed report.

One particular form of interactivity is to provide a mechanism for a user to invoke and/or control the display of secondary reports from a primary report. Yuen, U.S. Pat. No. 5,423,033 for “Report Generation System and Method”, issued Jun. 6, 1995, describes a report generation system and method wherein a secondary report can be generated containing detailed information concerning a specific data element of a primary report. A user selects a data element on a primary report; upon activation, a secondary report is generated, using new parameters determined by the particular data element selected by the user.

Yuen's technique, and similar techniques, offer limited functionality as to the type(s) of secondary report that can be generated, and as to the degree of user control of the nature and format of the secondary report. In general, such techniques generate a single type of secondary report based solely on the user's selection of a particular data element. The user is not generally able to interactively select among a plurality of available secondary reports or visualizations directly from the primary report.

SUMMARY

According to various embodiments of the present invention, a user interface is provided—for a computer or other electronic device that displays quantitative data in graphical form—which allows a user to dynamically invoke and control the display of secondary data visualizations based on a selected element of a primary data visualization. In at least one embodiment, previews of these secondary data visualizations are presented in response to user interaction with the primary visualization. In response to user input, one or more of the previews can be dynamically expanded. This allows a user to dynamically “drill down” into selected aspects and/or elements of the primary data visualization, in a manner that is highly user-configurable, interactive, and responsive.

In at least one embodiment, the system and method of the present invention are implemented in such a manner as to respond to direct manipulation of the displayed elements, for example via a touch-sensitive screen. Any touch-sensitive, proximity-sensitive, or gesture-based system can be used. Known gestures such as pinching and rotating can be interpreted in an intuitive manner to provide improved control and feedback in response to user input.

For example, in at least one embodiment, a gesture including a two-finger spreading motion invokes previews of available secondary visualizations for a given element of a displayed primary visualization. The position at which the gesture is performed specifies which data element of the primary visualization is being explored. The axis defined by the two points of contact determines which of the displayed previews of secondary visualizations is to be highlighted and/or expanded; the user can rotate his or her fingers to change the axis and thereby highlight and/or expand different secondary visualizations. In at least one embodiment, the user can tap on a displayed preview to expand it, or can increase the distance between the spread fingers, or perform some other action to cause the displayed preview to be expanded.

As described in more detail herein, in various embodiments, a hierarchy of visualizations can be established, and the user can navigate among primary, secondary, tertiary, and/or additional levels of visualizations in a similar interactive manner. The system and method of the present invention thereby provide an improved level of user control and interactivity in the display of visualizations on an electronic device.

For purposes of the description herein, the terms “report”, “data visualization”, “visualization”, and “graph” are used interchangeably to refer to any suitable representation or representations of quantitative data, with the examples depicted and described herein being provided for illustrative purposes with no intention of limiting the invention to those particular types of visualizations. Such representations can be graphical, tabular, text-based, or any combination thereof.

Further details and variations are described herein.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings illustrate several embodiments of the invention. Together with the description, they serve to explain the principles of the invention according to the embodiments. One skilled in the art will recognize that the particular embodiments illustrated in the drawings are merely exemplary, and are not intended to limit the scope of the present invention.

FIG. 1A is a block diagram depicting a hardware architecture for practicing the present invention according to one embodiment of the present invention.

FIG. 1B is a block diagram depicting a hardware architecture for practicing the present invention in a client/server environment, according to one embodiment of the present invention.

FIG. 2A is a flowchart depicting a method of dynamically expanding a data visualization in response to user input, according to one embodiment of the present invention.

FIG. 2B is a flowchart depicting a method of hierarchical expansion of a data visualization in response to user input, according to one embodiment of the present invention.

FIGS. 3 through 10 are a series of screen shots illustrating an example of dynamic expansion of a bar graph data visualization in response to user input, according to one embodiment of the present invention.

FIGS. 11 through 14 are a series of screen shots illustrating an example of two-level hierarchical expansion of a bar graph data visualization in response to user input, according to one embodiment of the present invention.

DETAILED DESCRIPTION OF THE EMBODIMENTS

System Architecture

According to various embodiments, the present invention can be implemented on any electronic device equipped to receive, store, and present quantitative data. Such an electronic device may be, for example, a desktop computer, laptop computer, smartphone, tablet computer, or the like.

Although the invention is described herein in connection with an implementation in a computer, one skilled in the art will recognize that the techniques of the present invention can be implemented in other contexts, and indeed in any suitable device capable of presenting quantitative data graphically and/or interactively. Accordingly, the following description is intended to illustrate various embodiments of the invention by way of example, rather than to limit the scope of the claimed invention.

Referring now to FIG. 1A, there is shown a block diagram depicting a hardware architecture for practicing the present invention, according to one embodiment. Such an architecture can be used, for example, for implementing the techniques of the present invention in a computer or other device 101. Device 101 may be any electronic device equipped to receive, store, and present quantitative data, and to receive user input in connection with such quantitative data.

In at least one embodiment, device 101 has a number of hardware components well known to those skilled in the art. Input device 102 can be any element that receives input from user 100, including, for example, a keyboard, mouse, stylus, touch-sensitive screen (touchscreen), touchpad, trackball, accelerometer, five-way switch, microphone, or the like. Input can be provided via any suitable mode, including for example, one or more of: pointing, tapping, typing, dragging, and/or speech. Display screen 103 can be any element that graphically displays quantitative data.

Processor 104 can be a conventional microprocessor for performing operations on data under the direction of software, according to well-known techniques. Memory 105 can be random-access memory, having a structure and architecture as are known in the art, for use by processor 104 in the course of running software.

Data store 106 can be any magnetic, optical, or electronic storage device for data in digital form; examples include flash memory, magnetic hard drive, CD-ROM, DVD-ROM, or the like. In at least one embodiment, data store 106 stores information describing quantitative data 107.

Data store 106 can be local or remote with respect to the other components of device 101. In at least one embodiment, device 101 is configured to retrieve data from a remote data storage device when needed. Such communication between device 101 and other components can take place wirelessly, by Ethernet connection, via a computing network such as the Internet, or by any other appropriate means. Such communication with other electronic devices is described by way of example and is not necessary to practice the invention.

In at least one embodiment, data store 106 is detachable in the form of a CD-ROM, DVD, flash drive, USB hard drive, or the like. Quantitative data 107 can be entered into such a detachable data store 106 from a source outside of device 101 and later displayed after data store 106 is connected to device 101. In another embodiment, data store 106 is fixed within device 101.

Referring now to FIG. 1B, there is shown a block diagram depicting a hardware architecture for practicing the present invention in a client/server environment, according to one embodiment of the present invention. Such an implementation may use a “black box” approach, whereby data storage and processing are done completely independently from user input/output. An example of such a client/server environment is a web-based implementation, wherein client device 108 runs a browser that provides a user interface for interacting with web pages and/or other web-based resources from server 110. Data visualizations can be presented as part of such web pages and/or other web-based resources, using known protocols and languages such as HyperText Markup Language (HTML), Java, JavaScript, and the like.

Client device 108 can be any electronic device incorporating input device 102 and display screen 103, such as a desktop computer, laptop computer, personal digital assistant (PDA), cellular telephone, smartphone, music player, handheld computer, tablet computer, kiosk, game system, or the like. Any suitable communications network 109, such as the Internet, can be used as the mechanism for transmitting data between client 108 and server 110, according to any suitable protocols and techniques. In addition to the Internet, other examples include cellular telephone networks, EDGE, 3G, 4G, long term evolution (LTE), Session Initiation Protocol (SIP), Short Message Peer-to-Peer protocol (SMPP), SS7, WiFi, Bluetooth, ZigBee, Hypertext Transfer Protocol (HTTP), Secure Hypertext Transfer Protocol (SHTTP), Transmission Control Protocol/Internet Protocol (TCP/IP), and/or the like, and/or any combination thereof. In at least one embodiment, client device 108 transmits requests for data via communications network 109, and receives responses from server 110 containing the requested data.
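
By way of illustration only, in a web-based embodiment the request/response exchange between client device 108 and server 110 might resemble the following TypeScript sketch; the endpoint path and the DataEntry type are hypothetical assumptions for illustration, not part of this disclosure:

```typescript
// Hypothetical data entry: a value plus an optional identifying label
// (cf. quantitative data 107 and its labels, described below).
interface DataEntry {
  label?: string;
  value: number;
}

// Request quantitative data from the server over HTTP. The endpoint
// path is an illustrative assumption; any suitable protocol could be used.
async function fetchQuantitativeData(dataSetId: string): Promise<DataEntry[]> {
  const response = await fetch(`/api/datasets/${encodeURIComponent(dataSetId)}`);
  if (!response.ok) {
    throw new Error(`Server returned status ${response.status}`);
  }
  return (await response.json()) as DataEntry[];
}
```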

In this implementation, server 110 is responsible for data storage and processing, and incorporates data store 106 for storing quantitative data 107. Server 110 may include additional components as needed for retrieving data from data store 106 in response to requests from client device 108.

In at least one embodiment, quantitative data 107 is organized into one or more well-ordered data sets, with one or more data entries in each set. Data store 106, however, can have any suitable structure. Accordingly, the particular organization of quantitative data 107 within data store 106 need not resemble the form in which it is displayed to user 100. In at least one embodiment, an identifying label is also stored along with each data entry, to be displayed alongside that data entry.

Quantitative data can be retrieved from client-based or server-based data store(s), and/or from any other source. In at least one embodiment, input device 102 is configured to receive data entries from user 100, to be added to quantitative data 107 held in data store 106. User 100 may provide such data entries via the hardware and software components described above according to means that are well known to those skilled in the art.

Display screen 103 presents one or more data visualizations that present quantitative data 107 in some visual form, whether text-based, tabular, graphical, interactive, and/or any other suitable format. Such data visualizations may, for example, take the form of bar graphs or other visual graphs that present all of, or some subset of, quantitative data 107. In at least one embodiment where only some of quantitative data 107 is presented at a time, a dynamic control, such as a scrolling mechanism, may be available via input device 102 to change which data entries are currently displayed, and/or to alter the manner in which the data is displayed.

In at least one embodiment, data visualizations presented via display screen 103 include visual cues, such as height, distance, and/or area, to convey the value of each data entry. In at least one embodiment, labels accompany data entries on display screen 103, or can be displayed when user 100 taps on or clicks on a data entry, or causes an on-screen cursor to hover over a data entry.

Method

As described in more detail herein, various embodiments of the present invention provide techniques for dynamically expanding aspects of displayed data visualizations in an interactive manner, such as for example in response to user input.

Referring now to FIG. 2A, there is shown a flowchart depicting a method of dynamically expanding a data visualization in response to user input, according to one embodiment of the present invention. Referring also to FIGS. 3 through 10, there is shown a series of screen shots illustrating an example of dynamic expansion of a bar graph data visualization in response to user input, according to one embodiment of the present invention. Although the example of FIGS. 3 through 10 will be used to illustrate the method of FIG. 2A, one skilled in the art will recognize that the particular depictions in the example are merely provided for illustrative purposes, and that the invention can be implemented using other techniques and mechanisms without departing from the essential characteristics of the invention as set forth in the claims.

A primary visualization is displayed 210. The primary visualization can be any representation of quantitative data 107, such as a graph, table, chart, and/or the like. FIG. 3 depicts an example of a primary visualization in the form of a bar graph 300. Bar graph 300 includes a number of rectangles 301A-301G, each representing a value corresponding to a data entry. For example, the height of each rectangle 301 can indicate the value of the corresponding data entry. Bar graph 300 can include labels to indicate the scale of the vertical axis, and to identify each of the rectangles 301A-301G as being associated with a point in time or some other relevant data category. Such labels are not shown in the examples presented herein.
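
For illustration, a bar graph of this kind could be rendered as in the following minimal TypeScript sketch, which assumes an HTML canvas and maps each value to a rectangle height; the function and its parameters are hypothetical:

```typescript
// Draw a bar graph in which each entry is a rectangle whose height is
// proportional to its value (cf. rectangles 301A-301G of FIG. 3).
function drawBarGraph(
  ctx: CanvasRenderingContext2D,
  entries: { label?: string; value: number }[],
  width: number,
  height: number
): void {
  const maxValue = Math.max(...entries.map((e) => e.value), 1);
  const slot = width / entries.length;
  entries.forEach((entry, i) => {
    const barHeight = (entry.value / maxValue) * height;
    // Leave a small gap between bars; y grows downward on a canvas.
    ctx.fillRect(i * slot, height - barHeight, slot * 0.8, barHeight);
  });
}
```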

According to various embodiments, user 100 can interact with the primary visualization in any of a number of different ways. In at least one embodiment, display screen 103 is a touch-sensitive screen, allowing user 100 to interact with the primary visualization by touching the screen or causing a stylus or other object to touch the screen. In at least one embodiment, user 100 can move an on-screen cursor, via an input device 102 such as a mouse, trackball, touchpad, keyboard, five-way switch, and/or any other suitable device, to point to and interact with various areas of the primary visualization. Such interactions may include resizing, moving, scrolling, and/or reformatting the primary visualization, and/or performing any other type of operation in connection with the visualization.

In at least one embodiment, user 100 can provide input causing one or more secondary visualization(s) to be displayed. Such secondary visualization(s) may provide more details regarding a particular aspect or element of the primary visualization (such as a rectangle 301 of graph 300), and/or they may present data in a different form than the primary visualization. For example, a secondary visualization may depict the same or similar data as that displayed in the first visualization, but displayed according to a different dimension or in a different format or scale. As another example, a secondary visualization may depict a subset or superset of the data displayed in the first visualization.

As an example, a primary data visualization may depict total yearly sales. Secondary visualizations associated with such a primary visualization may include total yearly sales subdivided by various categories such as:

    • Total yearly sales by salesperson
    • Total yearly sales by sales channel
    • Total yearly sales by product line
    • Total yearly sales by industry
    • Total yearly sales by marketing campaign
    • Total yearly sales by customer role
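
One possible way to represent the association between a primary visualization and its available secondary visualizations is sketched below in TypeScript; the field names and chart types are illustrative assumptions only:

```typescript
// Hypothetical registry of secondary visualizations available for a
// primary visualization, keyed by the dimension used to subdivide it.
interface SecondaryVisualizationSpec {
  title: string;                                // shown with the preview
  dimension: string;                            // subdividing field
  chartType: "bar" | "line" | "pie" | "table";  // form of the visualization
}

const yearlySalesSecondaries: SecondaryVisualizationSpec[] = [
  { title: "Total yearly sales by salesperson", dimension: "salesperson", chartType: "bar" },
  { title: "Total yearly sales by sales channel", dimension: "channel", chartType: "pie" },
  { title: "Total yearly sales by product line", dimension: "productLine", chartType: "bar" },
  { title: "Total yearly sales by industry", dimension: "industry", chartType: "bar" },
  { title: "Total yearly sales by marketing campaign", dimension: "campaign", chartType: "line" },
  { title: "Total yearly sales by customer role", dimension: "customerRole", chartType: "table" },
];
```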

In at least one embodiment, the input to activate a secondary visualization may include, for example, tapping or clicking on a particular element of the primary visualization, or activating a keyboard command, or any other suitable technique. The position at which such input is provided may determine the particular aspect of the primary visualization to be expanded or presented in the secondary visualization, and/or it may determine the format and/or nature of the secondary visualization. For example, tapping on a particular rectangle 301 may cause secondary visualization(s) associated with a corresponding data value to be made available.

In at least one embodiment, a set of previews of available secondary visualizations can be presented, to allow user 100 to select a desired secondary visualization more intelligently, and to give user 100 a sense of what kind of secondary visualizations are available. Accordingly, in response to receiving 211 user input to activate previews of secondary visualizations, previews are displayed 212. In the example described above, previews could be displayed for the various visualizations depicting total yearly sales subdivided by the above-listed categories.

Any suitable form of input can be provided for activating previews of available secondary visualizations. In one example, a pinch gesture is used, as described in more detail herein. Alternatively, input for activating previews of available secondary visualizations can be provided in the form of tapping, gesturing, clicking, keyboard input, interaction with on-screen buttons or links, and/or the like. Voice commands can also be used.

The position at which the user's input is provided may determine which previews are displayed; for example, tapping on a particular rectangle 301 may activate display of previews for secondary visualization(s) associated with a data value corresponding to that rectangle 301.

In at least one embodiment, user 100 performs some sort of gesture to activate previews of secondary visualizations, wherein the gesture is detected by input device 102 and/or by a touch-sensitive display screen 103. For example, as depicted in FIG. 4, in embodiments wherein display screen 103 is a touch-sensitive screen capable of detecting two or more points of contact, user 100 can activate previews of secondary visualizations by performing a pinch-to-zoom gesture (moving two fingers farther apart from one another while both fingers are touching display screen 103). As depicted in FIGS. 4 and 5, there are two points of contact 401A, 401B; spreading these points 401A, 401B apart by movement of user's 100 fingers causes previews 501A-501D to be displayed. In another embodiment, spreading the fingers may not be required; previews may be activated, for example, by any or all of the following (see the sketch following this list):

    • touching the screen (activates previews corresponding to the element being displayed at the contact point);
    • touching the screen with at least two fingers (activates previews corresponding to the element being displayed at the contact point or the midpoint between the contact points; also defines an axis that can be used to select a displayed preview, as discussed below; pinch gesture not required);
    • causing a cursor to hover at a certain location on the screen (activates previews corresponding to the element being displayed at the hover location);
    • clicking a mouse or performing some other activation command while a cursor is at a certain location on the screen (activates previews corresponding to the element at the location);
    • entering a keyboard command indicating a location on the screen (activates previews corresponding to the element at the location).
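
The touch-based variants above might be detected along the following lines (a TypeScript sketch assuming the standard DOM touch-event model; the hit-testing and preview-display functions are stubs standing in for application-specific logic):

```typescript
interface Point { x: number; y: number; }

// Midpoint between the two contact points 401A, 401B; used to decide
// which element of the primary visualization the previews relate to.
function midpoint(a: Point, b: Point): Point {
  return { x: (a.x + b.x) / 2, y: (a.y + b.y) / 2 };
}

// Activate previews when two fingers touch the screen (a pinch gesture
// is optional, per the second variant in the list above).
function onTouchStart(event: TouchEvent): void {
  if (event.touches.length !== 2) return;
  const a = { x: event.touches[0].clientX, y: event.touches[0].clientY };
  const b = { x: event.touches[1].clientX, y: event.touches[1].clientY };
  const element = hitTestPrimaryVisualization(midpoint(a, b));
  if (element !== null) {
    showPreviewsFor(element); // e.g. previews 501A-501D of FIG. 5
  }
}

// Stubs standing in for application-specific logic.
declare function hitTestPrimaryVisualization(p: Point): number | null;
declare function showPreviewsFor(elementIndex: number): void;
```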

In at least one embodiment, the on-screen location of the input (for example the position of contact points 401A, 401B, or the midpoint between contact points 401A, 401B) determines which portion of the displayed data is to be expanded by presentation of secondary visualizations. Accordingly, in the example of FIG. 5, the four previews 501A-501D depict available secondary visualizations that relate to rectangle 301B, since the initial contact points 401A, 401B correspond to rectangle 301B. In addition, as shown in FIGS. 4 and 5, the particular data element to which the previews 501A-501D relate (in this case, rectangle 301B), can be highlighted or otherwise visually distinguished.

Previews 501 can take any suitable form. For example, as shown in FIG. 5, each preview 501 can be a miniature schematic representation of a type of data visualization that relates to the selected element of the primary data visualization. In this example, each preview 501 depicted in FIG. 5 is a miniature schematic representation of a different type of visualization that relates to the data represented by bar 301B.

In at least one embodiment, previews 501 for all available secondary visualizations are displayed. In at least one embodiment, previews 501 for a subset of available secondary visualizations are shown, for example if there are too many available secondary visualizations to effectively display previews 501 for all of them. The display may scroll through available visualizations, or may present only the most popular or suitable ones, or provide some mechanism for the user 100 to view previews other than those initially displayed. In at least one embodiment, a hierarchy of previews may be established, allowing user 100 to more easily navigate to the one he or she is interested in; such a technique is described in more detail below.

In at least one embodiment, user 100 can provide input 214 to cause one of the displayed previews 501 to be highlighted; the selected preview is highlighted 216. For example, user 100 may tap on one of the displayed previews 501 to cause it to be highlighted. Alternatively, user 100 may rotate the axis of the pinch gesture to cause different previews 501 to be highlighted. For example, as depicted in FIG. 6, axis 601 drawn between contact points 401A and 401B determines which preview 501 to highlight; in this case, preview 501C is highlighted. In at least one embodiment, axis 601 is not actually displayed, but is shown in FIG. 6 merely for illustrative purposes; in another embodiment, axis 601 is displayed.

In FIG. 7, user 100 has rotated his or her fingers, causing contact points 401A, 401B to shift position. Axis 601 now points to preview 501B, so that preview is now highlighted.

In FIG. 8, user 100 has rotated his or her fingers, causing contact points 401A, 401B to shift position. Axis 601 now points to preview 501D, so that preview is now highlighted.
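
The disclosure does not prescribe a formula for this mapping, but one plausible implementation divides the half-circle into equal angular sectors, one per displayed preview, as in the following sketch (the even angular layout is an assumption):

```typescript
// Map the orientation of axis 601 (through contact points 401A, 401B)
// to the index of the preview it points at, assuming the previews are
// laid out at evenly spaced angular positions around the element.
function highlightedPreviewIndex(
  a: { x: number; y: number },
  b: { x: number; y: number },
  previewCount: number
): number {
  // Normalize to [0, PI): an axis has an orientation but no direction.
  const raw = Math.atan2(b.y - a.y, b.x - a.x);
  const angle = ((raw % Math.PI) + Math.PI) % Math.PI;
  const sector = Math.PI / previewCount;
  return Math.floor(angle / sector) % previewCount;
}
```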

Highlighted preview 501 can be displayed in any visually distinctive manner. For example, it can be brightened, shown in color (while other previews 501 are black-and-white), enlarged, and/or made dynamic; any other suitable effect may be applied. In at least one embodiment, previews 501 initially depict various types of visualizations without actually containing the visualizations themselves; highlighting a preview 501 may cause that preview to go “live”, showing actual data based on the currently selected element of the primary visualization. In at least one embodiment, highlighting a preview 501 causes that preview to be selected for further operations, such as hierarchical navigation, configuration, naming, modification, deletion, and/or the like.

In at least one embodiment, when previews 501 are displayed, the primary visualization is temporarily dismissed, grayed out, blurred, and/or shown in a subdued manner.

In at least one embodiment, additional information may be displayed for a selected preview 501. For example, a text box, ToolTip, or other element containing descriptive information may be shown for a preview 501 when that preview is selected. The descriptive element can be displayed alongside the selected preview 501, or on top of it (for example, in a translucent manner), or at some other location on the display screen. In another embodiment, an audio description (such as speech) of the selected preview 501 can be output on a speaker or similar component.

In at least one embodiment, if user 100 spreads his or her fingers further apart (or otherwise continues the gesture that caused previews 501 to be activated), all previews 501, or the selected preview 501, can dynamically expand in size. Dynamic resizing of previews 501, or the selected preview 501, can continue in response to further gestures by user 100; for example, previews 501, or the selected preview 501, can change their size dynamically based on the user 100 bringing his or her fingers closer together or farther apart.
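
A simple way to implement such dynamic resizing is to scale the preview by the ratio of the current finger separation to the separation at gesture start, clamped to sensible bounds; the bounds below are illustrative assumptions:

```typescript
function distance(a: { x: number; y: number }, b: { x: number; y: number }): number {
  return Math.hypot(b.x - a.x, b.y - a.y);
}

// Scale factor for the selected preview 501, derived from how far the
// fingers have moved apart (or together) since the gesture began.
function previewScale(
  startDistance: number,
  currentDistance: number,
  minScale = 0.5,
  maxScale = 3.0
): number {
  return Math.min(maxScale, Math.max(minScale, currentDistance / startDistance));
}
```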

In at least one embodiment, user 100 can shift finger position(s) with respect to the primary visualization, while previews 501 are being displayed. If user 100 shifts his or her fingers so that the center point between contact points 401A, 401B moves to a different element of the primary visualization (such as a different rectangle 301), the displayed previews 501 can be dynamically updated to reflect the newly selected element of the primary visualization. For example, if user 100 shifts so that the center point is now on rectangle 301C instead of rectangle 301B, previews 501 are updated so that they now depict available secondary visualizations for rectangle 301C. In at least one embodiment, user 100 can move his or her fingers around screen 103 to cause different elements of the primary visualization to be selected and thereby cause different sets of previews 501 to be displayed. In addition, rotation of the axis between contact points 401A, 401B can continue to dynamically change the selected preview 501.
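
Combining the two mechanisms just described, a touchmove handler might re-derive both the selected element and the highlighted preview on every movement of the fingers. The following sketch reuses the hypothetical helpers introduced above, here declared as stubs:

```typescript
// Application state and stubs (assumptions for illustration).
let currentElement: number | null = null;
const previewCount = 4; // illustrative; matches previews 501A-501D of FIG. 5
declare function hitTestPrimaryVisualization(p: { x: number; y: number }): number | null;
declare function showPreviewsFor(elementIndex: number): void;
declare function highlightPreview(index: number): void;
declare function highlightedPreviewIndex(
  a: { x: number; y: number }, b: { x: number; y: number }, n: number): number;

// Re-derive the selected element (from the midpoint) and the highlighted
// preview (from the axis orientation) as the fingers move and rotate.
function onPreviewTouchMove(event: TouchEvent): void {
  if (event.touches.length !== 2) return;
  const a = { x: event.touches[0].clientX, y: event.touches[0].clientY };
  const b = { x: event.touches[1].clientX, y: event.touches[1].clientY };

  const mid = { x: (a.x + b.x) / 2, y: (a.y + b.y) / 2 };
  const element = hitTestPrimaryVisualization(mid);
  if (element !== null && element !== currentElement) {
    currentElement = element;
    showPreviewsFor(element); // previews now reflect the newly selected element
  }
  highlightPreview(highlightedPreviewIndex(a, b, previewCount));
}
```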

In embodiments using different input modes, such dynamic updating of previews can be performed in a manner suited to the particular input mode being used. For example, appropriate mechanisms for changing the set of displayed previews 501, and/or selecting particular previews 501, can be implemented for various types of input devices 102.

In at least one embodiment, user 100 can provide input 215 to cause one of the displayed previews 501 to be expanded; the selected preview is expanded 217. For example, in at least one embodiment, user 100 can remove his or her fingers from screen 103 while a particular preview 501 is selected, to cause that preview 501 to be expanded. Such a technique is depicted in the example of FIGS. 8 to 10. In FIG. 8, user 100 has positioned his or her fingers so that the axis formed by contact points 401A, 401B points to preview 501D, causing that preview 501D to be selected. In FIG. 9, user 100 removes his or her fingers from screen 103; this causes the selected preview 501D to expand. FIG. 10 depicts the completion of the expansion, so that selected preview 501D has transitioned into a full-sized display 1001 of a secondary visualization. Display 1001 may take up all of display screen 103 or some portion of display screen 103. In at least one embodiment, when full-sized display 1001 of a secondary visualization is presented, the primary visualization is dismissed, or subdued, or shown in the background. In at least one embodiment, the expansion of a selected preview 501D into a full-sized display 1001 takes place as a smooth transition. In at least one embodiment, user 100 can interact with the displayed secondary visualization.
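
The release-to-expand behavior and its smooth transition could be realized, for example, with the Web Animations API; the duration, easing, and target scale in this sketch are illustrative assumptions:

```typescript
// Expand the selected preview 501D toward full-sized display 1001 when
// the fingers leave the screen, using a smooth animated transition.
function expandPreview(preview: HTMLElement): void {
  preview.animate(
    [
      { transform: "scale(1)", opacity: 1 },
      { transform: "scale(4)", opacity: 1 }, // grow toward full-screen size
    ],
    { duration: 300, easing: "ease-out", fill: "forwards" }
  );
}

function onTouchEnd(selectedPreview: HTMLElement | null): void {
  if (selectedPreview !== null) {
    expandPreview(selectedPreview); // transition into display 1001
  }
}
```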

In other embodiments, other input mechanisms can be used for invoking expansion of a preview 501. For example, user 100 can tap on or click on a displayed preview 501 to cause it to be expanded 217. Alternatively, user 100 can hit a key, click a mouse, or perform some other input operation to cause a displayed or selected preview 501 to be expanded 217.

Any suitable mechanism can be provided for causing dismissal of full-sized display 1001 of the secondary visualization. For example, user 100 may tap on the secondary visualization to dismiss it or to cause it to transition back to preview form. Alternatively, user 100 can click on a dismiss button, or hit a key, or perform some other input operation to cause the secondary visualization to be dismissed. In at least one embodiment, user 100 can interact with the secondary visualization, for example to perform additional operations to view and/or manipulate various aspects and elements of the secondary visualization in different ways.

In at least one embodiment, while previews 501 are displayed, user 100 can provide input 218 to cause the displayed previews 501 to be dismissed; in response, previews 501 are dismissed 219, and the original form of primary visualization 300 is restored. Any suitable input can be provided for causing such dismissal 219. For example, user 100 can tap on an area of screen 103, or click on a dismiss button, or hit a key, or perform some other input operation to cause previews 501 to be dismissed.

Hierarchy of Visualizations

In at least one embodiment, available secondary visualizations can be organized in a hierarchy. The user can navigate the hierarchy to find and select a particular visualization for viewing and/or activation. The hierarchy can be organized according to any suitable scheme, such as by data type, format, style, and/or the like. Any number of levels can be available within the hierarchy.

Any suitable mechanism can be provided for navigating the hierarchy of visualizations. In at least one embodiment, a hierarchy of previews is made available; a top level can be provided to indicate a particular type of visualization, and subordinate levels of previews can be provided so as to give the user information about individual visualizations within the type. In at least one embodiment, so as to avoid cluttering the screen with an excessive number of previews at any given time, each set of second-level previews associated with a particular first-level preview is displayed only when user 100 selects, activates, highlights, or otherwise interacts with the corresponding first-level preview. In at least one embodiment, multiple (or all) available sets of second-level previews can be displayed concurrently. In at least one embodiment, user 100 can select which sets of second-level previews should be displayed at any given time. In at least one embodiment, similar techniques can be used for successively lower levels of previews.
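
Such a hierarchy lends itself to a recursive structure; the following sketch shows one hypothetical representation (the titles and chart types are invented for illustration):

```typescript
// Hypothetical recursive node in a hierarchy of preview levels: a node
// names a category (or shows a representative example visualization),
// and its children are the next level of previews.
interface PreviewNode {
  title: string;
  chartType?: string;      // representative visualization, if any
  children: PreviewNode[]; // empty for leaf previews
}

const salesPreviewHierarchy: PreviewNode = {
  title: "Total yearly sales by salesperson",
  chartType: "bar",
  children: [
    { title: "Top salespeople", chartType: "bar", children: [] },
    { title: "Salesperson by quarter", chartType: "line", children: [] },
    { title: "Salesperson by region", chartType: "pie", children: [] },
  ],
};
```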

Referring now to FIG. 2B, there is shown a flowchart depicting a method of hierarchical expansion of a data visualization in response to user input, according to one embodiment of the present invention. Referring also to FIGS. 11 through 14, there is shown a series of screen shots illustrating an example of two-level hierarchical expansion of a bar graph data visualization in response to user input, according to one embodiment of the present invention.

Although the example of FIGS. 11 through 14 will be used to illustrate the method of FIG. 2B, one skilled in the art will recognize that the particular depictions in the example are merely provided for illustrative purposes, and that the invention can be implemented using other techniques and mechanisms without departing from the essential characteristics of the invention as set forth in the claims.

A primary visualization is displayed 210. In response to receiving 231 user input to activate first-level previews of secondary visualizations, previews are displayed 232. Such display of first-level previews can be performed in a manner similar to that described above in connection with steps 211 and 212 of FIG. 2A.

In at least one embodiment, user 100 can provide input 233 to cause one of the displayed first-level previews to be highlighted; the selected first-level preview is highlighted 234. Such highlighting can be performed in a manner similar to that described above in connection with steps 214 and 216 of FIG. 2A.

In at least one embodiment, user 100 can provide input 235 to cause second-level previews to be displayed for a highlighted first-level preview. In response, second-level previews are displayed 236.

Any suitable mechanism can be used for allowing user 100 to provide input 235 to cause second-level previews to be displayed 236. In the example shown in FIG. 11, second-level previews 501E-H are displayed 236 in response to user 100 holding contact points 401A, 401B relatively steady while a particular first-level preview 501D is displayed. In other embodiments, other trigger actions may cause second-level previews 501E-H to be displayed 236; for example, user 100 can input a command, or tap or double-tap, or perform a gesture, spoken command, or any other input operation to cause second-level previews 501E-H to be displayed.
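
Holding the contact points "relatively steady" implies some form of dwell detection; a minimal sketch follows, assuming a movement tolerance and hold interval chosen by the implementer:

```typescript
// Dwell detector: fires onDwell if the contact point stays within
// tolerancePx of its anchor for holdMs milliseconds; any larger movement
// restarts the timer. Both thresholds are illustrative assumptions.
function makeDwellDetector(holdMs: number, tolerancePx: number, onDwell: () => void) {
  let timer: ReturnType<typeof setTimeout> | null = null;
  let anchor: { x: number; y: number } | null = null;

  return function onMove(p: { x: number; y: number }): void {
    const moved =
      anchor === null || Math.hypot(p.x - anchor.x, p.y - anchor.y) > tolerancePx;
    if (moved) {
      anchor = p;
      if (timer !== null) clearTimeout(timer);
      timer = setTimeout(onDwell, holdMs); // e.g. display previews 501E-H
    }
  };
}
```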

In the example, only those second-level previews 501E-H associated with the currently selected first-level preview 501D are displayed. If user 100 rotates his or her fingers so that axis 601 no longer points to first-level preview 501D, second-level previews 501E-H are dismissed. One skilled in the art will recognize that other input schemes are possible, including for example a scheme whereby previews are dismissed only upon receipt of explicit input from user 100 to dismiss them.

In the example, selected first-level preview 501D depicts a visualization that is a representative example of a category or type of visualizations. Other visualizations 501E-501H are part of the same category or type. The visualization shown as the representative example in preview 501D may be chosen based on a determination that it is a “best fit” for user's 100 needs, or it can be selected by some other means.

In another embodiment, each first-level preview can instead be an indication of a category or type, rather than a representative example depicting a particular visualization of that category or type; second-level visualizations can belong to the indicated category or type.

In at least one embodiment, the display of second-level previews 501E-501H is persistent; once they are displayed, they remain on the screen to allow user 100 to select among them and/or interact with them. For example, user 100 can drag, tap, click on, or otherwise interact with one of the displayed second-level previews to activate it, causing it to be expanded.

In the example, as shown in FIG. 12, user 100 removes his or her fingers from the screen, and second-level previews 501E-501H are still displayed (along with selected first-level preview 501D and other first-level previews 501A-C). In at least one embodiment, after user 100 removes his or her fingers from the screen, these second-level previews 501E-501H may remain on screen, either until further action by user 100, or for some predefined period of time, after which they may be automatically dismissed.

Referring again to FIG. 2B, user 100 can provide input 215 to cause one of the displayed previews 501 to be expanded; the selected preview is expanded 217. For example, in at least one embodiment, user 100 can tap on a displayed second-level preview 501 to select it. In at least one embodiment, user 100 can rotate contact points 401A, 401B (in a manner similar to that described above in connection with FIGS. 7 and 8) to select among displayed second-level previews 501, by causing an axis 601 defined by the contact points 401A, 401B to rotate to point to different second-level previews 501.

In the example, as shown in FIG. 13, user 100 can select a displayed second-level preview 501E, for example by tapping on it. The selected second-level preview 501E expands so that user 100 can see the associated data visualization. FIG. 14 depicts the displayed visualization 1400, including graph 301 that corresponds to the selected second-level preview 501E.

As described above, all transitions can be implemented in a smooth manner, with previews expanding, changing position, and/or being dismissed gradually using effects such as zooming in/out, fading in/out, and/or dissolving. One skilled in the art will recognize that any suitable transition effects can be used to reinforce relationships between elements as they are moved, introduced, dismissed, or otherwise transitioned from one state to another.

User 100 can dismiss displayed visualization 1400, for example by tapping on or clicking on a dismiss button or icon (not shown), or entering a keyboard command, or by another mechanism. In at least one embodiment, visualization 1400 may automatically be dismissed after some predefined period of time. In at least one embodiment, after visualization 1400 is dismissed, the display may return to its initial state, or to the state that was shown just before visualization 1400 was invoked, or to any other suitable state.

Referring again to FIG. 2B, in at least one embodiment, while previews 501 are displayed, user 100 can provide input 218 to cause the displayed previews 501 to be dismissed; in response, previews 501 are dismissed 219. Any suitable input can be provided for causing such dismissal 219. For example, user 100 can tap on an area of screen 103, or click on a dismiss button, or hit a key, or perform some other input operation to cause previews 501 to be dismissed.

One skilled in the art will recognize that the examples depicted and described herein are merely illustrative, and that other arrangements of user interface elements can be used. In addition, some of the depicted elements can be omitted or changed, and additional elements depicted, without departing from the essential characteristics of the invention.

The present invention has been described in particular detail with respect to possible embodiments. Those of skill in the art will appreciate that the invention may be practiced in other embodiments. First, the particular naming of the components, capitalization of terms, the attributes, data structures, or any other programming or structural aspect is not mandatory or significant, and the mechanisms that implement the invention or its features may have different names, formats, or protocols. Further, the system may be implemented via a combination of hardware and software, or entirely in hardware elements, or entirely in software elements. Also, the particular division of functionality between the various system components described herein is merely exemplary, and not mandatory; functions performed by a single system component may instead be performed by multiple components, and functions performed by multiple components may instead be performed by a single component.

Reference in the specification to “one embodiment” or to “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one embodiment of the invention. The appearances of the phrases “in one embodiment” or “in at least one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.

In various embodiments, the present invention can be implemented as a system or a method for performing the above-described techniques, either singly or in any combination. In another embodiment, the present invention can be implemented as a computer program product comprising a non-transitory computer-readable storage medium and computer program code, encoded on the medium, for causing a processor in a computing device or other electronic device to perform the above-described techniques.

Some portions of the above are presented in terms of algorithms and symbolic representations of operations on data bits within a memory of a computing device. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps (instructions) leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic or optical signals capable of being stored, transferred, combined, compared and otherwise manipulated. It is convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. Furthermore, it is also convenient at times, to refer to certain arrangements of steps requiring physical manipulations of physical quantities as modules or code devices, without loss of generality.

It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “displaying” or “determining” or the like, refer to the action and processes of a computer system, or similar electronic computing module and/or device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices.

Certain aspects of the present invention include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the present invention can be embodied in software, firmware and/or hardware, and when embodied in software, can be downloaded to reside on and be operated from different platforms used by a variety of operating systems.

The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computing device selectively activated or reconfigured by a computer program stored in the computing device. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, DVD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, flash memory, solid state drives, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. Further, the computing devices referred to herein may include a single processor or may be architectures employing multiple processor designs for increased computing capability.

The algorithms and displays presented herein are not inherently related to any particular computing device, virtualized system, or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will be apparent from the description provided herein. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the present invention as described herein, and any references above to specific languages are provided for disclosure of enablement and best mode of the present invention.

Accordingly, in various embodiments, the present invention can be implemented as software, hardware, and/or other elements for controlling a computer system, computing device, or other electronic device, or any combination or plurality thereof. Such an electronic device can include, for example, a processor, an input device (such as a keyboard, mouse, touchpad, trackpad, joystick, trackball, microphone, and/or any combination thereof), an output device (such as a screen, speaker, and/or the like), memory, long-term storage (such as magnetic storage, optical storage, and/or the like), and/or network connectivity, according to techniques that are well known in the art. Such an electronic device may be portable or non-portable. Examples of electronic devices that may be used for implementing the invention include: a mobile phone, personal digital assistant, smartphone, kiosk, server computer, enterprise computing device, desktop computer, laptop computer, tablet computer, consumer electronic device, or the like. An electronic device for implementing the present invention may use any operating system such as, for example and without limitation: Linux; Microsoft Windows, available from Microsoft Corporation of Redmond, Wash.; Mac OS X, available from Apple Inc. of Cupertino, Calif.; iOS, available from Apple Inc. of Cupertino, Calif.; Android, available from Google, Inc. of Mountain View, Calif.; and/or any other operating system that is adapted for use on the device.

While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of the above description, will appreciate that other embodiments may be devised which do not depart from the scope of the present invention as described herein. In addition, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the disclosure of the present invention is intended to be illustrative, but not limiting, of the scope of the invention, which is set forth in the claims.

Claims

1. A computer-implemented method for interacting with a displayed data visualization, comprising:

displaying a primary data visualization on a touch-sensitive display screen, the primary data visualization comprising a plurality of display elements;
receiving, via the touch-sensitive display screen, a first user input including two points of contact with the touch-sensitive display screen, the two points of contact defining an axis having an orientation and the first user input being rotatable about the axis;
responsive to receiving the two points of contact with the touch-sensitive display screen, selecting a first one of the display elements that is centrally positioned between the two points of contact;
responsive to selecting the first one of the display elements, dynamically presenting for display on the touch-sensitive display screen a plurality of previews of secondary data visualizations relating to the first one of the display elements;
receiving a rotation of the first user input causing a rotated orientation of the defined axis, the rotation of the first user input including rotating the two points of contact on the touch-sensitive display screen to rotate the defined axis;
responsive to receiving the rotation of the first user input, highlighting one of the plurality of previews of secondary data visualizations based on the rotated orientation of the defined axis with respect to a display position of the plurality of previews of secondary data visualizations;
receiving a resizing of the first user input on the touch-sensitive display screen, the resizing of the first user input causing a change in a distance between the two points of contact on the rotated defined axis on the touch-sensitive display screen; and
responsive to receiving the resizing of the first user input, resizing a presentation of the highlighted one of the plurality of previews of secondary data visualizations.

2. The method of claim 1, further comprising:

receiving, via the touch-sensitive display screen, a second user input selecting one of the plurality of previews of secondary data visualizations; and
responsive to receiving the second user input, displaying a secondary data visualization corresponding to the selected one of the plurality of previews of secondary data visualizations.

3. The method of claim 1, wherein dynamically presenting for display the plurality of previews of secondary data visualizations includes displaying the plurality of previews of secondary data visualizations concurrently with a continued display of the primary data visualization on the touch-sensitive display screen.

4. The method of claim 1, wherein the secondary data visualizations comprise different representations of data associated with the first one of the display elements.

5. The method of claim 1, wherein the two points of contact with the touch-sensitive display screen are proximate to a display location of the first one of the display elements and the first one of the display elements is centrally positioned between the two points of contact.

6. The method of claim 1, wherein the two points of contact define a pinch gesture.

7. The computer-implemented method of claim 1, further comprising:

selecting the highlighted one of the plurality of previews of secondary data visualizations, the selected one of the plurality of previews of secondary data visualizations being selected responsive to a user disengaging the two points of contact.

8. The computer-implemented method of claim 1, further comprising:

selecting the highlighted one of the plurality of previews of secondary data visualizations responsive to receiving a movement of the two points of contact apart from one another on the touch-sensitive display screen.

9. The computer-implemented method of claim 1, wherein the highlighted one of the plurality of previews of secondary data visualizations expands to illustrate a selection of the highlighted one of the plurality of previews of secondary data visualizations responsive to a user disengaging the two points of contact.

10. A computer-implemented method for interacting with a displayed data visualization, comprising:

displaying a primary data visualization on a touch-sensitive display screen, the primary data visualization comprising a plurality of display elements;
receiving, via the touch-sensitive display screen, a first user input including two points of contact with the touch-sensitive display screen, the two points of contact defining an axis having an orientation and the first user input being rotatable about the axis;
responsive to receiving the two points of contact with the touch-sensitive display screen, selecting a first one of the display elements that is centrally positioned between the two points of contact;
responsive to selecting the first one of the display elements, dynamically presenting for display on the touch-sensitive display screen a plurality of previews of secondary data visualizations relating to the first one of the display elements;
receiving a rotation of the first user input causing a rotated orientation of the defined axis, the rotation of the first user input including rotating the two points of contact on the touch-sensitive display screen to rotate the defined axis;
determining a first one of the plurality of previews of secondary data visualizations based on the rotated orientation of the defined axis, the defined axis being aligned with the first one of the plurality of previews of secondary data visualizations;
receiving a resizing of the first user input on the touch-sensitive display screen, the resizing of the first user input causing a change in a distance between the two points of contact on the rotated defined axis on the touch-sensitive display screen;
responsive to receiving the resizing of the first user input, resizing a presentation of the first one of the plurality of previews of secondary data visualizations; and
receiving, via the touch-sensitive display screen, a second user input selecting the first one of the plurality of previews of secondary data visualizations.

11. The method of claim 10, further comprising:

responsive to receiving the second user input, displaying a secondary data visualization corresponding to the first one of the plurality of previews of secondary data visualizations.

12. The method of claim 10, wherein dynamically presenting for display the plurality of previews of secondary data visualizations includes displaying the plurality of previews of secondary data visualizations concurrently with a continued display of the primary data visualization on the touch-sensitive display screen.

13. The method of claim 10, wherein the secondary data visualizations comprise different representations of data associated with the first one of the display elements.

14. The method of claim 10, wherein the two points of contact with the touch-sensitive display screen are proximate to a display location of the first one of the display elements and the first one of the display elements is centrally positioned between the two points of contact.

15. The method of claim 10, wherein the two points of contact define a pinch gesture.

16. A computer-implemented method for interacting with a displayed data visualization, comprising:

displaying a primary data visualization on a touch-sensitive display screen, the primary data visualization comprising a plurality of display elements;
receiving, via an input device, a first user input including two points of contact with the touch-sensitive display screen, the two points of contact defining a first axis having a first orientation and the first user input being rotatable about the first axis;
responsive to receiving the two points of contact with the touch-sensitive display screen, selecting a first one of the display elements that is centrally positioned between the two points of contact;
responsive to selecting the first one of the display elements, dynamically presenting for display a plurality of previews of secondary data visualizations relating to the first one of the display elements;
receiving, via the touch-sensitive display screen, a second user input highlighting a first one of the plurality of previews of secondary data visualizations that are displayed, the second user input including two points of contact with the touch-sensitive display screen defining a second axis pointing to the first one of the plurality of previews of secondary data visualizations that are displayed, the second axis being a continuation of the first axis and rotatable to highlight a second one of the plurality of previews of secondary data visualizations that are displayed, wherein rotating the second axis includes rotating the two points of contact associated with the second user input on the touch-sensitive display screen;
receiving a resizing of the second user input on the touch-sensitive display screen, the resizing of the second user input causing a change in a distance between the two points of contact on the rotated second axis on the touch-sensitive display screen; and
responsive to receiving the resizing of the second user input, resizing a presentation of the highlighted one of the plurality of previews of secondary data visualizations.
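
For illustration only: claims 10 and 16 highlight the displayed preview with which the rotated axis is aligned. One plausible reading, sketched below under the assumption that previews are laid out radially around the selected element, is to pick the preview whose angular position is nearest the axis orientation. Because an axis has no direction, orientations that differ by 180 degrees are treated as equivalent; the preview angles here are illustrative only.

```typescript
// Illustrative sketch only: mapping the rotated axis orientation to a
// highlighted preview (claims 10 and 16), assuming a radial preview layout.

interface Preview { id: string; angle: number; }  // angular position, radians

// Normalize an orientation difference into [-PI/2, PI/2): an axis has no
// direction, so orientations PI radians apart are equivalent.
function axisDelta(a: number, b: number): number {
  let d = (a - b) % Math.PI;
  if (d >= Math.PI / 2) d -= Math.PI;
  if (d < -Math.PI / 2) d += Math.PI;
  return d;
}

// Return the preview best aligned with the current axis orientation.
// Assumes at least one preview is displayed.
function previewAlignedWith(axisAngle: number, previews: Preview[]): Preview {
  return previews.reduce((best, p) =>
    Math.abs(axisDelta(axisAngle, p.angle)) <
    Math.abs(axisDelta(axisAngle, best.angle)) ? p : best);
}
```

As the user rotates the two points of contact, re-running previewAlignedWith with the new axis angle moves the highlight from one preview to the next, matching the rotatable-axis behavior recited in claim 16.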

17. A computer-implemented method for interacting with a displayed data visualization, comprising:

displaying a primary data visualization on a touch-sensitive display screen, the primary data visualization comprising a plurality of display elements;
receiving, via the touch-sensitive display screen, a first user input including two points of contact with the touch-sensitive display screen, the two points of contact defining an axis having an orientation and the first user input being rotatable about the axis;
responsive to receiving the two points of contact with the touch-sensitive display screen, selecting a first one of the display elements that is centrally positioned between the two points of contact;
responsive to selecting the first one of the display elements, dynamically presenting for display on the touch-sensitive display screen a plurality of first-level previews of secondary data visualizations relating to the first one of the display elements;
receiving a rotation of the first user input causing a rotated orientation of the defined axis, the rotation of the first user input including rotating the two points of contact on the touch-sensitive display screen to rotate the defined axis;
determining a first one of the plurality of first-level previews of secondary data visualizations based on the rotated orientation of the defined axis, the defined axis being aligned with the first one of the plurality of first-level previews of secondary data visualizations;
receiving a resizing of the first user input on the touch-sensitive display screen, the resizing of the first user input causing a change in a distance between the two points of contact on the rotated defined axis on the touch-sensitive display screen;
responsive to receiving the resizing of the first user input, resizing a presentation of the first one of the plurality of first-level previews of secondary data visualizations;
receiving, via the touch-sensitive display screen, a second user input selecting the first one of the plurality of first-level previews of secondary data visualizations; and
responsive to receiving the second user input, displaying a plurality of second-level previews of secondary data visualizations.
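
For illustration only: claim 17 recites a hierarchy in which selecting a first-level preview presents second-level previews. A minimal sketch of one way to track that navigation state follows; the VizNode tree is an assumed data model, since the claim recites only the levels themselves and not their representation.

```typescript
// Illustrative sketch only: hierarchical preview navigation per claim 17.
// The tree model and class below are assumptions for illustration.

interface VizNode {
  title: string;
  children: VizNode[];  // previews of visualizations one level down
}

class DrillState {
  private stack: VizNode[] = [];

  constructor(root: VizNode) { this.stack.push(root); }

  // Previews at the current level (first-level, second-level, and so on).
  previews(): VizNode[] { return this.stack[this.stack.length - 1].children; }

  // A user input selecting a preview descends one level in the hierarchy.
  select(preview: VizNode): void { this.stack.push(preview); }

  // Navigating back up among the levels in the same interactive manner.
  back(): void { if (this.stack.length > 1) this.stack.pop(); }
}
```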

18. A system for dynamically expanding a displayed data visualization, comprising:

a processor; and
a touch-sensitive display screen, communicatively coupled to the processor, configured to:
display a primary data visualization on the touch-sensitive display screen, the primary data visualization comprising a plurality of display elements;
receive a first user input including two points of contact with the touch-sensitive display screen, the two points of contact defining an axis having an orientation and the first user input being rotatable about the axis;
responsive to receiving the two points of contact with the touch-sensitive display screen, select a first one of the display elements that is centrally positioned between the two points of contact;
responsive to selecting the first one of the display elements, dynamically present for display on the touch-sensitive display screen a plurality of previews of secondary data visualizations relating to the first one of the display elements;
receive a rotation of the first user input causing a rotated orientation of the defined axis, the rotation of the first user input including rotating the two points of contact on the touch-sensitive display screen to rotate the defined axis;
responsive to receiving the rotation of the first user input, highlight one of the plurality of previews of secondary data visualizations that are displayed based on the rotated orientation of the defined axis with respect to a display position of the plurality of previews of secondary data visualizations that are displayed;
receive a resizing of the first user input on the touch-sensitive display screen, the resizing of the first user input causing a change in a distance between the two points of contact on the rotated defined axis on the touch-sensitive display screen; and
responsive to receiving the resizing of the first user input, resize a presentation of the highlighted one of the plurality of previews of secondary data visualizations.

19. A system for dynamically expanding a displayed data visualization, comprising:

a processor; and
a touch-sensitive display screen, communicatively coupled to the processor, configured to:
display a primary data visualization on the touch-sensitive display screen, the primary data visualization comprising a plurality of display elements;
receive a first user input including two points of contact with the touch-sensitive display screen, the two points of contact defining an axis having an orientation and the first user input being rotatable about the axis;
responsive to receiving the two points of contact with the touch-sensitive display screen, select a first one of the display elements that is centrally positioned between the two points of contact;
responsive to selecting the first one of the display elements, dynamically present for display on the touch-sensitive display screen a plurality of previews of secondary data visualizations relating to the first one of the display elements;
receive a second user input including two points of contact with the touch-sensitive display screen selecting one of the plurality of previews of secondary data visualizations that are displayed by specifying an orientation of a second axis associated with the second user input, the second axis being a continuation of the defined axis, the second axis being rotated and aligned with the selected one of the plurality of previews of secondary data visualizations that are displayed, wherein rotating the second axis includes rotating the two points of contact associated with the second user input on the touch-sensitive display screen;
receive a resizing of the second user input on the touch-sensitive display screen, the resizing of the second user input causing a change in a distance between the two points of contact on the rotated second axis on the touch-sensitive display screen; and
responsive to receiving the resizing of the second user input, resize a presentation of the selected one of the plurality of previews of secondary data visualizations.
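
For illustration only: the resize step common to claims 10, 16, 18, and 19 ties the preview's presentation size to the change in distance between the two points of contact. The proportional, clamped mapping sketched below is an assumption; the claims recite only that a change in contact distance along the axis resizes the presentation, not any particular scaling function.

```typescript
// Illustrative sketch only: scaling a highlighted preview by the change in
// contact separation (claims 10, 16, 18, 19). The proportional mapping and
// the clamp bounds are assumptions, not claim language.

function previewScale(
  initialDistance: number,  // contact separation when the preview was highlighted
  currentDistance: number,  // contact separation after the resizing input
  minScale = 0.5,
  maxScale = 4.0,
): number {
  const scale = currentDistance / initialDistance;
  return Math.min(maxScale, Math.max(minScale, scale));
}

// Example: spreading the contacts from 80px to 200px apart scales the
// highlighted preview's presentation by 2.5x (within the clamp range):
// previewScale(80, 200) === 2.5
```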
Patent History
Patent number: 10474352
Type: Grant
Filed: Oct 15, 2015
Date of Patent: Nov 12, 2019
Assignee: DOMO, INC. (American Fork, UT)
Inventors: Alan Winters (Lindon, UT), Amir H. Raubvogel (Redwood City, CA)
Primary Examiner: Devona E Faulk
Assistant Examiner: Charles L Beard
Application Number: 14/884,597
Classifications
Current U.S. Class: Menu Or Selectable Iconic Array (e.g., Palette) (715/810)
International Classification: G06F 3/0488 (20130101); G06T 11/20 (20060101); G06F 3/0484 (20130101); G06F 3/0482 (20130101);