Apparatus, Method and Computer-Implemented Program for Editable Categorization

- Panasonic

The information appliance displays content in a visually perceptible grid from which the user selects a target content; upon selection, the processor automatically identifies related content, each item having an associated relatedness score. Movement by the user of the target content causes the related content to move and follow the target content as if attracted by an invisible spring force or tensile force. The system thus presents the user with a graphical representation of moving items of content which are attracted to the target content based on the degree of relatedness. In this way the user quickly learns how to control selection and organization of related content, because the items of content move like physical objects acting under natural kinematic forces.

Description
FIELD

The present disclosure relates generally to organization, categorization and extraction of computerized content, including computerized images, data and icons. More particularly, the present disclosure relates to computer-implemented technology to assist a user in extracting and reorganizing desired content using a graphical user interface that models content as computer-generated physical objects in display space having kinematic properties that map to content relatedness properties. The system associates relatedness between targeted content and other content with a computer-generated physical parameter.

BACKGROUND

Computerized content can take many forms. In photographic applications the content is typically stored as raw image data files or as compressed image files (e.g., jpg format). In video applications the content is typically stored as a collection of image frames encoded using a suitable CODEC (e.g., mpeg format). Text applications may store content as generic text files, as application-specific files (e.g., Microsoft Word doc or docx format), or as printable files (e.g., pdf). Some applications store content comprising both text and image data. Examples include presentation software applications (e.g., Microsoft PowerPoint). Database applications typically store text and numeric data, and sometimes also image data, according to a predefined data structure that assigns meaning to the stored data. Icon organizing and editing applications store icons as image data, in some cases with additional metadata.

When a user wishes to organize, categorize and extract content from a software system, such as those identified above or others, the process has heretofore been tedious and far from intuitive. Typically the software system requires the user to interact with a complex system of menus, dialog boxes or commands to achieve a desired content selection. Where the content includes a great deal of non-text content, such as photographs, images, movies and the like, interaction becomes even more difficult because text searching techniques are not highly effective and may not even be available.

Where the content data store is large, such as with a large collection of stored photographic images, the task of organizing, categorizing and extracting desired content can be quite daunting. There are some automated tools that can be used to categorize image content, based on image characteristic extraction, face/object recognition, and the like. However, these tools often retrieve too many hits, many of which the user must then manually reject.

SUMMARY

The disclosed system associates relatedness between targeted content and other content with a physical parameter. In this way, the disclosed system provides a user-friendly, natural way for a user to organize, categorize and extract content from a data store, such as a data store of digitized images or other visual content.

The system maps content relatedness (degree of relationship) onto computer-generated physical object properties. In the computer-generated display space, the items of content are depicted as moveable objects to which the physical object properties are assigned. Using a suitable touch gesture, or a pointing device selection operation, the user can select and move a desired item of content. In so doing, other related items of content move as if attracted to the selected content by an invisible attractive force (e.g., an invisible spring force, an invisible gravitational force, or another kind of force). Thus, by dragging a selected content item, the items of related content will follow, exhibiting kinematic motion as if they were physical objects acted upon by the invisible attractive force, where the degree of relatedness defines the strength of that force. Strongly related content is therefore attracted by a stronger force than less related content, and by simply watching the content movement the user can tell how closely each item relates to the selected content.

Because relatedness is mapped onto the computer-generated force, more strongly related items move more quickly towards the selected content, thereby causing the related content to naturally cluster with the most closely related content lying closer to the selected content than the less closely related content.

Although rendered in computer-generated display space, the selected item of content, and those related to it, move as if mimicking the behavior of physical objects. The user thus learns quite quickly and naturally how to organize, categorize and extract content, simply by touch-and-drag (or click-and-drag) movements.

A general concept is to use some kind of physical parameter to represent the relatedness between targeted content and other contents. Examples of such physical parameters include but are not limited to: (a) force acting between related content (or icons) and targeted content (or icons), such as tensile force and/or attractive force; (b) speed with which related content (or icons) come close to targeted content (or icons); (c) final relative position of related content (or icons) to final position of targeted content (or icons); and combinations thereof.

Further areas of applicability will become apparent from the description provided herein. The description and specific examples in this summary are intended for purposes of illustration only and are not intended to limit the scope of the present disclosure.

DRAWINGS

The drawings described herein are for illustrative purposes only of selected embodiments and not all possible implementations, and are not intended to limit the scope of the present disclosure.

FIG. 1 is a plan view of an exemplary information appliance, illustrating how the user organizes, categorizes and extracts content;

FIG. 2 is a schematic block diagram of a computer hardware implementation of the information appliance of FIG. 1;

FIG. 3 is a user interface diagram, illustrating in greater detail the user interface components of the information appliance of FIG. 1;

FIG. 4a is a detailed user interface diagram, showing the grid component of the user interface of FIG. 3 and illustrating how items of content are related prior to movement of a selected content by the user;

FIG. 4b is a detailed user interface diagram, showing the grid component of the user interface of FIG. 3 and illustrating how items of content move according to a user-designated trajectory and further illustrating how items of content are rearranged during such movement according to their respective degrees of relatedness;

FIG. 5 illustrates an alternate embodiment that features a control mechanism that permits a user to adjust a relatedness threshold or correlation metric threshold which regulates how many items of content are attracted during movement along the user-designated trajectory;

FIG. 6 is a software block diagram, illustrating the manner of programming the computer hardware of FIG. 2, it being understood that the depicted software is stored in the computer memory and operated upon by the CPU;

FIG. 7 is a flowchart diagram depicting one preferred embodiment whereby selected content is analyzed and motion of that content is generated by the suitably programmed computer hardware;

FIG. 8 is a graphical representation of one embodiment of a computer-implemented model for controlling how motion of content is generated by the suitably programmed computer hardware, the model featuring a display space reflecting how items of content are positioned and move within the display space of the computer screen;

FIG. 9 is a flow chart diagram explaining the operation of the information appliance according to the model of FIG. 8;

FIGS. 10a-10d illustrate the information appliance in use, performing a basic screen transition according to user operation;

FIGS. 11a-11d illustrate the information appliance in use, creating a personal category for an individual user or the user's family;

FIGS. 12a-12d illustrate the information appliance in use, creating a different category from the same content;

FIGS. 13a-13d illustrate the information appliance in use, creating a compound cluster from plural previously created clusters;

FIG. 14a is a detailed user interface diagram, showing an alternate embodiment of how items of content are related prior to movement of a selected content by the user according to a tree structure;

FIG. 14b is a detailed user interface diagram, illustrating how items of content of FIG. 14a move according to a user-designated trajectory and further illustrating how items of content are rearranged during such movement according to their respective degrees of relatedness and based on the tree structure;

FIG. 15 is a graphical representation of the alternate embodiment of FIGS. 14a and 14b, namely a computer-implemented model for controlling how motion of content is generated by the suitably programmed computer hardware, the model featuring a display space reflecting how items of content are positioned and move within the display space of the computer screen according to a tree structure; and

FIG. 16 illustrates a variation of the embodiment of FIGS. 14a and 14b that features a control mechanism that permits a user to adjust a relatedness threshold or correlation metric threshold which regulates how many items of content are attracted during movement along the user-designated trajectory.

Corresponding reference numerals indicate corresponding parts throughout the several views of the drawings.

DETAILED DESCRIPTION

Example embodiments will now be described more fully with reference to the accompanying drawings.

As noted above, the computer-implemented apparatus and method for organization, categorization and extraction of computerized content represent the relatedness between targeted content and other content using a predefined physical parameter. The present description will explain in detail how to implement different examples of such apparatus and methods, using different examples of predefined physical parameters. By way of non-limiting example, the predefined physical parameter can be a kinematic-related parameter, such as a force, a speed, a relative position, or the like.

In this regard, an exemplary physical parameter can be the force acting between related content (or icons) and targeted content (or icons), such as a tensile force or an attractive force. Such force is generated by the computer according to the following relationship:


$$\vec{F} = k_i\,(\vec{x}_i - \vec{x}_T)$$

    • $\vec{F}$: Force acting between related content/icon $i$ and targeted content/icon $T$
    • $k_i$: Parameter depending on relatedness between related content/icon $i$ and targeted content/icon $T$ ($k_i > 0$)
    • $\vec{x}_i$: Position of related content/icon $i$
    • $\vec{x}_T$: Position of targeted content/icon $T$
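For concreteness, the force relationship can be sketched in a few lines of Python. This is a minimal illustrative sketch, not code from the disclosure; the function name, the use of NumPy, and the linear scaling of $k_i$ with the relatedness score are all assumptions:

    import numpy as np

    def attraction_force(x_i, x_T, relatedness, k_scale=1.0):
        # k_i > 0 grows with the relatedness score, so more closely
        # related items are pulled by a stronger force.
        k_i = k_scale * relatedness
        # The disclosure writes F = k_i (x_i - x_T); applied with a
        # restoring sign, this force pulls item i toward the target T.
        return -k_i * (np.asarray(x_i) - np.asarray(x_T))

A per-frame animation loop would evaluate this force for every related item and integrate the resulting motion, as discussed later in connection with FIG. 7.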

Alternatively, the physical parameter can be a speed parameter, representing, for example, the speed with which related content (or icons) is attracted to the target content (or target icon). Such speed is generated by the computer according to the following relationship:

$$\vec{x}_i(t) = \vec{x}_i(t - \Delta t) - \frac{\vec{x}_i(t - \Delta t) - \vec{x}_T(t - \Delta t)}{l_i}$$

    • $\vec{x}_i(t)$: Position of related content/icon $i$ at time $t$
    • $\vec{x}_T(t)$: Position of targeted content/icon $T$ at time $t$
    • $l_i$: Parameter depending on relatedness between related content/icon $i$ and targeted content/icon $T$ ($l_i > 1$)
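As an illustrative sketch only, one display-frame update under this speed model moves each related item a fixed fraction of its remaining distance to the target; the assumption here (not stated in the disclosure) is that more strongly related content is assigned a smaller $l_i$, so it closes in faster:

    def step_toward_target(x_i_prev, x_T_prev, l_i):
        # l_i > 1: each frame the item covers 1/l_i of its remaining
        # distance to the target, so a smaller l_i (stronger relatedness,
        # under our assumption) means a faster approach.
        return tuple(xi - (xi - xt) / l_i
                     for xi, xt in zip(x_i_prev, x_T_prev))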

Alternatively, the physical parameter can be a position parameter, representing, for example, the final position of related content (or an icon) relative to the targeted content (or icon). Such relative position is generated by the computer according to the following relationships:

The final relative position of related content/icon $i$, $\vec{r}_i = \vec{x}_{i,\mathrm{FINAL}} - \vec{x}_{T,\mathrm{FINAL}}$, is set depending on the relatedness between related content/icon $i$ and targeted content/icon $T$. For example, final positions are assigned in decreasing order of relatedness, with the most closely related content placed nearest the target.

Here, the speed with which related content/icon $i$ approaches the position $\vec{x}_T + \vec{r}_i$ is given by the following:

$$\vec{x}_i(t) = \vec{x}_i(t - \Delta t) - \frac{\vec{x}_i(t - \Delta t) - \left(\vec{x}_T(t - \Delta t) + \vec{r}_i\right)}{l}$$

    • $\vec{x}_i(t)$: Position of related content/icon $i$ at time $t$
    • $\vec{x}_T(t)$: Position of targeted content/icon $T$ at time $t$
    • $l$: Constant parameter ($l > 1$)
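A sketch of this position model follows; the ring layout (six slots per ring) is an invented example of assigning offsets in decreasing order of relatedness, not a layout prescribed by the disclosure:

    import math

    def final_offsets(relatedness_scores, spacing=1.2):
        # Rank items by relatedness, most closely related first, and
        # hand out offset slots r_i on rings of increasing radius so
        # the best-related content ends up nearest the target.
        order = sorted(range(len(relatedness_scores)),
                       key=lambda i: -relatedness_scores[i])
        offsets = [None] * len(order)
        for rank, i in enumerate(order):
            radius = spacing * (1 + rank // 6)      # six slots per ring
            angle = 2.0 * math.pi * (rank % 6) / 6
            offsets[i] = (radius * math.cos(angle), radius * math.sin(angle))
        return offsets

Each related item then animates toward $\vec{x}_T + \vec{r}_i$ using the constant-$l$ update shown above.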

If desired, the physical parameter can comprise a combination of parameters, such as a combination of a force parameter (as described above) and a relative position parameter (as described above). In this regard, the following considerations would apply:

Related content/icon $i$ comes close to the position $\vec{x}_T + \vec{r}_i$ (where $\vec{r}_i$ is set depending on the relatedness between related content/icon $i$ and targeted content/icon $T$), and the force acting between the related content/icon $i$ and the position $\vec{x}_T + \vec{r}_i$ is given by the following:


$$\vec{F} = k_i\,\bigl(\vec{x}_i - (\vec{x}_T + \vec{r}_i)\bigr)$$

    • $k_i$: Parameter depending on relatedness between related content/icon $i$ and targeted content/icon $T$ ($k_i > 0$)

Alternatively, if desired, the physical parameter can comprise other combinations, such as a combination of a speed parameter and a relative position parameter. In this regard, the following considerations would apply:

Related content/icon $i$ comes close to the position $\vec{x}_T + \vec{r}_i$ (where $\vec{r}_i$ is set depending on the relatedness between related content/icon $i$ and targeted content/icon $T$), and the speed with which it approaches that position is given by the following:

$$\vec{x}_i(t) = \vec{x}_i(t - \Delta t) - \frac{\vec{x}_i(t - \Delta t) - \left(\vec{x}_T(t - \Delta t) + \vec{r}_i\right)}{l_i}$$

    • $\vec{x}_i(t)$: Position of related content/icon $i$ at time $t$
    • $\vec{x}_T(t)$: Position of targeted content/icon $T$ at time $t$
    • $l_i$: Parameter depending on relatedness between related content/icon $i$ and targeted content/icon $T$ ($l_i > 1$)

It will of course be understood that other implementations, using other physical parameters, may also be employed. The above examples of physical parameters are therefore not intended to be limiting.

To understand how the apparatus and methods for associating relatedness with physical parameter(s) described herein may be used, an information appliance will be featured. Again, it will be understood that this information appliance is merely an example of a device that may use the teachings herein.

Referring to FIG. 1, an exemplary information appliance has been illustrated at 20. The information appliance has a touch-enabled display screen 22 upon which the touch-enabled user interface is displayed. In the illustrated example of FIG. 1, the displayed user interface comprises a plurality of different regions that the user can interact with using touch gestures. While a touch-enabled information appliance is shown here to illustrate the principles of the invention, it will be understood that other types of devices and other types of user interfaces, supporting other types of user interaction are possible. Thus, for example a computer device having a mouse or stylus driven interface could also be used.

For purposes of illustrating some of the principles of the invention, the depicted information appliance is adapted to manage image content, such as photo library content. It will be understood, however, that the principles of the invention can be applied to other types of content, such as video content, textual content, hypertext content, database content and the like. Thus where photographic images and thumbnail images or icons are described below, the reader will understand that the displayed images could represent a different type of content, such as video content, textual content, hypertext content, database content and the like.

As illustrated in FIG. 2, the information appliance is preferably implemented using a computer architecture that includes a central processing unit or CPU 26 coupled to a bus 28, to which random access memory 30 and storage memory 32 are also attached. The computer architecture may also include an input/output (I/O) module attached to bus 28 to facilitate communication with external devices via any suitable means, such as a wired or wireless connection. A display driver 36 is coupled to the bus 28 to support the touch display 22. To simplify the illustration, the display driver 36 of FIG. 2 includes the necessary circuitry to drive the visual display and to receive the touch input commands produced when the user performs a touch gesture upon the touch display. In this regard, as illustrated in FIG. 1, the user performs a touch and drag operation to effect selection of related content, as will be more fully discussed herein.

Referring to FIG. 3, the user interface displayed in FIG. 1 is now shown in greater detail. In the exemplary application illustrated in FIG. 1, the information appliance organizes and displays image content, such as photographs within a user's personal photo collection. The managed content is preferably organized into different classes or groups using automatic categorization technology. In the user interface of FIG. 3, the category groups are displayed graphically as thumbnail depictions or icons within the predefined region of the display screen at 40. The user can select one of the applicable categories by suitable touch gesture. In FIG. 3, the category designated at 42 has been so selected.

Once the user selects a category, the user interface then displays individual thumbnail or icon representations of individual pieces of content belonging to that category. These are displayed in a grid as at 44. The user can then select from that grid one or more individual pieces of content by suitable touch selection. By way of example, in FIG. 3 the user has selected the content at 46. Once selected, the user interface displays an enlarged view of the selected content within window 48.

In some instances, the displayed content may, itself, comprise identifiable sub-components. For example, the displayed content may include several individually identifiable objects, such as buildings, geographic features, animals, human faces and the like. In FIG. 3, the displayed content within window 48 comprises a photograph featuring three persons' faces. If desired, these identifiable sub-components can be used to define a query by which the system searches for additional related content.

Thus, for example, by selecting one of the displayed persons' faces via touch gesture, the system uses the selected person to initiate a query to retrieve other related content (e.g., other images in which that person appears). The system then performs the query against all associated content, such as all content within the selected category group, to generate similarity scores associated with each element of the category group. Based on the results of the recognition algorithms, each image is given its own similarity score: images in which the selected person appears are given a high similarity score, whereas images lacking that person are given a low similarity score.
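The query-and-score step might look like the following sketch. Everything here is an assumption for illustration: the `face_vectors` input (one list of face feature vectors per image) and the cosine-similarity scoring stand in for whatever recognition backend is actually used:

    import numpy as np

    def similarity_scores(query_vec, face_vectors):
        # Score each image 0-100 by the best cosine match between the
        # selected face and any face detected in that image.
        q = np.asarray(query_vec, dtype=float)
        q = q / np.linalg.norm(q)
        scores = []
        for faces in face_vectors:          # one list of vectors per image
            best = 0.0
            for f in faces:                 # empty list -> score stays 0
                f = np.asarray(f, dtype=float)
                best = max(best, float(q @ (f / np.linalg.norm(f))))
            scores.append(best * 100.0)
        return scores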

Of course, it will be understood that the specific similarity matching algorithms used will depend on the type of content being managed. In the case of image content such as photographic content and video content, face and object recognition algorithms as well as image characteristic extraction techniques would be used. In other applications, such as database applications, other database query techniques would be used.

The user interface further includes a family space region 50 into which the user can drag selected content that he or she desires to be associated into a subset or family of the category. As will be more fully explained below, one aspect of the present technology is to provide an easy to use and intuitive way of extracting related content for placement into the family.

Referring to FIGS. 4a and 4b, the user extracts content for inclusion in the family by a dragging operation whereby the user selects a target content from grid 44 (e.g., target 46a) and drags that content to location 46b which lies outside the confines of grid 44.

More specifically, when the user selects a target content, such as target content 46a in FIG. 4a, those additional pieces of content that are related to content 46a (by virtue of the automatic categorization technique being used) are highlighted as illustrated. Related content having a high similarity score or high relationship score is preferably depicted in a more prominent fashion, such as by highlighting those pieces of content in a visually perceptible manner and also by displaying connecting lines that connote a strong connection or relationship. In FIG. 4a, the strongly related content are shown at 52a, 54a and 56a. Additional content with a lower degree of relatedness is graphically depicted in a different way to connote the lower degree of relatedness. This may include shading or highlighting the related content in a more subdued fashion and also generating connecting lines that are less prominent than those used to convey strongly related content. In FIG. 4a, these lesser related content are shown at 58a, 60a and 62a.

The display of related information in this fashion can support multiple levels of relatedness. Thus in FIG. 4a, additional content at 64a and 66a are illustrated with light shading, to convey a degree of relationship with target 46a that is less than that of any of the other related pieces of content. In addition to light shading or light highlighting, the connecting lines may also be rendered using a lighter shade to convey a less prominent or less bold relationship.

As illustrated in FIG. 4b, when the user drags the target content away from its resting place as depicted in FIG. 4a, the related content follows the motion trajectory 70 of the target content. Thus, in FIG. 4b, the target content is shown beyond the confines of grid 44 as at 46b. Note how the related content have followed the target content.

When the target content is moved via the dragging gesture, the associated content generally follows the same trajectory 70 as the target content. In one preferred embodiment, the associated content become spatially reorganized while following the trajectory 70, so that the associated content with a higher degree of relatedness becomes arranged closer to the target content 46b than the related content having a weaker relationship. This has been illustrated in FIG. 4b.

In one preferred embodiment, each related piece of content “follows” the target content as if it were attached by an invisible spring having a spring force that is proportional to the degree of the relationship. Thus, closely related content, such as content items 52b, 54b and 56b are pulled toward the target content 46b by an invisible spring force that is stronger than the spring force that pulls less related content, such as content items 58b, 60b, 62b and so forth. Thus, whereas the initial positions of the pieces of content are distributed according to the grid 44, as illustrated in FIG. 4a, the individual pieces of content are reordered according to the degree of relationship (strength of relationship) as the target content is moved by the user as illustrated in FIG. 4b.

To enhance the visual effect, the attractive force (invisible spring force) may be buffered or tempered by introducing a velocity-sensitive component that resists the attractive spring force. This velocity-sensitive component may be modeled as if each interconnecting link between target and related content includes a velocity-sensitive “dashpot.” Employing both a spring force and a retarding velocity-sensitive force, the force acting upon each item of related content may be expressed as $F = kx - c\,\frac{dx}{dt}$, where $x$ is the displacement between the related item and the target and $\frac{dx}{dt}$ is the item's velocity.

The effect of the velocity-sensitive component is to make the movement of individual content items somewhat sluggish, so that the motion and response to the invisible spring force is not instantaneous. An alternate way of expressing the relationship would be to think of the items of content as moving through a viscous medium, so that changes in the position of the target content 46b are not instantaneously mimicked by a comparable instantaneous change in the position of all related content. Rather, the related content will continue to coast to their new positions for a short time after the target content has already stopped.

The visual effect produced by the velocity-sensitive component is to slow down the motion of the content following the target content, so that the user is able to see the strongly related content outpace the less related content as each moves to its final clustered position. Because the invisible spring forces attracting each piece of content to the target content depend on the individual relationship strength, the more strongly related items are attracted more quickly and thus tend to situate themselves most closely to the target content when the target content finally comes to rest.
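The spring-and-dashpot behavior can be integrated per display frame as in the sketch below, a minimal semi-implicit Euler step assuming unit mass and a single global damping coefficient `c` (both assumptions are ours, not the disclosure's):

    import numpy as np

    def follow_step(x_i, v_i, x_T, k_i, c, dt):
        # Spring pulls item i toward the target with stiffness k_i (set
        # by relatedness); the dashpot term -c*v resists rapid motion,
        # so items coast briefly after the target stops.
        F = k_i * (x_T - x_i) - c * v_i
        v_i = v_i + F * dt                # F = m a, with unit mass m = 1
        x_i = x_i + v_i * dt
        return x_i, v_i

    # e.g. x, v = follow_step(np.array([0., 0.]), np.zeros(2),
    #                         np.array([5., 3.]), k_i=0.8, c=0.5, dt=1/60)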

In another embodiment, the related content items follow the target content in a more complex kinematic relationship whereby the overall number of related items attracted during motion of the target content can be controlled by how quickly the user moves the target content. In this embodiment, if the user moves the target content 46b slowly, then even weakly related content will follow the trajectory 70. On the other hand, if the user moves the target content 46b quickly, then only related content above a certain threshold will follow. The effect is as if the weaker interconnecting links (carrying the invisible spring force) can be broken if the speed of movement of the target content exceeds a certain threshold. As will be more fully explained, the threshold may be velocity dependent, so that the user can control how many items of related content are pulled away from the grid 44 simply by controlling how quickly he or she moves the target content.
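A sketch of this speed-dependent capture rule follows; the linear mapping from drag speed to a relatedness cutoff (via the tuning constant `s_ref`) is an assumed stand-in for the velocity-dependent threshold described above:

    def items_that_follow(relatedness, drag_speed, s_ref=1.0):
        # Faster drags raise the cutoff, so weak links "break" and only
        # strongly related items (scores in [0, 1]) stay attached.
        cutoff = min(1.0, drag_speed / s_ref)
        return [i for i, s in enumerate(relatedness) if s >= cutoff]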

In yet another embodiment, depicted in FIG. 5, a gesture-operated control 72 is provided to set the threshold and thus control to what extent the degrees of relationship, or layers of linkages with the target content, are attracted as the target is moved. The user rotates the control 72 clockwise to increase the number of related content items attracted and counterclockwise to decrease it.

Referring now to FIG. 6, the computer programming used to implement embodiments of the disclosed system and method will now be discussed. Specifically, FIG. 6 shows the software components and the manner of programming CPU 26 (FIG. 2) to effect the content categorizing, selecting and graphical display that produce the following-motion behavior discussed above. The software components may be loaded into memory 30 (FIG. 2) and are then acted upon by CPU 26 to produce the above-described behaviors when the computer program is run. If desired, these components can be incorporated into or associated with the operating system of the information appliance 20 of FIG. 1.

For purposes of illustrating the principles of the invention, certain ones of the provided software modules are specifically adapted for handling visual image processing, such as face recognition and object recognition. Other components are more general in nature and are adapted for extracting features from content of any kind, which can include not only features extracted from visual content (photographs, motion pictures, and the like) but also other data types as may be applicable to more general-purpose data mining applications.

As diagrammatically depicted at 100, the computer program and software modules used to implement the functionality described above may comprise a functional block 100 that performs the content categorization and presentation through the graphical user interface of the information appliance. This functional block 100 comprises a category reorganization user interface 102 that in turn employs several software components. As illustrated, one of the basic functions of the category reorganization user interface is categorization of the content. Thus, one of the illustrated functions of the category reorganization interface is the function of general categorizing 104. Depending on the application involved, this general categorizing can involve certain additional sub-categorizing aspects. Illustrated here are four such aspects, namely face recognizing 106, object recognizing 108, feature extracting 110 and content tagging 112. These categorizing software modules work as follows.

When the content is a photograph, for example, the face recognizing module 106 identifies regions within the image that represent faces using a suitable face recognition algorithm that analyzes the image to detect features corresponding to a subject's face, such as eyes, nose, cheekbones, jaw, etc.

The object recognizing module 108, like the face recognizing module 106, performs feature identification. However, whereas the face recognizing module is specifically designed to recognize features found in the human face, the object recognizing module is more general and is thus able to recognize objects such as buildings, geographic features, furniture and the like. Both the face recognizing module and the object recognizing module may be implemented using a trained system that is capable of learning by extracting features from known faces and known objects. Both face recognizing module 106 and object recognizing module 108 thus rely upon the general feature extracting capabilities of the feature extracting module 110.

In some instances, the user may have applied tags to certain content and also to certain features found within that content. The content tagging module 112 administers this functionality. Thus, for example, if the user identifies a certain face as belonging to his or her daughter, the portions of the image corresponding to the daughter's face may be tagged with her name. Whereas the feature extracting techniques operate upon elements that are inherent to the image itself, content tagging involves additional metadata that is added by the user or as a result of some query or grouping having been performed.

In use, when the user interacts with the information appliance through the touch display 22 (FIG. 2), the general categorizing module 104 and its associated sub-modules 106, 108, 110 and 112 are called into action when needed to organize the content into different categories. With reference to FIG. 3, such categories may be displayed in category groups as at 40.

The category reorganization user interface module 102 further includes software modules that handle the user interaction of selecting a target content within grid 44 (FIG. 3), defining relationships between the target content and other content as well as handling all of the connecting wire visualization and following motion processing as was described in connection with FIGS. 4a, 4b and 5.

Thus, the category reorganization user interface includes a selected content position determining module 114, which functions to interpret which target content has been selected by the user when the user touches one of the content items within grid 44 (FIG. 3). The content relationship analyzing module 116 works in conjunction with module 114, as well as the general categorizing modules, to determine which additional pieces of content are related to the one the user has selected. This determination includes associating a relatedness score (or correlation metric) to each piece of content that is related to the target content selected. In this regard, a numerical score may be assigned to the relationship. For example, a relatedness score of 0-100% may be assigned. A 100% relationship would denote a very strong relationship to the target content, whereas a 0% score would denote the absence of a relationship. Thus, relationships between the target content and the remaining content can vary over a suitable range as required by the data being analyzed.

Once the content relationship analyzing module 116 has performed its function, the connecting wire visualizing module 118 generates connecting wires or lines between the target content and the related content. As discussed and illustrated in connection with FIGS. 4a, 4b and 5, the connecting wires may be visually depicted using different boldness or intensity values to denote different degrees of relatedness. For example, content items having a relatedness score of 75%-100% would be given a strong bold appearance, scores between 50% and 74% would be given a less bold appearance, scores between 25% and 49% would be depicted using a light line or dotted line, and so forth. Depending on the application, content items having a similarity score below a certain threshold, such as below 25%, may be given no connecting wire visualization and would thus be considered “not related”. As an alternative to controlling the boldness or intensity of the connecting wires, different colors may be used to indicate different levels of relatedness.
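The banding described above maps naturally onto a small lookup; the exact widths and dash patterns below are illustrative choices, not values from the disclosure:

    def wire_style(score_pct):
        # Map a relatedness score (0-100%) to a connecting-wire style.
        if score_pct >= 75:
            return {"width": 3.0, "dash": None}      # strong: bold solid
        if score_pct >= 50:
            return {"width": 2.0, "dash": None}      # moderate: lighter solid
        if score_pct >= 25:
            return {"width": 1.0, "dash": (2, 2)}    # weak: light/dotted
        return None                                  # "not related": no wire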

As discussed in connection with FIGS. 4a and 4b, the category reorganization user interface produces a user-friendly visualization whereby related content follow the trajectory of the target content as the user moves the target content from the grid region 44 to the family space region 50 (FIG. 3). As discussed, the individual items of related content are treated as if they are connected by an invisible spring which produces a pulling force causing related content to follow the target content as the user moves it. This pulling force or tensile force is calculated in module 120. Further details of this calculation will be discussed below.

The tensile force or spring force is used by the following motion processing module 122, which associates a motion trajectory with each of the pieces of related content. To give the visual display a user-friendly, natural presentation, the following motion processing module 122 causes each of the related content to follow a trajectory generally in the direction of the target content whereby the pulling force acting on each item of content is equal to the tensile force associated with that piece of content.

If desired, a velocity-sensitive motion-resisting counterforce or dashpot may be associated with each piece of related content to give the effect that the related content moves toward the target content through a viscous medium so that the related content items reach their final destination after the target content has ceased to move. The produced visual effect makes it appear that the related content are being pulled by elastic strings that stretch when the target content is moved and that continue to pull the associated content towards the target content, through a viscous medium, after the target content has come to rest.

Because closely related content is attracted more strongly (stronger tensile force) to the target content, such content will naturally cluster more closely to the target content than less strongly related content. The category reorganization user interface module 102 must therefore organize the relocated content items after motion is effected; otherwise, the related content may overlap and be difficult to visualize. To handle this, the related content position determining module 124 defines a boundary about each piece of content and applies a rule dictating that the related content will be positioned radially adjacent the target content based on relatedness score, with the further provision that the related content items shall be repositioned so that the individual content items do not overlap one another.
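One way to implement the non-overlap rule is a simple pairwise relaxation over thumbnail boundaries, sketched below. Treating each thumbnail as a square of side `size` and the fixed iteration count are our assumptions, not details from the disclosure:

    import numpy as np

    def resolve_overlaps(positions, size, iterations=10):
        # Push apart any pair of thumbnails whose boundaries overlap so
        # clustered items settle around the target without covering
        # one another.
        pos = [np.asarray(p, dtype=float) for p in positions]
        for _ in range(iterations):
            for a in range(len(pos)):
                for b in range(a + 1, len(pos)):
                    d = pos[b] - pos[a]
                    dist = np.linalg.norm(d)
                    if dist < size:                  # boundaries overlap
                        push = d / (dist + 1e-9) * (size - dist) / 2.0
                        pos[a] -= push
                        pos[b] += push
        return pos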

Finally, having assembled a cluster of related content, the category reorganization user interface, through its category reorganizing module 126, associates recognition information with the newly formed cluster. This allows the cluster to be tagged and saved for recall at a later time and also to be used as a starting point for performing further content recognition. For example, the user might select a first category group, such as group 42 (FIG. 3), and then perform the above-described selection operation to assemble a cluster of related content items. The user could then save that assembled cluster and use it as a basis for searching through a different category group selected from the category groups of FIG. 3.

For a better understanding of the software modules, refer to FIG. 7, where the process flow implemented by modules 118, 120, 122 and 124 has been illustrated. This process flow represents one presently preferred method for remapping the target content and associated content to a new category, as described above. Thus, at step 150 the process determines the selected content position sequentially. This step is performed by module 114 (FIG. 6) when the user selects a target content. Upon selection of a target, module 124 identifies the related content at step 152 and then further ascertains the current position of the related content within the grid 44 (see, for example, FIG. 4a). In this presently preferred embodiment, the individual pieces of selected content are processed sequentially. That is, the process depicted in FIG. 7 is implemented as a loop whereby each item of content is sequentially processed. However, due to the speed of the CPU, the user perceives the individual content items as moving simultaneously as the user drags the target content towards the family space region 50.

At step 154, the system calculates the tensile force (invisible spring force) sequentially for each item of content. In this presently preferred embodiment, the tensile force can be modeled as a spring force according to the formula F=kx, where k is proportional to the degree of relationship between that content and the target content. While a linear relationship is presently preferred, non-linear relationships may be used instead to achieve a different attractive force profile between the target content and the related content.

In accordance with the linear relationship F=kx, when the displacement (x) between the target content and the related content changes upon movement of the target content by the user, the tensile force becomes non-zero and may be calculated by the stated formula. As noted, in this preferred embodiment, each item of content is treated individually and each item may have its own tensile force value, depending on the particular degree of relatedness.

Having calculated the tensile force for the given related content, at step 156 a motion calculation is performed to determine how the related content will move as the target content is moved by the user. Using a physical object analogy, motion of the related content can be calculated using the equation F=ma, where m is a standardized mass (which can be the same value for all pieces of content) and a is the acceleration produced by the force F. Because the mass of all content may be treated as equal, it is seen that the applied force (the tensile force for that piece of content) is proportional to the acceleration produced.

Thus, the motion process determines an acceleration value for each piece of related content. This acceleration value is then used to calculate the motion that the related content will exhibit. Such motion is, of course, a vector quantity. That is, motion of the related content proceeds in a certain direction as dictated by the following motion model implemented by the module 122. In this presently preferred embodiment, the motion model is based on an analogy whereby each item of related content is attracted to the target content by an invisible spring force (the tensile force) between them. Thus, the vector direction of motion of the related content is towards the center of the target content. Accordingly, as the user moves the target content, each item of related content will be attracted to and thus follow the trajectory of the target content, corresponding to step 158.

If desired, in order to give the visual appearance a more realistic “real world” feel, the following motion calculation can include a velocity-sensitive dashpot term that tends to resist instantaneous changes in motion, thereby making the related content appear to move as if immersed in a viscous medium. While not required, this additional velocity-sensitive term makes movement of the related content lag behind movement of the target content. Thus, when the user stops moving the target content, the related content will continue to coast toward their final destinations, those destinations being determined at the points where the tensile force returns to zero or where further movement of the related content is blocked because another piece of content already occupies the space.

In addition to computing the motion of each piece of related content, the process also generates the connecting wires or lines between each item of related content and the target content. This is performed at step 160. Specifically, this step defines a line segment between the centers of the respective thumbnail images. As discussed above, the boldness or color of these line segments can be adjusted based on the degree of relatedness.

After all of the related content has been selected and moved, the collected set of content is then remapped into a new category at step 164. If desired, this step may include prompting the user to provide a category label that is then associated with the items of content. The system permits more than one category tag or label to be associated with each item of content. Thus individual items of content can belong to more than one category, if desired.

In one embodiment, the system includes a mechanism to allow the user to control how many items of related content are “attracted” to the target content during the selection and moving process. As described in connection with FIG. 5, this can be accomplished by providing a touch-enabled control wheel that the user can manipulate to adjust the threshold of which content will be captured and which will not. The control of FIG. 5 works as follows. The control 72 produces a numerical threshold value that changes over a range of values from a low value to a high value as the user rotates the control clockwise or counterclockwise. The value produced by control 72 is then used to set the threshold by which the system determines whether an item of content will be included or not. For example, if the control 72 is manipulated to a high threshold, then only content having a relatedness score of above 75% will be captured. Conversely, if the control is manipulated to a low value, then content having a relatedness score of above 25% will be captured.
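In code, the control might simply map the wheel position onto the score cutoff, as in this sketch (the 25%-75% range mirrors the example above; normalizing the wheel position to [0, 1] is an assumption):

    def captured_items(scores_pct, dial_value):
        # dial_value in [0, 1], clockwise = higher. Map it onto a
        # relatedness cutoff between 25% and 75% and capture only the
        # content scoring above that cutoff.
        cutoff = 25.0 + 50.0 * dial_value
        return [i for i, s in enumerate(scores_pct) if s > cutoff]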

In yet another embodiment, the user is able to control how much content is captured by the speed at which the user moves the target content. This embodiment models an object being pulled across a frictional surface, so that the frictional force acts to oppose movement in the pulling direction. The line or wire representing the tensile force is fragile and can stretch and break if the pulling force becomes too great. Weakly associated content is mapped using a more fragile connection, so that weakly associated content is not selected once its connection breaks. FIG. 8 illustrates how this may be accomplished.

Referring to FIG. 8, the system defines an artificial affinity space (not shown) whereby all items of content are related based on how strongly a target content (or icon) is related to the other content (or icons). The system then establishes a relationship between the affinity value and a physical parameter. For purposes of illustration, the relationship can be a kinematic parameter such as object weight, where content having a low affinity value (e.g., unrelated) is assigned a relatively heavier weight, whereas content having a high affinity value (e.g., highly related) is assigned a lighter weight. This object weight is then mapped to the display space 202, where displayed objects appear to move as the target content TD is pulled in a certain direction by the user. Objects having a mapped heavy weight will move more slowly, or not at all, based on a predetermined threshold friction assigned to the “surface” upon which the displayed objects sit in display space. Conversely, objects with lighter weight will move more freely, following the general trajectory of the target content as it moves.

Alternatively, the affinity space can map the affinity value to a tensile force parameter. Weakly related (e.g., unrelated) content objects are assigned a weak tensile force, whereas strongly related content objects are assigned a stronger tensile force. This tensile force is then mapped to the display space 202 so that the more strongly related the content, the more strongly it is attracted to the target content as it is pulled by the user.

As yet another alternative, the affinity space can map the affinity value to a fragility value corresponding to how strong the affinity relationship is. When the fragility value is mapped to display space, objects are connected to the target object through a link (shown as a connecting line in display space) having a strength based on the fragility value. Links with low fragility value break as the target object is pulled. In this way, unrelated or weakly related content severs its relationship with the target content and does not follow the target content as the target is pulled by the user. For example, the link represented by force F4 may represent a comparatively fragile link, depicted by a thin connecting line. This fragile link will break based on how the target content is moved.

It should be understood that the embodiments depicted in FIG. 8 are intended to illustrate a few examples of how kinematic motion behavior can be mapped onto the otherwise unrelated problem of how to select and display related content. Other models can be used instead.

To further explain the relationship in FIG. 8, refer now to the flow diagram in FIG. 9. Beginning in display space 202, the user selects an object representing a target content at 206. Selection of this object causes the system in affinity space 200 to identify related objects at 208. The user then moves the selected object at 210 in display space. The speed at which the user moves the selected object is captured and used in affinity space at step 212, where the system determines which objects will follow for a given movement speed. Step 212 thus corresponds to the setting of the escape velocity threshold in FIG. 8. The system then, in step 214, assigns a tensile force to each object, based on the mapped parameter (e.g., object weight, tensile force, link material, etc.). The assigned tensile forces Fn are then supplied back to display space, where they are used to cause related objects to move under their respective assigned tensile forces at 216.

In the embodiment illustrated in FIGS. 4a, 4b and 8, each of the content elements was directly attracted to the target content. Variations of this basic concept are possible. Thus, as shown in FIGS. 14a, 14b and 15, content elements may be organized according to a tree structure, in which certain elements are directly attracted to the target content, whereas other elements are attracted as children, grandchildren, etc., of the directly attracted content.

Referring to FIGS. 14a and 14b, the user has selected element 46 (46a, 46b) as the target, moving it along the trajectory 70 as illustrated in FIG. 14b. With reference to FIG. 14a, note that elements 52a, 54a and 56a are directly linked as having a strong affinity with target content 46. These linked elements, in turn, have affinities for other elements, thereby defining a parent-child-grandchild tree structure relationship. For example, element 52a has an affinity with element 58a, which, in turn, has an affinity with element 64a. Thus, when the target content 46b is moved along trajectory 70, as shown in FIG. 14b, the child content 58b and grandchild content 64b of element 52b are attracted as well.

FIG. 15 illustrates how the embodiment of FIGS. 14a and 14b operates. As shown in the display space 202 in FIG. 15, the items of content are capable of being attracted to one another (or pulled by one another) and are thus captured by invisible tensile forces between one another. Thus, content element Am is attracted to element Ao by force F4, and element An is attracted to element Ao by force F5. In other words, the attractive force acts between elements that are closest in proximity to one another in display space, not necessarily between each element and the target content Td directly.

Computation of each individual force may be performed in this embodiment using essentially the same computational process as used for the embodiment of FIG. 8, with the exception that in FIG. 8 all items are attracted to the same target TD, whereas in the present case of FIG. 15, the target for each content element is the parent of that element. Thus the force calculation can be performed recursively, following the tree structure.
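The recursion can be sketched as follows, with the tree represented as a dictionary mapping each element to its children (the representation and names are our assumptions for illustration):

    import numpy as np

    def apply_tree_forces(tree, node, parent_pos, k_of, pos, out):
        # Each element is pulled toward its parent rather than directly
        # toward the dragged target, so motion propagates down the tree:
        # parent -> child -> grandchild. pos holds np.array positions.
        out[node] = k_of[node] * (parent_pos - pos[node])
        for child in tree.get(node, ()):
            apply_tree_forces(tree, child, pos[node], k_of, pos, out)

    # Usage: start the recursion at the dragged target's direct children.
    #   forces = {}
    #   for child in tree["T"]:
    #       apply_tree_forces(tree, child, pos["T"], k_of, pos, forces)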

If desired, a control mechanism 72 may be included with the embodiment of FIGS. 14a and 14b. This is illustrated in FIG. 16.

Example Use Cases

Having now explained the basic principles of the technology, some examples of the technology in use will be presented with reference to FIGS. 10a-10d, 11a-11d, 12a-12d and 13a-13d. Referring first to FIGS. 10a-10d, there is shown a basic screen transition example whereby the user selects content (FIG. 10a), drags the selected content thereby attracting related content (FIG. 10b), maps the related content into the Family Space region (FIG. 10c) and then associates a label [Fido] with the gathered content (FIG. 10d). The associated content, having been labeled, is now available for display as a new category group.

FIGS. 11a-11d show a representative use case, similar to that of FIGS. 10a-10d, but where the user specifically selects the portion of the photograph in FIG. 11b depicting a dog. In other words, the user selects a portion of the image, such as the dog, and that portion is used as a basis for a relatedness query to find other images depicting that dog.

FIGS. 12a-12d illustrate a further use case, similar to that of FIGS. 11a-11d, but illustrating that the user can create new categories based on different selected content within a given image. Thus in FIG. 12a the user selects Mt. Fuji in the image and then pulls related images containing Mt. Fuji into the Family Space at FIG. 12b. Similarly, in FIG. 12c, the user starts with the same photograph as FIG. 12a, but this time selects the blooming cherry blossoms and uses that selected content to pull related images containing blooming cherry blossoms.

FIGS. 13a-13d illustrate how it is possible to create more complex categories, based on previously created categories. Thus as illustrated in FIG. 13b, the user selects one of the images within a previously defined category. The selected image is displayed as an enlarged image in the window to the left. Then, as depicted in FIG. 13c, the user selects one of the persons in the photograph and pulls a new category containing the selected person into a family space. Note that the family space now contains both the original category cluster and the newly created one. These two category clusters may then be joined to define a composite cluster if desired.

The foregoing description of the embodiments has been provided for purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure. Individual elements or features of a particular embodiment are generally not limited to that particular embodiment, but, where applicable, are interchangeable and can be used in a selected embodiment, even if not specifically shown or described. The same may also be varied in many ways. Such variations are not to be regarded as a departure from the disclosure, and all such modifications are intended to be included within the scope of the disclosure.

Example embodiments are provided so that this disclosure will be thorough, and will fully convey the scope to those who are skilled in the art. Numerous specific details are set forth such as examples of specific components, devices, and methods, to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms and that neither should be construed to limit the scope of the disclosure. In some example embodiments, well-known processes, well-known device structures, and well-known technologies are not described in detail.

Claims

1. A method comprising:

displaying items of content as individual graphical images organized according to a first spatial arrangement upon a display screen;
upon selection of a target content from the displayed items of content, employing a processor to identify items of related content in connection to the target content according to relatedness of each item of related content to the target content;
associating a physical parameter with each item of related content based on the relatedness thereof; and
upon movement of the target content, displaying the items of related content to move and follow the target content on the display screen, where each item of related content proceeds as if it were attracted to the target content by a force characterized by the physical parameter thereof.

2. The method of claim 1 wherein the step of displaying the items of related content employs a computer-generated following algorithm that simulates motion of an attractive force between an item of related content and the target content.

3. The method of claim 1 wherein the step of displaying the items of related content employs a computer-generated following algorithm that simulates a spring force that pulls the item of related content towards the target content.

4. The method of claim 3 wherein the step of causing items of related content to move employs a computer-generated following algorithm that further simulates a frictional force that retards movement effected by said spring force.

5. The method of claim 1 further comprising using the processor to generate and display connecting lines between the target content and each item of related content, where the lines are rendered upon the display screen using visually perceptible features to denote different degrees of relatedness, based on the relatedness score associated with each item of related content.

6. The method of claim 1 further including employing user-controllable means for defining a threshold below which items of content are deemed not related.

7. The method of claim 6 wherein said user-controllable means is a graphically displayed, processor generated control that the user manipulates to manually set the threshold.

8. The method of claim 6 wherein said user-controllable means is a processor generated threshold based upon the speed by which the user moves the target content.

9. The method of claim 1 further comprising responding to user entry of a tag whereby the displayed items of related content are associated with said tag and organized as a category group.

10. A method using a processor with associated electronic display screen to gather and organize items of content comprising:

displaying the items of content as individual graphical images organized according to a first spatial arrangement upon the electronic display screen;
responding to user selection of a target content, selected from said displayed items of content, by employing a processor to identify items of related content according to a predefined relatedness metric and associating a relatedness score with each item of related content;
associating a tensile force with each item of related content based upon that item's relatedness score;
responding to user movement of the target content by causing items of related content to move and follow the target content, where movement of each item of related content proceeds as if attracted to the target content by a force equal to that item's associated tensile force;
displaying the items of related content upon the electronic display screen as a spatial grouping around the target content whereby items of related content having a greater tensile force are positioned generally closer to the target content than related content having a lesser tensile force.

11. The method of claim 10 wherein the step of causing items of related content to move employs a computer-generated following algorithm that simulates motion of an attractive force between an item of related content and the target content.

12. The method of claim 10 wherein the step of causing items of related content to move employs a computer-generated following algorithm that simulates a spring force that pulls the item of related content towards the target content.

13. The method of claim 12 wherein the step of causing items of related content to move employs a computer-generated following algorithm that further simulates a velocity-sensitive dashpot force that retards movement effected by said spring force.

14. The method of claim 10 further comprising using the processor to generate and display connecting lines between the target content and each item of related content, where the lines are rendered upon the electronic display screen using visually perceptible features to denote different degrees of relatedness, based on the relatedness score associated with each item of related content.

15. The method of claim 10 further including employing user-controllable means for defining a threshold below which items of content are deemed not related.

16. The method of claim 15 wherein said user-controllable means is a graphically displayed, processor generated control that the user manipulates to manually set the threshold.

17. The method of claim 15 wherein said user-controllable means is a processor generated threshold based upon the speed by which the user moves the target content.

18. The method of claim 10 further comprising responding to user entry of a tag whereby the displayed items of related content are associated with said tag and organized as a category group.

19. A method for categorizing content using a graphical user interface, comprising:

displaying a plurality of icons in a content selection area of a display, each icon representing a selectable content object;
visually depicting movement of a selected icon from the content selection area to a grouping area of the display that is spatially distinct from the content selection area, where the selected icon was selected by a user from the plurality of icons;
calculating a correlation metric for each non-selected icon in the plurality of icons, where the correlation metric quantifies a correlation between the selected icon and the non-selected icon;
selecting a subset of the plurality of the icons having a correlation metric that exceeds a threshold;
visually depicting movement of icons in the subset of icons from the content selection area to the grouping area of the display, where movement of the icons in the subset of icons is coordinated with movement of the selected icon; and
providing the user with a perceptible indicator of the correlation metric for each of the icons in the subset of icons while visually depicting movement of the icons in the subset of icons.

20. The method of claim 19 wherein calculating a correlation metric further comprises determining a spatial distance on the display between the selected icon and each of the non-selected icons and calculating the correlation metric for each non-selected icon using the distance between the selected icon and the non-selected icon.

21. The method of claim 19 further comprises moving the subset of icons based on a tensile force between each of the subset and the selected icon, where the tensile force is a function of the correlation metric.

22. The method of claim 19 further comprises displaying a visible connection between the selected icon and each of the icons in the subset of icons while visually depicting movement of the icons in the subset of icons.

23. The method of claim 19 further comprises displaying a line from the selected icon and to each of the icons in the subset of icons, where at least one of width or brightness of the line to a given icon in the subset of icons is based on the correlation metric for the given icon, thereby providing the user with a perceptible indicator.

24. The method of claim 19 further comprises visually depicting movement of icons in the subset of icons as following the selected icon from the content selection area to the grouping area.

25. The method of claim 24 further comprises setting velocity at which a given icon in the subset of icons moves based on the correlation metric for the given icon, thereby providing the user with a perceptible indicator.

26. The method of claim 19 further comprises adjusting a value of the threshold in accordance with input from the user.

27. The method of claim 19 further comprises displaying a given icon in the subset of icons spatially in relation to the selected icon in the grouping area in accordance with the correlation metric for the given icon.

Patent History
Publication number: 20120272171
Type: Application
Filed: Apr 21, 2011
Publication Date: Oct 25, 2012
Applicant: PANASONIC CORPORATION (Osaka)
Inventors: Keiji Icho (Osaka), Ryouichi Kawanishi (Kyoto)
Application Number: 13/091,620
Classifications
Current U.S. Class: Instrumentation And Component Modeling (e.g., Interactive Control Panel, Virtual Device) (715/771)
International Classification: G06F 3/048 (20060101);