CONTEXT-BASED CONTENT RECOMMENDATIONS

Various implementations provide one or more recommendations for content, for example, to a user, based on one or more context categories. In one particular implementation, an ordered set of options is provided for a context category related to content selection. The ordered set of options for the context category is ordered based on a previously determined option for one or more other context categories. An ordered set of options is provided for one or more additional context categories related to content selection. The ordered set of options for the one or more additional context categories is ordered based on an identification of an option from the provided options for the context category. In various implementations, a user provides a selection for the one or more other context categories, the context category, and/or the one or more additional context categories.

CROSS REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. provisional application No. 61/707,077, filed Sep. 28, 2012, and titled “Context-based Content Recommendations”, the contents of which are hereby incorporated by reference herein for all purposes.

TECHNICAL FIELD

Implementations are described that relate to providing recommendations. Various particular implementations relate to providing context-based recommendations for various forms of content to be consumed by a user.

BACKGROUND

Home entertainment systems, including television and media centers, are converging with the Internet and providing access to a large number of available sources of content, such as video, movies, TV programs, music, etc. This expansion in the number of available sources necessitates a new strategy for navigating a media interface associated with such systems and making content recommendations and selections.

The large number of possible content sources creates an interface challenge that has not yet been successfully solved in the field of home media entertainment. This challenge involves successfully presenting users with a large number of elements (programs, sources, etc.) without the need to tediously navigate through multiple display pages or hierarchies of content.

Further, most existing search paradigms assume that the user knows what they are looking for when they start, whereas an alternate mechanism is often more desirable or appropriate. One approach for allowing a process of discovery and cross linkage is the use of ratings. Under this approach, a user rates content and a recommendation engine recommends additional content related to the rated content. For example, if a user gives an action movie a five-star rating and a horror movie a one-star rating, a conventional recommendation engine is likely to recommend other action movies to the user rather than other horror movies. A drawback to this approach is that recommendations tend to be skewed toward particular movie genres until the user has built a large enough rating database spanning multiple movie genres (for example, action, horror, romance, etc.). Another drawback is that even if a user creates a large rating database, the recommendations may still be inaccurate or non-relevant because the rating information may have been inaccurately collected from the user. For example, if a user rates the first five horror movies presented for rating as one-star movies, the conventional recommendation engine may stop recommending horror movies to the user. However, the user may simply not have liked those first five horror movies and may actually desire to have other horror movies brought to his or her attention.

SUMMARY

According to a general aspect, an ordered set of options is provided for a context category related to content selection. The ordered set of options for the context category is ordered based on a previously determined option for one or more other context categories. An ordered set of options is provided for one or more additional context categories related to content selection. The ordered set of options for the one or more additional context categories is ordered based on an identification of an option from the provided options for the context category.

The details of one or more implementations are set forth in the accompanying drawings and the description below. Even if described in one particular manner, it should be clear that implementations may be configured or embodied in various manners. For example, an implementation may be performed as a method, or embodied as an apparatus, such as, for example, an apparatus configured to perform a set of operations or an apparatus storing instructions for performing a set of operations, or embodied in a signal. Other aspects and features will become apparent from the following detailed description considered in conjunction with the accompanying drawings and the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 provides a block diagram depicting an implementation of a system for delivering video content.

FIG. 2 provides a block diagram depicting an implementation of a set-top box/digital video recorder (DVR).

FIG. 3 provides a pictorial representation of a perspective view of an implementation of a remote controller, tablet, and/or second screen device.

FIGS. 4-11 provide screen shots of an implementation for recommending content based on context.

FIGS. 12-15 provide flow diagrams of various process implementations for recommending content based on context.

FIG. 16 provides a block diagram of an implementation of a system or apparatus for recommending content based on context.

DETAILED DESCRIPTION

The inventor has determined various manners in which, for example, a user interface to a content recommendation system can be more helpful. One implementation provides a movie recommendation and discovery engine that takes the user's context into account by getting information about the current day of the week, time of the day, audience or companion(s), and desired content (for example, movie) genre. Based on this information, the system recommends a set of movies that suits the given context. One or more implementations provide a way to take the user's context into account when recommending movies to watch.

The definition of context can vary depending on the content that is to be consumed. “Consuming” content has the well-known meaning of experiencing the content by, for example, watching or listening to the content. For different content, certain aspects of the context are more, or less, important. For example, activity and location are typically not as relevant when considering a movie to recommend as when considering music to recommend.

Various implementations build context categories dynamically and/or automatically, while other implementations rely more on manually built context categories. A context category refers generally to a set of values that can be selected as context. More particularly, a context category often represents a common variable, and includes a set of alternative values for that variable. For example, “Time” is a common variable, and “Friday Night”, and “Saturday Morning” belong to the “Time” category, and may be chosen as values for that variable. As another example, “Genre” is another common variable, and “Action” and “Drama” are possible alternative values for the “Genre” variable.
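As an illustration only, and not a description of any particular implementation, a context category might be modeled as a named variable carrying an ordered list of its alternative values. The class and field names below (ContextCategory, name, options) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ContextCategory:
    """A common variable (for example, "Time" or "Genre") together with its
    alternative values, kept in rank order (most recommended first)."""
    name: str
    options: list[str]

# Categories built from the example values mentioned above.
time_category = ContextCategory("Time", ["Friday Night", "Saturday Morning"])
genre_category = ContextCategory("Genre", ["Action", "Drama"])

print(time_category.options[0])  # "Friday Night" is the top-ranked option
```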

Dynamically building categories means that the categories are built based on the user input. Because the user input is dynamic, the building of the context categories is dynamic. For example, if a user selects "Friday Night with Friends", the category "Genre" will be built algorithmically at runtime, based on that selection. "Building" the context category "Genre" refers to determining which values (elements) to include in the category "Genre", and how to rank-order those values.

In various implementations, the context categories are built automatically. This means that, aside from the user providing input such as, for example, day, time, and companions, there is at least primarily no user intervention in creating the categories. Rather, in a purely automatic system, all of the decisions are made by algorithms, not by people. For example, recommending "Action" and "Drama" movies (this is building the context category of "Genre") on "Friday Night with Friends" was not a decision made by humans directly. The decision is based on data (for example, from previous user studies) and algorithms.
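The following sketch is only an assumption about how such dynamic and automatic building might be expressed in code; it uses a precomputed score table standing in for data from previous user studies. The table contents, scores, and the function name build_genre_category are illustrative.

```python
# Hypothetical scores derived from prior user-study data:
# (day/time, companions) -> {genre: recommendation score}.
GENRE_SCORES = {
    ("Friday Night", "Friends"): {"Action": 0.82, "Drama": 0.67, "Comedy": 0.41},
    ("Friday Night", "Partner"): {"Science Fiction": 0.78, "Fantasy": 0.63},
}

def build_genre_category(day_time: str, companions: str) -> list[str]:
    """Build the "Genre" context category at runtime: decide which genres to
    include and how to rank-order them, based on the user's earlier selections."""
    scores = GENRE_SCORES.get((day_time, companions), {})
    return sorted(scores, key=scores.get, reverse=True)

print(build_genre_category("Friday Night", "Friends"))
# ['Action', 'Drama', 'Comedy'] -- decided by data and the algorithm, not by a person
```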

Other implementations build context categories manually by, for example, paying specialists to decide that “On Friday Nights with Friends”, a user should be watching “Action” and “Drama”.

The terms “audience” and “companions” are generally used interchangeably in this application to refer to the set of people consuming the content. However, in other implementations, the terms “audience” and “companions” can refer to distinct context categories. Some values for this context category in various implementations discussed in this application may, strictly speaking, refer to companions and not include the user (for example, “Partner”). Other values may refer to the entire audience including the user or might not include the user (for example, “Family&Kids” or “Friends” may or may not include the user). However, in typical implementations, the content (for example, movie) recommendations are indeed based on the entire audience including the user, whether or not the value for the “audience” or “companions” context category specifically includes the user.

One or more implementations include two main parts: a movie selection system and a user interface. For one or more particular implementations, each will be described below, with reference to the figures. Variations of the movie selection and the user interface are contemplated.

Movie Selection

In this phase, we find which movies can be considered to be the best to watch in a given context. A variety of implementations exist, many of which are dynamic and/or automatic, in whole or in part. Various implementations use the following process:

1. We ask a group of people, given a certain context and a limited set of movies, to decide which movies they find appropriate for the given context. In one example, the given context is that it is Friday night, and the individual (each individual answers independently) is with his/her friends. The individuals in the group are each asked if they would watch, for example, the movie “The Dark Knight”. This provides, for example, a selection of movies for each of several different day/time contexts.

2. From the data acquired in step 1, we are provided data describing which specific movies people watch in a specific context (for example, a day/time/companions context). In order to make this information useful, and to help users navigate this information, we build upon this information. In various implementations, we build upon this information by aggregating the selected movies by their genres, and by performing statistical tests to identify which genres have the best average rating for a certain context (see the sketch following step 3). For example, if a large part of the users said that they would watch "Inception" and "Signs" on a "Friday Night with their Partner", then "Science Fiction" would be selected as a recommended genre for that context. Other implementations use a variety of tools and techniques to build upon the information from step 1.

3. In the previous step (step 2), we identified which genres are recommended for each context. In this step (step 3), we identify which movies we should recommend to the user (and the rank-order of those movies) for each combination of context and genre. Using the data from step 1, we gather the top-rated movies in the given context that belong to the desired genre (from step 2). If the resulting list is smaller than desired, we add to the list (bootstrap the list) by gathering additional movies that are the most similar to the ones already in the list. For example, if “Finding Nemo” is a good movie to watch in a given context, then it is likely that “Cars” and “Toy Story” will also be good movies (assuming, for example, that the genre is animation) for that context. A variety of tools and techniques are available for use in performing the gathering (bootstrapping) of additional movie titles for a given context and genre. Such tools and techniques include, for example, categorizations, ratings, and reviews of movies.
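A minimal sketch of how steps 2 and 3 might be realized in code is provided below. The survey records, the similarity callback most_similar, and all names are illustrative assumptions rather than the claimed implementation.

```python
from collections import defaultdict

# Illustrative step-1 output: (context, movie, rating, genre) records from a user study.
SURVEY = [
    ("Friday Night with Partner", "Inception", 4.6, "Science Fiction"),
    ("Friday Night with Partner", "Signs", 4.1, "Science Fiction"),
    ("Friday Night with Partner", "The Notebook", 3.2, "Drama"),
]

def genre_scores(context):
    """Step 2: aggregate the selected movies by genre and average their ratings,
    yielding a recommendation score for each genre in the given context."""
    totals, counts = defaultdict(float), defaultdict(int)
    for ctx, _movie, rating, genre in SURVEY:
        if ctx == context:
            totals[genre] += rating
            counts[genre] += 1
    return {genre: totals[genre] / counts[genre] for genre in totals}

def recommend_movies(context, genre, want, most_similar):
    """Step 3: take the top-rated surveyed movies for the context and genre, then
    bootstrap the list with the most similar additional titles if it is too short."""
    rated = sorted(((r, m) for c, m, r, g in SURVEY if c == context and g == genre),
                   reverse=True)
    picks = [movie for _rating, movie in rated]
    while len(picks) < want:
        picks.append(most_similar(picks))  # for example, "Finding Nemo" -> "Cars"
    return picks[:want]

scores = genre_scores("Friday Night with Partner")
best_genre = max(scores, key=scores.get)  # "Science Fiction"
print(recommend_movies("Friday Night with Partner", best_genre, 3,
                       most_similar=lambda picks: "Moon"))
```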

User Interface

The user interface of various implementations allows users to intuitively navigate through the results from the previous phase and to find movies that are recommended for their current context. One implementation of the process is as follows:

1. The system automatically recognizes the current day of the week and time of the day and presents a list of companions (Alone, Friends, Family&Kids, Partner) ordered by their expected frequency (most expected companions first) (see, for example, FIGS. 4 and 6). Users are able to change the day and/or time information (see, for example, FIG. 5).

2. After selecting the time (see, for example, FIG. 5) and the companion(s) (see, for example, FIG. 6), users can choose among the most recommended genres of movies for the given context (see, for example, FIGS. 7 and 8). Genres are ordered by their recommendation score (calculated, for example, in step 2 of the Movie Selection phase). Recall that in step 2 of the Movie Selection phase we identified the top genres and, because we average the ratings of the movies belonging to each genre, we have a score that can be used to sort the list.

3. After selecting the desired genre, a list of recommended movies is presented (determined, for example, in step 3 of the Movie Selection phase) (see, for example, FIGS. 9 and 11). Users can get more information about the movie, such as title, poster, description, genres, and links to external sources (see, for example, FIG. 10). Such external sources include, for example, IMDB (http://www.imdb.com/), Amazon (http://www.amazon.com/), Netflix (https://www.netflix.com/), and AllMovie (http://www.allmovie.com/).

4. At any time, users can navigate back and change their previous selections (see, for example, FIG. 11).

FIGS. 1-3 provide an implementation of a system and environment in which movie recommendations can be provided. Other systems and environments are envisioned, and the examples associated with FIGS. 1-3 are not intended to be exhaustive or restrictive.

Referring to FIG. 1, a block diagram of an embodiment of a system 100 for delivering content to a home or end user is shown. The content originates from a content source 102, such as a movie studio or production house. The content may be supplied in at least one of two forms. One form may be a broadcast form of content. The broadcast content is provided to the broadcast affiliate manager 104, which is typically a national broadcast service, such as the American Broadcasting Company (ABC), National Broadcasting Company (NBC), Columbia Broadcasting System (CBS), etc. The broadcast affiliate manager may collect and store the content, and may schedule delivery of the content over a delivery network, shown as delivery network 1 (106). Delivery network 1 (106) may include satellite link transmission from a national center to one or more regional or local centers. Delivery network 1 (106) may also include local content delivery using local delivery systems such as over the air broadcast, satellite broadcast, or cable broadcast. The locally delivered content is provided to a receiving device 108 in a user's home, where the content will subsequently be searched by the user. It is to be appreciated that the receiving device 108 can take many forms and may be embodied as a set top box/digital video recorder (DVR), a gateway, a modem, etc. Further, the receiving device 108 may act as an entry point, or gateway, for a home network system that includes additional devices configured as either client or peer devices in the home network.

A second form of content is referred to as special content. Special content may include content delivered as premium viewing, pay-per-view, or other content otherwise not provided to the broadcast affiliate manager, for example, movies, video games, or other video elements. The special content can originate from the same, or from a different, content source (for example, content source 102) as the broadcast content provided to the broadcast affiliate manager 104. In many cases, the special content may be content requested by the user. The special content may be delivered to a content manager 110. The content manager 110 may be a service provider, such as an Internet website, affiliated, for instance, with a content provider, broadcast service, or delivery network service. The content manager 110 may also incorporate Internet content into the delivery system. The content manager 110 may deliver the content to the user's receiving device 108 over a separate delivery network, delivery network 2 (112). Delivery network 2 (112) may include high-speed broadband Internet type communications systems. It is important to note that the content from the broadcast affiliate manager 104 may also be delivered using all or parts of delivery network 2 (112) and content from the content manager 110 may be delivered using all or parts of delivery network 1 (106). In addition, the user may also obtain content directly from the Internet via delivery network 2 (112) without necessarily having the content managed by the content manager 110.

Several adaptations for utilizing the separately delivered content may be possible. In one possible approach, the special content is provided as an augmentation to the broadcast content, providing alternative displays, purchase and merchandising options, enhancement material, etc. In another embodiment, the special content may completely replace some programming content provided as broadcast content. Finally, the special content may be completely separate from the broadcast content, and may simply be a media alternative that the user may choose to utilize. For instance, the special content may be a library of movies that are not yet available as broadcast content.

The receiving device 108 may receive different types of content from one or both of delivery network 1 and delivery network 2. The receiving device 108 processes the content, and provides a separation of the content based on user preferences and commands. The receiving device 108 may also include a storage device, such as a hard drive or optical disk drive, for recording and playing back audio and video content. Further details of the operation of the receiving device 108 and features associated with playing back stored content will be described below in relation to FIG. 2. The processed content (at least for video content) is provided to a display device 114. The display device 114 may be a conventional 2-D type display or may alternatively be an advanced 3-D display.

The receiving device 108 may also be interfaced to a second screen such as a touch screen control device 116. The touch screen control device 116 may be adapted to provide user control for the receiving device 108 and/or the display device 114. The touch screen control device 116 may also be capable of displaying video content. The video content may be graphics entries, such as user interface entries (as discussed below), or may be a portion of the video content that is delivered to the display device 114. The touch screen control device 116 may interface to receiving device 108 using any well known signal transmission system, such as infra-red (IR) or radio frequency (RF) communications and may include standard protocols such as infra-red data association (IRDA) standard, Wi-Fi, Bluetooth and the like, or any proprietary protocol. Operations of touch screen control device 116 will be described in further detail below.

In the example of FIG. 1, the system 100 also includes a back end server 118 and a usage database 120. The back end server 118 includes a personalization engine that analyzes the usage habits of a user and makes recommendations based on those usage habits. The usage database 120 is where the usage habits for a user are stored. In some cases, the usage database 120 may be part of the back end server 118. In the present example, the back end server 118 (as well as the usage database 120) is connected to the system 100 and accessed through the delivery network 2 (112).

Referring to FIG. 2, a block diagram of an embodiment of a receiving device 200 is shown. Receiving device 200 may operate similarly to the receiving device described in FIG. 1 and may be included, for example, as part of a gateway device, modem, set-top box, or other similar communications device. The device 200 shown may also be incorporated into other systems including an audio device or a display device. In either case, several components necessary for complete operation of the system are not shown in the interest of conciseness, as they are well known to those skilled in the art.

In the device 200 shown in FIG. 2, the content is received by an input signal receiver 202. The input signal receiver 202 may be one of several known receiver circuits used for receiving, demodulation, and decoding signals provided over one of the several possible networks including over the air, cable, satellite, Ethernet, fiber, and phone line networks. The desired input signal may be selected and retrieved by the input signal receiver 202 based on user input provided through a control interface or touch panel interface 222. Touch panel interface 222 may include an interface for a touch screen device. Touch panel interface 222 may also be adapted to interface to a cellular phone, a tablet, a mouse, a high end remote or the like.

The decoded output signal is provided to an input stream processor 204. The input stream processor 204 performs the final signal selection and processing, and includes separation of video content from audio content for the content stream. The audio content is provided to an audio processor 206 for conversion from the received format, such as compressed digital signal, to an analog waveform signal. The analog waveform signal is provided to an audio interface 208 and further to the display device or audio amplifier. Alternatively, the audio interface 208 may provide a digital signal to an audio output device or display device using a High-Definition Multimedia Interface (HDMI) cable or alternate audio interface such as via a Sony/Philips Digital Interconnect Format (SPDIF). The audio interface may also include amplifiers for driving one or more sets of speakers. The audio processor 206 also performs any necessary conversion for the storage of the audio signals.

The video output from the input stream processor 204 is provided to a video processor 210. The video signal may be one of several formats. The video processor 210 provides, as necessary, a conversion of the video content based on the input signal format. The video processor 210 also performs any necessary conversion for the storage of the video signals.

A storage device 212 stores audio and video content received at the input. The storage device 212 allows later retrieval and playback of the content under the control of a controller 214 and also based on commands, for example, navigation instructions such as fast-forward (FF) and rewind (Rew), received from a user interface 216 and/or touch panel interface 222. The storage device 212 may be a hard disk drive, one or more large capacity integrated electronic memories, such as static RAM (SRAM), or dynamic RAM (DRAM), or may be an interchangeable optical disk storage system such as a compact disk (CD) drive or digital video disk (DVD) drive.

The converted video signal, from the video processor 210, either originating from the input or from the storage device 212, is provided to the display interface 218. The display interface 218 further provides the display signal to a display device of the type described above. The display interface 218 may be an analog signal interface such as red-green-blue (RGB) or may be a digital interface such as HDMI. It is to be appreciated that the display interface 218 will generate the various screens for presenting the search results (for example, as described in more detail below with respect to FIGS. 4-11).

The controller 214 is interconnected via a bus to several of the components of the device 200, including the input stream processor 204, audio processor 206, video processor 210, storage device 212, the touch panel interface 222, and the user interface 216. The controller 214 manages the conversion process for converting the input stream signal into a signal for storage on the storage device or for display. The controller 214 also manages the retrieval and playback of stored content. Furthermore, as will be described below, the controller 214 performs searching of content and the creation and adjusting of the displays representing the context and/or the content, for example, as described below with respect to FIGS. 4-11.

The controller 214 is further coupled to control memory 220 (for example, volatile or non-volatile memory, including RAM, SRAM, DRAM, ROM, programmable ROM (PROM), flash memory, electronically programmable ROM (EPROM), electronically erasable programmable ROM (EEPROM), etc.) for storing information and instruction code for controller 214. Control memory 220 may store instructions for controller 214. Control memory may also store a database of elements, such as graphic elements representing context values or content. The database may be stored as a pattern of graphic elements, such as graphic elements containing content, various graphic elements used for generating a displayable user interface for display interface 218, and the like. Alternatively, the memory may store the graphic elements in identified or grouped memory locations and use an access or location table to identify the memory locations for the various portions of information related to the graphic elements. Additional details related to the storage of the graphic elements will be described below. Further, the implementation of the control memory 220 may include several possible embodiments, such as a single memory device or, alternatively, more than one memory circuit communicatively connected or coupled together to form a shared or common memory. Still further, the memory may be included with other circuitry, such as portions of bus communications circuitry, in a larger circuit.

Referring to FIG. 3, the user interface process of various implementations employs an input device that can be used to provide input, including, for example, selection of day, time, audience, and/or genre. To allow for this, a tablet or touch panel device 300 (which is, for example, the same as the touch screen control device 116 shown in FIG. 1 and/or is an integrated example of receiving device 108 and touch screen control device 116) may be interfaced via the user interface 216 and/or touch panel interface 222 of the receiving device 200. The touch panel device 300 allows operation of the receiving device or set top box based on hand movements, or gestures, and actions translated through the panel into commands for the set top box or other control device.

In one embodiment, the touch panel device 300 may simply serve as a navigational tool to navigate the display (for example, a navigational tool to navigate a display of context options and movie recommendations that is displayed on a TV). In other embodiments, the touch panel device 300 will additionally serve as the display device allowing the user to more directly interact with the navigation through the display of content. The touch panel device 300 may be included as part of a remote control device containing more conventional control functions such as activator and/or actuator buttons. The touch panel device 300 can also include at least one camera element. Note that various implementations employ a large screen TV for the display of, for example, context options and movie recommendations, and employ a user input device similar to a remote control to allow a user to navigate through the display.

Referring to FIG. 4, a screen shot 400 is shown. In the screen shot 400, the system has automatically detected a current day of the week 402 and a current time of day (or at least a current portion of the day, such as, for example, morning, afternoon, or evening) 404. The implementation then provides a rank-ordered list 410 of possible companions (that is, an audience for the content). The ordered list 410 of the screen shot 400 includes an “Alone” option 412 (a “face” icon), a “Family&Kids” option 414 (an icon with two people on the left and one person on the right), a “Partner” option 416 (an icon of two interlocked rings), and a “Friends” option 418 (an icon of two glasses toasting each other, with a star at the point of contact between the glasses). The companion list 410 is ordered (also referred to as sorted) by the expected frequencies of the possible companions 412-418 on the indicated day (Wednesday) 402 and at the indicated time 404 (afternoon). The companion list 410 is ordered from left to right in decreasing order of expected frequency. For example, on Wednesday afternoon, the system believes that it is most likely that the user will be watching the movie alone 412. The next most likely companions, in order from greatest likelihood to least likelihood, are “Family&Kids” 414, “Partner” 416, and “Friends” 418. As with other recommendations and ordering, the frequency or likelihood can be based on, for example, observed habits of the user that have been tracked, on a profile provided by the user, and/or on objective information provided for other users or groups of people.
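As a hypothetical sketch, the detection of the current day and portion of the day and the frequency-based ordering of companions might be expressed as follows. The hour cut-offs and the expected-frequency table are assumptions for illustration only.

```python
from datetime import datetime

# Hypothetical expected-frequency data: (day, part of day) -> companions, most likely first.
COMPANION_ORDER = {
    ("Wednesday", "Afternoon"): ["Alone", "Family&Kids", "Partner", "Friends"],
    ("Friday", "Night"): ["Friends", "Partner", "Alone", "Family&Kids"],
}

def detect_day_and_time(now=None):
    """Recognize the current day of the week and a coarse portion of the day."""
    now = now or datetime.now()
    part = ("Morning" if now.hour < 12 else
            "Afternoon" if now.hour < 18 else
            "Evening" if now.hour < 21 else
            "Night")
    return now.strftime("%A"), part

def companion_options(day, part):
    """Return the companion options ordered by expected frequency for the day/time,
    falling back to a default order when no data is available for that pair."""
    return COMPANION_ORDER.get((day, part), ["Alone", "Partner", "Family&Kids", "Friends"])

day, part = detect_day_and_time(datetime(2012, 9, 26, 15, 0))  # a Wednesday afternoon
print(day, part, companion_options(day, part))
# Wednesday Afternoon ['Alone', 'Family&Kids', 'Partner', 'Friends']
```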

Referring to FIG. 5, a screen shot is shown in which the user can change the indicated day and/or time. The prompt can be provided automatically or in response to a number of actions, including, for example, (i) the user hovering over the day and/or time fields with an input device such as, for example, a mouse or a finger, and/or (ii) the user selecting the day and/or time fields with an input device.

The screen shot of FIG. 5 includes a window 510 overlaying the screen shot 400. In various implementations, the overlaid screen shot 400 that is layered under the window 510 is shaded. The window 510 includes an indicator of the selected day 512 (shown as “Friday”), an indicator of the selected time 514 (shown as “Night”), and various controls. Two controls are provided for setting the day 512, including a “+” icon 520 for incrementing the day (for example, from Friday to Saturday) and a “−” icon 521 for decrementing the day (for example, from Friday to Thursday). Two analogous controls are also provided for setting the time 514, including a “+” icon 530 for incrementing the time (for example, from Night to Morning to Afternoon to Evening) and a “−” icon 531 for decrementing the time (for example, from Night to Evening).

The window 510 also includes two operational buttons. A "Close" button 540 closes the window 510, which is analogous to exiting the window without changing anything, and a "Set" button 545 sets the system to the selected day 512 and the selected time 514.
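One way the "+" and "−" controls described above might cycle through the day and time options is a simple wraparound step, sketched below with illustrative names.

```python
DAYS = ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday", "Saturday", "Sunday"]
TIMES = ["Morning", "Afternoon", "Evening", "Night"]

def step(values, current, delta):
    """Move through an ordered list of options with wraparound, so that "+" on
    "Friday" yields "Saturday" and "+" on "Night" wraps around to "Morning"."""
    return values[(values.index(current) + delta) % len(values)]

print(step(DAYS, "Friday", +1))   # Saturday
print(step(DAYS, "Friday", -1))   # Thursday
print(step(TIMES, "Night", +1))   # Morning (wraps around)
print(step(TIMES, "Night", -1))   # Evening
```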

Referring to FIG. 6, a screen shot 600 is shown that provides an ordered list 610 of audiences that is now based on the new selection of day 512 and time 514 from FIG. 5. As with the list 410, the list 610 is ordered from left to right in decreasing order of expected frequency. However, FIG. 6 provides a different ordering in the list 610 of audience than does the list 410 in FIG. 4. This is because the user does not have the same likelihood of watching movies with various companions on Wednesday Afternoon as on Friday Night. Specifically, on Friday night, the system believes that it is most likely that the user will be watching a movie with Friends 418. The remaining displayed options for audiences, in order from most likely to least likely, are “Partner” 416, “Alone” 412, and “Family&Kids” 414.

Other implementations have different audience options, not just ordering differences among the options, for different days/times. For example, other audience options include, in various implementations, “Movie Club”, “Church Group”, and “Work Friends”.

Referring to FIG. 7, a screen shot 700 is shown that presents an ordered list 710 of movie genres. The ordered list 710 is based on the selected context elements from, for example, FIGS. 4-6. The ordered list 710 includes a context set 720 displaying the selected context elements. The context set 720 includes a day/time element 722 and an audience element 724. The day/time element 722 is a generic element (a “clock” icon) that indicates that the day and time have been selected, but the day/time element 722 does not indicate what the selected day and the selected time are. The audience element 724, however, indicates that the selected audience is Friends. The audience element 724 provides the indication of the audience by using a smaller version of the toasting glasses icon that is used for the Friends option 418 from FIGS. 4 and 6.

In various implementations, the elements of the context set 720 present the name of the selection when a user “hovers” over the icon using, for example, a mouse or other pointing device. For example, when hovering over the clock icon, such implementations provide a small text box that displays the selected day/time, such as, for example, “Friday Night”. As another example, when hovering over the “Friends” icon, such implementations provide a small text box that displays the selected audience, which is “Friends” in this example.

The list 710 of movie genres includes four options for movie genres, listed in order (from left to right) of most likely to least likely. Those options are (i) a Thriller genre 732 (shown by an icon of a ticking bomb), (ii) a Crime genre 734 (shown by an icon of a rifle scope), (iii) a Science Fiction genre 736 (shown by an icon of an atom), and (iv) an Action genre 738 (shown by an icon of a curving highway). That is, the system believes that on Friday night, if the user is watching a movie with friends, then the most likely movie genres to be watched are, in decreasing order of likelihood, thriller, crime, science fiction, and action.

Referring to FIG. 8, a screen shot 800 is shown that presents a variation of FIG. 7. In FIG. 8, a different audience has been selected. In particular, “Partner” has been selected to replace “Friends”. The new audience selection is shown in a context set 820 that includes the generic day/time element 722 and an audience element 824. The audience element 824 indicates that the selected audience is Partner because the audience element 824 uses a smaller version of the interlocking rings icon used in the Partner option 416 from FIGS. 4 and 6.

The screen shot 800 includes a new ordered list 810 of movie genres that is based on the new audience that has been selected. The list 810 provides the following genres, in order from most likely to least likely: (i) the Science Fiction genre, (ii) a Fantasy genre 842 (shown by an icon of a magic wand with a star on top), (iii) a Comedy genre 844 (shown by an icon of a smiley face), and (iv) a Drama genre 846 (shown by an icon of a heartbeat as typically shown on a heart rate monitor used with an electrocardiogram). By comparing the list 710 with the list 810, it is clear that the system believes different movie genres are more, or less, likely to be selected by the different audiences. Indeed, the list 710 and the list 810 have different genres, and not just a different ordering of the same set of genres.

Referring to FIG. 9, a screen shot 900 is shown that presents movie recommendations for the selected context. The selected context is shown with a context set 920 that includes the generic day/time element 722, the audience element 824, and a genre element 926 which is a smaller version of the atom icon used to represent the Science Fiction genre 736.

As described earlier, various implementations display a text box with the name of a selected context element when a user hovers over that element in the context set 920. For example, when hovering over the genre element 926, such implementations provide a small text box that displays the selected genre, such as, for example, “Science Fiction”.

The screen shot 900 includes an ordered set 910 of eight movie recommendations, with the highest recommendation at the top-left, and the lowest recommendation at the bottom-right. The set 910 includes, from highest recommendation to lowest recommendation: (i) a first recommendation 931, which is "Inception", (ii) a second recommendation 932, which is "Children of Men", (iii) a third recommendation 933, which is "Signs", (iv) a fourth recommendation 934, which is "Super 8", (v) a fifth recommendation 935, which is "Déjà vu", (vi) a sixth recommendation 936, which is "Moon", (vii) a seventh recommendation 937, which is "Knowing", and (viii) an eighth recommendation 938, which is "Happening". The eight recommendations are the movies that the system has selected as being the most likely to be selected for viewing by the user in the selected context.

More, or fewer, recommendations can be provided in different implementations. Additionally, the movies can be presented in various orders, including, for example, (i) ordered from highest to lowest recommendation from top to bottom and left to right, such that the highest recommendation is top-left (reference element 931) and the second highest recommendation is bottom-left (reference element 935), etc., (ii) ordered with the highest recommendations near the middle, (iii) ordered alphabetically, or (iv) randomly arranged. The set 910 shows movie posters; however, other implementations merely list the titles.

The user is able to select a movie from the set 910. Upon selection, one or more of a variety of operations may occur, including, for example, playing the movie, receiving information about the movie, receiving a payment screen for paying for the movie, etc.

The user has other options in various implementations, besides selecting a displayed movie poster. For example, certain implementations allow a user to remove movies from the list of recommendations using, for example, a close button associated with the movie's poster. In various such implementations, another movie is recommended and inserted as a replacement for the removed movie poster. Some implementations remember the user's selections and base future recommendations, in part, on these selections. Other implementations also allow more, or fewer, than eight movie posters to be displayed at a given time.
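As an illustrative sketch (the names and behavior are assumptions, not a description of the disclosed implementation), removing a movie, remembering the removal, and backfilling with the next-ranked title might look like the following.

```python
def remove_and_backfill(displayed, removed, ranked_pool, removed_history):
    """Drop the movie the user closed, remember the removal so that future
    recommendations can take it into account, and backfill the on-screen list
    with the highest-ranked title not already displayed."""
    removed_history.add(removed)
    remaining = [m for m in displayed if m != removed]
    for candidate in ranked_pool:
        if candidate not in remaining and candidate not in removed_history:
            remaining.append(candidate)
            break
    return remaining

history = set()
on_screen = ["Inception", "Children of Men", "Signs", "Super 8"]
pool = on_screen + ["Moon", "Knowing"]
print(remove_and_backfill(on_screen, "Signs", pool, history))
# ['Inception', 'Children of Men', 'Super 8', 'Moon']
```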

Referring to FIG. 10, a window 1000 is displayed after a user has selected the sixth movie recommendation 936 (the movie "Moon") from the set 910. In FIG. 10, the window 1000 is overlaying the screen shot 900. In various implementations, the overlaid screen shot 900 that is layered underneath the window 1000 is shaded.

In this implementation, information about the selected movie is provided to the user, as shown in the window 1000. The window 1000 includes: (i) the movie title and year of release 1010, (ii) the movie poster 1020, (iii) a summary 1030 of the movie, and (iv) a set 1040 of options for viewing the movie.

The set 1040 includes, in this implementation, four links to external sources of the selected movie "Moon". The set 1040 includes (i) an AllMovie button 1042 to select AllMovie (http://www.allmovie.com/) as the external source, (ii) an IMDB button 1044 to select IMDB (http://www.imdb.com/) as the external source, (iii) an Amazon button 1046 to select Amazon (http://www.amazon.com/) as the external source, and (iv) a Netflix button 1048 to select Netflix (https://www.netflix.com/) as the external source.

A user is also able to navigate back to the selection screen of the screen shot 900. By selecting a part of the overlaid screen shot 900, in FIG. 10, the user is able to navigate back to the previous screen of FIG. 9.

Referring to FIG. 11, the screen shot 900 is shown again as a result of the user selecting (for example, clicking within) the overlaid screen shot 900 in FIG. 10. Recall that the screen shot 900 provided the science fiction movie recommendations. The context set 920 serves, in part, as a history of the user's selections. Each of the icons 722, 824, and 926 in the context set 920 of the top-left area of the screen shot 900 can be selected by the user to go back to a particular previous screen. This feature provides a jump-back feature that can span several screens. For example, selecting the audience element 824 in the context set 920 of FIG. 11 navigates back, for example, to the screen shot 600, which provides the audience recommendations.
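A hypothetical sketch of this jump-back behavior treats the context set as an ordered history of selections; choosing one of its icons truncates the history back to that point so the corresponding screen can be shown again. The class and method names are illustrative.

```python
class ContextHistory:
    """Ordered record of the user's context selections (day/time, audience, genre)."""

    def __init__(self):
        self.selections = []  # list of (category, value) pairs, oldest first

    def select(self, category, value):
        self.selections.append((category, value))

    def jump_back_to(self, category):
        """Jump back to the screen for `category`, discarding every later selection
        (this can span several screens, e.g. from the movie list back to audience)."""
        kept = []
        for cat, value in self.selections:
            if cat == category:
                break
            kept.append((cat, value))
        self.selections = kept
        return category  # the screen to display next

history = ContextHistory()
history.select("Day/Time", "Friday Night")
history.select("Audience", "Partner")
history.select("Genre", "Science Fiction")
print(history.jump_back_to("Audience"))  # show the audience screen again
print(history.selections)                # [('Day/Time', 'Friday Night')]
```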

FIG. 11 also includes a “Partner” word icon 1110 (also referred to as a text box) that is displayed, for example, when the user hovers over the audience element 824 of the context set 920. The audience element 824 is the “Partner” option, so the system provides a viewable name with the word icon 1110 as a guide to the user.

Referring to FIG. 12, a one-block flow diagram is provided that describes a process for recommending content according to one or more implementations. FIG. 12 provides a process that includes providing a content recommendation based on context. The content is, in various implementations, one or more of movies, music, sitcoms, serial shows, sports games, documentaries, advertisements, and entertainment. Clearly, various of these categories can overlap and/or be hierarchically structured in different ways. For example, documentaries can be one genre of movies, and movies and sports games can be two genres of entertainment. Alternatively, documentaries, movies, and sports games can be three separate genres of entertainment.

Referring to FIG. 13, a one-block flow diagram is provided that describes a process for recommending content according to one or more implementations. FIG. 13 provides a process that includes providing a content recommendation based on one or more of the following context categories: the user, the day, the time, the audience (also referred to as companions), and/or the genre. Note that the genre is often dependent on the type of content (for example, movies) that is being recommended.

Referring to FIG. 14, a one-block flow diagram is provided that describes a process for providing selections for a context category based on other context categories. For example, selections for the context categories of audience and/or genre can be provided. Further, the selections can be determined and rank-ordered based on one or more of the user, the day, and/or the time. It should be clear that the process of FIG. 14 is integrated, in various implementations, into the processes of FIGS. 12 and 13.

Referring to FIG. 15, a process 1500 is provided. The process 1500 includes providing a set of options for a given context category, ordered based on a value for one or more other context categories (1510). In one particular implementation, this includes providing a user an ordered set of options for a context category related to content selection. The ordered set of options for the context category is ordered based on a previously determined option for one or more other context categories. The operation 1510 is performed in various implementations using, for example, one of the screen shots from any of FIGS. 4 and 6-8. For example, FIG. 4 provides a list 410 based on the context for the day and time.

The process 1500 further includes providing a set of options for one or more additional context categories, ordered based on an option for the given context category (1520). Continuing with the example discussed above, in one particular implementation, the operation 1520 includes providing an ordered set of options for one or more additional context categories related to content selection. The ordered set of options for the one or more additional context categories is ordered based on an identification of an option from the provided options for the context category. The operation 1520 is performed in various implementations using, for example, one of the screen shots from any of FIGS. 7-8.

Variations of the process 1500 further include receiving user input identifying one of the provided options for (i) the one or more other context categories, and/or (ii) the one or more additional context categories. This user input operation is performed in various implementations, for example, as discussed above in moving from FIG. 6 to FIG. 7 or 8.

The process 1500 can be performed using, for example, the structure provided in FIGS. 1-3. For example, the operations 1510 and 1520 can be performed using the receiving device 108 or the STB/DVR 200 to provide the sets of options on the display device 114, the touch screen control device 116, or the device of FIG. 3. Additionally, a user input operation can be performed using the touch screen control device 116 or the device of FIG. 3 to receive the user input.
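A minimal sketch of the operations 1510 and 1520 is given below, under the assumption that the rank-orderings are available in a lookup keyed by the already-determined context options. The table, its contents, and the function name provide_options are hypothetical.

```python
# Hypothetical rank tables: (category to provide, already-determined options) -> ordered options.
RANK_TABLES = {
    ("Audience", (("Time", "Friday Night"),)):
        ["Friends", "Partner", "Alone", "Family&Kids"],
    ("Genre", (("Audience", "Friends"), ("Time", "Friday Night"))):
        ["Thriller", "Crime", "Science Fiction", "Action"],
}

def provide_options(category, determined):
    """Provide an ordered set of options for `category`, ordered based on the
    previously determined options for the other context categories."""
    key = (category, tuple(sorted(determined.items())))
    return RANK_TABLES.get(key, [])

determined = {"Time": "Friday Night"}
audience = provide_options("Audience", determined)   # operation 1510
determined["Audience"] = audience[0]                 # user identifies "Friends"
genres = provide_options("Genre", determined)        # operation 1520
print(audience, genres)
```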

Referring to FIG. 16, a system or apparatus 1600 is shown that includes three components, one of which is optional. FIG. 16 includes an optional user input device 1610, a presentation device 1620, and a processor 1630. In various implementations, these three components 1610-1630 are integrated into a single device, such as, for example, the device of FIG. 3. Particular implementations integrate these three components 1610-1630 in a tablet used as a second screen while watching television. In certain tablets, the user input device 1610 includes at least a touch-sensitive portion of a screen, the presentation device 1620 includes at least a presentation portion of the screen, and a processor 1630 is housed within the tablet to receive and interpret the user input, and to control the presentation device 1620.

In other implementations, however, FIG. 16 depicts a distributed system in which the processor 1630 is distinct from, and remotely located with respect to, one or more of the user input device 1610 and the presentation device 1620. For example, in one implementation, the user input device 1610 is a remote control that communicates with a set-top box, the presentation device 1620 is a TV controlled by the set-top box, and the processor 1630 is located in the set-top box.

In another distributed implementation, the presentation device 1620 and the user input device 1610 are integrated into a second screen such as, for example, a tablet. The processor 1630 is in an STB. The STB controls both the tablet and a primary screen TV. The tablet receives and displays screen shots from the STB, providing movie recommendations. The tablet accepts input from the user, allowing the user to interact with the content on the screen shots, and transmits that input to the STB. The STB does the processing for the movie recommendation system, although various implementations do have a processor in the tablet.

The processor 1630 of FIG. 16 is, for example, any of the options for a processor described throughout this application. The processor 1630 can also be, or include, for example, the processing components inherent in the devices shown or described with respect to FIGS. 1-3.

The presentation device 1620 is, for example, any device suitable for providing any of the sensory indications described throughout this application. Such devices include, for example, all user interface devices described throughout this application. Such devices also include, for example, the display components shown or described with respect to FIGS. 1-3.

The system/apparatus 1600 is used, in various implementations, to perform one or more of the processes shown in FIGS. 12-15. For example, in one implementation of the process of FIG. 12, the processor 1630 provides a content recommendation, based on context, on the presentation device 1620. As another example, in one implementation of the process of FIG. 13, the processor 1630 provides a recommendation based on one or more of user, day, time, audience/companions, or genre. As another example, in one implementation of the process of FIG. 14, the processor 1630 provides selections for audience/companions and/or genre that are ordered based on user, day, and/or time. Other implementations also combine one or more of the processes of FIGS. 12-14 using the system/apparatus 1600. As another example, in one implementation of the process of FIG. 15, the processor 1630 provides the two sets of options in the operations 1510 and 1520, and the user input device 1610 can receive the user input in those implementations that receive user input.

The system/apparatus 1600 is also used, in various implementations, to provide one or more of the screen shots of FIGS. 4-11. For example, in one implementation, the processor 1630 provides the screen shots of FIGS. 4-11 on the presentation device 1620, and receives user input via the user input device 1610. In this implementation, the presentation device 1620 and the user input device 1610 are included in an integrated touch screen device, such as, for example, a tablet.

Various implementations of the system/apparatus 1600 include only the presentation device 1620 and the processor 1630, and do not include the user input device 1610. Such systems are able to make content recommendations on the presentation device 1620. Additionally, such implementations are able to access selections for context categories using one or more of, for example, (i) default values, (ii) values from profiles, and/or (iii) values accessed over a network.

Additional implementations provide a user with options for selecting values for multiple context categories at the same time. For example, upon receiving user selection of time and day in FIG. 5, an implementation provides a user with rank-ordered options for both audience and genre. In one such implementation, a screen provides a first option that includes Friends and Thriller, and a second option that includes Partner and Science Fiction.

Various implementations discuss context. As previously discussed, context is indicated or described, for example, by context categories that describe an activity. Each activity (for example, consuming content such as a movie) can have its own context categories. One manner of determining context categories is to answer the common questions of “who”, “what”, “where”, “when”, “why”, and “how”. For example, if the activity is defined as consuming content, the common questions can result in a variety of context categories, as discussed below:

“Who” is consuming the content? For example, the audience is a context category. Additionally, or alternatively, separate context categories can be used for demographic information such as age, gender, occupation, education achieved, location of upbringing, and previously observed behavior for an individual in the audience.

“What” content is being consumed? For example, the genre of the content is a context category. Additionally, or alternatively, separate context categories can be used for the length of the content, and the maturity ranking of the content (for example, G, PG-13, or R).

“Where” is the content being consumed? For example, the location is a context category and can have values such as, for example, in a home, in an auditorium, in a vehicle such as a plane or car, in the Deep South, or in the North East. Additionally, or alternatively, separate context categories can be used for room characteristics (for example, living room, auditorium, or airplane cabin) and geographical location (for example, Deep South).

“When” is the content being consumed? For example, the day-and-time is a context category. Additionally, or alternatively, separate context categories can be used for the day, the time, the calendar season (winter, spring, summer, or fall), and the holiday season (for example, Christmas, Thanksgiving, or Fourth of July), as discussed further below.

“Why” is the content being consumed? For example, the occasion is a context category and can have values such as, for example, a wedding anniversary, a child's birthday party, or a multi-generational family reunion.

“How” is the content being consumed? For example, the medium being used is a context category and can have values such as, for example, a small screen, a large screen, a mobile device, a low-speed connection, a high-speed connection, or surround sound. Additionally, or alternatively, separate context categories can be used for screen size, connection speed, and sound quality.

Other manners of determining context categories may also be used.
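As an illustration only, the question-driven categories above might be collected into a simple data model; the dictionary below merely restates the examples already given, and its structure and names are assumptions.

```python
# Context categories for the activity "consuming content", grouped by the
# question that motivates each of them (values are examples from the text above).
CONTEXT_MODEL = {
    "who":   {"Audience": ["Alone", "Friends", "Family&Kids", "Partner"]},
    "what":  {"Genre": ["Action", "Drama"], "Maturity ranking": ["G", "PG-13", "R"]},
    "where": {"Location": ["Home", "Auditorium", "Vehicle"]},
    "when":  {"Day": ["Friday"], "Time": ["Night"], "Holiday season": ["Christmas"]},
    "why":   {"Occasion": ["Wedding anniversary", "Child's birthday party"]},
    "how":   {"Medium": ["Small screen", "Large screen", "Mobile device"]},
}

def context_categories():
    """List every context category the recommendation engine could consider."""
    return [name for groups in CONTEXT_MODEL.values() for name in groups]

print(context_categories())
# ['Audience', 'Genre', 'Maturity ranking', 'Location', 'Day', 'Time',
#  'Holiday season', 'Occasion', 'Medium']
```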

Different implementations vary one or more of a number of features. Some of those features, and their variations, are described below:

Various implementations use different presentation devices. Such presentation devices include, for example, a television (“TV”) (with or without picture-in-picture (“PIP”) functionality), a computer display, a laptop display, a personal digital assistant (“PDA”) display, a cell phone display, and a tablet (for example, an iPad) display. The display devices are, in different implementations, either a primary or a secondary screen. Still other implementations use presentation devices that provide a different, or additional, sensory presentation. Display devices typically provide a visual presentation. However, other presentation devices provide, for example, (i) an auditory presentation using, for example, a speaker, or (ii) a haptic presentation using, for example, a vibration device that provides, for example, a particular vibratory pattern, or a device providing other haptic (touch-based) sensory indications.

Various implementations provide content recommendations based on other contextual information. One category of such information includes, for example, an emotional feeling of the user. For example, if the user is happy, sad, lonely, etc., the system can provide a different set of recommendations appropriate to the emotional state of the user. In one particular implementation, the system provides, based on, for example, user history or objective input from other users, a rank-ordered set of genres and/or content based on the day, the time, the audience, and the user's emotional state.

As discussed above, another example of additional contextual information is “season”. Certain implementations provide indicators of a calendar season that include “summer”, “fall”, “winter”, and “spring”. Certain other implementations provide indicators of a holiday season that include “Christmas”, “Thanksgiving”, “Halloween”, and “Valentine's Day”. Obviously, certain implementations include both categories and their related values. As can be expected, a rank-ordering of movie genres can be expected to change based on the season. Additionally, a rank-ordering of movies within a genre can be expected to change based on the season.

Various implementations, as should be clear from earlier statements, base genre recommendations and/or movie recommendations on contextual information that is different from that described in FIGS. 4-11.

Various implementations receive user input identifying a value, or a selection, for a particular context category. Other implementations access a selection, or input, in other manners. For example, certain implementations receive input from other members of an audience using, for example, any of a variety of "second screens" such as, for example, a tablet or a smartphone. As another example, certain implementations use default selections when no user input is available or received. As another example, certain implementations access user profiles, access databases from the Internet, or access other remote sources, for input or selections.

Various implementations describe receiving a single value or selection for a particular context category. For example, FIG. 6 anticipates receiving a single selection of audience, and FIG. 7 anticipates receiving a single selection of genre. Other implementations, however, accept or even expect multiple selections. For example, one implementation of FIG. 6 allows a user to select two audiences, and then provides a genre recommendation based on the combined audiences. Thus, if a user is going to watch a movie with her partner and some friends, the user could select both Friends 418 and Partner 416, and the system would recommend genres based on this combined audience.
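One illustrative way to merge such a combined audience is to average each genre's score across the selected audiences, as sketched below; averaging is simply one possible merging choice and is an assumption here, as are the names and scores.

```python
def combined_genre_ranking(selected_audiences, per_audience_scores):
    """Rank genres for a combined audience (for example, Friends plus Partner) by
    averaging each genre's score across every selected audience."""
    totals = {}
    for audience in selected_audiences:
        for genre, score in per_audience_scores.get(audience, {}).items():
            totals[genre] = totals.get(genre, 0.0) + score / len(selected_audiences)
    return sorted(totals, key=totals.get, reverse=True)

scores = {
    "Friends": {"Thriller": 0.9, "Crime": 0.7, "Science Fiction": 0.6},
    "Partner": {"Science Fiction": 0.8, "Fantasy": 0.6, "Comedy": 0.5},
}
print(combined_genre_ranking(["Friends", "Partner"], scores))
# ['Science Fiction', 'Thriller', 'Crime', 'Fantasy', 'Comedy']
```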

This application provides multiple figures, including the block diagrams of FIGS. 1-3 and 16, the pictorial representations of FIGS. 4-11, and the flow diagrams of FIGS. 12-15. Each of these figures provides disclosure for a variety of implementations.

For example, the block diagrams certainly describe an interconnection of functional blocks of an apparatus or system. However, it should also be clear that the block diagrams provide a description of a process flow. As an example, FIG. 1 also presents a flow diagram for performing the functions of the blocks of FIG. 1. For example, the block for the content source 102 also represents the operation of providing content, and the block for the broadcast affiliate manager 104 also represents the operation of receiving broadcast content and providing the content on a scheduled delivery to the delivery network 1 106. Other blocks of FIG. 1 are similarly interpreted in describing this flow process. Further, FIGS. 2-3 and 16 can also be interpreted in a similar fashion to describe respective flow processes.

For example, the flow diagrams certainly describe a flow process. However, it should also be clear that the flow diagrams provide an interconnection between functional blocks of a system or apparatus for performing the flow process. For example, reference element 1510 also represents a block for performing the function of providing a user an ordered set of options for a given context category. Other blocks of FIG. 15 are similarly interpreted in describing this system/apparatus. Further, FIGS. 12-14 can also be interpreted in a similar fashion to describe respective systems or apparatuses.

For example, the screen shots of FIGS. 4-11 certainly describe an output screen shown to a user. However, it should also be clear that the screen shots describe a flow process for interacting with the user. For example, FIG. 4 also describes a process of presenting a user with time/day information 402 and 404, presenting the user with associated audience information 410, and providing the user with a mechanism for selecting one of the presented audience options 410. Further, FIGS. 5-11 can also be interpreted in a similar fashion to describe respective flow processes.

We have thus provided a number of implementations. Various implementations provide content recommendations based on context. Various other implementations also provide context selections that are ranked according to frequency or likelihood. Various other implementations provide content recommendations that are also ranked according to frequency or likelihood.

It should be noted, however, that variations of the described implementations, as well as additional applications, are contemplated and are considered to be within our disclosure. Additionally, features and aspects of described implementations may be adapted for other implementations.

Reference to “one embodiment” or “an embodiment” or “one implementation” or “an implementation” of the present principles, as well as other variations thereof, means that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment of the present principles. Thus, the appearances of the phrase “in one embodiment” or “in an embodiment” or “in one implementation” or “in an implementation”, as well as any other variations, appearing in various places throughout the specification are not necessarily all referring to the same embodiment.

Additionally, this application or its claims may refer to “determining” various pieces of information. Determining the information may include one or more of, for example, estimating the information, calculating the information, predicting the information, or retrieving the information from memory.

Further, this application or its claims may refer to “accessing” various pieces of information. Accessing the information may include one or more of, for example, receiving the information, retrieving the information (for example, from memory), storing the information, processing the information, transmitting the information, moving the information, copying the information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.

It is to be appreciated that the use of any of the following “/”, “and/or”, and “at least one of”, for example, in the cases of “A/B”, “A and/or B” and “at least one of A and B”, is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B). As a further example, in the cases of “A, B, and/or C” and “at least one of A, B, and C” and “at least one of A, B, or C”, such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C). This may be extended, as readily apparent by one of ordinary skill in this and related arts, for as many items listed.

Various implementations refer to a set of options for a context category. A “set” can be represented in various manners, including, for example, in a list, or another visual representation.

Additionally, many implementations may be implemented in a processor, such as, for example, a post-processor or a pre-processor. The processors discussed in this application do, in various implementations, include multiple processors (sub-processors) that are collectively configured to perform, for example, a process, a function, or an operation. For example, the processor 1630, the audio processor 206, the video processor 210, and the input stream processor 204, as well as other processing components such as, for example, the controller 214, are, in various implementations, composed of multiple sub-processors that are collectively configured to perform the operations of that component.

The implementations described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method), the implementation of features discussed may also be implemented in other forms (for example, an apparatus or program). An apparatus may be implemented in, for example, appropriate hardware, software, and firmware. The methods may be implemented in, for example, an apparatus such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, tablets, portable/personal digital assistants (“PDAs”), and other devices that facilitate communication of information between end-users.

Implementations of the various processes and features described herein may be embodied in a variety of different equipment or applications. Examples of such equipment include an encoder, a decoder, a post-processor, a pre-processor, a video coder, a video decoder, a video codec, a web server, a television, a set-top box, a router, a gateway, a modem, a laptop, a personal computer, a tablet, a cell phone, a PDA, and other communication devices. As should be clear, the equipment may be mobile and even installed in a mobile vehicle.

Additionally, the methods may be implemented by instructions being performed by a processor, and such instructions (and/or data values produced by an implementation) may be stored on a processor-readable medium such as, for example, an integrated circuit, a software carrier or other storage device such as, for example, a hard disk, a compact diskette (“CD”), an optical disc (such as, for example, a DVD, often referred to as a digital versatile disc or a digital video disc), a random access memory (“RAM”), or a read-only memory (“ROM”). The instructions may form an application program tangibly embodied on a processor-readable medium. Instructions may be, for example, in hardware, firmware, software, or a combination. Instructions may be found in, for example, an operating system, a separate application, or a combination of the two. A processor may be characterized, therefore, as, for example, both a device configured to carry out a process and a device that includes a processor-readable medium (such as a storage device) having instructions for carrying out a process. Further, a processor-readable medium may store, in addition to or in lieu of instructions, data values produced by an implementation.

As will be evident to one of skill in the art, implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted. The information may include, for example, instructions for performing a method, or data produced by one of the described implementations. For example, a signal may be formatted to carry as data the rules for writing or reading syntax, or to carry as data the actual syntax-values generated using the syntax rules. Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal. The formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream. The information that the signal carries may be, for example, analog or digital information. The signal may be transmitted over a variety of different wired or wireless links, as is known. The signal may be stored on a processor-readable medium.

A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made. For example, elements of different implementations may be combined, supplemented, modified, or removed to produce other implementations. Additionally, one of ordinary skill will understand that other structures and processes may be substituted for those disclosed and the resulting implementations will perform at least substantially the same function(s), in at least substantially the same way(s), to achieve at least substantially the same result(s) as the implementations disclosed. Accordingly, these and other implementations are contemplated by this application.

Claims

1. A method comprising:

providing an ordered set of options for a context category related to content selection, the ordered set of options for the context category being ordered based on a previously determined option for one or more other context categories; and
providing an ordered set of options for one or more additional context categories related to content selection, the ordered set of options for the one or more additional context categories being ordered based on an identification of an option from the provided options for the context category.

2. The method of claim 1 further comprising:

providing one or more content recommendations to the user based on (i) the identification of the option for the context category, and (ii) an identification of an option from the provided options for the one or more additional context categories.

3. The method of claim 1 wherein:

the set of options for the context category is ordered based on likelihood of selection, and is provided in an order reflecting likelihood of selection, and the likelihood is based on the previously determined option for the one or more other context categories, and
the set of options for the one or more additional context categories is ordered based on likelihood of selection, and is provided in an order reflecting likelihood of selection, and the likelihood is based on the identification of the option for the context category.

4. The method of claim 1 wherein providing the one or more content recommendations comprises providing the one or more content recommendations in an order reflecting likelihood of selection.

5. The method of claim 1 wherein at least one of the context category or the one or more additional context categories includes one or more of (i) day of the week for intended content consumption, (ii) time of the day for intended content consumption, (iii) season for intended content consumption, (iv) emotional feeling of a user, (v) the intended audience that will be consuming the content, or (vi) the genre of the content.

6. The method of claim 1 wherein:

the context category includes the intended audience that will be consuming the content, and
the one or more additional context categories includes the genre of the content.

7. The method of claim 1 wherein:

the one or more context categories include one or more of (i) day of the week for intended content consumption, or (ii) time of the day for intended content consumption.

8. The method of claim 1 wherein providing the one or more content recommendations is further based on one or more of (i) tracked information from a user's behavior and/or (ii) collected information from users.

9. The method of claim 1 wherein providing the one or more content recommendations is further based on one or more of extrapolations and/or machine learning applied to input from one or more of (i) tracked information from a user's behavior and/or (ii) collected information from users.

10. The method of claim 1 further comprising receiving a user input as the identification of the option for the context category.

11. The method of claim 2 further comprising receiving a user input as the identification of the option for the one or more additional context categories.

12. An apparatus configured to perform one or more of the methods of claim 1.

13. The apparatus of claim 12 comprising one or more processors collectively configured to perform one or more of the methods.

14. An apparatus comprising:

means for providing an ordered set of options for a context category related to content selection, the ordered set of options for the context category being ordered based on a previously determined option for one or more other context categories; and
means for providing an ordered set of options for one or more additional context categories related to content selection, the ordered set of options for the one or more additional context categories being ordered based on an identification of an option from the provided options for the context category.

15. An apparatus comprising:

a presentation device; and
a processor configured to provide on the presentation device an ordered set of options for a context category related to content selection, the ordered set of options for the context category being ordered based on a previously determined option for one or more other context categories, wherein the processor is further configured to provide on the presentation device an ordered set of options for one or more additional context categories related to content selection, the ordered set of options for the one or more additional context categories being ordered based on an identification of an option from the provided options for the context category.

16. The apparatus of claim 15 further comprising a user input device configured to receive a user input as the identification of the option for the context category.

17. The apparatus of claim 16 wherein the presentation device and the user input device are integrated into a single unit.

18. An apparatus comprising one or more processors collectively configured to perform the following operations:

providing an ordered set of options for a context category related to content selection, the ordered set of options for the context category being ordered based on a previously determined option for one or more other context categories; and
providing an ordered set of options for one or more additional context categories related to content selection, the ordered set of options for the one or more additional context categories being ordered based on an identification of an option from the provided options for the context category.

19. A processor readable medium having stored thereon instructions for causing one or more processors to collectively perform the following operations:

providing an ordered set of options for a context category related to content selection, the ordered set of options for the context category being ordered based on a previously determined option for one or more other context categories; and
providing an ordered set of options for one or more additional context categories related to content selection, the ordered set of options for the one or more additional context categories being ordered based on an identification of an option from the provided options for the context category.

20. A processor readable medium having stored thereon instructions for causing one or more processors to collectively perform one or more of the methods of claim 1.

Patent History
Publication number: 20150249865
Type: Application
Filed: Dec 17, 2012
Publication Date: Sep 3, 2015
Inventor: Pedro Carvalho Oliveira (Palo Alto, CA)
Application Number: 14/431,481
Classifications
International Classification: H04N 21/466 (20060101); H04N 21/475 (20060101); H04N 21/442 (20060101); H04N 21/482 (20060101); G06F 17/30 (20060101); H04N 21/25 (20060101);