CONTEXT-BASED CONTENT RECOMMENDATIONS
Various implementations provide one or more recommendations for content, for example, to a user, based on one or more context categories. In one particular implementation, an ordered set of options is provided for a context category related to content selection. The ordered set of options for the context category is ordered based on a previously determined option for one or more other context categories. An ordered set of options is provided for one or more additional context categories related to content selection. The ordered set of options for the one or more additional context categories is ordered based on an identification of an option from the provided options for the context category. In various implementations, a user provides a selection for the one or more other context categories, the context category, and/or the one or more additional context categories.
This application claims the benefit of U.S. provisional application No. 61/707,077, filed Sep. 28, 2012, and titled “Context-based Content Recommendations”, the contents of which are hereby incorporated by reference herein for all purposes.
TECHNICAL FIELD
Implementations are described that relate to providing recommendations. Various particular implementations relate to providing context-based recommendations for various forms of content to be consumed by a user.
BACKGROUND
Home entertainment systems, including television and media centers, are converging with the Internet and providing access to a large number of available sources of content, such as video, movies, TV programs, music, etc. This expansion in the number of available sources necessitates a new strategy for navigating a media interface associated with such systems and making content recommendations and selections.
The large number of possible content sources creates an interface challenge that has not yet been successfully solved in the field of home media entertainment. This challenge involves successfully presenting users with a large number of elements (programs, sources, etc.) without the need to tediously navigate through multiple display pages or hierarchies of content.
Further, most existing search paradigms make an assumption that the user knows what they are looking for when they start, whereas often an alternate mechanism is more desirable or appropriate. One approach for allowing a process of discovery and cross linkage is the use of ratings. Under this approach, a user rates content and a recommendation engine recommends additional content related to the rated content. For example, if a user gives an action movie a five-star rating and a horror movie a one-star rating, a conventional recommendation engine is likely to recommend other action movies to the user rather than other horror movies. A drawback to this approach is that recommendations tend to be skewed to particular movie genres until the user has built a large enough rating database spanning multiple movie genres (for example, action, horror, romance, etc.). Furthermore, even if a user creates a large rating database, there still may be inaccurate or non-relevant recommendations, since the rating information may have been inaccurately collected from the user. For example, if a user rates the first five horror movies presented for rating as one-star movies, the conventional recommendation engine may stop recommending horror movies to the user. However, the user may simply not have liked the first five horror movies presented and may actually desire to have other horror movies brought to his or her attention.
SUMMARY
According to a general aspect, an ordered set of options is provided for a context category related to content selection. The ordered set of options for the context category is ordered based on a previously determined option for one or more other context categories. An ordered set of options is provided for one or more additional context categories related to content selection. The ordered set of options for the one or more additional context categories is ordered based on an identification of an option from the provided options for the context category.
The details of one or more implementations are set forth in the accompanying drawings and the description below. Even if described in one particular manner, it should be clear that implementations may be configured or embodied in various manners. For example, an implementation may be performed as a method, or embodied as an apparatus, such as, for example, an apparatus configured to perform a set of operations or an apparatus storing instructions for performing a set of operations, or embodied in a signal. Other aspects and features will become apparent from the following detailed description considered in conjunction with the accompanying drawings and the claims.
The inventor has determined various manners in which, for example, a user interface to a content recommendation system can be more helpful. One implementation provides a movie recommendation and discovery engine that takes the user's context into account by getting information about the current day of the week, time of the day, audience or companion(s), and desired content (for example, movie) genre. Based on this information, the system recommends a set of movies that suits the given context. One or more implementations provide a way to take the user's context into account when recommending movies to watch.
The definition of context can vary depending on the content that is to be consumed. “Consuming” content has the well-known meaning of experiencing the content by, for example, watching or listening to the content. For different content, certain aspects of the context are more, or less, important. For example, activity and location are typically not as relevant when considering a movie to recommend as when considering music to recommend.
Various implementations build context categories dynamically and/or automatically, while other implementations rely more on manually built context categories. A context category refers generally to a set of values that can be selected as context. More particularly, a context category often represents a common variable, and includes a set of alternative values for that variable. For example, “Time” is a common variable, and “Friday Night”, and “Saturday Morning” belong to the “Time” category, and may be chosen as values for that variable. As another example, “Genre” is another common variable, and “Action” and “Drama” are possible alternative values for the “Genre” variable.
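The notion of a context category as a common variable with a rank-ordered set of alternative values can be sketched as a simple data structure. This is an illustrative sketch only, not the patented implementation; the class and example values are hypothetical, mirroring the "Time" and "Genre" examples above.

```python
from dataclasses import dataclass, field

@dataclass
class ContextCategory:
    """A common variable together with its set of alternative values."""
    name: str
    # Options are rank-ordered, with the most likely value listed first.
    options: list[str] = field(default_factory=list)

# Hypothetical example categories, following the discussion above.
time_category = ContextCategory("Time", ["Friday Night", "Saturday Morning"])
genre_category = ContextCategory("Genre", ["Action", "Drama"])

print(time_category.options[0])  # the most likely option comes first
```

In such a sketch, "building" a category amounts to choosing which strings go into `options` and in what order, which is exactly the dynamic/automatic construction the surrounding text describes.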
Dynamically building categories means that the categories are built based on user input. Because the user input is dynamic, the building of the context categories is dynamic. For example, if a user selects “Friday Night with Friends”, the category “Genre” will be built algorithmically at runtime, based on that selection. “Building” the context category “Genre” refers to determining which values (elements) to include in the category “Genre”, and how to rank-order those values.
In various implementations, the context categories are built automatically. This means that, aside from the user providing input such as, for example, day, time, and companions, there is at least primarily no user intervention in creating the categories. Rather, in a purely automatic system, all of the decisions are made by algorithms, not by people. For example, recommending “Action” and “Drama” movies (this is building the context category of “Genre”) on “Friday Night with Friends” was not a decision made directly by humans. The decision is based on data (for example, from previous user studies) and algorithms.
Other implementations build context categories manually by, for example, paying specialists to decide that “On Friday Nights with Friends”, a user should be watching “Action” and “Drama”.
The terms “audience” and “companions” are generally used interchangeably in this application to refer to the set of people consuming the content. However, in other implementations, the terms “audience” and “companions” can refer to distinct context categories. Some values for this context category in various implementations discussed in this application may, strictly speaking, refer to companions and not include the user (for example, “Partner”). Other values may refer to the entire audience including the user or might not include the user (for example, “Family&Kids” or “Friends” may or may not include the user). However, in typical implementations, the content (for example, movie) recommendations are indeed based on the entire audience including the user, whether or not the value for the “audience” or “companions” context category specifically includes the user.
One or more implementations include two main parts: a movie selection system and a user interface. For one or more particular implementations, each will be described below, with reference to the figures. Variations of the movie selection and the user interface are contemplated.
Movie Selection
In this phase, we find which movies can be considered the best to watch in a given context. A variety of implementations exist, many of which are dynamic and/or automatic, in whole or in part. Various implementations use the following process:
1. We ask a group of people, given a certain context and a limited set of movies, to decide which movies they find appropriate for the given context. In one example, the given context is that it is Friday night, and the individual (each individual answers independently) is with his/her friends. The individuals in the group are each asked if they would watch, for example, the movie “The Dark Knight”. This provides, for example, a selection of movies for each of several different day/time contexts.
2. The data acquired in step 1 describes which specific movies people watch in a specific context (for example, a day/time/companions context). In order to make this information useful, and to help users navigate it, we build upon this information. In various implementations, we build upon this information by aggregating the selected movies by their genres, and by performing statistical tests to identify which genres have the best average rating for a certain context. For example, if a large part of the users said that they would watch “Inception” and “Signs” on a “Friday Night with their Partner”, then “Science Fiction” would be selected as a recommended genre for that context. Other implementations use a variety of tools and techniques to build upon the information from step 1.
3. In the previous step (step 2), we identified which genres are recommended for each context. In this step (step 3), we identify which movies we should recommend to the user (and the rank-order of those movies) for each combination of context and genre. Using the data from step 1, we gather the top-rated movies in the given context that belong to the desired genre (from step 2). If the resulting list is smaller than desired, we add to the list (bootstrap the list) by gathering additional movies that are the most similar to the ones already in the list. For example, if “Finding Nemo” is a good movie to watch in a given context, then it is likely that “Cars” and “Toy Story” will also be good movies (assuming, for example, that the genre is animation) for that context. A variety of tools and techniques are available for use in performing the gathering (bootstrapping) of additional movie titles for a given context and genre. Such tools and techniques include, for example, categorizations, ratings, and reviews of movies.
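The three steps above can be sketched in code. The following is a minimal, hypothetical illustration, not the actual movie selection system: the survey data, approval scores, and similarity table are invented for the example, and the "statistical tests" of step 2 are reduced to a simple average for brevity.

```python
from collections import defaultdict

# Hypothetical step-1 survey data: (context, movie, genre, approval in [0, 1]).
SURVEY = [
    ("Friday Night with Friends", "The Dark Knight", "Thriller", 0.9),
    ("Friday Night with Friends", "Inception", "Science Fiction", 0.8),
    ("Friday Night with Partner", "Inception", "Science Fiction", 0.9),
    ("Friday Night with Partner", "Signs", "Science Fiction", 0.8),
    ("Friday Night with Partner", "The Dark Knight", "Thriller", 0.4),
]

# Hypothetical similarity table used to bootstrap short lists in step 3.
SIMILAR = {"Inception": ["Moon", "Knowing"], "Signs": ["Super 8"]}

def recommended_genres(context):
    """Step 2: rank genres by their average approval within a context."""
    totals = defaultdict(list)
    for ctx, _movie, genre, score in SURVEY:
        if ctx == context:
            totals[genre].append(score)
    return sorted(totals, key=lambda g: -sum(totals[g]) / len(totals[g]))

def recommended_movies(context, genre, minimum=3):
    """Step 3: top-rated movies for (context, genre), bootstrapped if short."""
    rated = [(score, movie) for ctx, movie, g, score in SURVEY
             if ctx == context and g == genre]
    movies = [movie for _score, movie in sorted(rated, reverse=True)]
    # Bootstrap: extend a too-short list with titles similar to those already in it.
    for movie in list(movies):
        for similar in SIMILAR.get(movie, []):
            if len(movies) >= minimum:
                break
            if similar not in movies:
                movies.append(similar)
    return movies[:minimum]

genres = recommended_genres("Friday Night with Partner")
print(genres[0])  # "Science Fiction" leads for this hypothetical context
print(recommended_movies("Friday Night with Partner", "Science Fiction"))
```

A production system would replace the average with proper statistical tests and the hand-built similarity table with categorizations, ratings, and reviews, as the text notes.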
User Interface
The user interface of various implementations allows users to intuitively navigate through the results from the previous phase and to find movies that are recommended for their current context. One implementation of the process is as follows:
1. The system automatically recognizes the current day of the week and time of the day and presents a list of companions (Alone, Friends, Family&Kids, Partner) ordered by their expected frequency (most expected companions first) (see, for example,
2. After selecting the time (see, for example,
3. After selecting the desired genre, a list of recommended movies is presented (determined, for example, in step 3 of the Movie Selection phase) (see, for example,
4. At any time, users can navigate back and change their previous selections (see, for example,
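The navigation flow above can be sketched as a few small functions. This is a hedged illustration under stated assumptions: the lookup tables are hypothetical hard-coded stand-ins for results derived from the Movie Selection phase, and the day/time recognition is simplified to an evening/daytime split.

```python
import datetime

# Hypothetical tables; a real system derives these from the Movie Selection phase.
COMPANIONS_BY_TIME = {
    "Friday Night": ["Friends", "Partner", "Alone", "Family&Kids"],
}
GENRES_BY_CONTEXT = {
    ("Friday Night", "Friends"): ["Thriller", "Crime", "Science Fiction", "Action"],
    ("Friday Night", "Partner"): ["Science Fiction", "Fantasy", "Comedy", "Drama"],
}

def current_time_slot(now=None):
    """Step 1: automatically recognize the current day of the week and time of day."""
    now = now or datetime.datetime.now()
    part = "Night" if now.hour >= 18 else "Day"  # simplified two-way split
    return f"{now.strftime('%A')} {part}"

def companion_options(time_slot):
    """Companions ordered by expected frequency, most expected first."""
    return COMPANIONS_BY_TIME.get(
        time_slot, ["Alone", "Friends", "Family&Kids", "Partner"])

def genre_options(time_slot, companion):
    """Step 2-3: genres re-ordered (and possibly re-selected) for the audience."""
    return GENRES_BY_CONTEXT.get((time_slot, companion), [])

slot = current_time_slot(datetime.datetime(2012, 9, 28, 21, 0))  # a Friday evening
print(slot)
print(companion_options(slot)[0])  # most expected companion listed first
print(genre_options(slot, "Friends"))
```

Back-navigation (step 4) would simply re-invoke the earlier function with a new selection, since each list is recomputed from the selections made so far.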
Referring to
A second form of content is referred to as special content. Special content may include content delivered as premium viewing, pay-per-view, or other content otherwise not provided to the broadcast affiliate manager, for example, movies, video games, or other video elements. The special content can originate from the same, or from a different, content source (for example, content source 102) as the broadcast content provided to the broadcast affiliate manager 104. In many cases, the special content may be content requested by the user. The special content may be delivered to a content manager 110. The content manager 110 may be a service provider, such as an Internet website, affiliated, for instance, with a content provider, broadcast service, or delivery network service. The content manager 110 may also incorporate Internet content into the delivery system. The content manager 110 may deliver the content to the user's receiving device 108 over a separate delivery network, delivery network 2 (112). Delivery network 2 (112) may include high-speed broadband Internet type communications systems. It is important to note that the content from the broadcast affiliate manager 104 may also be delivered using all or parts of delivery network 2 (112) and content from the content manager 110 may be delivered using all or parts of delivery network 1 (106). In addition, the user may also obtain content directly from the Internet via delivery network 2 (112) without necessarily having the content managed by the content manager 110.
Several adaptations for utilizing the separately delivered content may be possible. In one possible approach, the special content is provided as an augmentation to the broadcast content, providing alternative displays, purchase and merchandising options, enhancement material, etc. In another embodiment, the special content may completely replace some programming content provided as broadcast content. Finally, the special content may be completely separate from the broadcast content, and may simply be a media alternative that the user may choose to utilize. For instance, the special content may be a library of movies that are not yet available as broadcast content.
The receiving device 108 may receive different types of content from one or both of delivery network 1 and delivery network 2. The receiving device 108 processes the content, and provides a separation of the content based on user preferences and commands. The receiving device 108 may also include a storage device, such as a hard drive or optical disk drive, for recording and playing back audio and video content. Further details of the operation of the receiving device 108 and features associated with playing back stored content will be described below in relation to
The receiving device 108 may also be interfaced to a second screen such as a touch screen control device 116. The touch screen control device 116 may be adapted to provide user control for the receiving device 108 and/or the display device 114. The touch screen control device 116 may also be capable of displaying video content. The video content may be graphics entries, such as user interface entries (as discussed below), or may be a portion of the video content that is delivered to the display device 114. The touch screen control device 116 may interface to receiving device 108 using any well-known signal transmission system, such as infra-red (IR) or radio frequency (RF) communications, and may include standard protocols such as the Infrared Data Association (IrDA) standard, Wi-Fi, Bluetooth and the like, or any proprietary protocol. Operations of touch screen control device 116 will be described in further detail below.
In the example of
Referring to
In the device 200 shown in
The decoded output signal is provided to an input stream processor 204. The input stream processor 204 performs the final signal selection and processing, and includes separation of video content from audio content for the content stream. The audio content is provided to an audio processor 206 for conversion from the received format, such as a compressed digital signal, to an analog waveform signal. The analog waveform signal is provided to an audio interface 208 and further to the display device or audio amplifier. Alternatively, the audio interface 208 may provide a digital signal to an audio output device or display device using a High-Definition Multimedia Interface (HDMI) cable or alternate audio interface such as via a Sony/Philips Digital Interconnect Format (SPDIF). The audio interface may also include amplifiers for driving one or more sets of speakers. The audio processor 206 also performs any necessary conversion for the storage of the audio signals.
The video output from the input stream processor 204 is provided to a video processor 210. The video signal may be one of several formats. The video processor 210 provides, as necessary, conversion of the video content, based on the input signal format. The video processor 210 also performs any necessary conversion for the storage of the video signals.
A storage device 212 stores audio and video content received at the input. The storage device 212 allows later retrieval and playback of the content under the control of a controller 214 and also based on commands, for example, navigation instructions such as fast-forward (FF) and rewind (Rew), received from a user interface 216 and/or touch panel interface 222. The storage device 212 may be a hard disk drive, one or more large capacity integrated electronic memories, such as static RAM (SRAM), or dynamic RAM (DRAM), or may be an interchangeable optical disk storage system such as a compact disk (CD) drive or digital video disk (DVD) drive.
The converted video signal, from the video processor 210, either originating from the input or from the storage device 212, is provided to the display interface 218. The display interface 218 further provides the display signal to a display device of the type described above. The display interface 218 may be an analog signal interface such as red-green-blue (RGB) or may be a digital interface such as HDMI. It is to be appreciated that the display interface 218 will generate the various screens for presenting the search results (for example, as described in more detail below with respect to
The controller 214 is interconnected via a bus to several of the components of the device 200, including the input stream processor 204, audio processor 206, video processor 210, storage device 212, the touch panel interface 222, and the user interface 216. The controller 214 manages the conversion process for converting the input stream signal into a signal for storage on the storage device or for display. The controller 214 also manages the retrieval and playback of stored content. Furthermore, as will be described below, the controller 214 performs searching of content and the creation and adjusting of the displays representing the context and/or the content, for example, as described below with respect to
The controller 214 is further coupled to control memory 220 (for example, volatile or non-volatile memory, including RAM, SRAM, DRAM, ROM, programmable ROM (PROM), flash memory, erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), etc.) for storing information and instruction code for controller 214. Control memory 220 may store instructions for controller 214. Control memory may also store a database of elements, such as graphic elements representing context values or content. The database may be stored as a pattern of graphic elements, such as graphic elements containing content, various graphic elements used for generating a displayable user interface for display interface 218, and the like. Alternatively, the memory may store the graphic elements in identified or grouped memory locations and use an access or location table to identify the memory locations for the various portions of information related to the graphic elements. Additional details related to the storage of the graphic elements will be described below. Further, the implementation of the control memory 220 may include several possible embodiments, such as a single memory device or, alternatively, more than one memory circuit communicatively connected or coupled together to form a shared or common memory. Still further, the memory may be included with other circuitry, such as portions of bus communications circuitry, in a larger circuit.
Referring to
In one embodiment, the touch panel device 300 may simply serve as a navigational tool to navigate the display (for example, a navigational tool to navigate a display of context options and movie recommendations that is displayed on a TV). In other embodiments, the touch panel device 300 will additionally serve as the display device allowing the user to more directly interact with the navigation through the display of content. The touch panel device 300 may be included as part of a remote control device containing more conventional control functions such as activator and/or actuator buttons. The touch panel device 300 can also include at least one camera element. Note that various implementations employ a large screen TV for the display of, for example, context options and movie recommendations, and employ a user input device similar to a remote control to allow a user to navigate through the display.
Referring to
Referring to
The screen shot of
The window 510 also includes two operational buttons. A “Close” button 540 closes the window 510, which is analogous to exiting the window without changing anything, and a “Set” button 545 sets the system to the selected day 512 and the selected time 514.
Referring to
Other implementations have different audience options, not just ordering differences among the options, for different days/times. For example, other audience options include, in various implementations, “Movie Club”, “Church Group”, and “Work Friends”.
Referring to
In various implementations, the elements of the context set 720 present the name of the selection when a user “hovers” over the icon using, for example, a mouse or other pointing device. For example, when hovering over the clock icon, such implementations provide a small text box that displays the selected day/time, such as, for example, “Friday Night”. As another example, when hovering over the “Friends” icon, such implementations provide a small text box that displays the selected audience, which is “Friends” in this example.
The list 710 of movie genres includes four options for movie genres, listed in order (from left to right) of most likely to least likely. Those options are (i) a Thriller genre 732 (shown by an icon of a ticking bomb), (ii) a Crime genre 734 (shown by an icon of a rifle scope), (iii) a Science Fiction genre 736 (shown by an icon of an atom), and (iv) an Action genre 738 (shown by an icon of a curving highway). That is, the system believes that on Friday night, if the user is watching a movie with friends, then the most likely movie genres to be watched are, in decreasing order of likelihood, thriller, crime, science fiction, and action.
Referring to
The screen shot 800 includes a new ordered list 810 of movie genres that is based on the new audience that has been selected. The list 810 provides the following genres, in order from most likely to least likely: (i) the Science Fiction genre, (ii) a Fantasy genre 842 (shown by an icon of a magic wand with a star on top), (iii) a Comedy genre 844 (shown by an icon of a smiley face), and (iv) a Drama genre 846 (shown by an icon of a heartbeat as typically shown on a heart rate monitor used with an electrocardiogram). By comparing the list 710 with the list 810, it is clear that the system believes different movie genres are more, or less, likely to be selected by the different audiences. Indeed, the list 710 and the list 810 have different genres, and not just a different ordering of the same set of genres.
Referring to
As described earlier, various implementations display a text box with the name of a selected context element when a user hovers over that element in the context set 920. For example, when hovering over the genre element 926, such implementations provide a small text box that displays the selected genre, such as, for example, “Science Fiction”.
The screen shot 900 includes an ordered set 910 of eight movie recommendations, with the highest recommendation at the top-left, and the lowest recommendation at the bottom-right. The set 910 includes, from highest recommendation to lowest recommendation: (i) a first recommendation 931, which is “Inception”, (ii) a second recommendation 932, which is “Children of Men”, (iii) a third recommendation 933, which is “Signs”, (iv) a fourth recommendation 934, which is “Super 8”, (v) a fifth recommendation 935, which is “Déjà vu”, (vi) a sixth recommendation 936, which is “Moon”, (vii) a seventh recommendation 937, which is “Knowing”, and (viii) an eighth recommendation 938, which is “Happening”. The eight recommendations are the movies that the system has selected as being the most likely to be selected for viewing by the user in the selected context.
More, or fewer, recommendations can be provided in different implementations. Additionally, the movies can be presented in various orders, including, for example, (i) ordered from highest to lowest recommendation from top to bottom and left to right, such that the highest recommendation is top-left (reference element 931) and the second highest recommendation is bottom-left (reference element 935), etc., (ii) ordered with the highest recommendations near the middle, (iii) ordered alphabetically, or (iv) randomly arranged. The screen shot 900 shows movie posters; however, other implementations merely list the titles.
The user is able to select a movie from the set 910. Upon selection, one or more of a variety of operations may occur, including, for example, playing the movie, receiving information about the movie, receiving a payment screen for paying for the movie, etc.
The user has other options in various implementations, besides selecting a displayed movie poster. For example, certain implementations allow a user to remove movies from the list of recommendations using, for example, a close button associated with the movie's poster. In various such implementations, another movie is recommended and inserted as a replacement for the removed movie poster. Some implementations remember the user's selections and base future recommendations, in part, on these selections. Other implementations also allow more, or fewer, than eight movie posters to be displayed at a given time.
Referring to
In this implementation, information about the selected movie is provided to the user, as shown in the window 1000. The window 1000 includes: (i) the movie title and year of release 1010, (ii) the movie poster 1020, (iii) a summary 1030 of the movie, and (iv) a set 1040 of options for viewing the movie.
The set 1040 includes, in this implementation, four links to external sources of the selected movie “Moon”. The set 1040 includes (i) an AllMovie button 1042 to select AllMovie (http://www.allmovie.com/) as the external source, (ii) an IMDB button 1044 to select IMDB (http://www.imdb.com/) as the external source, (iii) an Amazon button 1046 to select Amazon (http://www.amazon.com/) as the external source, and (iv) a Netflix button 1048 to select Netflix (https://www.netflix.com/) as the external source.
A user is also able to navigate back to the selection screen of the screen shot 900. By selecting a part of the overlaid screen shot 900, in
Referring to
Referring to
Referring to
Referring to
Referring to
The process 1500 further includes providing a set of options for one or more additional context categories, ordered based on an option for the given context category (1520). Continuing with the example discussed above, in one particular implementation, the operation 1520 includes providing an ordered set of options for one or more additional context categories related to content selection. The ordered set of options for the one or more additional context categories is ordered based on an identification of an option from the provided options for the context category. The operation 1520 is performed in various implementations using, for example, one of the screen shots from any of
Variations of the process 1500 further include receiving user input identifying one of the provided options for (i) the one or more other context categories, and/or (ii) the one or more additional context categories. This user input operation is performed in various implementations, for example, as discussed above in moving from
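The cascading structure of the process 1500, in which each category's option set is ordered based on the options already identified for earlier categories, can be sketched as a conditional-ordering lookup. This is a hypothetical illustration: the table contents are invented to mirror the earlier screen shot examples, and a real implementation would compute the orderings rather than store them verbatim.

```python
# Hypothetical conditional-ordering tables: each category's options are
# ordered given the selections already made for the other context categories.
ORDERINGS = {
    "Audience": {
        ("Friday Night",): ["Friends", "Partner", "Alone", "Family&Kids"],
    },
    "Genre": {
        ("Friday Night", "Friends"): ["Thriller", "Crime", "Science Fiction", "Action"],
        ("Friday Night", "Partner"): ["Science Fiction", "Fantasy", "Comedy", "Drama"],
    },
}

def provide_options(category, prior_selections):
    """Return the ordered option set for `category`, conditioned on the
    options previously determined for the other context categories."""
    return ORDERINGS[category][tuple(prior_selections)]

# Audience options ordered by the previously determined day/time.
audience = provide_options("Audience", ["Friday Night"])
# Operation 1520: additional-category (genre) options re-ordered once an
# audience option has been identified.
genres = provide_options("Genre", ["Friday Night", audience[0]])
print(audience[0], genres[0])
```

Receiving user input for any category simply changes the key used for the later lookups, which is how changing the audience from “Friends” to “Partner” yields a different genre list.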
The process 1500 can be performed using, for example, the structure provided in
Referring to
In other implementations, however,
In another distributed implementation, the presentation device 1620 and the user input device 1610 are integrated into a second screen such as, for example, a tablet. The processor 1630 is in a STB. The STB controls both the tablet and a primary screen TV. The tablet receives and displays screen shots from the STB, providing movie recommendations. The tablet accepts input from the user, through which the user interacts with the content on the screen shots, and transmits that input to the STB. The STB does the processing for the movie recommendation system, although various implementations do have a processor in the tablet.
The processor 1630 of
The presentation device 1620 is, for example, any device suitable for providing any of the sensory indications described throughout this application. Such devices include, for example, all user interface devices described throughout this application. Such devices also include, for example, the display components shown or described with respect to
The system/apparatus 1600 is used, in various implementations to perform one or more of the processes shown in
The system/apparatus 1600 is also used, in various implementations, to provide one or more of the screen shots of
Various implementations of the system/apparatus 1600 include only the presentation device 1620 and the processor 1630, and do not include the user input device 1610. Such systems are able to make content recommendations on the presentation device 1620. Additionally, such implementations are able to access selections for context categories using one or more of, for example, (i) default values, (ii) values from profiles, and/or (iii) values accessed over a network.
Additional implementations provide a user with options for selecting values for multiple context categories at the same time. For example, upon receiving user selection of time and day in
Various implementations discuss context. As previously discussed, context is indicated or described, for example, by context categories that describe an activity. Each activity (for example, consuming content such as a movie) can have its own context categories. One manner of determining context categories is to answer the common questions of “who”, “what”, “where”, “when”, “why”, and “how”. For example, if the activity is defined as consuming content, the common questions can result in a variety of context categories, as discussed below:
“Who” is consuming the content? For example, the audience is a context category. Additionally, or alternatively, separate context categories can be used for demographic information such as age, gender, occupation, education achieved, location of upbringing, and previously observed behavior for an individual in the audience.
“What” content is being consumed? For example, the genre of the content is a context category. Additionally, or alternatively, separate context categories can be used for the length of the content, and the maturity ranking of the content (for example, G, PG-13, or R).
“Where” is the content being consumed? For example, the location is a context category and can have values such as, for example, in a home, in an auditorium, in a vehicle such as a plane or car, in the Deep South, or in the North East. Additionally, or alternatively, separate context categories can be used for room characteristics (for example, living room, auditorium, or airplane cabin) and geographical location (for example, Deep South).
“When” is the content being consumed? For example, the day-and-time is a context category. Additionally, or alternatively, separate context categories can be used for the day, the time, the calendar season (winter, spring, summer, or fall), and the holiday season (for example, Christmas, Thanksgiving, or Fourth of July), as discussed further below.
“Why” is the content being consumed? For example, the occasion is a context category and can have values such as, for example, a wedding anniversary, a child's birthday party, or a multi-generational family reunion.
“How” is the content being consumed? For example, the medium being used is a context category and can have values such as, for example, a small screen, a large screen, a mobile device, a low-speed connection, a high-speed connection, or surround sound. Additionally, or alternatively, separate context categories can be used for screen size, connection speed, and sound quality.
Other manners of determining context categories may also be used.
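The who/what/where/when/why/how taxonomy above can be sketched as a simple data structure. This is a hypothetical illustration only; the category names and the function are not part of the specification.

```python
# A minimal sketch of the context-category taxonomy described above.
# All names here are illustrative assumptions, not taken from the specification.

CONTEXT_CATEGORIES = {
    "who":   ["audience", "age", "gender", "occupation", "education", "observed_behavior"],
    "what":  ["genre", "length", "maturity_rating"],
    "where": ["location", "room_characteristics", "geographical_location"],
    "when":  ["day", "time", "calendar_season", "holiday_season"],
    "why":   ["occasion"],
    "how":   ["medium", "screen_size", "connection_speed", "sound_quality"],
}

def categories_for(question: str) -> list[str]:
    """Return the context categories derived from one of the common questions."""
    return CONTEXT_CATEGORIES.get(question.lower(), [])
```

Any given implementation might use only a subset of these categories, or might merge several (for example, treating day-and-time as a single category, as the specification notes).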
Different implementations vary one or more of a number of features. Some of those features, and their variations, are described below:
Various implementations use different presentation devices. Such presentation devices include, for example, a television (“TV”) (with or without picture-in-picture (“PIP”) functionality), a computer display, a laptop display, a personal digital assistant (“PDA”) display, a cell phone display, and a tablet (for example, an iPad) display. The display devices are, in different implementations, either a primary or a secondary screen. Still other implementations use presentation devices that provide a different, or additional, sensory presentation. Display devices typically provide a visual presentation. However, other presentation devices provide, for example, (i) an auditory presentation using, for example, a speaker, or (ii) a haptic presentation using, for example, a vibration device that provides, for example, a particular vibratory pattern, or a device providing other haptic (touch-based) sensory indications.
Various implementations provide content recommendations based on other contextual information. One category of such information includes, for example, an emotional feeling of the user. For example, if the user is happy, sad, lonely, etc., the system can provide a different set of recommendations appropriate to the emotional state of the user. In one particular implementation, the system provides, based on, for example, user history or objective input from other users, a rank-ordered set of genres and/or content based on the day, the time, the audience, and the user's emotional state.
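One way such a conditional rank-ordering might be produced is with simple co-occurrence counts over prior viewing history. The following is a sketch under that assumption; the history data, the context keys (day, mood), and the function name are all hypothetical.

```python
from collections import Counter

# Hypothetical viewing history: (context, genre_watched) pairs.
# The context keys mirror the categories discussed above (day, emotional state).
HISTORY = [
    ({"day": "friday", "mood": "happy"}, "comedy"),
    ({"day": "friday", "mood": "happy"}, "comedy"),
    ({"day": "friday", "mood": "happy"}, "action"),
    ({"day": "sunday", "mood": "sad"},   "drama"),
]

def ranked_genres(context: dict) -> list[str]:
    """Rank genres by how often they were chosen under a matching context,
    most frequently chosen first."""
    counts = Counter(
        genre
        for ctx, genre in HISTORY
        if all(ctx.get(key) == value for key, value in context.items())
    )
    return [genre for genre, _ in counts.most_common()]
```

A production system would presumably replace the raw frequency counts with a learned likelihood model, but the interface is the same: a context in, an ordered set of options out.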
As discussed above, another example of additional contextual information is “season”. Certain implementations provide indicators of a calendar season that include “summer”, “fall”, “winter”, and “spring”. Certain other implementations provide indicators of a holiday season that include “Christmas”, “Thanksgiving”, “Halloween”, and “Valentine's Day”. Certain implementations include both categories and their related values. A rank-ordering of movie genres can be expected to change based on the season, and a rank-ordering of movies within a genre can likewise be expected to change based on the season.
Various implementations, as should be clear from earlier statements, base genre recommendations and/or movie recommendations on contextual information that is different from that described in
Various implementations receive user input identifying a value, or a selection, for a particular context category. Other implementations access a selection, or input, in other manners. For example, certain implementations receive input from other members of an audience using, for example, any of a variety of “second screens” such as, for example, a tablet or a smartphone. As another example, certain implementations use default selections when no user input is available or received. As another example, certain implementations access user profiles, access databases from the Internet, or access other remote sources, for input or selections.
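The fallback order just described (direct user input, second-screen input, a stored profile, defaults) might be sketched as a simple resolution function. All function and parameter names here are hypothetical, and the particular priority order is one of many the specification permits.

```python
def resolve_selection(category, user_input=None, second_screen=None,
                      profile=None, defaults=None):
    """Pick a value for a context category from the first source that has one.

    Sources are consulted in priority order: direct user input, then
    second-screen input, then a stored profile, then default values.
    Returns None if no source supplies a value for the category.
    """
    for source in (user_input, second_screen, profile, defaults):
        if source and category in source:
            return source[category]
    return None
```

Values accessed over a network (the third source mentioned for display-only implementations) could be merged into any of these dictionaries before resolution.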
Various implementations describe receiving a single value or selection for a particular context category. For example,
This application provides multiple figures, including the block diagrams of
For example, the block diagrams certainly describe an interconnection of functional blocks of an apparatus or system. However, it should also be clear that the block diagrams provide a description of a process flow. As an example,
For example, the flow diagrams certainly describe a flow process. However, it should also be clear that the flow diagrams provide an interconnection between functional blocks of a system or apparatus for performing the flow process. For example, reference element 1510 also represents a block for performing the function of providing a user an ordered set of options for a given context category. Other blocks of
For example, the screen shots of
We have thus provided a number of implementations. Various implementations provide content recommendations based on context. Various other implementations also provide context selections that are ranked according to frequency or likelihood. Various other implementations provide content recommendations that are also ranked according to frequency or likelihood.
It should be noted, however, that variations of the described implementations, as well as additional applications, are contemplated and are considered to be within our disclosure. Additionally, features and aspects of described implementations may be adapted for other implementations.
Reference to “one embodiment” or “an embodiment” or “one implementation” or “an implementation” of the present principles, as well as other variations thereof, means that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment of the present principles. Thus, the appearances of the phrase “in one embodiment” or “in an embodiment” or “in one implementation” or “in an implementation”, as well as any other variations, appearing in various places throughout the specification are not necessarily all referring to the same embodiment.
Additionally, this application or its claims may refer to “determining” various pieces of information. Determining the information may include one or more of, for example, estimating the information, calculating the information, predicting the information, or retrieving the information from memory.
Further, this application or its claims may refer to “accessing” various pieces of information. Accessing the information may include one or more of, for example, receiving the information, retrieving the information (for example, from memory), storing the information, processing the information, transmitting the information, moving the information, copying the information, erasing the information, calculating the information, determining the information, predicting the information, or estimating the information.
It is to be appreciated that the use of any of the following “/”, “and/or”, and “at least one of”, for example, in the cases of “A/B”, “A and/or B” and “at least one of A and B”, is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B). As a further example, in the cases of “A, B, and/or C” and “at least one of A, B, and C” and “at least one of A, B, or C”, such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C). This may be extended, as readily apparent by one of ordinary skill in this and related arts, for as many items listed.
Various implementations refer to a set of options for a context category. A “set” can be represented in various manners, including, for example, in a list, or another visual representation.
Additionally, many implementations may be implemented in a processor, such as, for example, a post-processor or a pre-processor. The processors discussed in this application do, in various implementations, include multiple processors (sub-processors) that are collectively configured to perform, for example, a process, a function, or an operation. For example, the processor 1630, the audio processor 206, the video processor 210, and the input stream processor 204, as well as other processing components such as, for example, the controller 214, are, in various implementations, composed of multiple sub-processors that are collectively configured to perform the operations of that component.
The implementations described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method), the implementation of features discussed may also be implemented in other forms (for example, an apparatus or program). An apparatus may be implemented in, for example, appropriate hardware, software, and firmware. The methods may be implemented in, for example, an apparatus such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, computers, cell phones, tablets, portable/personal digital assistants (“PDAs”), and other devices that facilitate communication of information between end-users.
Implementations of the various processes and features described herein may be embodied in a variety of different equipment or applications. Examples of such equipment include an encoder, a decoder, a post-processor, a pre-processor, a video coder, a video decoder, a video codec, a web server, a television, a set-top box, a router, a gateway, a modem, a laptop, a personal computer, a tablet, a cell phone, a PDA, and other communication devices. As should be clear, the equipment may be mobile and even installed in a mobile vehicle.
Additionally, the methods may be implemented by instructions being performed by a processor, and such instructions (and/or data values produced by an implementation) may be stored on a processor-readable medium such as, for example, an integrated circuit, a software carrier or other storage device such as, for example, a hard disk, a compact diskette (“CD”), an optical disc (such as, for example, a DVD, often referred to as a digital versatile disc or a digital video disc), a random access memory (“RAM”), or a read-only memory (“ROM”). The instructions may form an application program tangibly embodied on a processor-readable medium. Instructions may be, for example, in hardware, firmware, software, or a combination. Instructions may be found in, for example, an operating system, a separate application, or a combination of the two. A processor may be characterized, therefore, as, for example, both a device configured to carry out a process and a device that includes a processor-readable medium (such as a storage device) having instructions for carrying out a process. Further, a processor-readable medium may store, in addition to or in lieu of instructions, data values produced by an implementation.
As will be evident to one of skill in the art, implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted. The information may include, for example, instructions for performing a method, or data produced by one of the described implementations. For example, a signal may be formatted to carry as data the rules for writing or reading syntax, or to carry as data the actual syntax-values generated using the syntax rules. Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal. The formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream. The information that the signal carries may be, for example, analog or digital information. The signal may be transmitted over a variety of different wired or wireless links, as is known. The signal may be stored on a processor-readable medium.
A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made. For example, elements of different implementations may be combined, supplemented, modified, or removed to produce other implementations. Additionally, one of ordinary skill will understand that other structures and processes may be substituted for those disclosed and the resulting implementations will perform at least substantially the same function(s), in at least substantially the same way(s), to achieve at least substantially the same result(s) as the implementations disclosed. Accordingly, these and other implementations are contemplated by this application.
Claims
1. A method comprising:
- providing an ordered set of options for a context category related to content selection, the ordered set of options for the context category being ordered based on a previously determined option for one or more other context categories; and
- providing an ordered set of options for one or more additional context categories related to content selection, the ordered set of options for the one or more additional context categories being ordered based on an identification of an option from the provided options for the context category.
2. The method of claim 1 further comprising:
- providing one or more content recommendations to the user based on (i) the identification of the option for the context category, and (ii) an identification of an option from the provided options for the one or more additional context categories.
3. The method of claim 1 wherein:
- the set of options for the context category is ordered based on likelihood of selection, and is provided in an order reflecting likelihood of selection, and the likelihood is based on the previously determined option for the one or more other context categories, and
- the set of options for the one or more additional context categories is ordered based on likelihood of selection, and is provided in an order reflecting likelihood of selection, and the likelihood is based on the identification of the option for the context category.
4. The method of claim 1 wherein providing the one or more content recommendations comprises providing the one or more content recommendations in an order reflecting likelihood of selection.
5. The method of claim 1 wherein at least one of the context category or the one or more additional context categories includes one or more of (i) day of the week for intended content consumption, (ii) time of the day for intended content consumption, (iii) season for intended content consumption, (iv) emotional feeling of a user, (v) the intended audience that will be consuming the content, or (vi) the genre of the content.
6. The method of claim 1 wherein:
- the context category includes the intended audience that will be consuming the content, and
- the one or more additional context categories includes the genre of the content.
7. The method of claim 1 wherein:
- the one or more context categories include one or more of (i) day of the week for intended content consumption, or (ii) time of the day for intended content consumption.
8. The method of claim 1 wherein providing the one or more content recommendations is further based on one or more of (i) tracked information from a user's behavior and/or (ii) collected information from users.
9. The method of claim 1 wherein providing the one or more content recommendations is further based on one or more of extrapolations and/or machine learning applied to input from one or more of (i) tracked information from a user's behavior and/or (ii) collected information from users.
10. The method of claim 1 further comprising receiving a user input as the identification of the option for the context category.
11. The method of claim 2 further comprising receiving a user input as the identification of the option for the one or more additional context categories.
12. An apparatus configured to perform one or more of the methods of claim 1.
13. The apparatus of claim 12 comprising one or more processors collectively configured to perform one or more of the methods.
14. An apparatus comprising:
- means for providing an ordered set of options for a context category related to content selection, the ordered set of options for the context category being ordered based on a previously determined option for one or more other context categories; and
- means for providing an ordered set of options for one or more additional context categories related to content selection, the ordered set of options for the one or more additional context categories being ordered based on an identification of an option from the provided options for the context category.
15. An apparatus comprising:
- a presentation device; and
- a processor configured to provide on the presentation device an ordered set of options for a context category related to content selection, the ordered set of options for the context category being ordered based on a previously determined option for one or more other context categories, wherein the processor is further configured to provide on the presentation device an ordered set of options for one or more additional context categories related to content selection, the ordered set of options for the one or more additional context categories being ordered based on an identification of an option from the provided options for the context category.
16. The apparatus of claim 15 further comprising a user input device configured to receive a user input as the identification of the option for the context category.
17. The apparatus of claim 16 wherein the presentation device and the user input device are integrated into a single unit.
18. An apparatus comprising one or more processors collectively configured to perform the following operations:
- providing an ordered set of options for a context category related to content selection, the ordered set of options for the context category being ordered based on a previously determined option for one or more other context categories; and
- providing an ordered set of options for one or more additional context categories related to content selection, the ordered set of options for the one or more additional context categories being ordered based on an identification of an option from the provided options for the context category.
19. A processor readable medium having stored thereon instructions for causing one or more processors to collectively perform the following operations:
- providing an ordered set of options for a context category related to content selection, the ordered set of options for the context category being ordered based on a previously determined option for one or more other context categories; and
- providing an ordered set of options for one or more additional context categories related to content selection, the ordered set of options for the one or more additional context categories being ordered based on an identification of an option from the provided options for the context category.
20. A processor readable medium having stored thereon instructions for causing one or more processors to collectively perform one or more of the methods of claim 1.
Type: Application
Filed: Dec 17, 2012
Publication Date: Sep 3, 2015
Inventor: Pedro Carvalho Oliveira (Palo Alto, CA)
Application Number: 14/431,481