System and method for capturing a multimedia content item by a mobile device and matching sequentially relevant content to the multimedia content item

- Cortica, Ltd.

A method and system for matching sequentially relevant content to at least one multimedia content item (MMCI) captured by a mobile device are provided. The method includes extracting at least one MMCI from the mobile device; generating a signature for the extracted at least one MMCI; matching the generated signature to a plurality of signatures of content items; and determining, based on the matching, at least one sequentially relevant content item.

Description
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 14/167,388 filed on Jan. 29, 2014, now allowed, which claims the benefit of U.S. Provisional Application No. 61/766,703 filed on Feb. 20, 2013. The Ser. No. 14/167,388 application is also a continuation-in-part (CIP) application of U.S. patent application Ser. No. 13/685,182 filed on Nov. 26, 2012, now U.S. Pat. No. 9,235,557, which is a CIP of:

(a) U.S. patent application Ser. No. 13/624,397 filed on Sep. 21, 2012, now U.S. Pat. No. 9,191,626;

(b) U.S. patent application Ser. No. 13/344,400 filed on Jan. 5, 2012, now U.S. Pat. No. 8,959,037, which is a continuation of U.S. patent application Ser. No. 12/434,221, filed May 1, 2009, now U.S. Pat. No. 8,112,376;

(c) U.S. patent application Ser. No. 12/084,150 having a filing date of Apr. 7, 2009, now U.S. Pat. No. 8,655,801, which is the National Stage of International Application No. PCT/IL2006/001235, filed on Oct. 26, 2006, which claims foreign priority from Israeli Application No. 171577 filed on Oct. 26, 2005 and Israeli Application No. 173409 filed on Jan. 29, 2006; and

(d) U.S. patent application Ser. No. 12/195,863, filed Aug. 21, 2008, now U.S. Pat. No. 8,326,775, which claims priority under 35 USC 119 from Israeli Application No. 185414, filed on Aug. 21, 2007, and which is also a continuation-in-part (CIP) of the above-referenced U.S. patent application Ser. No. 12/084,150.

All of the applications referenced above are herein incorporated by reference for all that they contain.

TECHNICAL FIELD

The disclosure generally relates to systems and methods for capturing information viewed on a mobile device, and more specifically to systems and methods for enabling display of matching relevant content to multimedia content captured by a mobile device.

BACKGROUND

As the World Wide Web (WWW) continues to grow exponentially in size and content, the task of finding relevant multimedia content becomes increasingly complex. Upon finding such content, many users wish to be able to view the content at a later time, from a different device or source. Current communication tools, such as web browsers or search engines, do not provide an easy and convenient way for a person to view, locate, or otherwise access information at a later time and/or from a different device. One solution that caters to this need involves the use of bookmarks (URLs) to a web page that contains the specific content. However, URLs, and by extension bookmarks pointing to web pages, are dynamically changed. Thus, visiting a URL, e.g., a week later, may lead users to content other than the particular content that the users wish to view.

Other solutions allow viewing the same web page across different devices. For example, the web page CNN.COM opened on a smartphone can be opened on a PC when the user launches the browser thereon. However, only a recent session on one device, e.g., the last viewed web page or video, can be displayed (or opened) on another device. In addition, such a solution requires that the devices be registered with a particular user. Thus, content viewed on a device that does not belong to the user cannot be later viewed on the user's device without having the user search, locate, and access the content.

It would therefore be advantageous to provide an efficient way for a user to mark a multimedia content item such that the user is capable of accessing the multimedia content item, or a successive related content item, at a later time and/or from a different device.

SUMMARY

Certain embodiments disclosed herein include a method for matching sequentially relevant content to at least one multimedia content item (MMCI) stored in a mobile device. The method includes extracting at least one MMCI from the mobile device; generating a signature for the extracted at least one MMCI; matching the generated signature to a plurality of signatures of content items; and determining, based on the matching, at least one sequentially relevant content item.
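For illustration only, the following non-limiting Python sketch mirrors the four recited steps using a toy set-based signature in place of the SGS described below; all function names, the shingle size, and the threshold are hypothetical.

```python
from dataclasses import dataclass


@dataclass
class ContentItem:
    title: str
    signature: frozenset  # toy signature; the real SGS is described below


def generate_signature(data: bytes) -> frozenset:
    # Toy stand-in for the signature generator system (SGS):
    # the set of all 4-byte shingles of the raw content.
    return frozenset(data[i:i + 4] for i in range(len(data) - 3))


def jaccard(a: frozenset, b: frozenset) -> float:
    return len(a & b) / max(len(a | b), 1)


def match_sequentially_relevant(mmci: bytes, catalog: list[ContentItem],
                                threshold: float = 0.5) -> list[ContentItem]:
    sig = generate_signature(mmci)                      # generate signature
    return [c for c in catalog                          # match against catalog
            if jaccard(sig, c.signature) >= threshold]  # determine relevant items
```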

Certain embodiments disclosed herein also include a system for matching sequentially relevant content to at least one MMCI captured by a mobile device. The system comprises a processing unit; and a memory coupled to the processing unit, the memory contains instructions that, when executed by the processing unit, configure the system to: extract at least one MMCI from the mobile device; generate a signature for the extracted at least one MMCI; match the generated signature to a plurality of signatures of content items; and determine, based on the matching, at least one sequentially relevant content item.

BRIEF DESCRIPTION OF THE DRAWINGS

The subject matter disclosed herein is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other objects, features, and advantages of the disclosed embodiments will be apparent from the following detailed description taken in conjunction with the accompanying drawings.

FIG. 1 is a schematic block diagram of a system utilized to describe the various disclosed embodiments.

FIG. 2 is a flowchart describing the process of matching a content item to multimedia content displayed on a web-page according to an embodiment.

FIG. 3 is a block diagram depicting the basic flow of information in the signature generator system.

FIG. 4 is a diagram showing the flow of patches generation, response vector generation, and signature generation in a large-scale speech-to-text system.

FIG. 5 is a flowchart describing a process of matching sequentially relevant content items to a multimedia content item according to an embodiment.

DETAILED DESCRIPTION

It is important to note that the embodiments disclosed herein are only examples of the many advantageous uses of the innovative teachings herein. In general, statements made in the specification of the present application do not necessarily limit any of the various claimed embodiments. Moreover, some statements may apply to some inventive features but not to others. In general, unless otherwise indicated, singular elements may be in plural and vice versa with no loss of generality. In the drawings, like numerals refer to like parts throughout the several views.

FIG. 1 shows an exemplary and non-limiting schematic diagram of a system 100 for providing content items for matching multimedia content displayed in a web-page in accordance with one embodiment. A network 110 is used to communicate between different parts of the system. The network 110 may be the Internet, the world-wide-web (WWW), a local area network (LAN), a wide area network (WAN), a metro area network (MAN), and other networks capable of enabling communication between the elements of the system 100.

Further connected to the network 110 are one or more computing devices, 120-1 through 120-n (collectively referred to hereinafter as computing devices 120 or individually as a computing device 120, merely for simplicity purposes). Each computing device 120 may be, for example, a personal computer (PC), a personal digital assistant (PDA), a mobile phone, a smart phone, a tablet computer, a wearable computing device, and other kinds of wired and mobile appliances, equipped with browsing, capturing, viewing, listening, filtering, and managing content capabilities, etc., that are enabled as further discussed herein below.

Each computing device 120 executes at least one application 125 that can render multimedia content captured by the device (e.g., from a video camera) over the device's display. An application 125 also communicates with the server 130, enabling performance of the embodiments disclosed in detail herein. As a non-limiting example, an application 125 may be a web browser, an independent application, or a plug-in application.

The server 130 is further connected to the network 110. The server 130 communicates with the devices 120 to receive captured multimedia content or links to such content. The content provided by the devices 120 may be associated with a request to view the content on a different device 120 or the same device at a different time. Such a request is sent by the application 125. The multimedia content may include, for example, an image, a graphic, a video stream, a video clip, an audio stream, an audio clip, a video frame, a photograph, and an image of signals (e.g., spectrograms, phasograms, scalograms, etc.), and/or combinations thereof and portions thereof.

The server 130 is also communicatively connected to a signature generator system (SGS) 140, either directly or through the network 110. In one embodiment, the SGS 140 may be embedded in the server 130. The server 130 is enabled to receive and serve multimedia content and causes the SGS 140 to generate a signature respective of the multimedia content. The generated signature(s) may be robust to noise and distortion. The process for generating the signatures for multimedia content is explained in more detail herein below with respect to FIGS. 3 and 4.

It should be noted that the server 130 typically comprises a processing unit and a memory (not shown). The processing unit is coupled to the memory, which is configured to contain instructions that can be executed by the processing unit. The server 130 also includes a network interface (not shown) to the network 110. In one embodiment, the server 130 is communicatively connected to or includes an array of Computational Cores configured as discussed in more detail below.

A plurality of web servers 150-1 through 150-m are also connected to the network 110, each of which is configured to generate and send content to the server 130. In an embodiment, the web servers 150-1 through 150-m typically, but not necessarily exclusively, are resources for information that may be utilized to provide multimedia content relevant to multimedia content items captured by the devices 120. In one embodiment, the content and the respective generated signatures may be stored in a data warehouse 160, which is connected to the server 130 (either directly or through the network 110) for further use.

The system 100 may be configured to generate customized channels of multimedia content. Accordingly, a web browser 125 or a client channel manager application (not shown), available either on the server 130, on the computing device 120, or as an independent or plug-in application, may enable a user to create customized channels of multimedia content by receiving selections made by the user as inputs. Such customized channels of multimedia content are personalized content channels that are generated in response to selections made by a user of the application 125 or the client channel manager application. The system 100, and in particular the server 130 in conjunction with the SGS 140, determines which multimedia content is more suitable to be viewed, played, or otherwise utilized by the user with respect to a given channel, based on the signatures of selected multimedia content. These channels may optionally be shared with other users, used and/or further developed cooperatively, and/or sold to other users or providers, and so on. The process for defining, generating, and customizing the channels of multimedia content is described in greater detail in the co-pending Ser. No. 13/344,400 application referenced above.

To demonstrate the disclosed embodiments, it is assumed, merely for the sake of simplicity and without limitation on the generality of the disclosed embodiments, that computing device 120-1 is a mobile or handheld device belonging to a certain user.

According to the disclosed embodiments, a multimedia content item is captured by the device 120-1. The capturing of the multimedia content item may be performed by, e.g., one or more sensors integrated within the device 120-1. In an embodiment, the capturing also includes selection of a multimedia content item stored locally in the device 120-1. A sensor may be, for example, but not limited to, a camera, a microphone, a temperature sensor, a global positioning system (GPS), a light intensity sensor, an image analyzer, a sound sensor, an ultrasound sensor, a speech recognizer, and so on.

The server 130 is configured to receive and analyze the captured multimedia content item. This includes querying or requesting the SGS 140 to generate at least one signature for the multimedia content item provided by the server 130. Then, using the generated signature(s), the server 130 is configured to search the web servers 150-1 through 150-m for sequentially relevant multimedia content items.

Sequentially relevant content items are content items related to a particular multimedia content item (MMCI) through a predetermined sequence or series such as, but not limited to, episodes of a television show; pages or chapters of a book; books in a series (e.g., the Harry Potter series of books); pages or issues of a comic strip or magazine; groups of similar items arranged in chronological order; etc. Such sequentially relevant items may be the content item that the MMCI was captured from (e.g., a video containing a captured MMCI in the form of an image), a successively related content item, etc. Successively related content items are content items that occur later in a sequence or series than a given content item.
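As a non-limiting illustration, the sketch below models sequences/series as ordered lists and looks up a successively related item; the catalog contents are hypothetical.

```python
# Hypothetical catalog mapping each series to its ordered items.
SERIES = {
    "Breaking Bad": ["S01E01", "S01E02", "S01E03"],
    "Harry Potter (books)": ["Philosopher's Stone", "Chamber of Secrets"],
    "Standalone film": ["The film itself"],  # a sequence may have length 1
}


def successor(series: str, item: str) -> str | None:
    """Return the next item in the sequence, or None for the last item."""
    order = SERIES[series]
    idx = order.index(item)
    return order[idx + 1] if idx + 1 < len(order) else None


assert successor("Breaking Bad", "S01E01") == "S01E02"
assert successor("Standalone film", "The film itself") is None
```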

For example, if the MMCI being analyzed is an image from the first episode of the television show “Breaking Bad,” a successively related content item to the first episode of the show may be the second episode of “Breaking Bad.” Note that any given sequence or series may have a length of 1 (e.g., only 1 content item in the sequence or series). As a non-limiting example, if the signature of an image indicates that the image belongs to a particular episode of the television (TV) series “Lost” (e.g., episode 1), then the sequentially relevant episode of “Lost” (episode 1), or a link thereto, is sent to the user device 120-1.

In an embodiment, a relevant video content item may feature associations of particular times within the video to given MMCIs. For example, if the signature of an image indicates that the image represents a frame related to a particular time of an episode of the television show American Idol® (e.g., season 1, episode 5, 20 minutes and 31 seconds from the beginning of the episode), then a sequentially relevant video of season 1 episode 5 would begin playing at 20 minutes and 31 seconds from the beginning of the episode after the episode is returned to the user device.
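As a non-limiting illustration of such time associations, the sketch below maps a matched frame signature to a (season, episode, offset) triple so that playback can resume at the captured frame; the index and identifiers are hypothetical.

```python
# Hypothetical index: frame-signature ID -> (season, episode, offset in seconds).
FRAME_INDEX = {
    "sig-7f3a": (1, 5, 20 * 60 + 31),  # season 1, episode 5, 20:31
}


def resume_point(signature_id: str) -> dict:
    season, episode, offset = FRAME_INDEX[signature_id]
    return {"season": season, "episode": episode, "start_at_sec": offset}


print(resume_point("sig-7f3a"))
# {'season': 1, 'episode': 5, 'start_at_sec': 1231}
```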

In an embodiment, MMCIs may belong to more than one sequence and/or series. For example, if a user captures a video clip from the movie Rocky, a sequence of related content items may consist solely of movies in the Rocky series. Another sequence of items related to the captured clip from Rocky may be a list of all movies in which actor Sylvester Stallone appears, ordered chronologically. In a further embodiment, a user may be prompted to select which sequence or series he or she would like to receive sequentially related content from.

As a non-limiting example, a user may capture an image of a page from an issue of Time® magazine (e.g., issue 4 from 2013, page 30). Based on this image, the server 130 may determine which page and/or issue the image is from, and return that page and/or issue to the user. In another embodiment, the server 130 may return a successive issue of the magazine (e.g., issue 5 from 2013). In yet another embodiment, the options for sequences or series presented to a user are pre-populated based on inputs from other users.

According to another embodiment, the SGS 140 is integrated in the computing device 120-1, thereby allowing the computing device 120-1 to capture and analyze one or more MMCIs while operating without a network connection. As an example, an image of a TV screen presenting an advertisement for a Dodge® automobile is captured by the camera of the device 120-1 while operating off-line. In that case, relevant content may be identified locally, e.g., pictures of similar vehicles, ordered by the year each car was released, that are stored in the pictures storage folder of the device 120-1. If the device 120-1 can access the server 130, then the request for relevant content is sent to the server 130, which searches the network 110 for matching content relevant to the captured multimedia content item.

The SGS 140 is configured to analyze the captured image and to generate one or more signatures respective thereto. Upon receiving a request for sequentially relevant content to the captured image, the mobile device 120-1 searches for relevant content stored locally therein. According to one embodiment, the determination of which content is relevant may be performed by, for example, a signature matching process based on signatures generated as discussed hereinabove. In an exemplary embodiment, two content items are determined to be matching if their respective signatures at least partially match (e.g., in comparison to a predefined threshold).
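As a non-limiting illustration of a partial-match test against a predefined threshold, the sketch below compares two binary signatures bit-wise; the vector length and the threshold value are arbitrary assumptions.

```python
import numpy as np


def partially_match(sig_a: np.ndarray, sig_b: np.ndarray,
                    threshold: float = 0.8) -> bool:
    """Signatures match when the fraction of agreeing bits
    meets or exceeds the predefined threshold."""
    return float(np.mean(sig_a == sig_b)) >= threshold


rng = np.random.default_rng(0)
a = rng.integers(0, 2, size=256)
b = a.copy()
b[:20] ^= 1                    # simulate noise: flip 20 of 256 bits
print(partially_match(a, b))   # True (about 92% agreement)
```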

It should be appreciated that the signature generated for a captured multimedia content element enables accurate recognition of sequentially relevant content, because the signatures generated according to the disclosed embodiments allow for recognition and classification of multimedia elements in applications such as content-tracking, video filtering, multimedia taxonomy generation, video fingerprinting, speech-to-text, audio classification, element recognition, video/image search, and any other application requiring content-based signature generation and matching for large content volumes, such as the web and other large-scale databases.

FIG. 2 depicts an exemplary and non-limiting flowchart 200 describing the process of matching an advertisement to multimedia content displayed on a web-page. In S205, the method starts when a web-page is uploaded to an application 125 (e.g., a web-browser). In S210, a request to match at least one MMCI (or a multimedia content element) contained in the uploaded web-page to an appropriate advertisement item is received. The request can be received from a web server (e.g., a server 150-1), a script running on the uploaded web-page, or an agent (e.g., an add-on) installed in the web-browser. S210 can also include extracting the at least one MMCI and requesting that signatures be generated.

In S220, a signature for the multimedia content element is generated. The generation of the signature for the multimedia content element by a signature generator is described below. In S230, an advertisement item is matched to the multimedia content element respective of its generated signature. In one embodiment, the matching process includes searching for at least one advertisement item respective of the signature of the multimedia content and displaying the at least one advertisement item within the display area of the web-page. In one embodiment, the matching of an advertisement to a multimedia content element can be performed by the computational cores that are part of the large-scale matching process discussed in detail below.

In S240, upon detection of a user's gesture, the advertisement item is uploaded to the web-page and displayed therein. The user's gesture may be a scroll on the multimedia content element, a press on the multimedia content element, and/or a response to the multimedia content. This ensures that the user's attention is given to the advertised content. In S250, it is checked whether there are additional requests to analyze multimedia content elements; if so, execution continues with S210; otherwise, execution terminates.

As a non-limiting example of a matching process, a user uploads a web-page that contains an image of a sea shore. The image is then analyzed and a signature is generated respective thereto. Respective of the image signature, an advertisement item (e.g., a banner) is matched to the image, for example, a swimsuit advertisement. Upon detection of a user's gesture, for example, a mouse scrolling over the sea shore image, the swimsuit ad is displayed.

The web-page may contain a number of multimedia content elements; however, in some instances only a few content items may be displayed in the web-page. Accordingly, in one embodiment, the signatures generated for the multimedia content elements are clustered and the cluster of signatures is matched to one or more advertisement items.
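As a non-limiting illustration, the sketch below clusters a page's binary signatures with a simple greedy Hamming-distance scheme before the cluster is matched to advertisement items; the scheme and the distance cutoff are illustrative assumptions, not the patented clustering method.

```python
import numpy as np


def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.sum(a != b))


def cluster_signatures(signatures: list[np.ndarray],
                       max_dist: int = 32) -> list[list[np.ndarray]]:
    """Greedily group signatures whose Hamming distance to a cluster's
    first member is within max_dist; each resulting cluster would then
    be matched to advertisement items as a unit."""
    clusters: list[list[np.ndarray]] = []
    for sig in signatures:
        for cluster in clusters:
            if hamming(sig, cluster[0]) <= max_dist:
                cluster.append(sig)
                break
        else:
            clusters.append([sig])
    return clusters
```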

FIGS. 3 and 4 illustrate the generation of signatures for the multimedia content elements by the SGS 140 according to one embodiment. An exemplary high-level description of the process for large scale matching is depicted in FIG. 3. In this example, the matching is for a video content.

Video content segments 2 from a Master database (DB) 6 and a Target DB 1 are processed in parallel by a large number of independent computational Cores 3 that constitute an architecture for generating the Signatures (hereinafter the “Architecture”). Further details on the computational Cores generation are provided below. The independent Cores 3 generate a database of Robust Signatures and Signatures 4 for Target content-segments 5 and a database of Robust Signatures and Signatures 7 for Master content-segments 8. An exemplary and non-limiting process of signature generation for an audio component is shown in detail in FIG. 4. Finally, Target Robust Signatures and/or Signatures are effectively matched, by a matching algorithm 9, to the database of Master Robust Signatures and/or Signatures to find all matches between the two databases.
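As a non-limiting illustration of the Target-versus-Master matching step (matching algorithm 9), the sketch below assumes binary signatures compared by Hamming distance; both databases are toy stand-ins.

```python
import numpy as np


def find_matches(target_db: np.ndarray, master_db: np.ndarray,
                 max_hamming: int = 24) -> list[tuple[int, int]]:
    """Return (target_index, master_index) pairs of matching signatures."""
    pairs = []
    for ti, t in enumerate(target_db):
        dists = np.sum(master_db != t, axis=1)  # Hamming distance to each master signature
        for mi in np.flatnonzero(dists <= max_hamming):
            pairs.append((ti, int(mi)))
    return pairs


rng = np.random.default_rng(3)
master = rng.integers(0, 2, size=(1000, 256))
target = master[[5, 42]].copy()
target[0, :10] ^= 1                  # a noisy copy of master entry 5
print(find_matches(target, master))  # [(0, 5), (1, 42)]
```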

To demonstrate an example of the signature generation process, it is assumed, merely for the sake of simplicity and without limitation on the generality of the disclosed embodiments, that the signatures are based on a single frame, leading to certain simplification of the computational cores generation. The Matching System is extensible for signature generation that captures the dynamics in-between frames.

The Signatures generation process will now be described with reference to FIG. 4. The first step in the process of signature generation from a given speech-segment is to break down the speech-segment into K patches 14 of random length P and random position within the speech segment 12. The breakdown is performed by the patch generator component 21. The values of the number of patches K, the random length P, and the random position parameters are determined based on optimization, considering the tradeoff between accuracy rate and the number of fast matches required in the flow process of the server 130 and SGS 140. Thereafter, all K patches are injected in parallel into all computational Cores 3 to generate K response vectors 22, which are fed into a signature generator system 23 to produce a database of Robust Signatures and Signatures 4.
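As a non-limiting illustration of the patch breakdown, the sketch below draws K patches of random length and random position from a one-dimensional segment; the parameter values are placeholders, since K, P, and the positions are left to optimization.

```python
import numpy as np


def make_patches(segment: np.ndarray, k: int, min_len: int, max_len: int,
                 seed: int | None = None) -> list[np.ndarray]:
    """Draw k patches of random length and random position from a 1-D segment."""
    rng = np.random.default_rng(seed)
    patches = []
    for _ in range(k):
        length = int(rng.integers(min_len, max_len + 1))
        start = int(rng.integers(0, len(segment) - length + 1))
        patches.append(segment[start:start + length])
    return patches


speech = np.random.default_rng(1).standard_normal(16_000)  # 1 s at 16 kHz
patches = make_patches(speech, k=64, min_len=200, max_len=2_000, seed=2)
# Each patch would next be injected, in parallel, into the computational cores.
```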

In order to generate Robust Signatures, i.e., Signatures that are robust to additive noise, a frame ‘i’ is injected into all of the L Computational Cores 3 (where L is an integer equal to or greater than 1). The Cores 3 then generate two binary response vectors: $\vec{S}$, a Signature vector, and $\vec{RS}$, a Robust Signature vector.

For generation of signatures robust to additive noise, such as White-Gaussian-Noise, scratch, etc., but not robust to distortions, such as crop, shift and rotation, etc., a core $C_i = \{n_i\}$ $(1 \le i \le L)$ may consist of a single leaky integrate-to-threshold unit (LTU) node or of multiple nodes. The node $n_i$ equations are:

$$V_i = \sum_j w_{ij} k_j$$

$$n_i = \theta(V_i - Th_x)$$

where $\theta$ is a Heaviside step function; $w_{ij}$ is a coupling node unit (CNU) between node $i$ and image component $j$; $k_j$ is an image component $j$ (for example, the grayscale value of a certain pixel $j$); $Th_x$ is a constant threshold value, where $x$ is ‘S’ for Signature and ‘RS’ for Robust Signature; and $V_i$ is a coupling node value.
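As a non-limiting numeric illustration of the node equations above, the sketch below computes V_i as a weighted sum of image components and thresholds it twice, once per threshold; the weights and threshold values are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)
L, D = 128, 1024                  # L cores, D image components (pixels)
W = rng.standard_normal((L, D))   # w_ij: coupling node units
k = rng.random(D)                 # k_j: grayscale image components
Th_S, Th_RS = 0.0, 12.0           # placeholder thresholds, Th_S < Th_RS

V = W @ k                          # V_i = sum_j w_ij * k_j
S = (V > Th_S).astype(np.uint8)    # Signature bits: theta(V_i - Th_S)
RS = (V > Th_RS).astype(np.uint8)  # Robust Signature bits: theta(V_i - Th_RS)
```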

The Threshold values $Th_x$ are set differently for Signature generation and for Robust Signature generation. For example, for a certain distribution of $V_i$ values (for the set of nodes), the thresholds for Signature ($Th_S$) and Robust Signature ($Th_{RS}$) are set apart, after optimization, according to at least one or more of the following criteria:

1: For $V_i > Th_{RS}$: $1 - p(V > Th_S) = 1 - (1 - \varepsilon)^l \ll 1$, i.e., given that $l$ nodes (cores) constitute a Robust Signature of a certain image $I$, the probability that not all of these $l$ nodes will belong to the Signature of the same, but noisy, image $\tilde{I}$ is sufficiently low (according to a system's specified accuracy).

2: $p(V_i > Th_{RS}) \approx l/L$, i.e., approximately $l$ out of the total $L$ nodes can be found to generate a Robust Signature according to the above definition.

3: Both a Robust Signature and a Signature are generated for a certain frame $i$.
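As a non-limiting illustration of criterion 2, the sketch below sets Th_RS from an empirical sample of V_i values so that roughly l of L nodes exceed it; the sample distribution and the values of l and L are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)
V_samples = 18.0 * rng.standard_normal(100_000)  # empirical V_i values
L_total, l_robust = 128, 16                      # want ~l of L nodes firing

# Choose Th_RS as the (1 - l/L) quantile so that p(V > Th_RS) ~ l/L.
Th_RS = float(np.quantile(V_samples, 1.0 - l_robust / L_total))
print(f"Th_RS = {Th_RS:.2f}, p(V > Th_RS) = {np.mean(V_samples > Th_RS):.3f}")
# p(V > Th_RS) is approximately 16/128 = 0.125
```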

It should be understood that the generation of a signature is unidirectional and typically yields lossy compression: the characteristics of the compressed data are maintained, but the uncompressed data cannot be reconstructed. Therefore, a signature can be used for the purpose of comparison to another signature without the need to compare the original data. The detailed description of the signature generation can be found in U.S. Pat. Nos. 8,326,775 and 8,312,031, assigned to the common assignee, which are hereby incorporated by reference for all the useful information they contain.

A Computational Core generation is a process of definition, selection, and tuning of the parameters of the cores for realizing certain goals in a specific system and application. The process is based on several design considerations, such as:

(a) The Cores should be designed so as to obtain maximal independence, i.e., the projection from a signal space should generate a maximal pair-wise distance between any two cores' projections into a high-dimensional space.

(b) The Cores should be optimally designed for the type of signals being used, i.e., the Cores should be maximally sensitive to the spatio-temporal structure of the injected signal, for example, and in particular, sensitive to local correlations in time and space. Thus, in some cases a core represents a dynamic system, such as in state space, phase space, edge of chaos, etc., which is uniquely used herein to exploit their maximal computational power.

(c) The Cores should be optimally designed with regard to invariance to a set of signal distortions, of interest in relevant applications.

Detailed description of the Computational Core generation, the computational architecture, and the process for configuring such cores is discussed in more detail in the co-pending U.S. patent application Ser. No. 12/084,150 referenced above.

FIG. 5 depicts an exemplary and non-limiting flowchart 500 of a method for analyzing multimedia content items captured by a mobile device and returning sequentially relevant content items to a user according to an embodiment. In an embodiment, the method is performed by the server 130 using the SGS 140.

In S510, the method starts when a multimedia content item (MMCI) captured by the mobile device and a request to match relevant content to the MMCI are received. In S520, a signature for the MMCI is generated by the SGS 140 as described hereinabove. In S530, the MMCI, together with the respective signature, is stored in a data warehouse (e.g., the data warehouse 160) for further use. The data warehouse may be native or cloud-based over the web.

In S540, sequentially relevant multimedia content is matched to the received multimedia content item. It should be understood that one or more content items may be provided to the user respective of the request. The relevant content may be identified locally, on the mobile device, or over the network 110 through one or more of the web sources 150. Content may be determined to be relevant based on one or more parameters related to the user. Such parameters may relate to, e.g., the time of day at which the request was received, the location of the mobile device, the weather in the location of the mobile device, and so on. The user's parameters may be collected by one or more of the sensors integrated in the mobile device.
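As a non-limiting illustration of matching based on such user parameters, the sketch below re-ranks candidate items by time of day, location, and weather; the scoring rule and item fields are hypothetical.

```python
from datetime import datetime


def context_score(item: dict, now: datetime, location: str, weather: str) -> float:
    """Score a candidate content item against the user's current context."""
    score = 0.0
    if item.get("preferred_hour") == now.hour:
        score += 1.0
    if item.get("region") in (None, location):
        score += 1.0
    if item.get("weather") in (None, weather):
        score += 0.5
    return score


def rank_by_context(items: list[dict], now: datetime,
                    location: str, weather: str) -> list[dict]:
    return sorted(items, reverse=True,
                  key=lambda it: context_score(it, now, location, weather))
```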

In S550, the one or more relevant content items are sent to the user device. According to one embodiment (not shown), the relevant content is then displayed on the mobile device.

In S560, it is checked whether there are additional requests, and if so, execution continues with S510; otherwise, execution terminates. It should be understood by one of ordinary skill in the art that the operations of capturing and analyzing the multimedia content item may be performed off-line, without communicating with the server 130 through the network 110, in cases where the SGS 140 is integrated within the mobile device.

As a non-limiting example, a user captures an image of a video clip displayed on the screen of the mobile device. The image is then analyzed and a signature is generated respective thereto. The image, together with the respective signature, is stored in a data warehouse. Upon receiving a request to match relevant content to the captured image, the captured image and the respective signature are retrieved from the data warehouse. The server 130 then provides the subsequent stream of the video clip, and the full video clip is displayed on the mobile device.

The various embodiments disclosed herein can be implemented as hardware, firmware, software, or any combination thereof. Moreover, the software is preferably implemented as an application program tangibly embodied on a program storage unit or computer readable medium consisting of parts, or of certain devices and/or a combination of devices. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. Preferably, the machine is implemented on a computer platform having hardware such as one or more central processing units (“CPUs”), a memory, and input/output interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU, whether or not such a computer or processor is explicitly shown. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit. Furthermore, a non-transitory computer readable medium is any computer readable medium except for a transitory propagating signal.

All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the principles of the disclosure and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosure, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.

Claims

1. A method for matching sequentially relevant content to multimedia content items (MMCIs) stored in a mobile device, comprising:

extracting at least one MMCI from the mobile device;
generating a signature for the extracted at least one MMCI;
matching the generated signature to a plurality of signatures of content items; and
determining, based on the matching, at least one sequentially relevant content item.

2. The method of claim 1, further comprising:

sending the determined at least one sequentially relevant content item to the mobile device.

3. The method of claim 1, wherein the at least one sequentially relevant content item is further determined based on at least one parameter related to a user of the mobile device.

4. The method of claim 3, wherein each parameter related to the user is any of: a time of day at which a request for sequentially relevant content items is received, a location of the user, and weather at the location of the mobile device.

5. The method of claim 3, wherein each parameter related to the user is collected by at least one sensor of the mobile device.

6. The method of claim 1, wherein each extracted MMCI is any of: an image, graphics, a video stream, a video clip, an audio stream, an audio clip, a video frame, a photograph, images of signals, combinations thereof, and portions thereof.

7. The method of claim 1, wherein each determined sequentially relevant content item is successive to one of the extracted at least one MMCI.

8. The method of claim 1, wherein each signature is generated by a signature generator system, wherein the signature generator system includes a plurality of computational cores configured to receive a plurality of unstructured data elements, each computational core of the plurality of computational cores having properties that are at least partly statistically independent of the other computational cores, wherein the properties are set independently of each other core.

9. The method of claim 1, wherein a signature of each determined sequentially relevant content item matches the generated signature above a predefined threshold.

10. A non-transitory computer readable medium having stored thereon instructions for causing a processing circuitry to perform a process comprising:

extracting at least one multimedia content item (MMCI) from a mobile device;
generating a signature for the extracted at least one MMCI;
matching the generated signature to a plurality of signatures of content items; and
determining, based on the matching, at least one sequentially relevant content item.

11. A system for matching sequentially relevant content to multimedia content items (MMCIs) stored in a mobile device, comprising:

a processing circuitry; and
a memory coupled to the processing circuitry, the memory containing instructions that, when executed by the processing circuitry, configure the system to:
extract at least one MMCI from the mobile device;
generate a signature for the extracted at least one MMCI;
match the generated signature to a plurality of signatures of content items; and
determine, based on the matching, at least one sequentially relevant content item.

12. The system of claim 11, wherein the system is further configured to:

send the determined at least one sequentially relevant content item to the mobile device.

13. The system of claim 11, wherein the at least one sequentially relevant content item is further determined based on at least one parameter related to a user of the mobile device.

14. The system of claim 13, wherein each parameter related to the user is any of: a time of day at which a request for sequentially relevant content items is received, a location of the user, and weather at the location of the mobile device.

15. The system of claim 13, wherein each parameter related to the user is collected by at least one sensor of the mobile device.

16. The system of claim 11, wherein each extracted MMCI is any of: an image, graphics, a video stream, a video clip, an audio stream, an audio clip, a video frame, a photograph, images of signals, combinations thereof, and portions thereof.

17. The system of claim 11, wherein each determined sequentially relevant content item is successive to one of the extracted at least one MMCI.

18. The system of claim 11, wherein each signature is generated by a signature generator system, wherein the signature generator system includes a plurality of computational cores configured to receive a plurality of unstructured data elements, each computational core of the plurality of computational cores having properties that are at least partly statistically independent of the other computational cores, wherein the properties are set independently of each other core.

19. The system of claim 11, wherein a signature of each determined sequentially relevant content item matches the generated signature above a predefined threshold.

Referenced Cited
U.S. Patent Documents
4733353 March 22, 1988 Jaswa
4932645 June 12, 1990 Schorey et al.
4972363 November 20, 1990 Nguyen et al.
5307451 April 26, 1994 Clark
5568181 October 22, 1996 Greenwood et al.
5806061 September 8, 1998 Chaudhuri et al.
5852435 December 22, 1998 Vigneaux et al.
5870754 February 9, 1999 Dimitrova et al.
5873080 February 16, 1999 Coden et al.
5887193 March 23, 1999 Takahashi et al.
5978754 November 2, 1999 Kumano
6052481 April 18, 2000 Grajski et al.
6076088 June 13, 2000 Paik et al.
6122628 September 19, 2000 Castelli et al.
6128651 October 3, 2000 Cezar
6137911 October 24, 2000 Zhilyaev
6144767 November 7, 2000 Bottou et al.
6147636 November 14, 2000 Gershenson
6243375 June 5, 2001 Speicher
6243713 June 5, 2001 Nelson et al.
6329986 December 11, 2001 Cheng
6381656 April 30, 2002 Shankman
6411229 June 25, 2002 Kobayashi
6422617 July 23, 2002 Fukumoto et al.
6523046 February 18, 2003 Liu et al.
6524861 February 25, 2003 Anderson
6550018 April 15, 2003 Abonamah et al.
6594699 July 15, 2003 Sahai et al.
6611628 August 26, 2003 Sekiguchi et al.
6618711 September 9, 2003 Ananth
6643620 November 4, 2003 Contolini et al.
6643643 November 4, 2003 Lee et al.
6665657 December 16, 2003 Dibachi
6704725 March 9, 2004 Lee
6732149 May 4, 2004 Kephart
6751363 June 15, 2004 Natsev et al.
6751613 June 15, 2004 Lee et al.
6754435 June 22, 2004 Kim
6763519 July 13, 2004 McColl et al.
6774917 August 10, 2004 Foote et al.
6795818 September 21, 2004 Lee
6804356 October 12, 2004 Krishnamachari
6819797 November 16, 2004 Smith et al.
6845374 January 18, 2005 Oliver et al.
6901207 May 31, 2005 Watkins
6938025 August 30, 2005 Lulich et al.
7006689 February 28, 2006 Kasutani
7013051 March 14, 2006 Sekiguchi et al.
7020654 March 28, 2006 Najmi
7043473 May 9, 2006 Rassool et al.
7047033 May 16, 2006 Wyler
7199798 April 3, 2007 Echigo et al.
7260564 August 21, 2007 Lynn et al.
7277928 October 2, 2007 Lennon
7302117 November 27, 2007 Sekiguchi et al.
7313805 December 25, 2007 Rosin et al.
7340458 March 4, 2008 Vaithilingam et al.
7353224 April 1, 2008 Chen et al.
7376672 May 20, 2008 Weare
7376722 May 20, 2008 Sim et al.
7433895 October 7, 2008 Li et al.
7464086 December 9, 2008 Black et al.
7526607 April 28, 2009 Singh et al.
7536417 May 19, 2009 Walsh et al.
7574668 August 11, 2009 Nunez et al.
7577656 August 18, 2009 Kawai et al.
7657100 February 2, 2010 Gokturk et al.
7660468 February 9, 2010 Gokturk et al.
7660737 February 9, 2010 Lim et al.
7694318 April 6, 2010 Eldering et al.
7697791 April 13, 2010 Chan et al.
7769221 August 3, 2010 Shakes et al.
7788132 August 31, 2010 Desikan et al.
7836054 November 16, 2010 Kawai et al.
7860895 December 28, 2010 Scofield et al.
7904503 March 8, 2011 Van De Sluis
7920894 April 5, 2011 Wyler
7921107 April 5, 2011 Chang et al.
7974994 July 5, 2011 Li et al.
7987194 July 26, 2011 Walker et al.
7987217 July 26, 2011 Long et al.
7991715 August 2, 2011 Schiff et al.
8000655 August 16, 2011 Wang et al.
8036893 October 11, 2011 Reich
8098934 January 17, 2012 Vincent et al.
8112376 February 7, 2012 Raichelgauz
8266185 September 11, 2012 Raichelgauz et al.
8312031 November 13, 2012 Raichelgauz et al.
8315442 November 20, 2012 Gokturk et al.
8316005 November 20, 2012 Moore
8326775 December 4, 2012 Raichelgauz et al.
8345982 January 1, 2013 Gokturk et al.
8548828 October 1, 2013 Longmire
8655801 February 18, 2014 Raichelgauz et al.
8677377 March 18, 2014 Cheyer et al.
8682667 March 25, 2014 Haughay
8688446 April 1, 2014 Yanagihara
8706503 April 22, 2014 Cheyer et al.
8775442 July 8, 2014 Moore et al.
8799195 August 5, 2014 Raichelgauz
8799196 August 5, 2014 Raichelgauz
8818916 August 26, 2014 Raichelgauz
8868619 October 21, 2014 Raichelgauz
8880539 November 4, 2014 Raichelgauz
8880566 November 4, 2014 Raichelgauz
8886648 November 11, 2014 Procopio et al.
8898568 November 25, 2014 Bull et al.
8922414 December 30, 2014 Raichelgauz et al.
8959037 February 17, 2015 Raichelgauz
8990125 March 24, 2015 Raichelgauz
9009086 April 14, 2015 Raichelgauz
9031999 May 12, 2015 Raichelgauz
9087049 July 21, 2015 Raichelgauz et al.
9104747 August 11, 2015 Raichelgauz et al.
9191626 November 17, 2015 Raichelgauz et al.
9197244 November 24, 2015 Raichelgauz et al.
9218606 December 22, 2015 Raichelgauz et al.
9235557 January 12, 2016 Raichelgauz
9256668 February 9, 2016 Raichelgauz et al.
9330189 May 3, 2016 Raichelgauz
9438270 September 6, 2016 Raichelgauz et al.
20010019633 September 6, 2001 Tenze et al.
20010056427 December 27, 2001 Yoon et al.
20020019881 February 14, 2002 Bokhari et al.
20020038299 March 28, 2002 Zernik et al.
20020059580 May 16, 2002 Kalker et al.
20020087530 July 4, 2002 Smith et al.
20020099870 July 25, 2002 Miller et al.
20020107827 August 8, 2002 Benitez-Jimenez et al.
20020123928 September 5, 2002 Eldering et al.
20020126872 September 12, 2002 Brunk et al.
20020129296 September 12, 2002 Kwiat et al.
20020143976 October 3, 2002 Barker et al.
20020152267 October 17, 2002 Lennon
20020157116 October 24, 2002 Jasinschi
20020159640 October 31, 2002 Vaithilingam et al.
20020161739 October 31, 2002 Oh
20020163532 November 7, 2002 Thomas et al.
20020174095 November 21, 2002 Lulich et al.
20020178410 November 28, 2002 Haitsma et al.
20030028660 February 6, 2003 Igawa et al.
20030041047 February 27, 2003 Chang et al.
20030050815 March 13, 2003 Seigel et al.
20030078766 April 24, 2003 Appelt et al.
20030086627 May 8, 2003 Berriss et al.
20030126147 July 3, 2003 Essafi et al.
20030182567 September 25, 2003 Barton et al.
20030191764 October 9, 2003 Richards
20030200217 October 23, 2003 Ackerman
20030217335 November 20, 2003 Chung et al.
20040003394 January 1, 2004 Ramaswamy
20040025180 February 5, 2004 Begeja et al.
20040068510 April 8, 2004 Hayes et al.
20040107181 June 3, 2004 Rodden
20040111465 June 10, 2004 Chuang et al.
20040117367 June 17, 2004 Smith et al.
20040128142 July 1, 2004 Whitham
20040128511 July 1, 2004 Sun et al.
20040133927 July 8, 2004 Sternberg et al.
20040153426 August 5, 2004 Nugent
20040215663 October 28, 2004 Liu et al.
20040249779 December 9, 2004 Nauck et al.
20040260688 December 23, 2004 Gross
20040267774 December 30, 2004 Lin et al.
20050131884 June 16, 2005 Gross et al.
20050144455 June 30, 2005 Haitsma
20050177372 August 11, 2005 Wang et al.
20050238238 October 27, 2005 Xu et al.
20050245241 November 3, 2005 Durand et al.
20050281439 December 22, 2005 Lange
20060004745 January 5, 2006 Kuhn et al.
20060013451 January 19, 2006 Haitsma
20060020860 January 26, 2006 Tardif et al.
20060020958 January 26, 2006 Allamanche et al.
20060026203 February 2, 2006 Tan et al.
20060031216 February 9, 2006 Semple et al.
20060041596 February 23, 2006 Stirbu et al.
20060048191 March 2, 2006 Xiong
20060064037 March 23, 2006 Shalon et al.
20060112035 May 25, 2006 Cecchi et al.
20060129822 June 15, 2006 Snijder et al.
20060143674 June 29, 2006 Jones et al.
20060153296 July 13, 2006 Deng
20060159442 July 20, 2006 Kim et al.
20060173688 August 3, 2006 Whitham
20060184638 August 17, 2006 Chua et al.
20060204035 September 14, 2006 Guo et al.
20060217818 September 28, 2006 Fujiwara
20060224529 October 5, 2006 Kermani
20060236343 October 19, 2006 Chang
20060242139 October 26, 2006 Butterfield et al.
20060242554 October 26, 2006 Gerace et al.
20060247983 November 2, 2006 Dalli
20060248558 November 2, 2006 Barton et al.
20060253423 November 9, 2006 McLane et al.
20070019864 January 25, 2007 Koyama et al.
20070042757 February 22, 2007 Jung et al.
20070061302 March 15, 2007 Ramer et al.
20070067304 March 22, 2007 Ives
20070067682 March 22, 2007 Fang
20070071330 March 29, 2007 Oostveen et al.
20070074147 March 29, 2007 Wold
20070091106 April 26, 2007 Moroney
20070130159 June 7, 2007 Gulli et al.
20070168413 July 19, 2007 Barletta et al.
20070195987 August 23, 2007 Rhoads
20070220573 September 20, 2007 Chiussi et al.
20070244902 October 18, 2007 Seide et al.
20070253594 November 1, 2007 Lu et al.
20070255785 November 1, 2007 Hayashi et al.
20070294295 December 20, 2007 Finkelstein et al.
20080040277 February 14, 2008 DeWitt
20080046406 February 21, 2008 Seide et al.
20080049629 February 28, 2008 Morrill
20080072256 March 20, 2008 Boicey et al.
20080091527 April 17, 2008 Silverbrook et al.
20080152231 June 26, 2008 Gokturk et al.
20080163288 July 3, 2008 Ghosal et al.
20080165861 July 10, 2008 Wen et al.
20080201299 August 21, 2008 Lehikoinen et al.
20080201314 August 21, 2008 Smith et al.
20080204706 August 28, 2008 Magne et al.
20080253737 October 16, 2008 Kimura et al.
20080270373 October 30, 2008 Oostveen et al.
20080313140 December 18, 2008 Pereira et al.
20090013414 January 8, 2009 Washington et al.
20090022472 January 22, 2009 Bronstein et al.
20090089587 April 2, 2009 Brunk et al.
20090119157 May 7, 2009 Dulepet
20090125529 May 14, 2009 Vydiswaran et al.
20090125544 May 14, 2009 Brindley
20090148045 June 11, 2009 Lee et al.
20090157575 June 18, 2009 Schobben et al.
20090172030 July 2, 2009 Schiff et al.
20090175538 July 9, 2009 Bronstein et al.
20090204511 August 13, 2009 Tsang
20090216639 August 27, 2009 Kapczynski et al.
20090245573 October 1, 2009 Saptharishi et al.
20090245603 October 1, 2009 Koruga et al.
20090253583 October 8, 2009 Yoganathan
20090277322 November 12, 2009 Cai et al.
20100023400 January 28, 2010 DeWitt
20100042646 February 18, 2010 Raichelgauz et al.
20100082684 April 1, 2010 Churchill et al.
20100088321 April 8, 2010 Solomon et al.
20100104184 April 29, 2010 Bronstein et al.
20100106857 April 29, 2010 Wyler
20100125569 May 20, 2010 Nair et al.
20100162405 June 24, 2010 Cook et al.
20100173269 July 8, 2010 Puri et al.
20100191567 July 29, 2010 Lee et al.
20100268524 October 21, 2010 Nath et al.
20100306193 December 2, 2010 Pereira et al.
20100318493 December 16, 2010 Wessling
20100322522 December 23, 2010 Wang et al.
20110035289 February 10, 2011 King et al.
20110052063 March 3, 2011 McAuley et al.
20110055585 March 3, 2011 Lee
20110106782 May 5, 2011 Ke et al.
20110145068 June 16, 2011 King et al.
20110202848 August 18, 2011 Ismalon
20110208822 August 25, 2011 Rathod
20110246566 October 6, 2011 Kashef et al.
20110251896 October 13, 2011 Impollonia et al.
20110313856 December 22, 2011 Cohen et al.
20120082362 April 5, 2012 Diem et al.
20120131454 May 24, 2012 Shah
20120150890 June 14, 2012 Jeong et al.
20120167133 June 28, 2012 Carroll et al.
20120197857 August 2, 2012 Huang et al.
20120330869 December 27, 2012 Durham
20130031489 January 31, 2013 Gubin et al.
20130067035 March 14, 2013 Amanat et al.
20130086499 April 4, 2013 Dyor et al.
20130089248 April 11, 2013 Remiszewski et al.
20130104251 April 25, 2013 Moore et al.
20130159298 June 20, 2013 Mason et al.
20130173635 July 4, 2013 Sanjeev
20130325550 December 5, 2013 Varghese et al.
20130332951 December 12, 2013 Gharaat et al.
20140019264 January 16, 2014 Wachman et al.
20140147829 May 29, 2014 Jerauld
20140188786 July 3, 2014 Raichelgauz et al.
20140310825 October 16, 2014 Raichelgauz et al.
20150289022 October 8, 2015 Gross
Foreign Patent Documents
0231764 April 2002 WO
2003005242 January 2003 WO
2004019527 March 2004 WO
2007049282 May 2007 WO
Other references
  • The potential of social-aware multimedia prefetching on mobile devices Stefan Wilk; Julius Rückert; Timo Thräm; Christian Koch; Wolfgang Effelsberg; David Hausheer 2015 International Conference and Workshops on Networked Systems (NetSys) Year: 2015 pp. 1-5, DOI: 10.1109/NetSys.2015.7089081 IEEE Conference Publications.
  • Diversity decay in opportunistic content sharing systems Liam McNamara; Salvatore Scellato; Cecilia Mascolo 2011 IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks Year: 2011 pp. 1-3, DOI: 10.1109/WoWMoM.2011.5986211 IEEE Conference Publications.
  • Semantic web service adaptation model for a pervasive learning scenario B-Y-S. Lau; C. Pham-Nguyen; C-S. Lee; S. Garlatti 2008 IEEE Conference on Innovative Technologies in Intelligent Systems and Industrial Applications Year: 2008 pp. 98-103, DOI: 10.1109/CITISIA.2008.4607342 IEEE Conference Publications.
  • SCORM-MPEG: An ontology of interoperable metadata for multimedia and e-Learning Marcelo Correia Santos; Yuzo Iano 2015 23rd International Conference on Software, Telecommunications and Computer Networks (SoftCOM) Year: 2015 pp. 224-228, DOI: 10.1109/SOFTCOM.2015.7314122 IEEE Conference Publications.
  • Clement, et al. “Speaker Diarization of Heterogeneous Web Video Files: A Preliminary Study”, Acoustics, Speech and Signal Processing (ICASSP), 2011, IEEE International Conference on Year: 2011, pp. 4432-4435, DOI: 10.1109/ICASSP.2011.5947337 IEEE Conference Publications, France.
  • Gong, et al., “A Knowledge-based Mediator for Dynamic Integration of Heterogeneous Multimedia Information Sources”, Video and Speech Processing, 2004, Proceedings of 2004 International Symposium on Year: 2004, pp. 467-470, DOI: 10.1109/ISIMP.2004.1434102 IEEE Conference Publications, Hong Kong.
  • Lin, et al., “Summarization of Large Scale Social Network Activity”, Acoustics, Speech and Signal Processing, 2009, ICASSP 2009, IEEE International Conference on Year 2009, pp. 3481-3484, DOI: 10.1109/ICASSP.2009.4960375, IEEE Conference Publications, Arizona.
  • Nouza, et al., “Large-scale Processing, Indexing and Search System for Czech Audio-Visual Heritage Archives”, Multimedia Signal Processing (MMSP), 2012, pp. 337-342, IEEE 14th Intl. Workshop, DOI: 10.1109/MMSP.2012.6343465, Czech Republic.
  • Boari et al, “Adaptive Routing for Dynamic Applications in Massively Parallel Architectures”, 1995 IEEE, Spring 1995.
  • Burgsteiner et al.: “Movement Prediction From Real-World Images Using a Liquid State Machine”, Innovations in Applied Artificial Intelligence Lecture Notes in Computer Science, Lecture Notes in Artificial Intelligence, LNCS, Springer-Verlag, BE, vol. 3533, Jun. 2005, pp. 121-130.
  • Cernansky et al., “Feed-forward Echo State Networks”; Proceedings of International Joint Conference on Neural Networks, Montreal, Canada, Jul. 31-Aug. 4, 2005; Entire Document.
  • Cococcioni, et al, “Automatic Diagnosis of Defects of Rolling Element Bearings Based on Computational Intelligence Techniques”, University of Pisa, Pisa, Italy, 2009.
  • Emami, et al, “Role of Spatiotemporal Oriented Energy Features for Robust Visual Tracking in Video Surveillance”, University of Queensland, St. Lucia, Australia, 2012.
  • Fathy et al., “A Parallel Design and Implementation for Backpropagation Neural Network Using MIMD Architecture”, 8th Mediterranean Electrotechnical Conference, 1996. MELECON '96, Date of Conference: May 13-16, 1996, vol. 3, pp. 1472-1475.
  • Foote, Jonathan, et al. “Content-Based Retrieval of Music and Audio”, 1997 Institute of Systems Science, National University of Singapore, Singapore (Abstract).
  • Freisleben et al., “Recognition of Fractal Images Using a Neural Network”, Lecture Notes in Computer Science, 1993, vol. 6861, 1993, pp. 631-637.
  • Garcia, “Solving the Weighted Region Least Cost Path Problem Using Transputers”, Naval Postgraduate School, Monterey, California, Dec. 1989.
  • Guo et al, “AdOn: An Intelligent Overlay Video Advertising System”, SIGIR, Boston, Massachusetts, Jul. 19-23, 2009.
  • Howlett et al., “A Multi-Computer Neural Network Architecture in a Virtual Sensor System Application”, International Journal of Knowledge-based Intelligent Engineering Systems, 4 (2). pp. 86-93, ISSN 1327-2314; first submitted Nov. 30, 1999; revised version submitted Mar. 10, 2000.
  • International Search Authority: “Written Opinion of the International Searching Authority” (PCT Rule 43bis.1) including International Search Report for International Patent Application No. PCT/US2008/073852; Date of Mailing: Jan. 28, 2009.
  • International Search Authority: International Preliminary Report on Patentability (Chapter I of the Patent Cooperation Treaty) including “Written Opinion of the International Searching Authority” (PCT Rule 43bis. 1) for the corresponding International Patent Application No. PCT/IL2006/001235; Date of Issuance: Jul. 28, 2009.
  • International Search Report for the corresponding International Patent Application PCT/IL2006/001235; Date of Mailing: Nov. 2, 2008.
  • IPO Examination Report under Section 18(3) for corresponding UK application No. GB1001219.3, dated May 30, 2012.
  • IPO Examination Report under Section 18(3) for corresponding UK application No. GB1001219.3, dated Sep. 12, 2011; Entire Document.
  • Iwamoto, K.; Kasutani, E.; Yamada, A.: “Image Signature Robust to Caption Superimposition for Video Sequence Identification”; 2006 IEEE International Conference on Image Processing; pp. 3185-3188, Oct. 8-11, 2006; doi: 10.1109/ICIP.2006.313046.
  • Jaeger, H.: “The “echo state” approach to analysing and training recurrent neural networks”, GMD Report, No. 148, 2001, pp. 1-43, XP002466251 German National Research Center for Information Technology.
  • Lin, C.; Chang, S.: “Generating Robust Digital Signature for Image/Video Authentication”, Multimedia and Security Workshop at ACM Multimedia '98; Bristol, U.K., Sep. 1998; pp. 49-54.
  • Liu, et al., “Instant Mobile Video Search With Layered Audio-Video Indexing and Progressive Transmission”, Multimedia, IEEE Transactions on Year: 2014, vol. 16, Issue: 8, pp. 2242-2255, DOI: 10.1109/TMM.2014.2359332 IEEE Journals & Magazines.
  • Lyon, Richard F.; “Computational Models of Neural Auditory Processing”; IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP '84, Date of Conference: Mar. 1984, vol. 9, pp. 41-44.
  • Maass, W. et al.: “Computational Models for Generic Cortical Microcircuits”, Institute for Theoretical Computer Science, Technische Universitaet Graz, Graz, Austria, published Jun. 10, 2003.
  • Mandhaoui, et al, “Emotional Speech Characterization Based on Multi-Features Fusion for Face-to-Face Interaction”, Universite Pierre et Marie Curie, Paris, France, 2009.
  • Marti, et al, “Real Time Speaker Localization and Detection System for Camera Steering in Multiparticipant Videoconferencing Environments”, Universidad Politecnica de Valencia, Spain, 2011.
  • Mei, et al., “Contextual In-Image Advertising”, Microsoft Research Asia, pp. 439-448, 2008.
  • Mei, et al., “VideoSense—Towards Effective Online Video Advertising”, Microsoft Research Asia, pp. 1075-1084, 2007.
  • Mladenovic, et al., “Electronic Tour Guide for Android Mobile Platform with Multimedia Travel Book”, Telecommunications Forum (TELFOR), 2012 20th Year: 2012, pp. 1460-1463, DOI: 10.1109/TELFOR.2012.6419494 IEEE Conference Publications.
  • Morad, T.Y. et al.: “Performance, Power Efficiency and Scalability of Asymmetric Cluster Chip Multiprocessors”, Computer Architecture Letters, vol. 4, Jul. 4, 2005 (Jul. 4, 2005), pp. 1-4, XP002466254.
  • Nagy et al, “A Transputer-Based, Flexible, Real-Time Control System for Robotic Manipulators”, UKACC International Conference on Control '96, Sep. 2-5, 1996, Conference Publication No. 427, IEE 1996.
  • Natschlager, T. et al.: “The “liquid computer”: A novel strategy for real-time computing on time series”, Special Issue on Foundations of Information Processing of Telematik, vol. 8, No. 1, 2002, pp. 39-43, XP002466253.
  • Ortiz-Boyer et al., “CIXL2: A Crossover Operator for Evolutionary Algorithms Based on Population Features”, Journal of Artificial Intelligence Research 24 (2005), pp. 1-48 Submitted Nov. 2004; published Jul. 2005.
  • Park, et al., “Compact Video Signatures for Near-Duplicate Detection on Mobile Devices”, Consumer Electronics (ISCE 2014), The 18th IEEE International Symposium on Year: 2014, pp. 1-2, DOI: 10.1109/ISCE.2014.6884293 IEEE Conference Publications.
  • Ribert et al. “An Incremental Hierarchical Clustering”, Visicon Interface 1999, pp. 586-591.
  • Scheper et al, “Nonlinear dynamics in neural computation”, ESANN'2006 proceedings—European Symposium on Artificial Neural Networks, Bruges (Belgium), Apr. 26-28, 2006, d-side publi, ISBN 2-930307-06-4.
  • Semizarov et al. “Specificity of Short Interfering RNA Determined through Gene Expression Signatures”, PNAS, 2003, pp. 6347-6352.
  • Theodoropoulos et al, “Simulating Asynchronous Architectures on Transputer Networks”, Proceedings of the Fourth Euromicro Workshop on Parallel and Distributed Processing, 1996. PDP '96.
  • Verstraeten et al., “Isolated word recognition with the Liquid State Machine: a case study”; Department of Electronics and Information Systems, Ghent University, Sint-Pietersnieuwstraat 41, 9000 Gent, Belgium, Available online Jul. 14, 2005; Entire Document.
  • Verstraeten et al.: “Isolated word recognition with the Liquid State Machine: a case study”, Information Processing Letters, Amsterdam, NL, vol. 95, No. 6, Sep. 30, 2005 (Sep. 30, 2005), pp. 521-528, XP005028093 ISSN: 0020-0190.
  • Wang et al. “A Signature for Content-based Image Retrieval Using a Geometrical Transform”, ACM 1998, pp. 229-234.
  • Ware et al., “Locating and Identifying Components in a Robot's Workspace using a Hybrid Computer Architecture”; Proceedings of the 1995 IEEE International Symposium on Intelligent Control, Aug. 27-29, 1995, pp. 139-144.
  • Xian-Sheng Hua et al.: “Robust Video Signature Based on Ordinal Measure” In: 2004 International Conference on Image Processing, ICIP '04; Microsoft Research Asia, Beijing, China; published Oct. 24-27, 2004, pp. 685-688.
  • Zang, et al., “A New Multimedia Message Customizing Framework for Mobile Devices”, Multimedia and Expo, 2007 IEEE International Conference on Year: 2007, pp. 1043-1046, DOI: 10.1109/ICME.2007.4284832 IEEE Conference Publications.
  • Zeevi, Y. et al.: “Natural Signal Classification by Neural Cliques and Phase-Locked Attractors”, IEEE World Congress on Computational Intelligence, IJCNN2006, Vancouver, Canada, Jul. 2006 (Jul. 2006), XP002466252.
  • Zhou et al., “Ensembling neural networks: Many could be better than all”; National Laboratory for Novel Software Technology, Nanjing University, Hankou Road 22, Nanjing 210093, PR China; Received Nov. 16, 2001, Available online Mar. 12, 2002; Entire Document.
  • Zhou et al., “Medical Diagnosis With C4.5 Rule Preceded by Artificial Neural Network Ensemble”; IEEE Transactions on Information Technology in Biomedicine, vol. 7, Issue: 1, pp. 37-42, Date of Publication: Mar. 2003.
  • Li, et al., “Matching Commercial Clips from TV Streams Using a Unique, Robust and Compact Signature,” Proceedings of the Digital Imaging Computing: Techniques and Applications, Feb. 2005, vol. 0-7695-2467, Australia.
  • Lin, et al., “Robust Digital Signature for Multimedia Authentication: A Summary”, IEEE Circuits and Systems Magazine, 4th Quarter 2003, pp. 23-26.
  • May et al., “The Transputer”, Springer-Verlag, Berlin Heidelberg, 1989, teaches multiprocessing system.
  • Nam, et al., “Audio Visual Content-Based Violent Scene Characterization”, Department of Electrical and Computer Engineering, Minneapolis, MN, 1998, pp. 353-357.
  • Vailaya, et al., “Content-Based Hierarchical Classification of Vacation Images,” I.E.E.E.: Multimedia Computing and Systems, vol. 1, 1999, East Lansing, MI, pp. 518-523.
  • Vallet, et al., “Personalized Content Retrieval in Context Using Ontological Knowledge,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 17, No. 3, Mar. 2007, pp. 336-346.
  • Whitby-Strevens, “The Transputer”, 1985 IEEE, Bristol, UK.
  • Yanai, “Generic Image Classification Using Visual Knowledge on the Web,” MM'03, Nov. 2-8, 2003, Tokyo, Japan, pp. 167-176.
  • Chuan-Yu Cho, et al., “Efficient Motion-Vector-Based Video Search Using Query by Clip”, 2004, IEEE, Taiwan, pp. 1-4.
  • Gomes et al., “Audio Watermarking and Fingerprinting: For Which Applications?” University of Rene Descartes, Paris, France, 2003.
  • Ihab Al Kabary, et al., “SportSense: Using Motion Queries to Find Scenes in Sports Videos”, Oct. 2013, ACM, Switzerland, pp. 1-3.
  • Jianping Fan et al., “Concept-Oriented Indexing of Video Databases: Towards Semantic Sensitive Retrieval and Browsing”, IEEE, vol. 13, No. 7, Jul. 2004, pp. 1-19.
  • Shih-Fu Chang, et al., “VideoQ: A Fully Automated Video Retrieval System Using Motion Sketches”, 1998, IEEE, New York, pp. 1-2.
  • Wei-Te Li et al., “Exploring Visual and Motion Saliency for Automatic Video Object Extraction”, IEEE, vol. 22, No. 7, Jul. 2013, pp. 1-11.
  • Zhu et al., Technology-Assisted Dietary Assessment. Computational Imaging VI, edited by Charles A. Bouman, Eric L. Miller, Ilya Pollak, Proc. of SPIE-IS&T Electronic Imaging, SPIE vol. 6814, 681411, Copyright 2008 SPIE-IS&T. pp. 1-10.
  • Brecheisen, et al., “Hierarchical Genre Classification for Large Music Collections”, ICME 2006, pp. 1385-1388.
  • Odinaev, et al., “Cliques in Neural Ensembles as Perception Carriers”, Technion—Israel Institute of Technology, 2006 International Joint Conference on Neural Networks, Canada, 2006, pp. 285-292.
Patent History
Patent number: 9646006
Type: Grant
Filed: Mar 29, 2016
Date of Patent: May 9, 2017
Patent Publication Number: 20160210284
Assignee: Cortica, Ltd. (Tel Aviv)
Inventors: Igal Raichelgauz (New York, NY), Karina Odinaev (New York, NY), Yehoshua Y Zeevi (Haifa)
Primary Examiner: Michael B Holmes
Application Number: 15/084,083
Classifications
Current U.S. Class: Knowledge Representation And Reasoning Technique (706/46)
International Classification: G06E 1/00 (20060101); G06E 3/00 (20060101); G06F 15/18 (20060101); G06G 7/00 (20060101); G06F 17/30 (20060101); G06N 3/063 (20060101);