WEARABLE APPARATUS AND METHOD FOR SELECTIVELY PROCESSING IMAGE DATA

- ORCAM TECHNOLOGIES LTD.

A wearable apparatus and method are provided for capturing image data. In one implementation, a wearable apparatus for selectively processing images is provided. The wearable apparatus includes an image sensor configured to capture a plurality of images from an environment of a user. The wearable apparatus also includes at least one processing device programmed to access at least one rule for classifying images. The at least one processing device is also programmed to classify, according to the at least one rule, at least a first subset of the plurality of images as key images and at least a second subset of the plurality of images as auxiliary images. The at least one processing device is further programmed to delete at least some of the auxiliary images.

Description
CROSS REFERENCES TO RELATED APPLICATIONS

This application claims the benefit of priority of U.S. Provisional Patent Application No. 62/027,936, filed on Jul. 23, 2014, and U.S. Provisional Patent Application No. 62/027,957, filed on Jul. 23, 2014, both of which are incorporated herein by reference in their entirety.

BACKGROUND

I. Technical Field

This disclosure generally relates to devices and methods for capturing and processing images from an environment of a user. More particularly, this disclosure relates to devices and methods for selectively processing images.

II. Background Information

Today, technological advancements make it possible for wearable devices to automatically capture images and store information that is associated with the captured images. Certain devices have been used to digitally record aspects and personal experiences of one's life in an exercise typically called “lifelogging.” Some individuals log their life so they can retrieve moments from past activities, for example, social events, trips, etc. Lifelogging may also have significant benefits in other fields (e.g., business, fitness and healthcare, and social research). Lifelogging devices, while useful for tracking daily activities, may be improved with capability to enhance one's interaction in his environment with feedback and other advanced functionality based on the analysis of captured image data.

Even though users can capture images with their smartphones and some smartphone applications can process the captured images, smartphones may not be the best platform for serving as lifelogging apparatuses in view of their size and design. Lifelogging apparatuses should be small and light, so they can be easily worn. Moreover, with improvements in image capture devices, including wearable apparatuses, additional functionality may be provided to assist users in navigating in and around an environment. Therefore, there is a need for apparatuses and methods for automatically capturing and processing images in a manner that provides useful information to users of the apparatuses.

SUMMARY

Embodiments consistent with the present disclosure provide an apparatus and methods for distinguishing between different types of captured image data.

In accordance with a disclosed embodiment, a wearable apparatus for selectively processing images is provided. The wearable apparatus includes an image sensor configured to capture a plurality of images from an environment of a user. The wearable apparatus also includes at least one processing device programmed to access at least one rule for classifying images. The at least one processing device is also programmed to classify, according to the at least one rule, at least a first subset of the plurality of images as key images and at least a second subset of the plurality of images as auxiliary images. The at least one processing device is further programmed to delete at least some of the auxiliary images.

Consistent with another disclosed embodiment, a wearable apparatus for selectively processing images is provided. The wearable apparatus includes an image sensor configured to capture a plurality of images from an environment of a user. The wearable apparatus also includes at least one processing device programmed to access at least one rule for classifying images. The at least one processing device is also programmed to classify, according to the at least one rule, a plurality of images as key images. The at least one processing device is also programmed to identify, in at least one of the key images, a visual trigger associated with a private contextual situation. The at least one processing device is further programmed to delete the at least one of the key images that includes the visual trigger associated with the private contextual situation.

Consistent with yet another disclosed embodiment, a method for selectively processing images is provided. The method includes processing a plurality of images captured by at least one image sensor included in a wearable apparatus. The method also includes accessing at least one rule for classifying images. The method also includes classifying, according to the at least one rule, at least a first subset of the plurality of images as key images and at least a second subset of the plurality of images as auxiliary images. The method further includes deleting at least some of the auxiliary images.

Consistent with other disclosed embodiments, non-transitory computer-readable storage media may store program instructions, which are executed by at least one processor and perform any of the methods described herein.

The foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the claims.

BRIEF DESCRIPTION OF THE DRAWINGS

The accompanying drawings, which are incorporated in and constitute a part of this disclosure, illustrate various disclosed embodiments. In the drawings:

FIG. 1A is a schematic illustration of an example of a user wearing a wearable apparatus according to a disclosed embodiment.

FIG. 1B is a schematic illustration of an example of the user wearing a wearable apparatus according to a disclosed embodiment.

FIG. 1C is a schematic illustration of an example of the user wearing a wearable apparatus according to a disclosed embodiment.

FIG. 1D is a schematic illustration of an example of the user wearing a wearable apparatus according to a disclosed embodiment.

FIG. 2 is a schematic illustration of an example system consistent with the disclosed embodiments.

FIG. 3A is a schematic illustration of an example of the wearable apparatus shown in FIG. 1A.

FIG. 3B is an exploded view of the example of the wearable apparatus shown in FIG. 3A.

FIG. 4A is a schematic illustration of an example of the wearable apparatus shown in FIG. 1B from a first viewpoint.

FIG. 4B is a schematic illustration of the example of the wearable apparatus shown in FIG. 1B from a second viewpoint.

FIG. 5A is a block diagram illustrating an example of the components of a wearable apparatus according to a first embodiment.

FIG. 5B is a block diagram illustrating an example of the components of a wearable apparatus according to a second embodiment.

FIG. 5C is a block diagram illustrating an example of the components of a wearable apparatus according to a third embodiment.

FIG. 6 is a block diagram illustrating an example memory storing a plurality of modules and databases.

FIG. 7 shows an example environment including a wearable apparatus for capturing and processing images.

FIG. 8 shows an example database table for storing information associated with key images.

FIG. 9 is a flowchart illustrating an example method for selectively processing images.

FIG. 10 is a flowchart illustrating an example method for selectively processing images.

FIG. 11 is a block diagram illustrating a memory storing a plurality of modules and databases.

FIG. 12 is a flowchart illustrating an example method for selectively processing images.

FIG. 13 is a block diagram illustrating a memory storing a plurality of modules and databases.

FIG. 14 shows an example environment including a wearable apparatus for capturing and processing images.

FIG. 15 is a flowchart illustrating an example method for selectively processing images.

DETAILED DESCRIPTION

The following detailed description refers to the accompanying drawings. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same or similar parts. While several illustrative embodiments are described herein, modifications, adaptations and other implementations are possible. For example, substitutions, additions or modifications may be made to the components illustrated in the drawings, and the illustrative methods described herein may be modified by substituting, reordering, removing, or adding steps to the disclosed methods. Accordingly, the following detailed description is not limited to the disclosed embodiments and examples. Instead, the proper scope is defined by the appended claims.

FIG. 1A illustrates a user 100 wearing an apparatus 110 that is physically connected (or integral) to glasses 130, consistent with the disclosed embodiments. Glasses 130 may be prescription glasses, magnifying glasses, non-prescription glasses, safety glasses, sunglasses, etc. Additionally, in some embodiments, glasses 130 may include parts of a frame and earpieces, nosepieces, etc., and one or more lenses. Thus, in some embodiments, glasses 130 may function primarily to support apparatus 110, and/or an augmented reality display device or other optical display device. In some embodiments, apparatus 110 may include an image sensor (not shown in FIG. 1A) for capturing real-time image data of the field-of-view of user 100. The term “image data” includes any form of data retrieved from optical signals in the near-infrared, infrared, visible, and ultraviolet spectrums. The image data may include video clips and/or photographs.

In some embodiments, apparatus 110 may communicate wirelessly or via a wire with a computing device 120. In some embodiments, computing device 120 may include, for example, a smartphone, a tablet, or a dedicated processing unit, which may be portable (e.g., can be carried in a pocket of user 100). Although shown in FIG. 1A as an external device, in some embodiments, computing device 120 may be provided as part of wearable apparatus 110 or glasses 130, whether integral thereto or mounted thereon. In some embodiments, computing device 120 may be included in an augmented reality display device or optical head mounted display provided integrally or mounted to glasses 130. In other embodiments, computing device 120 may be provided as part of another wearable or portable apparatus of user 100 including a wrist-strap, a multifunctional watch, a button, a clip-on, etc. In still other embodiments, computing device 120 may be provided as part of another system, such as an on-board automobile computing or navigation system. A person skilled in the art can appreciate that different types of computing devices and arrangements of devices may implement the functionality of the disclosed embodiments. Accordingly, in other implementations, computing device 120 may include a Personal Computer (PC), a laptop, an Internet server, etc.

FIG. 1B illustrates user 100 wearing apparatus 110 that is physically connected to a necklace 140, consistent with a disclosed embodiment. Such a configuration of apparatus 110 may be suitable for users that do not wear glasses some or all of the time. In this embodiment, user 100 can easily wear apparatus 110, and take it off.

FIG. 1C illustrates user 100 wearing apparatus 110 that is physically connected to a belt 150, consistent with a disclosed embodiment. Such a configuration of apparatus 110 may be designed as a belt buckle. Alternatively, apparatus 110 may include a clip for attaching to various clothing articles, such as belt 150, or a vest, a pocket, a collar, a cap or hat or other portion of a clothing article.

FIG. 1D illustrates user 100 wearing apparatus 110 that is physically connected to a wrist strap 160, consistent with a disclosed embodiment. Although the aiming direction of apparatus 110, according to this embodiment, may not match the field-of-view of user 100, apparatus 110 may include the ability to identify a hand-related trigger based on the tracked eye movement of user 100 indicating that user 100 is looking in the direction of the wrist strap 160. Wrist strap 160 may also include an accelerometer, a gyroscope, or other sensor for determining movement or orientation of a hand of user 100 for identifying a hand-related trigger.

FIG. 2 is a schematic illustration of an exemplary system 200 including a wearable apparatus 110, worn by user 100, and an optional computing device 120 and/or a server 250 capable of communicating with apparatus 110 via a network 240, consistent with disclosed embodiments. In some embodiments, apparatus 110 may capture and analyze image data, identify a hand-related trigger present in the image data, and perform an action and/or provide feedback to a user 100, based at least in part on the identification of the hand-related trigger. In some embodiments, optional computing device 120 and/or server 250 may provide additional functionality to enhance interactions of user 100 with his or her environment, as described in greater detail below.

According to the disclosed embodiments, apparatus 110 may include an image sensor system 220 for capturing real-time image data of the field-of-view of user 100. In some embodiments, apparatus 110 may also include a processing unit 210 for controlling and performing the disclosed functionality of apparatus 110, such as to control the capture of image data, analyze the image data, and perform an action and/or output a feedback based on a hand-related trigger identified in the image data. According to the disclosed embodiments, a hand-related trigger may include a gesture performed by user 100 involving a portion of a hand of user 100. Further, consistent with some embodiments, a hand-related trigger may include a wrist-related trigger. Additionally, in some embodiments, apparatus 110 may include a feedback outputting unit 230 for producing an output of information to user 100.

As discussed above, apparatus 110 may include an image sensor 220 for capturing image data. The term “image sensor” refers to a device capable of detecting and converting optical signals in the near-infrared, infrared, visible, and ultraviolet spectrums into electrical signals. The electrical signals may be used to form an image or a video stream (i.e. image data) based on the detected signal. The term “image data” includes any form of data retrieved from optical signals in the near-infrared, infrared, visible, and ultraviolet spectrums. Examples of image sensors may include semiconductor charge-coupled devices (CCD), active pixel sensors in complementary metal-oxide-semiconductor (CMOS), or N-type metal-oxide-semiconductor (NMOS, Live MOS). In some cases, image sensor 220 may be part of a camera included in apparatus 110.

Apparatus 110 may also include a processor 210 for controlling image sensor 220 to capture image data and for analyzing the image data according to the disclosed embodiments. As discussed in further detail below with respect to FIG. 5A, processor 210 may include a “processing device” for performing logic operations on one or more inputs of image data and other data according to stored or accessible software instructions providing desired functionality. In some embodiments, processor 210 may also control feedback outputting unit 230 to provide feedback to user 100 including information based on the analyzed image data and the stored software instructions. As the term is used herein, a “processing device” may access memory where executable instructions are stored or, in some embodiments, a “processing device” itself may include executable instructions (e.g., stored in memory included in the processing device).

In some embodiments, the information or feedback information provided to user 100 may include time information. The time information may include any information related to a current time of day and, as described further below, may be presented in any sensory perceptive manner. In some embodiments, time information may include a current time of day in a preconfigured format (e.g., 2:30 pm or 14:30). Time information may include the time in the user's current time zone (e.g., based on a determined location of user 100), as well as an indication of the time zone and/or a time of day in another desired location. In some embodiments, time information may include a number of hours or minutes relative to one or more predetermined times of day. For example, in some embodiments, time information may include an indication that three hours and fifteen minutes remain until a particular hour (e.g., until 6:00 pm), or some other predetermined time. Time information may also include a duration of time passed since the beginning of a particular activity, such as the start of a meeting or the start of a jog, or any other activity. In some embodiments, the activity may be determined based on analyzed image data. In other embodiments, time information may also include additional information related to a current time and one or more other routine, periodic, or scheduled events. For example, time information may include an indication of the number of minutes remaining until the next scheduled event, as may be determined from a calendar function or other information retrieved from computing device 120 or server 250, as discussed in further detail below.

Feedback outputting unit 230 may include one or more feedback systems for providing the output of information to user 100. In the disclosed embodiments, the audible or visual feedback may be provided via any type of connected audible or visual system or both. Feedback of information according to the disclosed embodiments may include audible feedback to user 100 (e.g., using a Bluetooth™ or other wired or wirelessly connected speaker, or a bone conduction headphone). Feedback outputting unit 230 of some embodiments may additionally or alternatively produce a visible output of information to user 100, for example, as part of an augmented reality display projected onto a lens of glasses 130 or provided via a separate heads up display in communication with apparatus 110, such as a display 260 provided as part of computing device 120, which may include an onboard automobile heads up display, an augmented reality device, a virtual reality device, a smartphone, PC, tablet, etc.

The term “computing device” refers to a device including a processing unit and having computing capabilities. Some examples of computing device 120 include a PC, laptop, tablet, or other computing systems such as an on-board computing system of an automobile, for example, each configured to communicate directly with apparatus 110 or server 250 over network 240. Another example of computing device 120 includes a smartphone having a display 260. In some embodiments, computing device 120 may be a computing system configured particularly for apparatus 110, and may be provided integral to apparatus 110 or tethered thereto. Apparatus 110 can also connect to computing device 120 over network 240 via any known wireless standard (e.g., Wi-Fi, Bluetooth®, etc.), as well as near-field capacitive coupling, and other short range wireless techniques, or via a wired connection. In an embodiment in which computing device 120 is a smartphone, computing device 120 may have a dedicated application installed therein. For example, user 100 may view on display 260 data (e.g., images, video clips, extracted information, feedback information, etc.) that originate from or are triggered by apparatus 110. In addition, user 100 may select part of the data for storage in server 250.

Network 240 may be a shared, public, or private network, may encompass a wide area or local area, and may be implemented through any suitable combination of wired and/or wireless communication networks. Network 240 may further comprise an intranet or the Internet. In some embodiments, network 240 may include short range or near-field wireless communication systems for enabling communication between apparatus 110 and computing device 120 provided in close proximity to each other, such as on or near a user's person, for example. Apparatus 110 may establish a connection to network 240 autonomously, for example, using a wireless module (e.g., Wi-Fi, cellular). In some embodiments, apparatus 110 may use the wireless module when being connected to an external power source, to prolong battery life. Further, communication between apparatus 110 and server 250 may be accomplished through any suitable communication channels, such as, for example, a telephone network, an extranet, an intranet, the Internet, satellite communications, off-line communications, wireless communications, transponder communications, a local area network (LAN), a wide area network (WAN), and a virtual private network (VPN).

As shown in FIG. 2, apparatus 110 may transfer or receive data to/from server 250 via network 240. In the disclosed embodiments, the data being received from server 250 and/or computing device 120 may include numerous different types of information based on the analyzed image data, including information related to a commercial product, or a person's identity, an identified landmark, and any other information capable of being stored in or accessed by server 250. In some embodiments, data may be received and transferred via computing device 120. Server 250 and/or computing device 120 may retrieve information from different data sources (e.g., a user specific database or a user's social network account or other account, the Internet, and other managed or accessible databases) and provide information to apparatus 110 related to the analyzed image data and a recognized trigger according to the disclosed embodiments. In some embodiments, calendar-related information retrieved from the different data sources may be analyzed to provide certain time information or a time-based context for providing certain information based on the analyzed image data.

An example wearable apparatus 110 incorporated with glasses 130 according to some embodiments (as discussed in connection with FIG. 1A) is shown in greater detail in FIG. 3A. In some embodiments, apparatus 110 may be associated with a structure (not shown in FIG. 3A) that enables easy detaching and reattaching of apparatus 110 to glasses 130. In some embodiments, when apparatus 110 attaches to glasses 130, image sensor 220 acquires a set aiming direction without the need for directional calibration. The set aiming direction of image sensor 220 may substantially coincide with the field-of-view of user 100. For example, a camera associated with image sensor 220 may be installed within apparatus 110 in a predetermined angle in a position facing slightly downwards (e.g., 5-15 degrees from the horizon). Accordingly, the set aiming direction of image sensor 220 may substantially match the field-of-view of user 100.

FIG. 3B is an exploded view of the components of the embodiment discussed regarding FIG. 3A. Attaching apparatus 110 to glasses 130 may take place in the following way. Initially, a support 310 may be mounted on glasses 130 using a screw 320, in the side of support 310. Then, apparatus 110 may be clipped on support 310 such that it is aligned with the field-of-view of user 100. The term “support” includes any device or structure that enables detaching and reattaching of a device including a camera to a pair of glasses or to another object (e.g., a helmet). Support 310 may be made from plastic (e.g., polycarbonate), metal (e.g., aluminum), or a combination of plastic and metal (e.g., carbon fiber graphite). Support 310 may be mounted on any kind of glasses (e.g., eyeglasses, sunglasses, 3D glasses, safety glasses, etc.) using screws, bolts, snaps, or any fastening means used in the art.

In some embodiments, support 310 may include a quick release mechanism for disengaging and reengaging apparatus 110. For example, support 310 and apparatus 110 may include magnetic elements. As an alternative example, support 310 may include a male latch member and apparatus 110 may include a female receptacle. In other embodiments, support 310 can be an integral part of a pair of glasses, or sold separately and installed by an optometrist. For example, support 310 may be configured for mounting on the arms of glasses 130 near the frame front, but before the hinge. Alternatively, support 310 may be configured for mounting on the bridge of glasses 130.

In some embodiments, apparatus 110 may be provided as part of a glasses frame 130, with or without lenses. Additionally, in some embodiments, apparatus 110 may be configured to provide an augmented reality display projected onto a lens of glasses 130 (if provided), or alternatively, may include a display for projecting time information, for example, according to the disclosed embodiments. Apparatus 110 may include the additional display or alternatively, may be in communication with a separately provided display system that may or may not be attached to glasses 130.

In some embodiments, apparatus 110 may be implemented in a form other than wearable glasses, as described above with respect to FIGS. 1B-1D, for example. FIG. 4A is a schematic illustration of an example of an additional embodiment of apparatus 110 from a first viewpoint. The viewpoint shown in FIG. 4A is from the front of apparatus 110. Apparatus 110 includes an image sensor 220, a clip (not shown), a function button (not shown) and a hanging ring 410 for attaching apparatus 110 to, for example, necklace 140, as shown in FIG. 1B. When apparatus 110 hangs on necklace 140, the aiming direction of image sensor 220 may not fully coincide with the field-of-view of user 100, but the aiming direction would still correlate with the field-of-view of user 100.

FIG. 4B is a schematic illustration of the example of a second embodiment of apparatus 110, from a second viewpoint. The viewpoint shown in FIG. 4B is from a side orientation of apparatus 110. In addition to hanging ring 410, as shown in FIG. 4B, apparatus 110 may further include a clip 420. User 100 can use clip 420 to attach apparatus 110 to a shirt or belt 150, as illustrated in FIG. 1C. Clip 420 may provide an easy mechanism for disengaging and reengaging apparatus 110 from different articles of clothing. In other embodiments, apparatus 110 may include a female receptacle for connecting with a male latch of a car mount or universal stand.

In some embodiments, apparatus 110 includes a function button 430 for enabling user 100 to provide input to apparatus 110. Function button 430 may accept different types of tactile input (e.g., a tap, a click, a double-click, a long press, a right-to-left slide, a left-to-right slide). In some embodiments, each type of input may be associated with a different action. For example, a tap may be associated with the function of taking a picture, while a right-to-left slide may be associated with the function of recording a video.

The example embodiments discussed above with respect to FIGS. 3A, 3B, 4A, and 4B are not limiting. In some embodiments, apparatus 110 may be implemented in any suitable configuration for performing the disclosed methods. For example, referring back to FIG. 2, the disclosed embodiments may implement an apparatus 110 according to any configuration including an image sensor 220 and a processor unit 210 to perform image analysis and for communicating with a feedback unit 230.

FIG. 5A is a block diagram illustrating the components of apparatus 110 according to an example embodiment. As shown in FIG. 5A, and as similarly discussed above, apparatus 110 includes an image sensor 220, a memory 550, a processor 210, a feedback outputting unit 230, a wireless transceiver 530, and a mobile power source 520. In other embodiments, apparatus 110 may also include buttons, other sensors such as a microphone, and inertial measurement devices such as accelerometers, gyroscopes, magnetometers, temperature sensors, color sensors, light sensors, etc. Apparatus 110 may further include a data port 570 and a power connection 510 with suitable interfaces for connecting with an external power source or an external device (not shown).

Processor 210, depicted in FIG. 5A, may include any suitable processing device. The term “processing device” includes any physical device having an electric circuit that performs a logic operation on input or inputs. For example, the processing device may include one or more integrated circuits, microchips, microcontrollers, microprocessors, all or part of a central processing unit (CPU), graphics processing unit (GPU), digital signal processor (DSP), field-programmable gate array (FPGA), or other circuits suitable for executing instructions or performing logic operations. The instructions executed by the processing device may, for example, be pre-loaded into a memory integrated with or embedded into the processing device or may be stored in a separate memory (e.g., memory 550). Memory 550 may comprise a Random Access Memory (RAM), a Read-Only Memory (ROM), a hard disk, an optical disk, a magnetic medium, a flash memory, other permanent, fixed, or volatile memory, or any other mechanism capable of storing instructions.

Although, in the embodiment illustrated in FIG. 5A, apparatus 110 includes one processing device (e.g., processor 210), apparatus 110 may include more than one processing device. Each processing device may have a similar construction, or the processing devices may be of differing constructions that are electrically connected or disconnected from each other. For example, the processing devices may be separate circuits or integrated in a single circuit. When more than one processing device is used, the processing devices may be configured to operate independently or collaboratively. The processing devices may be coupled electrically, magnetically, optically, acoustically, mechanically or by other means that permit them to interact.

In some embodiments, processor 210 may process a plurality of images captured from the environment of user 100 to determine different parameters related to capturing subsequent images. For example, processor 210 can determine, based on information derived from captured image data, a value for at least one of the following: an image resolution, a compression ratio, a cropping parameter, frame rate, a focus point, an exposure time, an aperture size, and a light sensitivity. The determined value may be used in capturing at least one subsequent image. Additionally, processor 210 can detect images including at least one hand-related trigger in the environment of the user and perform an action and/or provide an output of information to a user via feedback outputting unit 230.
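
By way of illustration only, the following simplified sketch shows how such a determination might be made in software; the specific thresholds, field names, and decision logic are assumptions made for this example and are not part of the disclosed embodiments.

```python
# Illustrative sketch (not the disclosed implementation): choosing capture
# parameters for a subsequent image from data derived from prior captures.
# Thresholds and field names are assumptions for the example.

def next_capture_parameters(available_storage_mb, mean_brightness):
    """Return a dict of parameters for the next capture."""
    params = {}
    # Reduce resolution and raise compression when storage is low.
    if available_storage_mb < 100:
        params["resolution"] = (1280, 720)
        params["compression_ratio"] = 0.5
    else:
        params["resolution"] = (2592, 1944)
        params["compression_ratio"] = 0.8
    # Lengthen exposure and raise sensitivity for dark scenes.
    if mean_brightness < 60:          # 8-bit mean pixel value
        params["exposure_time_ms"] = 30
        params["light_sensitivity_iso"] = 800
    else:
        params["exposure_time_ms"] = 10
        params["light_sensitivity_iso"] = 100
    return params

print(next_capture_parameters(available_storage_mb=50, mean_brightness=40))
```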

In another embodiment, processor 210 can change the aiming direction of image sensor 220. For example, when apparatus 110 is attached with clip 420, the aiming direction of image sensor 220 may not coincide with the field-of-view of user 100. Processor 210 may recognize certain situations from the analyzed image data and adjust the aiming direction of image sensor 220 to capture relevant image data. For example, in one embodiment, processor 210 may detect an interaction with another individual and sense that the individual is not fully in view, because image sensor 220 is tilted down. Responsive thereto, processor 210 may adjust the aiming direction of image sensor 220 to capture image data of the individual. Other scenarios are also contemplated where processor 210 may recognize the need to adjust an aiming direction of image sensor 220.

In some embodiments, processor 210 may communicate data to feedback-outputting unit 230, which may include any device configured to provide information to a user 100. Feedback outputting unit 230 may be provided as part of apparatus 110 (as shown) or may be provided external to apparatus 110 and communicatively coupled thereto. Feedback-outputting unit 230 may be configured to output visual or nonvisual feedback based on signals received from processor 210, such as when processor 210 recognizes a hand-related trigger in the analyzed image data.

The term “feedback” refers to any output or information provided in response to processing at least one image in an environment. In some embodiments, as similarly described above, feedback may include an audible or visible indication of time information, detected text or numerals, the value of currency, a branded product, a person's identity, the identity of a landmark or other environmental situation or condition including the street names at an intersection or the color of a traffic light, etc., as well as other information associated with each of these. For example, in some embodiments, feedback may include additional information regarding the amount of currency still needed to complete a transaction, information regarding the identified person, or historical information, admission times, and prices associated with a detected landmark, etc. In some embodiments, feedback may include an audible tone, a tactile response, and/or information previously recorded by user 100. Feedback-outputting unit 230 may comprise appropriate components for outputting acoustical and tactile feedback. For example, feedback-outputting unit 230 may comprise audio headphones, a hearing aid type device, a speaker, a bone conduction headphone, interfaces that provide tactile cues, vibrotactile stimulators, etc. In some embodiments, processor 210 may communicate signals with an external feedback outputting unit 230 via a wireless transceiver 530, a wired connection, or some other communication interface. In some embodiments, feedback outputting unit 230 may also include any suitable display device for visually displaying information to user 100.

As shown in FIG. 5A, apparatus 110 includes memory 550. Memory 550 may include one or more sets of instructions accessible to processor 210 to perform the disclosed methods, including instructions for recognizing a hand-related trigger in the image data. In some embodiments memory 550 may store image data (e.g., images, videos) captured from the environment of user 100. In addition, memory 550 may store information specific to user 100, such as image representations of known individuals, favorite products, personal items, and calendar or appointment information, etc. In some embodiments, processor 210 may determine, for example, which type of image data to store based on available storage space in memory 550. In another embodiment, processor 210 may extract information from the image data stored in memory 550.

As further shown in FIG. 5A, apparatus 110 includes mobile power source 520. The term “mobile power source” includes any device capable of providing electrical power, which can be easily carried by hand (e.g., mobile power source 520 may weigh less than a pound). The mobility of the power source enables user 100 to use apparatus 110 in a variety of situations. In some embodiments, mobile power source 520 may include one or more batteries (e.g., nickel-cadmium batteries, nickel-metal hydride batteries, and lithium-ion batteries) or any other type of electrical power supply. In other embodiments, mobile power source 520 may be rechargeable and contained within a casing that holds apparatus 110. In yet other embodiments, mobile power source 520 may include one or more energy harvesting devices for converting ambient energy into electrical energy (e.g., portable solar power units, human vibration units, etc.).

Mobile power source 520 may power one or more wireless transceivers (e.g., wireless transceiver 530 in FIG. 5A). The term “wireless transceiver” refers to any device configured to exchange transmissions over an air interface by use of radio frequency, infrared frequency, magnetic field, or electric field. Wireless transceiver 530 may use any known standard to transmit and/or receive data (e.g., Wi-Fi, Bluetooth®, Bluetooth Smart, 802.15.4, or ZigBee). In some embodiments, wireless transceiver 530 may transmit data (e.g., raw image data, processed image data, extracted information) from apparatus 110 to computing device 120 and/or server 250. Wireless transceiver 530 may also receive data from computing device 120 and/or server 250. In other embodiments, wireless transceiver 530 may transmit data and instructions to an external feedback outputting unit 230.

FIG. 5B is a block diagram illustrating the components of apparatus 110 according to another example embodiment. In some embodiments, apparatus 110 includes a first image sensor 220a, a second image sensor 220b, a memory 550, a first processor 210a, a second processor 210b, a feedback outputting unit 230, a wireless transceiver 530, a mobile power source 520, and a power connector 510. In the arrangement shown in FIG. 5B, each of the image sensors may provide images in a different image resolution, or face a different direction. Alternatively, each image sensor may be associated with a different camera (e.g., a wide angle camera, a narrow angle camera, an IR camera, etc.). In some embodiments, apparatus 110 can select which image sensor to use based on various factors. For example, processor 210a may determine, based on available storage space in memory 550, to capture subsequent images in a certain resolution.

Apparatus 110 may operate in a first processing-mode and in a second processing-mode, such that the first processing-mode may consume less power than the second processing-mode. For example, in the first processing-mode, apparatus 110 may capture images and process the captured images to make real-time decisions based on an identified hand-related trigger. In the second processing-mode, apparatus 110 may extract information from stored images in memory 550 and delete images from memory 550. In some embodiments, mobile power source 520 may provide more than fifteen hours of processing in the first processing-mode and about three hours of processing in the second processing-mode. Accordingly, different processing-modes may allow mobile power source 520 to produce sufficient power for powering apparatus 110 for various time periods (e.g., more than two hours, more than four hours, more than ten hours, etc.).

In some embodiments, apparatus 110 may use first processor 210a in the first processing-mode when powered by mobile power source 520, and second processor 210b in the second processing-mode when powered by external power source 580 that is connectable via power connector 510. In other embodiments, apparatus 110 may determine, based on predefined conditions, which processors or which processing modes to use. Apparatus 110 may operate in the second processing-mode even when apparatus 110 is not powered by external power source 580. For example, apparatus 110 may determine that it should operate in the second processing-mode when apparatus 110 is not powered by external power source 580, if the available storage space in memory 550 for storing new image data is lower than a predefined threshold.

Although one wireless transceiver is depicted in FIG. 5B, apparatus 110 may include more than one wireless transceiver (e.g., two wireless transceivers). In an arrangement with more than one wireless transceiver, each of the wireless transceivers may use a different standard to transmit and/or receive data. In some embodiments, a first wireless transceiver may communicate with server 250 or computing device 120 using a cellular standard (e.g., LTE or GSM), and a second wireless transceiver may communicate with server 250 or computing device 120 using a short-range standard (e.g., Wi-Fi or Bluetooth®). In some embodiments, apparatus 110 may use the first wireless transceiver when the wearable apparatus is powered by a mobile power source included in the wearable apparatus, and use the second wireless transceiver when the wearable apparatus is powered by an external power source.

FIG. 5C is a block diagram illustrating the components of apparatus 110 according to another example embodiment including computing device 120. In this embodiment, apparatus 110 includes an image sensor 220, a memory 550a, a first processor 210, a feedback-outputting unit 230, a wireless transceiver 530a, a mobile power source 520, and a power connector 510. As further shown in FIG. 5C, computing device 120 includes a processor 540, a feedback-outputting unit 545, a memory 550b, a wireless transceiver 530b, and a display 260. One example of computing device 120 is a smartphone or tablet having a dedicated application installed therein. In other embodiments, computing device 120 may include any configuration such as an on-board automobile computing system, a PC, a laptop, and any other system consistent with the disclosed embodiments. In this example, user 100 may view feedback output in response to identification of a hand-related trigger on display 260. Additionally, user 100 may view other data (e.g., images, video clips, object information, schedule information, extracted information, etc.) on display 260. In addition, user 100 may communicate with server 250 via computing device 120.

In some embodiments, processor 210 and processor 540 are configured to extract information from captured image data. The term “extracting information” includes any process by which information associated with objects, individuals, locations, events, etc., is identified in the captured image data by any means known to those of ordinary skill in the art. In some embodiments, apparatus 110 may use the extracted information to send feedback or other real-time indications to feedback outputting unit 230 or to computing device 120. In some embodiments, processor 210 may identify in the image data the individual standing in front of user 100, and send computing device 120 the name of the individual and the last time user 100 met the individual. In another embodiment, processor 210 may identify in the image data, one or more visible triggers, including a hand-related trigger, and determine whether the trigger is associated with a person other than the user of the wearable apparatus to selectively determine whether to perform an action associated with the trigger. One such action may be to provide a feedback to user 100 via feedback-outputting unit 230 provided as part of (or in communication with) apparatus 110 or via a feedback unit 545 provided as part of computing device 120. For example, feedback-outputting unit 545 may be in communication with display 260 to cause the display 260 to visibly output information. In some embodiments, processor 210 may identify in the image data a hand-related trigger and send computing device 120 an indication of the trigger. Processor 540 may then process the received trigger information and provide an output via feedback outputting unit 545 or display 260 based on the hand-related trigger. In other embodiments, processor 540 may determine a hand-related trigger and provide suitable feedback similar to the above, based on image data received from apparatus 110. In some embodiments, processor 540 may provide instructions or other information, such as environmental information to apparatus 110 based on an identified hand-related trigger.

In some embodiments, processor 210 may identify other environmental information in the analyzed images, such as an individual standing in front of user 100, and send computing device 120 information related to the analyzed information such as the name of the individual and the last time user 100 met the individual. In a different embodiment, processor 540 may extract statistical information from captured image data and forward the statistical information to server 250. For example, certain information regarding the types of items a user purchases, or the frequency with which a user patronizes a particular merchant, etc., may be determined by processor 540. Based on this information, server 250 may send computing device 120 coupons and discounts associated with the user's preferences.

When apparatus 110 is connected or wirelessly connected to computing device 120, apparatus 110 may transmit at least part of the image data stored in memory 550a for storage in memory 550b. In some embodiments, after computing device 120 confirms that transferring the part of image data was successful, processor 540 may delete the part of the image data. The term “delete” means that the image is marked as ‘deleted’ and other image data may be stored instead of it, but does not necessarily mean that the image data was physically removed from the memory.
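
The following minimal sketch illustrates this notion of logical deletion, in which an image record is merely marked as deleted so that its storage may later be reused; the record layout and function names are assumptions made for this example.

```python
# Minimal sketch of "delete" as a logical mark rather than physical erasure;
# the record structure is an assumption for illustration only.

records = [
    {"id": 1, "data": b"...", "deleted": False},
    {"id": 2, "data": b"...", "deleted": False},
]

def delete(record_id):
    for r in records:
        if r["id"] == record_id:
            r["deleted"] = True   # space may later be reused for new image data

def store(new_data):
    # Prefer overwriting a slot that was previously marked deleted.
    for r in records:
        if r["deleted"]:
            r.update(data=new_data, deleted=False)
            return r["id"]
    records.append({"id": len(records) + 1, "data": new_data, "deleted": False})
    return records[-1]["id"]

delete(1)
print(store(b"new image bytes"))   # reuses slot 1
```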

As will be appreciated by a person skilled in the art having the benefit of this disclosure, numerous variations and/or modifications may be made to the disclosed embodiments. Not all components are essential for the operation of apparatus 110. Any component may be located in any appropriate apparatus and the components may be rearranged into a variety of configurations while providing the functionality of the disclosed embodiments. Therefore, the foregoing configurations are examples and, regardless of the configurations discussed above, apparatus 110 can capture, store, and process images.

Further, the foregoing and following description refers to storing and/or processing images or image data. In the embodiments disclosed herein, the stored and/or processed images or image data may comprise a representation of one or more images captured by image sensor 220. As the term is used herein, a “representation” of an image (or image data) may include an entire image or a portion of an image. A representation of an image (or image data) may have the same resolution or a lower resolution as the image (or image data), and/or a representation of an image (or image data) may be altered in some respect (e.g., be compressed, have a lower resolution, have one or more colors that are altered, etc.).

For example, apparatus 110 may capture an image and store a representation of the image that is compressed as a .JPG file. As another example, apparatus 110 may capture an image in color, but store a black-and-white representation of the color image. As yet another example, apparatus 110 may capture an image and store a different representation of the image (e.g., a portion of the image). For example, apparatus 110 may store a portion of an image that includes a face of a person who appears in the image, but that does not substantially include the environment surrounding the person. Similarly, apparatus 110 may, for example, store a portion of an image that includes a product that appears in the image, but does not substantially include the environment surrounding the product. As yet another example, apparatus 110 may store a representation of an image at a reduced resolution (i.e., at a resolution that is of a lower value than that of the captured image). Storing representations of images may allow apparatus 110 to save storage space in memory 550. Furthermore, processing representations of images may allow apparatus 110 to improve processing efficiency and/or help to preserve battery life.
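
For illustration only, the following sketch shows how such representations might be produced with a common imaging library (Pillow); the file names and the hard-coded face bounding box are assumptions, and an actual implementation would obtain the face region from a detector.

```python
# Hedged sketch of storing reduced "representations" of a captured image.
from PIL import Image

img = Image.open("captured.png")                 # full captured image (assumed file)

# Compressed representation saved as a .JPG file.
img.save("representation_compressed.jpg", "JPEG", quality=60)

# Black-and-white (grayscale) representation of a color capture.
img.convert("L").save("representation_gray.jpg", "JPEG")

# Portion of the image containing a face, without the surrounding environment
# (the bounding box would come from a detector; here it is hard-coded).
face_box = (100, 80, 300, 320)                   # (left, upper, right, lower)
img.crop(face_box).save("representation_face.jpg", "JPEG")

# Reduced-resolution representation.
w, h = img.size
img.resize((w // 4, h // 4)).save("representation_small.jpg", "JPEG")
```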

In addition to the above, in some embodiments, any one of apparatus 110 or computing device 120, via processor 210 or 540, may further process the captured image data to provide additional functionality to recognize objects and/or gestures and/or other information in the captured image data. In some embodiments, actions may be taken based on the identified objects, gestures, or other information. In some embodiments, processor 210 or 540 may identify in the image data, one or more visible triggers, including a hand-related trigger, and determine whether the trigger is associated with a person other than the user to determine whether to perform an action associated with the trigger.

Wearable apparatus 110 may be configured to selectively process images captured by image sensors (e.g., image sensor 220, 220a, and/or 220b). In some embodiments, wearable apparatus 110 may be configured to distinguish between different types of image data (or images) captured from an environment of user 100 through a wearable image sensor, such as image sensor 220, 220a, and/or 220b. For example, at least one processing device (e.g., processor 210, 210a, 210b, and/or 540) may be programmed to access at least one rule from a rule database for classifying images. The processing device may be programmed to distinguish the different types of image data (or images) by classifying, according to the at least one rule, at least a first subset of the images as key images, and at least a second subset of the images as auxiliary images. A key image may be an image that includes information that is important or has a particular significance to at least one purpose of operating wearable apparatus 110 and/or to a user of wearable apparatus 110. An auxiliary image may be an image that includes information that is less important to the at least one purpose of operating wearable apparatus 110 and/or to a user of wearable apparatus 110, as compared to a key image. Thus, the key images and auxiliary images may be defined based on predetermined rules.
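
As an illustration of this classification step only, the sketch below partitions a set of captured images into key and auxiliary images according to a rule supplied as a predicate; the dictionary representation of an image and the person-based example rule (which mirrors the example discussed below) are assumptions made for this sketch.

```python
# Illustrative sketch of rule-based classification into key and auxiliary
# images; the "contains_person" flag stands in for the output of whatever
# detector an actual implementation would use.

def classify(images, rule):
    key, auxiliary = [], []
    for image in images:
        (key if rule(image) else auxiliary).append(image)
    return key, auxiliary

# Example rule: an image is a key image if it contains at least one person.
person_rule = lambda image: image.get("contains_person", False)

captured = [
    {"id": 1, "contains_person": True},
    {"id": 2, "contains_person": False},   # e.g., a shop front with no people
]
key_images, auxiliary_images = classify(captured, person_rule)
print(len(key_images), len(auxiliary_images))   # 1 1
```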

User 100 of wearable apparatus 110, the manufacturer of wearable apparatus 110, and/or other third parties may define the rules and update the rules. The rules may be updated based on data received by wearable apparatus 110 from network 240 (e.g., transmitted by one or more of computing device 120 and server 250). For example, when at least one purpose of operating wearable apparatus 110 is to capture images of persons in the field of view of wearable apparatus 110, a key image would be an image that includes one or more persons. An auxiliary image may be an image that does not include any people, such as an image of a shop on the street.

The purpose of operating wearable device 110 may be general or specific. For example, as discussed above, simply identifying a person in an image may constitute categorizing the image as a key image. In some embodiments, the purpose may be more specific. For example, in some embodiments, only images that include persons that are known to the user of wearable apparatus 110 may be classified as key images. Wearable apparatus 110 may determine that a person appearing in image data is known to the user by, for example, comparing facial features of a person in a captured image to a database storing images including faces of persons known to a user of wearable apparatus 110.
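
A highly simplified sketch of such a comparison is shown below; the toy face descriptors, the distance threshold, and the idea of representing faces as small feature vectors are assumptions made purely for illustration and do not reflect the actual facial-comparison technique used by wearable apparatus 110.

```python
# Simplified sketch of checking whether a detected person is known to the
# user by comparing a face descriptor against stored descriptors.
import math

known_faces = {
    "Alice": [0.12, 0.80, 0.33],
    "Bob":   [0.90, 0.10, 0.45],
}

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_known_person(descriptor, threshold=0.2):
    best = min(known_faces, key=lambda name: distance(descriptor, known_faces[name]))
    return best if distance(descriptor, known_faces[best]) <= threshold else None

print(match_known_person([0.10, 0.82, 0.30]))   # "Alice": close enough to match
print(match_known_person([0.50, 0.50, 0.50]))   # None: no known face is close
```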

In some embodiments, the processing device may be programmed to delete at least some of the auxiliary images. Deleting some auxiliary images may save data storage space needed for the operation of wearable apparatus 110, thereby reducing the cost associated with operating wearable apparatus 110.

In some embodiments, the at least one rule may classify images that include an object in the environment of user 100 as key images, and classify images that do not include the object as auxiliary images. For example, the rule may classify images including a person as key images, and classify images that do not include the person as auxiliary images. In some embodiments, the rule may classify images including an object as auxiliary images, and classify images that do not include the object as key images. For example, the rule may classify images including a product advertisement as auxiliary images, and classify images that do not include the product advertisement as key images.

In some embodiments, the rule may classify images according to image quality level. For example, the rule may classify images having a quality level that is higher than or equal to a predetermined quality threshold as key images, and images having a quality level that is lower than the predetermined quality threshold as auxiliary images. The predetermined quality threshold may be determined based on at least one of a resolution of the image, a level of focus, the location of a predefined object within the image, etc. For example, an image may be classified as a key image when the resolution of the image is higher than or equal to 3.0 Megapixels (an example of the predetermined quality threshold). As another example, an image may be classified as a key image when a person appears in the image at a location that is within a predetermined distance from a center point of the image (or be classified as an auxiliary image when the person appears in the image at a location that is outside of the predetermined distance from the center point of the image).
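
By way of example only, the sketch below applies the quality-based rule described above; the 3.0-megapixel figure and the person-location criterion come from the examples in this paragraph, while the normalized center-distance threshold of 0.25 is an assumed value.

```python
# Sketch of a quality-based classification rule; thresholds are illustrative.

def is_key_by_quality(width, height, person_xy=None, max_center_dist=0.25):
    megapixels = (width * height) / 1_000_000
    if megapixels < 3.0:                       # predetermined quality threshold
        return False
    if person_xy is not None:
        # Normalized distance of the detected person from the image center.
        dx = person_xy[0] / width - 0.5
        dy = person_xy[1] / height - 0.5
        if (dx * dx + dy * dy) ** 0.5 > max_center_dist:
            return False                       # person too far from center
    return True

print(is_key_by_quality(2048, 1536, person_xy=(1100, 800)))   # True
print(is_key_by_quality(1280, 720))                           # False: under 3.0 MP
```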

In some embodiments, the rule may associate a first importance level to an image including one or more of a face, a product, and text. The first importance level may be higher than an importance level of an image that does not include a face, a product, or text. In some embodiments, the rule may associate a second importance level to an image including one or more of a predefined location, a predefined face of a specific individual, a predefined type of object, and a predefined text. The second importance level may be higher than the first importance level.
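
For illustration, the following sketch assigns the two importance levels described above; the attribute names and the numeric level values are assumptions made for the example.

```python
# Sketch of the two importance levels; names and level values are assumed.

def importance_level(image):
    # Second (higher) level: a predefined location, a predefined face of a
    # specific individual, a predefined type of object, or a predefined text.
    if any(image.get(k) for k in
           ("predefined_location", "known_face", "predefined_object", "predefined_text")):
        return 2
    # First level: any face, product, or text at all.
    if any(image.get(k) for k in ("face", "product", "text")):
        return 1
    return 0

print(importance_level({"face": True}))                      # 1
print(importance_level({"face": True, "known_face": True}))  # 2
```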

In some embodiments, the processing device may be programmed to process at least one key image to recognize image content (e.g., activities performed by persons, signage, and/or advertisement shown on buildings) within the key image. The processing device may be programmed to select, based on the recognized image content, one of a plurality of alternative actions associated with the key image, and may execute the selected action. In some embodiments, the plurality of alternative actions may include transmitting the key image to a computing device (such as computing device 120 and/or server 250), and transmitting information regarding the key image to the computing device. The information regarding the key image may include location information (e.g., where key image is captured), time information (e.g., when key image is captured), contextual information (e.g., what is happening in the key image), etc.
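
The sketch below illustrates, in simplified form, selecting one of a plurality of alternative actions based on recognized image content; the action functions, the dispatch keys, and the example content labels are assumptions made for this illustration.

```python
# Sketch of selecting an alternative action for a key image based on its
# recognized content; names and dispatch logic are illustrative assumptions.

def transmit_key_image(image):
    print(f"transmitting full key image {image['id']} to the computing device")

def transmit_key_image_info(image):
    info = {k: image.get(k) for k in ("id", "location", "time", "context")}
    print(f"transmitting information about key image: {info}")

def select_action(image):
    # E.g., send the full image when a person interaction is recognized,
    # otherwise send only location/time/contextual information.
    if image.get("context") == "person_interaction":
        return transmit_key_image
    return transmit_key_image_info

key_image = {"id": 7, "location": "41.1, -73.9", "time": "14:30",
             "context": "advertisement_on_building"}
select_action(key_image)(key_image)
```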

FIG. 6 is a block diagram illustrating a memory (e.g., memory 550, 550a, and/or 550b) according to the disclosed embodiments. The memory may include one or more modules or sets of instructions, which when executed by at least one processing device, carry out methods consistent with the disclosed embodiments. For example, the memory may include instructions executable by the at least one processing device to process or analyze images captured by the image sensors. In some embodiments, the processing device may be included in wearable apparatus 110. For example, the processing device may include processor 210, 210a, and/or 210b shown in FIGS. 5A and 5B. The processing device may process the image data captured by the image sensors in near real time, as the image data are being captured by the image sensors. In some embodiments, the processing device may include a processor that is separately located from wearable apparatus 110. The processing device may include a processor that is remotely connected with wearable apparatus 110 through network 240, which may be a wired or wireless network, or through any other connectivity means, such as Bluetooth, near field communication (NFC), etc. For example, the processing device may include processor 540 included in computing device 120, which may be connected with wearable apparatus 110 through a wired or wireless connection, such as through a cable, Bluetooth, WiFi, infrared, or near field communication (NFC). In some embodiments, the processing device may include a processor included in server 250, which may be wirelessly connected with wearable apparatus 110 through network 240. In some embodiments, the processing device may include a cloud computing processor remotely and wirelessly connected with wearable apparatus 110 through network 240. Wearable apparatus 110 may transmit captured image data to the processing device in near real time, and the processing device may process the captured image data and provide results of processing to wearable apparatus 110 in near real time. Further, in some embodiments, one or more databases and one or more modules may be located remotely from wearable apparatus 110 (e.g., included in computing device 120 and/or server 250).

In the example shown in FIG. 6, memory 550 includes or stores an image database 601, an action database 602, and a rule database 603. Memory 550 may also include a database access module 604, an image classification module 605, an image processing module 606, and an action execution module 607. Additional or fewer databases and/or modules may be included in memory 550. The modules and databases shown in FIG. 6 are examples, and a processor in the disclosed embodiments may operate according to any suitable process.

In the embodiment shown in FIG. 6, memory 550 is configured to store an image database 601. Image database 601 may be configured to store various images, such as images (or image data) captured by an image sensor (e.g., image sensor 220, 220a, and/or 220b). Image database 601 may also be configured to store data other than image data, such as textual data, audio data, video data, etc. For example, image database 601 may be configured to store information related to the images, such as location information, date and time information, an identity of an object identified in the images, information associated with key images, and/or information associated with auxiliary images, etc.

In the example shown in FIG. 6, memory 550 is also configured to store an action database 602. Action database 602 may be configured to store predefined actions that may be taken by the processing device. For example, the predefined actions may be taken by the processing device in response to identifying an object from an image, classifying an image as a key image, recognizing image content within the key image, classifying an image as an auxiliary image, etc. Examples of the predefined actions may include transmitting key images to a computing device and transmitting information regarding the key images to the computing device. The predefined actions may also be referred to as alternative actions. In some embodiments, the predefined actions stored in action database 602 may also be updated, either periodically or dynamically. For example, as the environment of user 100 and/or the context associated with capturing image data change, as may be identified from the captured images, the predefined actions may be updated to reflect the changing environment and/or context associated with capturing image data.

Memory 550 is also configured to store a rule database 603. Rule database 603 may be configured to store one or more predefined rules that may be predefined by user 100 and/or the manufacturer of wearable apparatus 110. In some embodiments, rule database 603 may also be configured to store one or more dynamically updatable rules that may be updated after the initial rules are stored in rule database 603. For example, the dynamically updatable rules may be updated, e.g., by processing device, periodically, in near real time based on changing context identified in the captured images. For example, a rule stored in rule database 603 may classify a first image as a key image when the first image includes a particular object and classify a second image as an auxiliary image when the second image does not include the particular object. This rule may be updated as the environment of user 100 and/or the context associated with capturing image data change, as may be identified from the captured images. For example, the updated rule may classify a first image as an auxiliary image when the first image includes the particular object and classify a second image as a key image when the second image does not include the particular object.
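
The following is a minimal, hypothetical sketch (not part of the disclosed embodiments) of how a rule of the kind stored in rule database 603 could be represented and dynamically updated; the function name, object labels, and update policy are illustrative assumptions only.

```python
# Illustrative sketch: a classification rule modeled as a predicate over detected
# object labels. "Updating" the rule replaces or inverts the predicate.
# All names (make_object_rule, "person_765", etc.) are hypothetical.

def make_object_rule(target_object, object_means_key=True):
    """Return a rule classifying an image by whether it contains target_object."""
    def rule(detected_objects):
        has_object = target_object in detected_objects
        return "key" if has_object == object_means_key else "auxiliary"
    return rule

# Initial rule: images containing the particular object are key images.
rule = make_object_rule("person_765", object_means_key=True)
print(rule({"person_765", "advertisement_725"}))  # key
print(rule({"advertisement_745"}))                # auxiliary

# Dynamically updated rule: the same object now marks an image as auxiliary.
rule = make_object_rule("person_765", object_means_key=False)
print(rule({"person_765"}))                       # auxiliary
```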

As shown in FIG. 6, memory 550 is also configured to store a database access module 604. The processing device may execute instructions associated with database access module 604 to access image database 601, action database 602, and rule database 603, for example, to retrieve previously stored image data, predefined actions, and/or rules for performing analysis of the image data. The processing device may also execute instructions associated with database access module 604 to store image data in image database 601, actions in action database 602, and rules in rule database 603.

In the embodiment shown in FIG. 6, memory 550 is configured to store an image classification module 605. The processing device may execute instructions associated with image classification module 605 to perform various analyses and processes of image data captured by the image sensor to classify the captured images. For example, the processing device may execute instructions associated with image classification module 605 to read or retrieve one or more rules from rule database 603 (e.g., through database access module 604), and use the rules to classify the captured images. The processing device may classify the images into key images and auxiliary images based on the rules.

In the embodiment shown in FIG. 6, memory 550 is configured to store an image processing module 606. The processing device may execute instructions associated with image processing module 606 to perform various analyses and processes of image data captured by the image sensor. For example, the processing device may execute instructions associated with image processing module 606 to identify an object from an image, such as a key image and/or an auxiliary image. As another example, the processing device may execute instructions associated with image processing module 606 to recognize image content (e.g., activities performed by persons, signage and/or advertisement shown on buildings) within at least one key image.

In the embodiment shown in FIG. 6, memory 550 is configured to store an action execution module 607. The processing device may execute instructions associated with action execution module 607 to select alternative actions stored in action database 602, and execute the selected actions. The processing device may select the actions based on, for example, recognized image content from a key image.

FIG. 7 shows an example environment including wearable apparatus 110 for capturing and processing images, consistent with the disclosed embodiments. As shown, wearable apparatus 110 may be carried on necklace 140 worn by user 100. Wearable apparatus 110 may be worn by user 100 on any suitable part of user 100. For example, wearable apparatus 110 may be attached to a belt or shirt of user 100 using clip 420 shown in FIG. 4B. As another example, wearable apparatus 110 may be attached to an arm band or magnetic coupler secured to an arm of user 100. As a further example, wearable apparatus 110 may be attached to a helmet, cap, or hat worn by user 100. Wearable apparatus 110 may include image sensor 220, 220a, and/or 220b (as shown in FIGS. 5A and 5B), which has a field of view indicated by dashed lines 700 and 705. Image sensor 220, 220a, and/or 220b may capture one or more images of the scene or environment in front of user 100. In this example, user 100 may be walking or standing on a street facing a building 710. One or more images captured by image sensor 220, 220a, and/or 220b may include building 710. Building 710 may be a store, and may include a sign 720 with a name of the store, e.g., “Leather Store,” on the front side of the building 710 (hence building 710 may also be referred to as the leather store building 710).

One or more images captured by the image sensors of wearable apparatus 110 may include an advertisement 725 on the front wall of building 710, which may include a picture of a hand bag 730. The image of advertisement 725 may also include a logo 735 of text “CC” included within an oval. Logo 735 may be a brand logo of the hand bag. The image may also include text “Bag” shown in advertisement 725.

One or more images captured by the image sensors of wearable apparatus 110 may include an advertisement 745 on the front wall of building 710. The image may include a picture 755 of a belt 750 having a logo with text “CC,” which may be the brand of the belt. The image may also include text 760 “Waist Belt, Sale 20%” in advertisement 745.

One or more images captured by the image sensors of wearable apparatus 110 may include a person 765, who may carry a hand bag 770, which may include a logo 775 of text “V” included in an oval.

The processing device may analyze or process the captured plurality of images to classify the images into key images and auxiliary images. For example, the processing device may access at least one rule stored in rule database 603 for classifying images. The processing device may classify, according to the at least one rule, at least a first subset of the plurality of images as key images, and at least a second subset of the plurality of images as auxiliary images. For example, the rule may classify images that include person 765 as key images, and classify images that do not include person 765 as auxiliary images. In the example shown in FIG. 7, based on the rule, the processing device may classify a first set of images that include person 765 as key images (e.g., an image including person 765 alone, an image including person 765 and advertisement 725, an image including person 765, advertisement 725 and advertisement 745, etc.). The processing device may classify a second set of images that do not include person 765 as auxiliary images (e.g., an image including advertisement 745 alone, an image including the entire leather store only, etc.).

In some embodiments, the rule may state the opposite. For example, the rule may classify images including an object as auxiliary images, and images not including an object as key images. The object may be picture 755 of a waist belt. Based on this rule, the processing device may classify a first subset of images that include picture 755 of the waist belt as auxiliary images (e.g., an image including only picture 755 of the waist belt, an image including advertisement 745 showing picture 755 of the waist belt and a part of advertisement 725, an image of the entire leather store 710 including picture 755 of the waist belt, etc.). The processing device may classify a second subset of images not including picture 755 of the waist belt as key images (e.g., an image including advertisement 725 only, an image including person 765 only, an image including advertisement 725 and person 765, etc.).
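
As a non-limiting illustration of the classification described above, the sketch below partitions a set of captured images into a first subset of key images and a second subset of auxiliary images under an object-presence rule; the data layout, field names, and labels are hypothetical assumptions, not the disclosed implementation.

```python
# Hypothetical sketch: partition captured images into key and auxiliary subsets
# according to an object-presence rule. Field names and labels are illustrative only.

def classify_images(images, rule):
    key, auxiliary = [], []
    for image in images:
        (key if rule(image["objects"]) == "key" else auxiliary).append(image)
    return key, auxiliary

images = [
    {"id": 1, "objects": {"person_765"}},
    {"id": 2, "objects": {"person_765", "advertisement_725"}},
    {"id": 3, "objects": {"advertisement_745"}},
    {"id": 4, "objects": {"leather_store_710"}},
]

# Rule: images that include person 765 are key images; all others are auxiliary.
person_rule = lambda objs: "key" if "person_765" in objs else "auxiliary"
key_images, auxiliary_images = classify_images(images, person_rule)
print([img["id"] for img in key_images])        # [1, 2]
print([img["id"] for img in auxiliary_images])  # [3, 4]
```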

The rule may classify images according to an image quality level. For example, the rule may classify images as key or auxiliary images based on a predetermined quality threshold, such as a predetermined resolution. In some embodiments, the rule may classify an image having a resolution of less than 3.0 Megapixels as an auxiliary image, and an image having a resolution greater than or equal to 3.0 Megapixels as a key image. In the example shown in FIG. 7, if an image of person 765 has a resolution of less than 3.0 Megapixels, the processing device may classify the image as an auxiliary image. If an image of person 765 has a resolution of greater than or equal to 3.0 Megapixels, the processing device may classify the image as a key image. The 3.0 Megapixel threshold is used only as an example. Other resolutions (e.g., 1 Megapixel, 2 Megapixels, 4 Megapixels, 5 Megapixels, etc.) may be used as the quality threshold, and may be set based on implementations of wearable apparatus 110.
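
A minimal sketch of such a quality-threshold rule follows; the 3.0-megapixel threshold mirrors the example above, while the function name and resolutions are illustrative assumptions.

```python
# Illustrative sketch: classify an image by a predetermined quality threshold
# (here, resolution in megapixels). Names and values are hypothetical.

QUALITY_THRESHOLD_MEGAPIXELS = 3.0

def classify_by_resolution(width_px, height_px,
                           threshold_mp=QUALITY_THRESHOLD_MEGAPIXELS):
    megapixels = (width_px * height_px) / 1_000_000
    return "key" if megapixels >= threshold_mp else "auxiliary"

print(classify_by_resolution(2048, 1536))  # ~3.1 MP -> key
print(classify_by_resolution(1600, 1200))  # ~1.9 MP -> auxiliary
```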

The rule may associate a first importance level to an image including one or more of a face, a product, or text. The first importance level may be represented by a number, such as "1.0", or by a letter, such as "A". In some embodiments, the first importance level may be represented by a symbol, such as "*". For example, wearable apparatus 110 may capture an image including person 765. The processing device may associate, based on the rule, the image including the face of person 765 with the first importance level, such as "1.0". As another example, the processing device may associate an image including hand bag 770 (a product) with the first importance level. As a further example, the processing device may associate an image including "Bag" (text) with the first importance level.

The first importance level may be higher than an importance level of an image that does not include a face, a product, or text. For example, the processing device may associate, based on a rule, an importance level (e.g., represented by a number “0.5”) to an image of the roof of building 710, which does not include a face, a product, or text. The first importance level, represented by the number “1.0”, is higher than the importance level represented by the number “0.5.”

The rule may associate a second importance level, which may be represented by any of the means discussed above in connection with the first importance level, to an image including one or more of a predefined location, a predefined face of a specific individual, a predefined type of object, and a predefined text. For example, the rule may associate a second importance level represented by a number "2.0" to an image including a restaurant (a predefined location), the face of person 765 (a predefined face), a hand bag with logo 735 of the "CC" brand (a predefined type of object), the text "Bag" (a predefined text), or a combination thereof. The second importance level "2.0" is higher than the first importance level "1.0".
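
A minimal sketch of how the first and second importance levels described above might be assigned is shown below; the numeric levels mirror the examples in these paragraphs, while the predefined sets and field names are hypothetical assumptions.

```python
# Illustrative sketch: assign importance levels based on detected image content.
# Predefined sets and content fields are hypothetical placeholders.

PREDEFINED_FACES = {"person_765"}
PREDEFINED_TEXT = {"Bag"}
PREDEFINED_OBJECT_TYPES = {"hand_bag_CC"}
PREDEFINED_LOCATIONS = {"restaurant"}

def importance_level(content):
    """content: dict with sets 'faces', 'products', 'text', 'locations'."""
    # Second (higher) importance level: predefined face, object type, text, or location.
    if (content["faces"] & PREDEFINED_FACES
            or content["products"] & PREDEFINED_OBJECT_TYPES
            or content["text"] & PREDEFINED_TEXT
            or content["locations"] & PREDEFINED_LOCATIONS):
        return 2.0
    # First importance level: any face, product, or text at all.
    if content["faces"] or content["products"] or content["text"]:
        return 1.0
    # Lower importance otherwise (e.g., an image of a building roof).
    return 0.5

roof_image = {"faces": set(), "products": set(), "text": set(), "locations": set()}
print(importance_level(roof_image))  # 0.5
print(importance_level({"faces": {"unknown_face"}, "products": set(),
                        "text": set(), "locations": set()}))  # 1.0
print(importance_level({"faces": {"person_765"}, "products": set(),
                        "text": set(), "locations": set()}))  # 2.0
```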

The processing device may process or analyze at least one key image to recognize image content within the at least one key image. For example, the processing device may classify an image including person 765 and advertisement 725 as a key image. The processing device may analyze this image to recognize that the hand bag 770 carried by person 765 has a logo 775 that is different from logo 735 of hand bag 730 shown in advertisement 725. Based on the recognized image content, the processing device may select one of a plurality of alternative actions associated with the key image. For example, the alternative actions associated with the key image may include transmitting the key image to a computing device (e.g., computing device 120 and/or server 250), and transmitting information regarding the key image to the computing device (such as location information regarding where the key image is captured, time information regarding when the key image is captured, etc.). The processing device may execute the selected action. For example, the processing device may select an action of transmitting the key image to the computing device 120 from a plurality of actions stored in action database 602. The processing device may transmit the key image to the computing device 120, which may also be carried by user 100. The processing device may also transmit information regarding the key image to the computing device. The information regarding the key image may include, for example, that the brand of hand bag carried by person 765 is different from the brand of hand bag shown in advertisement 725, that person 765 probably likes hand bags and she may like the hand bag shown in advertisement 725, and that the "CC" brand of hand bag shown in advertisement 725 is a better brand than the "V" brand that person 765 is carrying. Computing device 120 may display the key image including person 765 along with the information regarding the key image.

FIG. 8 shows an example database table for storing information associated with key images. Database table 800 may be stored in memory 550, memory 550a, memory 550b, and storage devices included in server 250. For example, database table 800 may be stored in image database 601. Database table 800 may include a plurality of rows and columns. The header row showing "Identifier," "Object," "Location," "Date," and "Time," may or may not be part of the actual database table 800. FIG. 8 shows 50 example rows for storing information and data under the categories of "Identifier," "Object," "Location," "Date," and "Time." Three example rows are referenced as 801, 802, and 850. Each row from 801 to 850 may store information regarding captured images, such as key images and/or auxiliary images. The information includes an identity or identifier of an object stored in column 861, a description of an object identified from captured images (e.g., key images and/or auxiliary images) stored in column 862, a location where the image (e.g., key image and/or auxiliary image) was captured, as stored in column 863, a date when the image (e.g., key image and/or auxiliary image) was captured, as stored in column 864, and a time of day when the image (e.g., key image and/or auxiliary image) was captured, as stored in column 865.

As shown in column 861, each object identified from the captured images (e.g., key images and/or auxiliary images) may be associated with a unique identifier stored in database table 800. The identifier may include a number uniquely assigned to the object in database table 800. In some embodiments, the identifier may also include letters (e.g., "ABC," "BCD," etc.). In some embodiments, the identifier may include a symbol (e.g., "#," "$," etc.). In some embodiments, the identifier may include any combination of numbers, letters, and symbols. The processing device (e.g., processor 210 and/or processor 540) may read or retrieve data related to an identified object from database table 800 by pointing or referring to its identifier.

Three example database rows are shown in FIG. 8 for three objects identified from the captured images. The first object is the bag advertisement (e.g., advertisement 725) shown in FIG. 7, which may be associated with an identifier "1001." The location where the bag advertisement 725 was captured may be "15 K Street, Washington, D.C." The date and time when the bag advertisement 725 was captured may be "6/7/2015" and "3:00 p.m."

Referring to the example database table 800 shown in FIG. 8, the second object is the logo 735 of "CC" as shown in FIG. 7, which may be associated with an identifier of "1002." The location where the logo 735 of "CC" was captured may be "15 K Street, Washington, D.C." The date and time when the logo 735 was captured may be "6/7/2015" and "3:00 p.m."

Referring to the example database table 800 shown in FIG. 8, the third object shown in database table 800 is the logo 775 of "V," as shown in FIG. 7. The third object may be associated with an identifier "1050," which indicates that the object of logo 775 may be the fiftieth entry in database table 800. The location where the logo 775 was captured may be a GPS location of "GPS 38.9047° N, 77.0164° W." The date and time when the logo 775 was captured may be "6/15/2015" and "1:00 p.m."

The database table 800 may store other information and data. For example, database table 800 may store a predefined location, a predefined face of a specific individual, a predefined type of object, a predefined text, etc. The database table 800 may also store the importance level (e.g., the first importance level, the second importance level) associated with the images (e.g., the key images and/or the auxiliary images).
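
For illustration only, a table with the columns shown in FIG. 8 could be held in a relational store; the sketch below uses an in-memory SQLite table whose schema, rows, and values are assumptions based on the examples above, not the disclosed implementation.

```python
# Illustrative sketch of a table holding the fields shown in FIG. 8
# (identifier, object, location, date, time). Schema and values are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE key_image_objects (
        identifier TEXT PRIMARY KEY,
        object     TEXT,
        location   TEXT,
        date       TEXT,
        time       TEXT
    )
""")
rows = [
    ("1001", "bag advertisement 725", "15 K Street, Washington, D.C.", "6/7/2015", "3:00 p.m."),
    ("1002", "logo 735 'CC'", "15 K Street, Washington, D.C.", "6/7/2015", "3:00 p.m."),
    ("1050", "logo 775 'V'", "GPS 38.9047 N, 77.0164 W", "6/15/2015", "1:00 p.m."),
]
conn.executemany("INSERT INTO key_image_objects VALUES (?, ?, ?, ?, ?)", rows)

# Retrieve a record by pointing to its identifier, as described above.
for row in conn.execute("SELECT * FROM key_image_objects WHERE identifier = ?", ("1050",)):
    print(row)
```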

FIG. 9 is a flowchart illustrating an example method 900 for selectively processing images, consistent with the disclosed embodiments. Method 900 may be performed by various devices included in wearable apparatus 110, such as, image sensor 220, 220a, and/or 220b and a processing device (e.g., processor 210 and/or processor 540).

Method 900 may include capturing a plurality of images from an environment of user 100 (step 905). For example, image sensor 220, 220a, and/or 220b may capture a plurality of images of the environment of user 100, such as images of the various objects shown in FIG. 7, for example, an image of building 710, an image of advertisement 725, an image of advertisement 745, an image of person 765, an image of hand bag 770, etc.

Method 900 may also include accessing at least one rule for classifying images (step 910). For example, the processing device may access rule database 603 to read or retrieve a rule for classifying images. Method 900 may also include classifying, according to the at least one rule, at least a first subset of the images as key images, and at least a second subset of the images as auxiliary images (step 915). Examples of classifying images into key images and auxiliary images by the processing device are discussed above in connection with FIG. 7. Method 900 may further include deleting at least some of the auxiliary images (step 920). For example, the processing device may delete at least some of the auxiliary images from image database 601. Deleting auxiliary images may save data storage space needed for the operation of wearable apparatus 110, thereby reducing cost.
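
The sketch below strings the steps of method 900 together in simplified form; the capture and rule-access functions are hypothetical stand-ins for image sensor 220 and rule database 603, and the deletion policy shown is illustrative only.

```python
# Illustrative end-to-end sketch of method 900: capture, access a rule, classify,
# and delete at least some auxiliary images. All functions are hypothetical stand-ins.

def capture_images():
    # Stand-in for the image sensor capturing images of the environment (step 905).
    return [{"id": i, "objects": {"person_765"} if i % 2 == 0 else set()} for i in range(6)]

def access_rule():
    # Stand-in for reading a rule from rule database 603 (step 910).
    return lambda img: "key" if "person_765" in img["objects"] else "auxiliary"

def method_900():
    images = capture_images()
    rule = access_rule()
    key = [i for i in images if rule(i) == "key"]            # step 915
    auxiliary = [i for i in images if rule(i) == "auxiliary"]
    auxiliary = auxiliary[:1]                                 # step 920: delete at least some
    return key, auxiliary

key_images, remaining_auxiliary = method_900()
print(len(key_images), len(remaining_auxiliary))  # 3 1
```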

FIG. 10 is a flowchart illustrating an example method 1000 for selectively processing images, consistent with the disclosed embodiments. Method 1000 may be performed by various devices included in wearable apparatus 110, such as image sensor 220, 220a, and/or 220b and a processing device (e.g., processor 210 and/or processor 540). Steps included in method 1000 may be combined with the steps of method 900. For example, method 900 may include steps of method 1000, and method 1000 may include steps of method 900.

Method 1000 may include identifying at least one key image (step 1005). In some embodiments, identifying the at least one key image may be performed by the processing device after step 915 has been performed, e.g., after classifying the images into key images and auxiliary images has been performed. In some embodiments, the processing device may access image database 601 to read or retrieve one or more key images previously classified and stored in image database 601, and identify at least one key image from the plurality of key images.

Method 1000 may include processing the at least one key image to recognize image content within the at least one key image (step 1010). For example, the processing device may process the at least one identified key image to recognize the image content of the key image (e.g., objects and context information included in the key image). In the example shown in FIG. 7, the processing device may analyze a key image including person 765 to recognize the image content. The image content may be, e.g., person 765 carrying a hand bag 770, who appears to be going to the leather store building 710 to do some shopping, or who appears to be waiting to meet someone, etc.

Method 1000 may also include selecting, based on the recognized image content, one of a plurality of alternative actions associated with the key images (step 1015). The alternative actions may include transmitting the at least one key image to a computing device (e.g., computing device 120 and/or server 250), and transmitting information regarding the at least one key image to the computing device. For example, based on the recognized image content, the processing device may select an action of transmitting the key image including person 765 carrying hand bag 770 to computing device 120.

Method 1000 may further include executing the selected action (step 1020). For example, the processing device may transmit the key image to computing device 120, and cause the key image to be displayed to user 100, who may be carrying computing device 120.
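
A minimal sketch of steps 1010 through 1020 follows; the recognition function, action names, and selection criterion are illustrative assumptions, not the disclosed implementation.

```python
# Illustrative sketch of method 1000: recognize content in a key image, select one
# of several alternative actions, and execute it. Action names are hypothetical.

def recognize_content(key_image):
    # Stand-in for image analysis; returns a simple content description (step 1010).
    return {"person": "person_765", "carried_bag_logo": "V", "advertised_bag_logo": "CC"}

ACTIONS = {
    "transmit_image": lambda img, info: print("sending key image", img["id"]),
    "transmit_info":  lambda img, info: print("sending info:", info),
}

def method_1000(key_image):
    content = recognize_content(key_image)
    # Step 1015: pick an action based on the recognized content.
    if content["carried_bag_logo"] != content["advertised_bag_logo"]:
        action = "transmit_info"
    else:
        action = "transmit_image"
    ACTIONS[action](key_image, content)  # step 1020: execute the selected action

method_1000({"id": 2})
```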

Wearable apparatus 110 may capture repetitive images, and may perform methods to discard and/or ignore at least some of the repetitive images. The devices and methods discussed under this heading may be combined with any device and/or method discussed above and below.

In some embodiments, image sensors included in wearable apparatus 110 may capture repetitive images. Two images may be considered repetitive when the images show the same, similar, or overlapping image content. For example, two images may be considered repetitive when they show image content that is substantially the same, or when they capture substantially the same portion of the environment of user 100, or when they capture substantially the same objects, etc. Repetitive images of the environment of user 100 may not all be necessary for obtaining information regarding the environment of user 100. Some of the repetitive images may be deleted, discarded, or ignored during image processing. This may save processing time and increase the image processing speed of wearable apparatus 110.

In some embodiments, the processing device may be programmed to identify at least two or more of a plurality of images as repetitive images. At least one rule stored in rule database 603 may classify at least one of the repetitive images as a key image, and at least one of the repetitive images as an auxiliary image. The key image may be used to generate an image log. The image log may record identifiers of a plurality of key images, locations where the key images are captured, date and time the key images are captured, and descriptions of image content of the key images, etc. Some of the auxiliary images may be deleted.

A first image may also be considered as repetitive of a second image when the first image captures substantially the same objects as the second image, but a predetermined contextual situation associated with the second image no longer exists when the first image was captured. Predetermined contextual situations may include at least one of the following: meeting with an individual, visiting a location, interacting with an object, entering a car, participating in a sport activity, and eating a meal. Other contextual situations may also be defined. The non-existence of such a predetermined contextual situation may cause an image to become a repetitive image. For example, in the example shown in FIG. 7, user 100 may be meeting with person 765. A first image may include person 765 carrying hand bag 770 and advertisement 725 showing an image of hand bag 730. In the first image, the hand bag 730 may provide useful information to user 100. For example, hand bag 730 may have a similar design as hand bag 770 that person 765 is carrying, and the processing device may inform user 100 that person 765 may also like hand bag 730. When a second image is captured, person 765 may have left the scene, and the second image includes only advertisement 725 showing hand bag 730. The second image may no longer be useful to user 100. Thus, due to the non-existence of the predetermined contextual situation, the second image may become a repetitive image of the first image. Wearable apparatus 110 may provide a user interface to allow user 100 to define a plurality of predetermined contextual situations.
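
The sketch below illustrates one possible way to decide that two images are repetitive, either by object overlap or by the ending of a predetermined contextual situation; the overlap threshold, field names, and situation labels are hypothetical assumptions.

```python
# Illustrative sketch of repetitive-image determination. Two images are treated as
# repetitive when their object sets substantially overlap, or when they show the
# same scene but a predetermined contextual situation has ended. Names are hypothetical.

PREDETERMINED_SITUATIONS = {"meeting_with_individual", "visiting_location",
                            "interacting_with_object", "entering_car",
                            "sport_activity", "eating_meal"}

def is_repetitive(later_image, earlier_image, overlap_threshold=0.8):
    a, b = later_image["objects"], earlier_image["objects"]
    overlap = len(a & b) / max(len(a | b), 1)
    if overlap >= overlap_threshold:
        return True
    # The earlier image was tied to a predetermined contextual situation
    # (e.g., meeting person 765) that no longer exists in the later image.
    situation = earlier_image.get("situation")
    return situation in PREDETERMINED_SITUATIONS and later_image.get("situation") != situation

first = {"objects": {"advertisement_725", "person_765"}, "situation": "meeting_with_individual"}
second = {"objects": {"advertisement_725"}, "situation": None}
print(is_repetitive(second, first))  # True
```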

FIG. 11 is a block diagram illustrating a memory (e.g., memory 550, 550a, and/or 550b) according to the disclosed embodiments. Memory 550 may include databases and/or modules that are similar to those shown in FIG. 6. Thus, the descriptions of the same databases and/or modules are not repeated. Databases and/or modules shown in FIG. 11 may be combined with databases and/or modules shown in FIG. 6, or used as alternatives to the databases and/or modules shown in FIG. 6. Databases and/or modules shown in FIG. 6 may also be included in FIG. 11, or used as alternatives to the databases and/or modules shown in FIG. 11.

In the embodiment shown in FIG. 11, memory 550 is configured to store a contextual situation database 1105. Contextual situation database 1105 may be configured to store predetermined contextual situations, such as those discussed above. Memory 550 is also configured to store a repetitive image determination module 1110. The processing device may execute instructions associated with repetitive image determination module 1110 to determine whether two or more images are repetitive. In some embodiments, the processing device may compare image content of the two or more images. For example, the processing device may compare the environment captured in the images, the objects identified from the images, the date and time the images are captured, and the contextual situations associated with the images. When the images show the same environment, the same or similar objects, overlapping environments and/or objects, or the same environment and similar objects where a predetermined contextual situation no longer exists, the processing device may determine that the two or more images are repetitive.

In the embodiment shown in FIG. 11, memory 550 is configured to store an image log generation module 1115. The processing device may execute instructions associated with image log generation module 1115 to generate an image log. For example, the processing device may use identified key images to generate an image log, which may show information such as the identifiers of the images, date and/or time the images are captured, brief descriptions of the image content of the images, etc. The image log may be stored in image database 601.
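
For illustration, an image log of the kind generated by image log generation module 1115 could be a simple list of records; the sketch below assumes hypothetical field names and entries.

```python
# Illustrative sketch: generate a simple image log from identified key images,
# recording an identifier, capture time, and a brief content description.
# Field names and entries are hypothetical.
from datetime import datetime

def generate_image_log(key_images):
    log = []
    for img in key_images:
        log.append({
            "identifier": img["id"],
            "captured_at": img["captured_at"].isoformat(),
            "description": ", ".join(sorted(img["objects"])),
        })
    return log

key_images = [
    {"id": "1001", "captured_at": datetime(2015, 6, 7, 15, 0), "objects": {"advertisement 725"}},
    {"id": "1050", "captured_at": datetime(2015, 6, 15, 13, 0), "objects": {"logo 775", "person 765"}},
]
for entry in generate_image_log(key_images):
    print(entry)
```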

In the embodiment shown in FIG. 11, memory 550 is configured to store an image deletion module 1120. The processing device may execute instructions associated with image deletion module 1120 to delete an image. For example, the processing device may classify, based on a rule, some repetitive images as key images, and some repetitive images as auxiliary images. The processing device may delete at least some of the auxiliary images from image database 601.

FIG. 12 is a flowchart illustrating an example method 1200 for selectively processing images, consistent with the disclosed embodiments. Method 1200 may be performed by various devices included in wearable apparatus 110, such as, image sensor 220, 220a, and/or 220b and a processing device (e.g., processor 210 and/or processor 540). Steps included in method 1200 may be combined with the steps of method 900 and/or method 1000. For example, method 900 and/or method 1000 may include steps of method 1200, and method 1200 may include steps of method 900 and/or method 1000.

Method 1200 may include identifying at least one key image (step 1205). In some embodiments, identifying the at least one key image may be performed by the processing device after step 915 has been performed, e.g., after classifying the images into key images and auxiliary images has been performed. In some embodiments, the processing device may access image database 601 to read or retrieve one or more key images previously classified and stored in image database 601, and identify at least one key image from the plurality of key images.

Method 1200 may include identifying a predetermined contextual situation in the at least one key image (step 1210). For example, the processing device may process the at least one identified key image to identify that a predetermined contextual situation exists in the key image. The processing device may first determine a contextual situation from the key image, and then compare the determined contextual situation with the predetermined contextual situations stored in the contextual situation database 1105, such as meeting with an individual, visiting a location, interacting with an object, entering a car, participating in a sport activity, and eating a meal. When a match is found, the processing device identifies that the predetermined contextual situation exists in the key image. In the example shown in FIG. 7, the predetermined contextual situation may be meeting with an individual. For example, user 100 may be meeting with person 765.

Method 1200 may include storing, in a memory (e.g., memory 550, such as image database 601), the at least one key image associated with the predetermined contextual situation (step 1215). For example, the processing device may store the at least one key image into image database 601. In the example shown in FIG. 7, the predetermined contextual situation may be meeting with an individual (e.g., user 100 may be meeting with person 765). An image having advertisement 725 including the hand bag 730 may be identified as a key image because it provides useful information regarding the hand bag 730, i.e., the hand bag 730 is something person 765 may like. The processing device may store the key image associated with user 100 meeting with person 765 into image database 601.

Method 1200 may also include identifying that the predetermined contextual situation no longer exists in the environment of user 100 (step 1220). For example, the processing device may identify from an image that person 765 has left. Based on this identification, the processing device may identify that the predetermined contextual situation (e.g., meeting with an individual) no longer exists because user 100 is no longer meeting with person 765. After identifying that the predetermined contextual situation no longer exists, the method may further include suspending storage in the memory of key images that are not associated with the predetermined contextual situation (step 1225). For example, after person 765 has left the environment of user 100, user 100 is no longer meeting with person 765. The key image including advertisement 725 that shows hand bag 730 may no longer provide useful information to user 100. The processing device may suspend storing the key image in image database 601.
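
The sketch below gates storage of key images on whether the predetermined contextual situation still exists, mirroring steps 1210 through 1225 in simplified, hypothetical form; the class and field names are assumptions.

```python
# Illustrative sketch of method 1200: store key images while a predetermined
# contextual situation (e.g., meeting with an individual) exists, and suspend
# storage once it no longer exists. All names are hypothetical.

class ContextGatedStorage:
    def __init__(self, situation):
        self.situation = situation
        self.active = False
        self.stored = []

    def process(self, key_image):
        detected = key_image.get("situation")
        if detected == self.situation:
            self.active = True       # steps 1210/1215: situation identified
        elif self.active:
            self.active = False      # steps 1220/1225: situation has ended
        if self.active:
            self.stored.append(key_image["id"])  # store while the situation exists
        # otherwise, storage of this key image is suspended

storage = ContextGatedStorage("meeting_with_individual")
for img in [{"id": 1, "situation": "meeting_with_individual"},
            {"id": 2, "situation": "meeting_with_individual"},
            {"id": 3, "situation": None}]:
    storage.process(img)
print(storage.stored)  # [1, 2]
```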

Wearable apparatus 110 may have a privacy mode, under which wearable apparatus 110 may stop or suspend capturing images of the environment of user 100, or stop or suspend storing captured images. The devices and methods discussed under this heading may be combined with any device and/or method discussed above and below.

In some embodiments, the processing device may be programmed to identify in at least one of the key images a visual trigger associated with a private contextual situation, and suspend storage of images associated with the private contextual situation. The private contextual situation may be a situation where the privacy of a person or a plurality of persons is of concern, which may make it inappropriate for wearable apparatus 110 to capture images including the person or persons. In some embodiments, the processing device may suspend or stop capturing images by the image sensors (e.g., image sensor 220, 220a, 220b) after identifying the visual trigger associated with the private contextual situation. The visual trigger may include a predefined hand gesture, a restroom sign, a toilet, nudity, and/or a face of an individual. For example, the predefined hand gesture may include a hand gesture from a person (e.g., person 765) suggesting that user 100 stop capturing images of the person. In some embodiments, being near or in a restroom or toilet may be a private contextual situation. In some embodiments, being faced with nudity of a person or a part of a person's body may be a private contextual situation. In some embodiments, the processing device may resume storage of key images when the private contextual situation no longer exists.

FIG. 13 is a block diagram illustrating a memory (e.g., memory 550, 550a, and/or 550b) according to the disclosed embodiments. Memory 550 may include databases and/or modules that are similar to those shown in FIG. 6 and FIG. 11. Thus, the descriptions of the same databases and/or modules are not repeated. Databases and/or modules shown in FIG. 13 may be combined with databases and/or modules shown in FIG. 6 and/or FIG. 11, or used as alternatives to the databases and/or modules shown in FIG. 6 and/or FIG. 11. Databases and/or modules shown in FIG. 6 and/or FIG. 11 may also be included in FIG. 13, or used as alternatives to the databases and/or modules shown in FIG. 13.

In the embodiment shown in FIG. 13, memory 550 is configured to store a visual trigger database 1300, which may be configured to store predetermined visual triggers, as discussed above. The processing device may access visual trigger database 1300 to read or retrieve one or more visual triggers, and compare captured images with the visual triggers to determine whether the captured images include one or more of the visual triggers. Once a visual trigger associated with a private contextual situation is identified from the images, the processing device may suspend storage of captured images, or suspend capturing of images of the environment including the visual trigger. When the private contextual situation no longer exists, the processing device may resume storage of the images (e.g., key images), or resume capturing of images (e.g., key images).

FIG. 14 shows an example environment including wearable apparatus 110 for capturing and processing images, consistent with the disclosed embodiments. The environment of user 100 may include a restroom or toilet 1410. The toilet 1410 may include a sign 1420 indicating that it is a toilet (or toilet sign 1420). After identifying the toilet sign 1420 from an image captured by wearable apparatus 110, the processing device may compare the toilet sign 1420 with the visual triggers stored in visual trigger database 1300. The processing device may determine that the camera of wearable apparatus 110 is capturing an image of a toilet or restroom, which indicates that there is a private contextual situation in front of user 100 and the wearable apparatus 110. The processing device may stop and/or suspend capturing images of the environment of user 100 including the toilet 1410, or stop and/or suspend storing images captured by the image sensors in image database 601. When user 100 walks away from toilet 1410 such that the images captured by wearable apparatus 110 no longer include toilet 1410, the processing device may resume capturing images of the environment of user 100, and/or resume storage of images (e.g., key images) of the environment of user 100.

FIG. 15 is a flowchart illustrating an example method 1500 for selectively processing images, consistent with the disclosed embodiments. Method 1500 may be performed by various devices included in wearable apparatus 110, such as, image sensor 220, 220a, and/or 220b and a processing device (e.g., processor 210 and/or processor 540). Steps included in method 1500 may be combined with the steps of methods 900, 1000, and 1200. For example, methods 900, 1000, and 1200 may each include steps of method 1500, and method 1500 may include steps of methods 900, 1000, and 1200.

Method 1500 may include capturing a plurality of images from an environment of user 100 (step 1505). For example, image sensors included in wearable apparatus 110 may capture a plurality of images from the environment of user 100 shown in FIG. 7 or FIG. 14. Method 1500 may also include accessing at least one rule for classifying images (step 1510). For example, the processing device may access rule database 603 to read or retrieve at least one rule for classifying images as key images and auxiliary images. Method 1500 may also include classifying, according to the at least one rule, a plurality of images as key images (step 1515). For example, a rule may classify an image including a face of a person, a product, or text as a key image. The processing device may classify a plurality of images including person 765, hand bag 730, and text "Bag," as captured from the environment shown in FIG. 7, as key images. Method 1500 may further include identifying, in at least one of the key images, a visual trigger associated with a private contextual situation (step 1520). For example, the processing device may identify from a key image including person 765 a hand gesture suggesting that user 100 stop capturing images of person 765. The hand gesture suggesting that user 100 stop capturing images may be associated with a private contextual situation. For example, person 765 may want privacy, and may not wish to be captured in any image. As another example, the processing device may identify that there is a private contextual situation in the environment of user 100, such as a toilet or restroom, as shown in FIG. 14. Method 1500 may further include deleting the at least one of the key images that includes the visual trigger associated with the private contextual situation (step 1525).
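
A minimal sketch of method 1500's privacy handling follows; the trigger labels, the stand-in key-image rule, and the data layout are illustrative assumptions only.

```python
# Illustrative sketch of method 1500: classify images as key images, then delete any
# key image containing a visual trigger associated with a private contextual
# situation (e.g., a restroom sign or a predefined hand gesture). Names are hypothetical.

PRIVACY_TRIGGERS = {"restroom_sign", "toilet", "stop_hand_gesture", "nudity"}

def method_1500(images, is_key):
    key_images = [img for img in images if is_key(img)]            # step 1515
    # Steps 1520/1525: identify and delete key images containing a privacy trigger.
    return [img for img in key_images if not (img["objects"] & PRIVACY_TRIGGERS)]

images = [
    {"id": 1, "objects": {"person_765", "text_Bag"}},
    {"id": 2, "objects": {"person_765", "stop_hand_gesture"}},
    {"id": 3, "objects": {"toilet", "restroom_sign"}},
]
is_key = lambda img: bool(img["objects"])  # simple stand-in rule: any content makes it key
kept = method_1500(images, is_key)
print([img["id"] for img in kept])  # [1]
```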

The foregoing description has been presented for purposes of illustration. It is not exhaustive and is not limited to the precise forms or embodiments disclosed. Modifications and adaptations will be apparent to those skilled in the art from consideration of the specification and practice of the disclosed embodiments. Additionally, although aspects of the disclosed embodiments are described as being stored in memory, one skilled in the art will appreciate that these aspects can also be stored on other types of computer readable media, such as secondary storage devices, for example, hard disks or CD ROM, or other forms of RAM or ROM, USB media, DVD, Blu-ray, or other optical drive media.

Computer programs based on the written description and disclosed methods are within the skill of an experienced developer. The various programs or program modules can be created using any of the techniques known to one skilled in the art or can be designed in connection with existing software. For example, program sections or program modules can be designed in or by means of .Net Framework, .Net Compact Framework (and related languages, such as Visual Basic, C, etc.), Java, C++, Objective-C, HTML, HTML/AJAX combinations, XML, or HTML with included Java applets.

Moreover, while illustrative embodiments have been described herein, the scope includes any and all embodiments having equivalent elements, modifications, omissions, combinations (e.g., of aspects across various embodiments), adaptations and/or alterations as would be appreciated by those skilled in the art based on the present disclosure. The limitations in the claims are to be interpreted broadly based on the language employed in the claims and not limited to examples described in the present specification or during the prosecution of the application. The examples are to be construed as non-exclusive. Furthermore, the steps of the disclosed methods may be modified in any manner, including by reordering steps and/or inserting or deleting steps. It is intended, therefore, that the specification and examples be considered as illustrative only, with a true scope and spirit being indicated by the following claims and their full scope of equivalents.

Claims

1. A wearable apparatus for selectively processing images, the wearable apparatus comprising:

an image sensor configured to capture a plurality of images from an environment of a user; and
at least one processing device programmed to: access at least one rule for classifying images; classify, according to the at least one rule, at least a first subset of the plurality of images as key images and at least a second subset of the plurality of images as auxiliary images; and delete at least some of the auxiliary images.

2. The wearable apparatus of claim 1, wherein the at least one rule classifies images that include an object in the environment of the user as key images and classifies images that do not include the object as auxiliary images.

3. The wearable apparatus of claim 1, wherein the at least one rule classifies images that include an object in the environment of the user as auxiliary images and classifies images that do not include the object as key images.

4. The wearable apparatus of claim 1, wherein the at least one rule classifies images according to an image quality level.

5. The wearable apparatus of claim 1, wherein the at least one rule associates a first importance level to an image including one or more of a face, a product, and text.

6. The wearable apparatus of claim 5, wherein the first importance level is higher than an importance level of an image that does not include a face, a product, or text.

7. The wearable apparatus of claim 5, wherein the at least one rule associates a second importance level to an image including one or more of a predefined location, a predefined face of a specific individual, a predefined type of object, and a predefined text.

8. The wearable apparatus of claim 7, wherein the second importance level is higher than the first importance level.

9. The wearable apparatus of claim 1, wherein the at least one processing device is further programmed to store information associated with at least one of the key images.

10. The wearable apparatus of claim 9, wherein the information stored in association with the at least one key image includes an identity of an object identified in the at least one key image.

11. The wearable apparatus of claim 9, wherein the information stored in association with the at least one key image includes one or more of: a location of where the key image was taken, a date when the key image was taken, and a time of day when the key image was taken.

12. The wearable apparatus of claim 1, wherein the at least one processing device is further programmed to:

process at least one key image to recognize image content within the at least one key image;
select, based on the recognized image content, one of a plurality of alternative actions associated with the key image; and
execute the selected action.

13. The wearable apparatus of claim 12, wherein the plurality of alternative actions include transmitting the at least one key image to a computing device and transmitting information about the at least one key image to the computing device.

14. The wearable apparatus of claim 1, wherein the at least one processing device is further programmed to determine that two or more of the key images include the same object and delete at least one of the two or more key images that include the same object.

15. The wearable apparatus of claim 1, wherein the at least one processing device is further programmed to identify at least two or more of the plurality of images as repetitive images, and wherein the at least one rule for classifying images is to classify at least one of the repetitive images as a key image, and at least one of the repetitive images as an auxiliary image.

16. The wearable apparatus of claim 1, wherein at least one of the key images is used to generate an image log.

17. The wearable apparatus of claim 1, further comprising a memory, and wherein the at least one processing device is further programmed to:

identify a predetermined contextual situation in at least one key image;
store, in the memory, the at least one key image associated with the predetermined contextual situation; and
after identifying that the predetermined contextual situation no longer exists in the environment of the user, suspend storage in the memory of key images that are not associated with the predetermined contextual situation.

18. The wearable apparatus of claim 17, wherein the predetermined contextual situation includes at least one of the following: meeting with an individual, visiting a location, interacting with an object, entering a car, participating in a sport activity, and eating a meal.

19. The wearable apparatus of claim 17, wherein the at least one processing device is further programmed to identify in at least one of the key images a visual trigger associated with a private contextual situation, and suspend storage of images associated with the private contextual situation.

20. The wearable apparatus of claim 19, wherein the visual trigger includes a predefined hand gesture, a restroom sign, a toilet, nudity, and/or a face of an individual.

21. The wearable apparatus of claim 19, wherein the at least one processing device is further programmed to resume storage of key images when the private contextual situation no longer exists.

22. A wearable apparatus for selectively processing images, the wearable apparatus comprising:

an image sensor configured to capture a plurality of images from an environment of a user; and
at least one processing device programmed to: access at least one rule for classifying images; classify, according to the at least one rule, a plurality of images as key images; identify, in at least one of the key images, a visual trigger associated with a private contextual situation; and delete the at least one of the key images that includes the visual trigger associated with the private contextual situation.

23. The wearable apparatus of claim 22, wherein the visual trigger includes a predefined hand gesture, a restroom sign, a toilet, nudity, and/or a face of an individual.

24. The wearable apparatus of claim 22, wherein the at least one processing device is further programmed to identify that the private contextual situation no longer exists, and to store key images, in a memory, after identification that the private contextual situation no longer exists.

25. The wearable apparatus of claim 24, wherein identifying that the private contextual situation no longer exists includes identifying a predefined hand gesture.

26. A method for selectively processing images, the method comprising:

processing a plurality of images captured by at least one image sensor included in a wearable apparatus;
accessing at least one rule for classifying images;
classifying, according to the at least one rule, at least a first subset of the plurality of images as key images and at least a second subset of the plurality of images as auxiliary images; and
deleting at least some of the auxiliary images.

27. The method of claim 26, wherein the at least one rule associates an image quality level indicator with an image based on predetermined criteria.

28. The method of claim 26, wherein the at least one rule associates a first importance level to an image including at least one of a face, a product, and text.

29. The method of claim 28, wherein the first importance level is higher than an importance level of an image that does not include a face, a product, and/or text.

30. The method of claim 28, wherein the at least one rule associates a second importance level to an image including at least one of a predefined location, a predefined face of a specific individual, a predefined type of object, and a predefined text, and wherein the second importance level is higher than the first importance level.

31. A software product stored on a non-transitory computer readable medium and comprising data and computer implementable instructions for carrying out the method of claim 26.

Patent History
Publication number: 20160026870
Type: Application
Filed: Jul 23, 2015
Publication Date: Jan 28, 2016
Applicant: ORCAM TECHNOLOGIES LTD. (Jerusalem)
Inventors: Yonatan Wexler (Jerusalem), Amnon Shashua (Mevaseret Zion)
Application Number: 14/807,038
Classifications
International Classification: G06K 9/00 (20060101); H04N 5/225 (20060101); H04N 7/18 (20060101); H04N 5/232 (20060101);