Camera Interface in a Portable Handheld Electronic Device

In accordance with some embodiments, a method is performed at a handheld electronic device having a built-in digital camera and a touch sensitive screen. The method includes detecting a multi-finger gesture on the touch sensitive screen, wherein the touch sensitive screen is serving as part of an electronic viewfinder of the camera; storing coordinates of a location corresponding to the detected multi-finger gesture; translating the stored coordinates to a selected area of an image that is captured by the camera and that is being displayed on the touch sensitive screen; contracting or expanding the selected area in response to the user's fingers undergoing a pinching movement or a spreading movement, respectively, while the detected multi-finger gesture remains in contact with the touch sensitive screen; and applying an automatic image capture parameter adjustment process that gives priority to the selected area.

Description
RELATED APPLICATION

This application claims priority to U.S. Provisional Patent App. No. 61/083,455, “Camera Interface in a Portable Handheld Electronic Device,” filed Jul. 24, 2008, which is incorporated by reference herein in its entirety.

TECHNICAL FIELD

The disclosed embodiments relate generally to portable handheld electronic devices, such as cellular telephone handsets and digital cameras, and more particularly to a user interface having a touch sensitive screen for controlling camera functions.

BACKGROUND

Portable handheld electronic devices, such as the IPHONE multifunction device by Apple Inc., have a built-in digital camera, in addition to other functions such as cellular telephony and digital audio and video file playback. The IPHONE device, in particular, has a touch sensitive screen as part of its user interface. The touch screen lets the user select a particular application program to be run, by performing a single finger gesture on the touch sensitive screen. For example, the user can point to (touch) the icon of a particular application, which results in the application being automatically launched in the device. The camera application, in particular, allows the user to navigate, directly on the touch screen, amongst previously stored pictures taken using the camera. In addition, there is a shutter button icon that can be touched by the user to release the shutter and thereby take a picture of the scene that is before the camera. Other uses of the touch sensitive screen include navigating around a displayed Web page by single finger gestures, and zooming into a displayed Web page by performing a so-called multi-finger spread gesture on the touch sensitive screen. The user can also zoom out of the Web page by performing a multi-finger pinch gesture.

SUMMARY

Several methods for operating a built-in digital camera of a portable, handheld electronic device are described. In some embodiments, a method is performed at a handheld electronic device having a built-in digital camera and a touch sensitive screen. The method includes detecting a multi-finger gesture on the touch sensitive screen, wherein the touch sensitive screen is serving as part of an electronic viewfinder of the camera; storing coordinates of a location corresponding to the detected multi-finger gesture; translating the stored coordinates to a selected area of an image that is captured by the camera and that is being displayed on the touch sensitive screen; contracting or expanding the selected area in response to the user's fingers undergoing a pinching movement or a spreading movement, respectively, while the detected multi-finger gesture remains in contact with the touch sensitive screen; and applying an automatic image capture parameter adjustment process that gives priority to the selected area. This gives the user finer control of auto focus, auto exposure, and auto white balance (“3A”) adjustments in the camera.

In some embodiments, a handheld electronic device is provided which comprises a touch sensitive screen; a detector configured to detect a multi-finger gesture on the touch sensitive screen and store coordinates of a location of the detected gesture; and a digital camera. The digital camera includes an image sensor, a lens to form an optical image on the image sensor, a viewfinder module configured to display on the touch sensitive screen a scene at which the lens is aimed, and a priority module coupled to the detector. The priority module is configured to translate the stored coordinates to a selected area of a digital image of the scene that is being displayed on the touch sensitive screen by the viewfinder module, contract or expand the selected area in response to the user's fingers undergoing a pinching movement or a spreading movement, respectively, and apply an automatic image capture parameter adjustment process that gives priority to the selected area for taking a picture of the scene. Thus, a multi-touch pinch or spread gesture may define the hint or priority area for calculating exposure parameters.

In some embodiments, a method is performed at a handheld electronic device having a built-in digital camera and a touch sensitive screen. The method includes detecting an initial finger gesture by a user on the touch sensitive screen, wherein the touch sensitive screen serves as part of an electronic viewfinder of the camera; storing coordinates of the initial finger gesture; detecting a closed path on the touch sensitive screen that includes the location of the detected initial finger gesture, wherein the user's finger moves while remaining in contact with the touch sensitive screen to define the closed path, and storing coordinates of the closed path. The method also includes translating the stored coordinates of the closed path to a selected portion of an image that is captured by the camera and that is being displayed on the touch sensitive screen; and applying an automatic image capture parameter adjustment process that gives priority to the selected portion.

In some embodiments, an apparatus is provided, which comprises a handheld electronic device configured to operate at least in a digital camera mode and a mobile telephone mode. The digital camera mode is configured to permit a user of the apparatus to take a digital picture of a scene, while the mobile telephone mode is configured to permit the user of the apparatus to participate in a wireless telephone call and hear the call through a built-in receiver of the apparatus. Further, the apparatus has a button exposed to the user that alternatively controls loudness of the built-in receiver when the apparatus is operating in the mobile telephone mode, and the button acts as a shutter button when the apparatus is operating in the digital camera mode.

Other embodiments are also described.

The above summary does not include an exhaustive list of all aspects of the present invention. It is contemplated that the invention includes all systems and methods that can be practiced from all suitable combinations of the various aspects summarized above, as well as those disclosed in the Detailed Description below and particularly pointed out in the claims filed with the application. Such combinations may have particular advantages not specifically recited in the above summary.

BRIEF DESCRIPTION OF THE DRAWINGS

The embodiments of the invention are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings in which like references indicate similar elements. It should be noted that references to “an” or “one” embodiment of the invention in this disclosure are not necessarily to the same embodiment, and they mean at least one.

FIG. 1 shows a portable handheld device having a built-in digital camera and a touch sensitive screen, in the hands of its user undergoing a single finger gesture during a still image capture process.

FIG. 2 is a flow diagram of operations in the electronic device during a still image capture process, in accordance with FIG. 1.

FIG. 3 shows the portable handheld electronic device undergoing a multi-finger gesture during a still image capture process.

FIG. 4 is a flow diagram of operations in the electronic device during a still image capture process, in accordance with FIG. 3.

FIG. 5 illustrates another embodiment of the invention, where the user draws a polygon through a single finger touch gesture, to define the priority area for image capture.

FIG. 6 is a flow diagram of still image capture in the electronic device, in accordance with the embodiment of FIG. 5.

FIG. 7 shows a block diagram of an example, portable handheld multifunction device in which an embodiment of the invention may be implemented.

DETAILED DESCRIPTION

In this section several preferred embodiments of this invention are explained with reference to the appended drawings. Whenever the shapes, relative positions and other aspects of the parts described in the embodiments are not clearly defined, the scope of the invention is not limited only to the parts shown, which are meant merely for the purpose of illustration.

FIG. 1 shows a portable handheld electronic device 100 having a built-in digital camera and a touch sensitive screen 104 in the hand of its user, undergoing a finger gesture during a still image capture process. In this example, the portable device 100 is shown while it is held in the user's left hand 107, and the user's right hand 109 is making the finger gesture on the touch screen. The device 100 may be an IPHONE device by Apple Inc., of Cupertino, Calif. Alternatively, it could be any other portable handheld electronic device that has a built-in digital camera and a touch sensitive screen. The built-in digital camera includes a lens 103, located in this example on the back face of the device 100. The lens may be a fixed optical lens system or it may have focus and optical zoom capability. Although not depicted in FIG. 1, inside the device 100 are an electronic image sensor and associated hardware circuitry and software that can capture a digital image of a scene 102 that is before the lens 103.

The digital camera functionality of the device 100 includes an electronic or digital viewfinder (also referred to as a preview function). The viewfinder displays live, captured video of the scene 102 that is before the camera, on a portion of the touch sensitive screen 104 as shown. In this case, the digital camera also includes a soft or virtual shutter button whose icon 105 is displayed by the screen 104, directly below the viewfinder image area. As an alternative or in addition, a physical shutter button may be implemented in the device 100. The device 100 includes all of the needed circuitry and/or software for implementing the digital camera functions of the electronic viewfinder (726, FIG. 7), shutter release, and automatic image capture parameter adjustment as described below.

In FIG. 1, the user performs a single-finger gesture on the touch sensitive screen 104 as shown. In this example, the finger gesture is formed by the user's right hand 109 (although it could just as well be the user's left hand 107). The user positions the single-finger touch gesture on a preview portion of the touch screen. The device 100 has detected this touch down and has automatically drawn a marker 96 (in this case, a closed contour that has a box shape), centered around the location of the touch down. The user then moves her right hand 109 around the preview portion of the touch screen, to a location of the image of the scene 102 that corresponds to an object in the scene (or some portion of the scene) to which priority should be given when the digital camera adjusts the image capture parameters in preparation for taking a picture of the scene. For example, the user may move the marker 96 from up above the mountains and the trees down towards a location near the ground, or to where a man is walking. After the marker has been dragged to the portion of the scene to which the user wants the camera to give priority, the user may lift her finger off the screen, which in turn signals the camera to accept the final location of the marker, and the underlying portion of the image, as the priority area of the scene. Once the user has finalized the selection of this priority area, she can command the digital camera to take a picture, after adjusting the image capture parameters to give priority to the selected area. This may be done by, for example, lifting her finger off the touch sensitive display screen, which not only finalizes the location of the hint area but also automatically signals the device to take the picture after adjusting the parameters. A flow diagram of operations for taking the digital picture, in accordance with the above, is shown in FIG. 2.

Referring now to FIG. 2, after having powered on the device 100 and placed it in digital camera mode, a viewfinder function begins execution, which displays video of the scene 102 that is before the camera lens 103 (block 22). The user aims the camera lens so that the desired portion of the scene appears on the preview portion of the screen 104. While monitoring the screen, a camera application (or a touch screen application) running in the device 100 detects a single-finger touch gesture and stores screen coordinates of its location (block 24). A marker 96 is then automatically displayed around the screen location of the touch gesture (block 26). The marker 96 is moved around the preview portion of the touch screen, in lock step with the user moving her finger touch gesture along the surface of the touch screen (block 28). The area of the image of the scene shown in the preview portion that underlies the final location of the marker is defined to be the area selected by the user for priority (block 29). This priority area may be finalized, for example, in response to the user lifting off her finger. The priority area of the image may be a fixed chunk of pixels that are about coextensive with the boundary of the marker 96. Alternatively, the priority area may be an object in the scene located at or near the marker 96, as detected by the camera application using digital image processing techniques.
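By way of a concrete illustration of blocks 24 through 29, the mapping from a touch location on the preview portion of the screen to a priority area of the captured image might look like the following Swift sketch. The types, function names, and the assumption of a uniformly scaled preview are editorial illustrations, not details taken from the disclosure.

```swift
/// Hypothetical sketch of blocks 24/26/29 of FIG. 2: mapping a marker's
/// location on the preview portion of the screen to a pixel region of the
/// captured image. All names and the linear mapping are assumptions.
struct Rect {
    var x, y, width, height: Double
}

/// Map a touch point in preview (screen) coordinates to image (pixel)
/// coordinates, assuming the preview is a uniformly scaled view of the image.
func imagePoint(fromScreen p: (x: Double, y: Double),
                preview: Rect,
                imageWidth: Int, imageHeight: Int) -> (x: Int, y: Int) {
    let nx = (p.x - preview.x) / preview.width   // normalize to 0...1
    let ny = (p.y - preview.y) / preview.height
    return (Int(nx * Double(imageWidth)), Int(ny * Double(imageHeight)))
}

/// The priority area: a fixed chunk of pixels roughly coextensive with the
/// marker 96, centered on the final touch location (block 29), clamped so
/// it stays inside the image.
func priorityArea(centeredAt c: (x: Int, y: Int),
                  side: Int, imageWidth: Int, imageHeight: Int) -> Rect {
    let half = side / 2
    let x = max(0, min(c.x - half, imageWidth - side))
    let y = max(0, min(c.y - half, imageHeight - side))
    return Rect(x: Double(x), y: Double(y), width: Double(side), height: Double(side))
}
```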

Once the selected area has been determined, an automatic image capture parameter adjustment process is applied by the device 100 to give priority to the selected area (block 30). Additional details of this process will be explained below. Once the parameters have been adjusted in block 30, the picture can be taken, for example, when the user gives the shutter release command (block 32). Several ways of defining the shutter release command are also described below. Thus, the process described above gives the user finer control of picture taking adjustments.

In some embodiments, during the automatic image capture parameter adjustment process, the marker 96 is displayed in a variable state to indicate that one or more parameters are being adjusted.

For example, in some embodiments, the marker 96 is displayed in an alternating sequence of colors, such as white, blue, white, blue, while the automatic image capture parameter adjustment process sets priority to the selected area (e.g., the marker 96 changes color while the camera is focusing). In some embodiments, the display of marker 96 includes an animation of the boundary of the marker oscillating or “wiggling” on screen while the automatic image capture parameter adjustment process gives priority to the selected area under the location of the marker.

In some embodiments, after the automatic image capture parameter adjustment process is completed, display of marker 96 is terminated.
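A minimal sketch of this marker feedback, under the assumption that a periodic animation tick drives the color alternation and that display ends when adjustment completes; the type and method names are illustrative only:

```swift
/// The marker's "variable state": while 3A adjustment runs, the color
/// alternates white/blue on each tick; when adjustment completes, the
/// marker is hidden.
enum MarkerColor { case white, blue }

struct AdjustingMarker {
    private(set) var color: MarkerColor = .white
    private(set) var isVisible = true

    /// Called on each animation tick while parameters are being adjusted.
    mutating func tick() { color = (color == .white) ? .blue : .white }

    /// Called when the automatic adjustment process has completed.
    mutating func adjustmentFinished() { isVisible = false }
}
```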

FIG. 3 shows another embodiment of the invention, in which the user defines the priority area, this time by a multi-finger gesture. In this example, the multi-finger gesture is also formed by the user's right hand 109 (although it could just as well be the user's left hand 107, while the device is held by the user's right hand 109). In particular, the thumb and index finger are brought close to each other or touch each other, simultaneously with their tips being in contact with the surface of the screen 104, to create two contact points thereon. The user positions this multi-touch gesture, namely the two contact points, at a location of the image of the scene 102 that corresponds to an object in the scene (or portion of the scene) to which priority should be given when the digital camera adjusts the image capture parameters in preparation for taking a picture of the scene. In this example, the user has selected the location where a person appears between a mountainside in the background and a tree in the foreground.

In response to detecting the multi-touch finger gesture, the device 100 may cause a contour 106, in this example the outline of a box, to be displayed on the screen 104, around the location of the detected multi-finger gesture. The contour 106 is associated, e.g. by software running in the device 100, with a taken or selected priority area of the image (to which priority will be given in the image capture parameter adjustment process). The user can then contract or expand the size of the priority area by making a pinching movement or a spreading movement, respectively, with the thumb and index finger of her right hand 109 while the fingertips remain in contact with the touch sensitive screen 104. The device 100 has the needed hardware and software to distinguish between a pinching movement and a spreading movement, and appropriately contracts or expands the size of the priority area.
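One plausible way to make this distinction is to compare the distance between the two contact points across successive touch samples, as in the following sketch; the sampling model and the threshold value are editorial assumptions:

```swift
/// Distinguish a pinching movement from a spreading movement by watching
/// how the spacing between the two fingertips changes over time.
func fingerDistance(_ a: (x: Double, y: Double), _ b: (x: Double, y: Double)) -> Double {
    ((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y)).squareRoot()
}

enum TwoFingerMove { case pinch, spread, none }

func classify(previous: ((x: Double, y: Double), (x: Double, y: Double)),
              current: ((x: Double, y: Double), (x: Double, y: Double)),
              threshold: Double = 2.0) -> TwoFingerMove {
    let delta = fingerDistance(current.0, current.1) - fingerDistance(previous.0, previous.1)
    if delta > threshold { return .spread }    // fingertips moving apart
    if delta < -threshold { return .pinch }    // fingertips closing together
    return .none
}
```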

Once the user has finalized the selected area to which priority is to be given, she can command the digital camera to take a picture after adjusting the image capture parameters to give priority to the selected area. This may be done by, for example, lifting her fingers off of the touch sensitive display screen 104 and then actuating the shutter release button (e.g., touching and then lifting off the soft shutter button icon 105). If instead the user would like default image capture parameter values to be used, then she would simply actuate the generic shutter button icon 105 without first touching the preview of the scene that is being displayed. Several alternatives to this process will now be described.

Turning now to FIG. 4, a flow diagram of operations for taking a digital picture using the device 100, in accordance with an embodiment of the invention is shown. After having powered on the device 100 and placed it in digital camera mode, an electronic viewfinder function begins execution which displays video of the scene 102 that is before the camera lens 103 (block 402). The user can now aim the camera lens so that the desired portion of the scene appears on the touch sensitive screen 104. While monitoring the screen, a camera application (or a separate touch screen application) running in device 100 detects a multi-finger touch gesture, and stores screen coordinates of its location (block 404). These screen coordinates are then translated to refer to a corresponding area of an image of the scene (block 406). Thus, for example, referring back to FIG. 3, the screen location of the contour 106 is compared to the pixel content of the displayed image, within and near that contour (or underlying the contour), to determine that an object, in this case an image of a man walking, is present in that location. The pixels that make up the man thus become the selected or taken area of the image of the scene.

Next, the device 100 may contract or expand the selected area, in response to the user's fingers undergoing a pinching movement or a spreading movement, respectively, while remaining in contact with the touch sensitive screen (block 408). Thus, in the example of FIG. 3, the user can expand the selected area, to include more pixels of the image, by spreading the index finger and thumb of the right hand 109, while they are in contact with the screen. This may be reflected by the device 100 enlarging the contour 106 that is being displayed. The device 100 detects the spreading movement and in response allocates more pixels to the selected area, for example, equally in all directions. In a similar manner, the device 100 will contract the selected area (i.e., allocate fewer pixels of the image to define the selected area) in response to detecting that the user's fingers are undergoing a pinching movement, that is, the thumb and index finger are moved closer towards each other.
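For illustration, contraction and expansion might be realized by scaling the selected area's bounding rectangle about its center by the ratio of the new finger spacing to the old, clamped to the image bounds. The mapping from finger spacing to scale factor is an assumption; the disclosure does not fix it:

```swift
/// Rect as in the earlier sketch: an axis-aligned region in image pixels.
struct Rect { var x, y, width, height: Double }

/// Grow or shrink the selected area about its center (block 408). The scale
/// factor would come from the ratio of the new to old finger spacing (an
/// editorial assumption), and the result is clamped to the image bounds.
func scaled(_ r: Rect, by factor: Double,
            imageWidth: Double, imageHeight: Double) -> Rect {
    let cx = r.x + r.width / 2
    let cy = r.y + r.height / 2
    var w = r.width * factor
    var h = r.height * factor
    w = min(max(w, 8), imageWidth)     // keep a sane minimum size
    h = min(max(h, 8), imageHeight)
    let x = min(max(cx - w / 2, 0), imageWidth - w)
    let y = min(max(cy - h / 2, 0), imageHeight - h)
    return Rect(x: x, y: y, width: w, height: h)
}
```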

With the selected area having been determined in this manner (block 408), the device 100 next applies an automatic image capture parameter adjustment process that gives priority to the selected area (block 410). This process may include making automatic adjustments to focus, exposure, and color correction (e.g., white balance) parameters. These are sometimes referred to as 3A adjustments. The adjusted parameters will be applied by the camera when “taking the next picture” of a scene that is before it. Focus adjustment may include making movement adjustments to an optical lens system of the device 100, so as to focus on the selected area of the scene. Exposure adjustments include changing the exposure time or integration time of the entire image sensor or portions of the image sensor, based upon characteristics of the selected area including its brightness (rather than that of the entire image). Similarly, adjustments to the color correction parameters may include changing the parameters used to apply a white balance algorithm to a raw image obtained from the image sensor of the camera (that will ultimately become the “picture” of the scene). As described above, in some embodiments, during the automatic image capture parameter adjustment process, the marker 96 is displayed in a variable state to indicate that one or more parameters are being adjusted.
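As a sketch of the exposure portion of these 3A adjustments, the camera might meter only the selected area and steer the integration time toward a mid-gray target. The target value and the simple proportional control law are editorial assumptions; production auto-exposure loops are considerably more elaborate:

```swift
/// Exposure metering restricted to the priority region, rather than the
/// entire image. The region is assumed to lie inside the image bounds.
struct Region { var x, y, width, height: Int }

/// Mean luma of the priority region of an 8-bit grayscale image stored
/// row-major in `pixels`.
func meanLuma(pixels: [UInt8], width: Int, region: Region) -> Double {
    var sum = 0
    for row in region.y ..< region.y + region.height {
        for col in region.x ..< region.x + region.width {
            sum += Int(pixels[row * width + col])
        }
    }
    return Double(sum) / Double(region.width * region.height)
}

/// Adjust the integration time so the priority area lands near mid-gray
/// (target of 118 on a 0-255 scale is an assumption).
func nextExposure(current: Double, measuredLuma: Double, target: Double = 118) -> Double {
    guard measuredLuma > 0 else { return current * 2 }   // very dark: open up
    return current * (target / measuredLuma)
}
```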

Once the parameters have been adjusted in block 410, the picture is taken when the user gives the shutter release command. There are several ways of actually completing the process of taking the picture, in response to the device 100 detecting that a shutter release command has been invoked (block 412). For example, output from the image sensor may not be accepted until after having detected that the multi-finger gesture has been lifted off the touch sensitive screen. In other words, the camera takes the shot only after the user lifts her fingers off the touch screen. The picture is, of course, taken using the image capture parameters that have been adjusted to give priority to the selected area.

In another embodiment, the picture is taken only after expiration of a timer that was set upon the parameters having been adjusted. For example, a shutter button may be depressed half-way by the user, to signify that the image capture parameters be adjusted to give priority to the selected area, and then after the parameters have been adjusted, the device 100 waits a predetermined time interval before accepting the user's command to take the shot (e.g., upon the user pressing the shutter button the rest of the way).
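A sketch of this half-press embodiment, with the settle interval and all state names assumed for illustration:

```swift
import Foundation

/// Half press adjusts the parameters and sets a timer; the full press is
/// honored only after the timer expires.
final class ShutterController {
    private var readyAt: Date?
    private let settleInterval: TimeInterval

    init(settleInterval: TimeInterval = 0.5) { self.settleInterval = settleInterval }

    func halfPress(adjustParameters: () -> Void) {
        adjustParameters()                       // 3A runs with the priority area
        readyAt = Date().addingTimeInterval(settleInterval)
    }

    /// Returns true if the full press is accepted and the picture is taken.
    func fullPress(takePicture: () -> Void) -> Bool {
        guard let t = readyAt, Date() >= t else { return false }
        takePicture()
        readyAt = nil
        return true
    }
}
```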

In another embodiment, the camera function of the device 100 tracks movement of an object in the scene, that has been captured in the selected area of the image, as the object and the device 100 move relative to each other and while the multi-touch gesture is still present on the touch screen. The camera could, for example, maintain focus on the moving object only so long as the multi-touch gesture is on the screen, and then at the moment the multi-touch gesture has lifted off the screen, a still picture of the moving object is taken. Alternatively, focus would be maintained even after the multi-touch gesture has lifted off, and then the picture of the moving object is taken when a separate virtual or physical shutter button is actuated by the user.
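The control flow of this tracking embodiment can be sketched as a small event handler: focus follows the object while the gesture is down, and lift-off triggers the still capture. The tracker itself is abstracted behind a closure, and all names are assumptions:

```swift
enum TouchEvent {
    case moved(x: Double, y: Double)   // the tracked object's position changed
    case liftedOff                     // the multi-touch gesture left the screen
}

/// Maintain focus on the moving object while the gesture is present;
/// take the still picture at the moment of lift-off.
final class TrackAndShoot {
    private let refocus: (Double, Double) -> Void
    private let capture: () -> Void

    init(refocus: @escaping (Double, Double) -> Void,
         capture: @escaping () -> Void) {
        self.refocus = refocus
        self.capture = capture
    }

    func handle(_ event: TouchEvent) {
        switch event {
        case let .moved(x, y): refocus(x, y)   // keep focus on the moving object
        case .liftedOff: capture()             // shoot at lift-off
        }
    }
}
```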

The above-described process, for the use of a multi-touch pinch or spread gesture to define the hint or priority area for calculating exposure parameters, may be extended to complex scenes in which there may be two or more distinct priority areas that the user would like to define. For example, the user may wish to select for priority both a dark shadow portion and a medium tone portion, but not a bright portion, of the scene.

Referring now to FIGS. 5 and 6, another embodiment of the invention is now described for the portable handheld electronic device having a built-in digital camera and touch sensitive screen. In this embodiment, the user can draw an arbitrarily sized and shaped contour 306 on the touch screen 104, around the priority or hint area of the scene that is being displayed by the viewfinder. Referring now to FIG. 6, the device 100 captures and displays live video of the scene 102 to which the camera lens 103 is pointed, using a digital viewfinder function on the touch sensitive screen 104 (block 602). While showing this live video, the device 100 monitors the screen 104 for a single finger touch gesture. The initial single finger touch gesture is then detected and screen coordinates of its location are stored, while monitoring the screen (block 604). Next, as the user's finger moves, while remaining in contact with the screen, to define a closed path, the closed path is detected on the touch sensitive screen and coordinates of the closed path are stored (block 606). Essentially simultaneously, the contour 306 is being drawn on the preview area of the touch screen that underlies the user's finger touch. The detected closed path may include the location of the detected initial finger gesture. The stored coordinates of the closed path are translated to a portion of the image that is being displayed on the screen 104 (block 608). Thus, as seen in FIG. 5, the user's finger is tracing a closed path, which is depicted by a contour 306, that is surrounding an image of a man walking in the scene. The stored screen coordinates of the closed path are translated to the selected area of the image of the scene, i.e. a graphical object representing the walking man (block 608). The rest of the process may be as described above, namely, the application of an automatic image capture parameter adjustment process that gives priority to the selected area defined within the contour or path 306 (block 610), and taking a picture of the scene using the parameters as they have been adjusted to give priority to the selected area of the scene, in response to detecting a shutter release command (block 612).
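To make the translation of block 608 concrete: the stored closed-path coordinates form a polygon, and the selected portion is the set of image pixels inside it. Even-odd ray casting is one standard membership test; the disclosure does not specify which algorithm is used:

```swift
/// Translate a traced closed path (contour 306) into a pixel selection.
struct Pt { var x, y: Double }

/// Even-odd rule: count crossings of a horizontal ray from the test point.
func contains(_ polygon: [Pt], _ p: Pt) -> Bool {
    var inside = false
    var j = polygon.count - 1
    for i in 0 ..< polygon.count {
        let a = polygon[i], b = polygon[j]
        if (a.y > p.y) != (b.y > p.y),
           p.x < (b.x - a.x) * (p.y - a.y) / (b.y - a.y) + a.x {
            inside.toggle()
        }
        j = i
    }
    return inside
}

/// Build a boolean mask of the pixels that fall within the traced contour.
func selectionMask(polygon: [Pt], width: Int, height: Int) -> [[Bool]] {
    (0 ..< height).map { y in
        (0 ..< width).map { x in
            contains(polygon, Pt(x: Double(x) + 0.5, y: Double(y) + 0.5))
        }
    }
}
```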

As above, the shutter release command may include simply the act of the user lifting off her finger, following the initial single finger touch gesture and the tracing of the contour 306. As an alternative, the user may simply wish to accept the default image capture parameter values available in the device 100, and so may take the picture by pressing the generic shutter button (e.g., icon 105 or physical menu button 108 of the device 100), without first touching the preview area of the touch screen 104.

In another embodiment of the invention, the user can zoom in and zoom out of the preview of the scene, using multi-touch spread and pinch gestures, respectively. In other words, with the electronic viewfinder function of the camera running, so that the touch screen 104 has a preview portion that is displaying live video of the scene to which the camera lens is pointed, a multi-finger gesture is detected on the touch screen. The preview portion then displays either a zoomed-in or zoomed-out version of the scene, in response to the user's fingers undergoing a spreading movement or a pinching movement on the touch screen. Thereafter, the user can lift off the multi-touch gesture and initiate a second multi-touch gesture, or a single touch gesture, that selects an area of the zoomed-in (or zoomed-out) preview for purposes of 3A adjustment. In other words, zooming in or zooming out is performed immediately prior to selecting the priority area of the previewed scene, and just prior to taking a picture of the scene (according to the zoom setting and priority area selected). Note that zooming into or out of the scene may be implemented using an optical zoom capability of the device 100, and/or a digital zoom capability.
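A sketch of the zoom mapping, assuming the zoom factor scales with the ratio of the current finger spacing to the spacing at touch down, clamped to an assumed supported range:

```swift
/// Map finger spacing to a preview zoom factor. `startZoom` is the zoom in
/// effect when the two fingers touched down; the 1x-5x clamp is an assumed
/// supported range, not a figure from the disclosure.
func zoomFactor(initialSpacing: Double, currentSpacing: Double,
                startZoom: Double,
                minZoom: Double = 1.0, maxZoom: Double = 5.0) -> Double {
    guard initialSpacing > 0 else { return startZoom }
    let z = startZoom * (currentSpacing / initialSpacing)  // spread > 1, pinch < 1
    return min(max(z, minZoom), maxZoom)
}
```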

In yet another embodiment of the invention, a volume control button of the device 100 can also be used as a shutter release button. Referring back to FIG. 1, the portable handheld electronic device 100 has at least two modes of operation, namely a digital camera mode and a mobile telephone mode. In the camera mode, a user of the device is to point the lens 103 of the device at a scene and then take a digital picture of the scene using the built-in camera functionality. In the telephone mode, the user is to participate in a wireless telephone call and will hear the call through a built-in receiver 112 (ear speaker) of the device. The device 100 also has a button 110, in this case located on a side of the external enclosure of the device 100 as shown (as opposed to its front face, its back face, or its top and bottom sides). The button 110 is exposed to the user and, in the telephone mode, controls the loudness of the receiver 112. In the camera mode, however, the button 110 acts as a generic shutter button. The button 110 may be any one of a variety of different types, and generally is actuated by the user in different directions to increase and decrease, respectively, the loudness of the receiver 112 in the telephone mode. Shutter release occurs in this case when the button is actuated by the user in either direction (while in the camera mode). Thus, in camera mode, there may be two generic shutter buttons available to the user, where either one can be used to take a picture: the shutter button 110 (which also acts as the loudness or volume button in telephone mode), and the virtual shutter button icon 105, which is positioned immediately below the preview image area of the touch screen 104.
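The dual role of button 110 amounts to a mode-dependent dispatch, which might be sketched as follows; the mode and direction names are editorial assumptions:

```swift
/// Dual-role button 110: in telephone mode the two actuation directions
/// step the receiver volume; in camera mode either direction releases
/// the shutter.
enum Mode { case telephone, camera }
enum ButtonPress { case up, down }

func handleVolumeButton(_ press: ButtonPress, mode: Mode,
                        volume: inout Int, takePicture: () -> Void) {
    switch mode {
    case .telephone:
        volume += (press == .up ? 1 : -1)    // loudness of the receiver 112
        volume = min(max(volume, 0), 10)
    case .camera:
        takePicture()                        // either direction releases the shutter
    }
}
```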

Turning now to FIG. 7, a block diagram of an example portable, handheld electronic device 100 is shown, in accordance with an embodiment of the invention. The device 100 may be a personal computer, such as a laptop, tablet, or handheld computer. Alternatively, the device 100 may be a cellular phone handset, personal digital assistant (PDA), or a multi-function consumer electronic device, such as the IPHONE device.

The device 100 has a processor 704 that executes instructions to carry out operations associated with the device 100. The instructions may be retrieved from memory 720 and, when executed, control the reception and manipulation of input and output data between various components of device 100. Although not shown, the memory 720 may store an operating system program that is executed by the processor 704, and one or more application programs are said to run on top of the operating system to perform different functions described below. The touch sensitive screen 104 displays a graphical user interface (GUI) to allow a user of the device 100 to interact with various application programs running in the device 100. The GUI displays icons or graphical images that represent application programs, files, and their associated commands on the screen 104. These may include windows, fields, dialog boxes, menus, buttons, cursors, scrollbars, etc. During operation, the user can select and activate various graphical images to initiate functions associated therewith.

The touch screen 104 also acts as an input device, to transfer data from the outside world into the device 100. This input is received via, for example, the user's finger touching the surface of the screen 104. The screen 104 and its associated circuitry recognize touches, as well as the position, and perhaps the magnitude and duration, of touches on the surface of the screen 104. This may be done by a gesture detector program 722 that is executed by the processor 704. Note that a dedicated processor may be provided to process touch inputs, in order to reduce demand for the main processor of the system. The touch sensing capability of the screen 104 may be based on technology such as capacitive sensing, resistive sensing, or other suitable solid state technologies. The touch sensing may be based on single point sensing or multi-point (multi-touch) sensing. Single point touch sensing is capable of distinguishing only a single touch, while multi-point sensing is capable of distinguishing multiple touches that occur at the same time.

The input device aspect of the touch screen 104 may be integrated with its display device. The input device may be positioned “on top of” the display device, so that the user can manipulate the GUI directly by, for example, placing her finger on top of an object that is being displayed, in order to control that object. Note that this is different from how a touchpad works, because a touchpad has no such one-to-one relationship. With touchpads, the input device is not aligned with the display device, and the two are sometimes in different planes altogether. Additional details concerning the touch sensitive screen 104 and operation of the gesture detector 722 to detect user gestures (in this case, single and multi-touch finger gestures) are described in U.S. Patent Application Publication No. 2006/0026535, entitled “Mode-Based Graphical User Interfaces for Touch Sensitive Input Devices”. The gesture detector 722 recognizes the occurrence of gestures and informs one or more software agents running in the device 100 of these gestures and/or what actions to take in response to such gestures. A gesture may be identified as a command for performing a certain action in an application program, and in particular, in a camera application as described below.

A wide range of different gestures may be defined and used. For example, a static gesture does not involve motion, while a dynamic gesture is one that includes motion, e.g. movement of a single or multi-touch point on the screen 104. A continuous gesture is one that is performed in a single stroke in contact with the screen 104, whereas a segmented gesture is one that is performed in a sequence of distinct steps or strokes, including at least one lift off from the touch screen 104. In addition, the device 100 may recognize a gesture and take an associated action at essentially the same time as the gesture, that is, the gesture and the action simultaneously occur side-by-side rather than being a two-step process. For example, during a scrolling gesture, the graphical image of the screen moves in lock step with the finger motion. In another example, an object presented on the display device continuously follows the gesture that is occurring on the input device, that is, there is a one-to-one relationship between the gesture being performed and the object shown on the display portion. For example, during a zooming gesture, fingers may spread apart or close together (pinch) in order to cause the object shown on the display to zoom in during the spread and zoom out during the close or pinch. These are controlled by the processor 704 executing instructions that may be part of the gesture detector program 722, or another application program such as a priority (camera) application program 728.

Still referring to FIG. 7, camera functionality of the device 100 may be enabled by the following components. A solid state image sensor 706 is built into the device 100 and may be located at a focal plane of an optical system that includes the lens 103. An optical image of a scene before the camera is formed on the image sensor 706, and the sensor 706 responds by capturing the scene in the form of a digital image or picture consisting of pixels that will then be stored in memory 720. The image sensor 706 may include a solid state image sensor chip with several options available for controlling how an image is captured. These options are set by image capture parameters that can be adjusted automatically, by the priority (camera) application 728. The priority application 728 can make automatic adjustments, that is, without specific user input, to focus, exposure and color correction parameters (sometimes referred to as 3A adjustments) based on a hint or priority portion of the scene that is to be imaged. This selected or target area may be computed by the priority application 728, by translating the stored coordinates of the detected gesture to certain pixel coordinates of a digital image of the scene that is being displayed at the moment the touch gesture occurs. The priority application 728 may contract or expand this selected area in response to receiving an indication from the gesture detector 722 that the user's fingers are undergoing a pinch or spread movement, respectively. Once the selected area (hint) has been finalized, the priority application 728 will apply an automatic image capture parameter adjustment process that adjusts one or more image capture parameters, to give priority to the selected area for taking a picture of the scene.

Still referring to FIG. 7, the device 100 may operate not just in a digital camera mode, but also in a mobile telephone mode. This is enabled by the following components of the device 100. An integrated antenna 708 that is driven and sensed by RF circuitry 710 is used to transmit and receive cellular network communication signals from a nearby base station (not shown). A mobile phone application 724 executed by the processor 704 presents mobile telephony options on the touch sensitive screen 104 for the user, such as a virtual telephone keypad with call and end buttons. The mobile phone application 724 also controls at a high level the two-way conversation in a typical mobile telephone call, by allowing the user to speak into the built-in microphone 714 while at the same time being able to hear the other side of the conversation through the receiver or ear speaker 112. The mobile phone application 724 also responds to the user's selection of the receiver volume, by detecting actuation of the physical volume button 110. Although not shown, the processor 704 may include a cellular base band processor that is responsible for much of the digital audio signal processing functions associated with a cellular phone call, including encoding and decoding the voice signals of the participants to the conversation.

The device 100 may be placed in either the digital camera mode or the mobile telephone mode, in response to, for example, the user actuating a physical menu button 108 and then selecting an appropriate icon on the display device of the touch sensitive screen 104. In the telephone mode, the mobile phone application 724 controls loudness of the receiver 112, based on a detected actuation or position of the physical volume button 110. In the camera mode, the priority (camera) application 728 responds to actuation of the volume button 110 as if the latter were a physical shutter button (for taking pictures). This use of the volume button 110 as a physical shutter button may be an alternative to a soft or virtual shutter button whose icon is simultaneously displayed on the display device of the screen 104 during camera mode (see, e.g. FIG. 3, where icon 105 may be a generic virtual shutter button (default exposure parameters) and is displayed below the preview portion of the display device of the touch sensitive screen 104).

An embodiment of the invention may be a machine-readable medium having stored thereon instructions which program a processor to perform some of the operations described above. In other embodiments, some of these operations might be performed by specific hardware components that contain hardwired logic. Those operations might alternatively be performed by any combination of programmed computer components and custom hardware components.

A machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, Compact Disc Read-Only Memory (CD-ROM), Read-Only Memory (ROM), Random Access Memory (RAM), Erasable Programmable Read-Only Memory (EPROM), and transmission over the Internet.

The invention is not limited to the specific embodiments described above. For example, the multi-finger touch down may be defined as a set of one or more predetermined patterns detected in the input device of the touch sensitive screen 104. For example, a particular pattern may be defined for the joint tips of the index finger and thumb of the same hand, being pressed against the touch screen, for a certain interval of time. Alternatively, a pattern may be defined by the tips of the index finger and thumb being spaced apart from each other and held substantially in that position for a predetermined period of time. There are numerous other variations to different aspects of the invention described above, which in the interest of conciseness have not been provided in detail. Accordingly, other embodiments are within the scope of the claims.
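One such predetermined pattern, two fingertips held nearly motionless for a dwell interval, might be recognized as in the following sketch; all thresholds and names are editorial assumptions:

```swift
import Foundation

/// A two-finger touch-down pattern: both contact points must stay near
/// where they first touched down for at least the dwell interval.
struct TouchSample {
    var t: TimeInterval
    var p1: (x: Double, y: Double)
    var p2: (x: Double, y: Double)
}

func isHeldPattern(_ samples: [TouchSample],
                   dwell: TimeInterval = 0.3,
                   maxDrift: Double = 6.0) -> Bool {
    guard let first = samples.first, let last = samples.last,
          last.t - first.t >= dwell else { return false }
    // Each fingertip must stay near where it first touched down.
    func drift(_ a: (x: Double, y: Double), _ b: (x: Double, y: Double)) -> Double {
        ((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y)).squareRoot()
    }
    return samples.allSatisfy {
        drift($0.p1, first.p1) <= maxDrift && drift($0.p2, first.p2) <= maxDrift
    }
}
```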

Claims

1. A method, comprising:

at a handheld electronic device having a built-in digital camera and a touch sensitive screen: detecting a multi-finger gesture on the touch sensitive screen, wherein the touch sensitive screen is serving as part of an electronic viewfinder of the camera; storing coordinates of a location corresponding to the detected multi-finger gesture; translating the stored coordinates to a selected area of an image that is captured by the camera and that is being displayed on the touch sensitive screen; contracting or expanding the selected area in response to the user's fingers undergoing a pinching movement or a spreading movement, respectively, while the detected multi-finger gesture remains in contact with the touch sensitive screen; and applying an automatic image capture parameter adjustment process that gives priority to the selected area.

2. The method of claim 1, wherein applying the automatic image capture parameter adjustment process includes making automatic adjustments to one or more parameters selected from the group consisting of focus, exposure, and color correction, and wherein the camera is configured to apply the automatic adjustments when taking a picture.

3. The method of claim 1, further comprising:

displaying on the touch sensitive screen a contour around the location of the detected multi-finger gesture; and
associating the contour with the selected area of the image.

4. The method of claim 1, further comprising:

tracking movement of an object captured in the selected area of the image, as the object and the device move relative to each other.

5. The method of claim 1, further comprising:

taking a picture, using image capture parameters that have been adjusted to give priority to the selected area in response to detecting that the multi-finger gesture has lifted off the touch sensitive screen.

6. The method of claim 1, further comprising:

taking a picture, using image capture parameters that have been adjusted to give priority to the selected area, in response to an expiration of a timer that was set upon the parameters having been adjusted.

7. The method of claim 1, further comprising:

detecting a further multi-finger gesture on the touch sensitive screen;
storing coordinates of a location corresponding to the further multi-finger gesture; and
translating the stored coordinates to a further selected area of the image that is captured by the camera and that is being displayed on the touch sensitive screen,
wherein the automatic image capture parameter adjustment process is applied to give priority to both the selected area and the further selected area.

8. The method of claim 7, wherein the selected area and the further selected area are two distinct user-defined priority areas.

9. The method of claim 8, wherein the selected area corresponds to a dark shadow area of the image to be captured by the camera, and the further selected area corresponds to a medium tone area of the image to be captured by the camera.

10. The method of claim 1, further comprising:

while the touch sensitive screen is displaying a scene at which the camera is pointed: detecting a multi-finger gesture made by a user on the touch sensitive screen; and zooming into or out of the scene, in response to detecting the user's fingers undergoing a spreading movement or a pinching movement on the touch sensitive screen.

11. The method of claim 10, wherein the zooming into or out of the scene comprises performing a digital zoom.

12. The method of claim 10, wherein the zooming into or out of the scene comprises performing an optical zoom.

13. The method of claim 1, further comprising:

while the automatic image capture parameter adjustment process sets priority to the selected area, displaying a marker in a variable state to indicate that one or more parameters are being adjusted.

14. A handheld electronic device, comprising:

a touch sensitive screen;
a detector configured to detect a multi-finger gesture on the touch sensitive screen and store coordinates of a location of the detected gesture; and
a digital camera, including: an image sensor, a lens to form an optical image on the image sensor, a viewfinder module configured to display on the touch sensitive screen a scene at which the lens is aimed, and a priority module coupled to the detector, wherein the priority module is configured to: translate the stored coordinates to a selected area of a digital image of the scene that is being displayed on the touch sensitive screen by the viewfinder module, contract or expand the selected area in response to the user's fingers undergoing a pinching movement or a spreading movement, respectively, and apply an automatic image capture parameter adjustment process that gives priority to the selected area for taking a picture of the scene.

15. The handheld electronic device of claim 14, wherein the application of the automatic image capture parameter adjustment process includes making at least one automatic adjustment to one or more parameters selected from the group consisting of focus, exposure, and color correction, and wherein the camera is configured to apply the automatic adjustments when taking a picture.

16. The handheld electronic device of claim 14, wherein the digital camera is configured to:

display a graphical object on the touch sensitive screen that is associated with a virtual shutter button of the digital camera; and
take the picture of the scene in accordance with image capture parameters, which are set to a default setting when the virtual shutter button is actuated, and adjusted to give priority to an area of the scene selected by the multi-finger touch gesture.

17. The handheld electronic device of claim 16, wherein the graphical object representing the virtual shutter button is displayed below the preview.

18. A method, comprising:

at a handheld electronic device having a built-in digital camera and a touch sensitive screen: detecting an initial finger gesture by a user on the touch sensitive screen, wherein the touch sensitive screen serves as part of an electronic viewfinder of the camera; storing coordinates of the initial finger gesture; detecting a closed path on the touch sensitive screen that includes the location of the detected initial finger gesture, wherein the user's finger moves while remaining in contact with the touch sensitive screen to define the closed path; storing coordinates of the closed path; translating the stored coordinates of the closed path to a selected portion of an image that is captured by the camera and that is being displayed on the touch sensitive screen; and applying an automatic image capture parameter adjustment process that gives priority to the selected portion.

19. An apparatus, comprising:

a handheld electronic device configured to operate at least in a digital camera mode and a mobile telephone mode, wherein:
the digital camera mode is configured to permit a user of the apparatus to take a digital picture of a scene, and
the mobile telephone mode is configured to permit the user of the apparatus to participate in a wireless telephone call and hear the call through a built-in receiver of the apparatus,
wherein the apparatus has a button exposed to the user that alternatively: controls loudness of the built-in receiver when the apparatus is operating in the mobile telephone mode, and acts as a shutter button when the apparatus is operating in the digital camera mode.

20. The apparatus of claim 19, wherein:

the button is configured to be actuated by the user in two different directions to increase and decrease, respectively, loudness in the mobile telephone mode; and
shutter release occurs when the button is actuated by the user in either one of said directions in the digital camera mode.

21. The apparatus of claim 19, further comprising:

a built-in touch sensitive screen that serves as part of an electronic viewfinder in the digital camera mode, and
wherein the digital camera mode is configured to display a shutter release button on the touch sensitive screen.
Patent History
Publication number: 20100020221
Type: Application
Filed: Jul 23, 2009
Publication Date: Jan 28, 2010
Inventors: David John Tupman (San Francisco, CA), E-Cheng Chang (San Francisco, CA)
Application Number: 12/508,534
Classifications
Current U.S. Class: With Electronic Viewfinder Or Display Monitor (348/333.01); Touch Panel (345/173); 348/E05.093
International Classification: G06F 3/041 (20060101); H04N 5/222 (20060101); H04N 5/38 (20060101);