SYSTEM AND METHOD FOR CROPPING AND ANNOTATING IMAGES ON A TOUCH SENSITIVE DISPLAY DEVICE

The present invention is a system and method for cropping and annotating images on a touch sensitive display device or other handheld device, including the following steps: (a) displaying an image of the image file to be cropped/annotated; (b) receiving a first input from a user designating a first point in the image defining a corner of a crop/annotation rectangle; (c) receiving a second input from the user designating a second point in the image defining an opposite corner of the crop/annotation rectangle; and (d) cropping and/or annotating the image from the first point to the second point of the crop/annotation rectangle. The present invention may be used in digital cameras, Apple iPhones®, hand-held devices that inspectors may use to annotate photographs taken to substantiate statements of problems found during industrial inspections, and for other purposes.

Description
REFERENCE TO RELATED APPLICATIONS

This application claims priority from provisional application Ser. No. 61/122,632, filed on Dec. 15, 2008, and entitled “A system, method and apparatus for inspections and compliance verification of industrial equipment using a handheld device,” the entirety of which is hereby incorporated by reference herein. This application is related to co-pending application Ser. No. 12/489,313, filed on Jun. 22, 2009, and entitled “A system and method for carrying out an inspection or maintenance operation with compliance tracking using a handheld device,” the entirety of which is hereby incorporated by reference herein.

FIELD OF THE INVENTION

The present invention is generally related to user-interfaces for touch-sensitive displays. More specifically, the instant invention relates to a system and method for capturing, cropping, and annotating images on a touch sensitive display device or other handheld device.

BACKGROUND OF THE INVENTION

Cropping and annotating images is important in graphic user interfaces (GUIs), both for manipulating images in graphics applications such as Adobe Photoshop®, Microsoft Paint®, and the like, and also for cropping and annotating images for insertion into textual documents such as Adobe Portable Document Format (PDF)®, Microsoft Word®, and the like.

Multiple end-use applications require cropping and annotating images, including reference manuals, encyclopedias, educational texts, inspection reports, and the like. For example, U.S. Ser. No. 12/489,313, filed on Jun. 22, 2009, entitled “A system and method for carrying out an inspection or maintenance operation with compliance tracking using a handheld device,” describes a method for carrying out an inspection on a piece of industrial equipment and generating inspection reports in the field. An inspector out in the field carrying out an inspection operation needs a convenient, quick, and accurate method to crop and annotate images taken in the field.

One prior method of image cropping and annotating is shown in FIG. 1. Window 1 (101) shows an image of a plant 108, which could be any image or photograph, previously stored in memory or taken live right before the cropping/annotating operation. A mouse 103 is used to click at a point on the screen 102 (shown as dashed cross hair 102), to perform a drag operation 110 (shown as dashed line 110) while holding down the mouse button to another point on the screen 106 (shown as solid cross hair 106), and then to release the mouse button at the point 106. Points 102 and 106 represent the lower-left-hand (LLH) corner and the upper-right-hand (URH) corner, respectively, of the rectangular bounding box representing the crop area. In the prior art method, after the crop area has been selected, a user hits, selects, or otherwise operates a menu bar, selecting the crop/annotate operation, which then either crops or annotates the image using the LLH 102 and URH 106 points to define the bounding box. This operation is cumbersome and requires multiple mouse operations, and furthermore is generally only useable in the desktop environment with the availability of a peripheral device such as a mouse. It is generally not suitable for use in a handheld or other field device, or in locations or situations where greater flexibility and fewer interactions with the GUI are highly desirable.

Applications of the present invention include digital cameras, digital video cameras, phones with built-in cameras, phones with built-in display devices, such as the Apple iPhone®, and the like. In general, the present invention may be used to provide a simple and convenient method to crop and annotate images in situations and locations where such ease is important and/or necessary.

For example, one concrete application of the present invention is related to supplying a convenient user interface for a handheld device used for industrial inspection and maintenance compliance systems, as described in related U.S. Ser. No. 12/489,313. The present invention provides an easy mechanism for on-site inspectors to quickly crop and annotate images in the field to substantiate problems found during an inspection.

One of ordinary skill in the art will find many useful applications of the present invention in which a convenient and easy way is needed to either crop or annotate images on a touch-sensitive display or other hand-held device.

It is against this background that various embodiments of the present invention were developed.

BRIEF SUMMARY OF THE INVENTION

The present invention is a system and method for cropping and annotating images on a touch sensitive display device or other handheld device.

One embodiment of the present invention is a method for cropping images, including the steps of (a) displaying an image of the image file to be cropped; (b) receiving a first input from a user designating a first point in the image defining a corner of a crop rectangle, e.g., a lower left hand corner; (c) receiving a second input from the user designating a second point in the image defining the opposite corner of the crop rectangle, e.g. an upper right hand corner; and (d) cropping the image to the crop rectangle defined by the two corners when the second input is released.
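
Although the invention is not limited to any particular implementation, step (d) can be illustrated with a brief sketch. The sketch below uses the Pillow imaging library in Python purely for illustration; the function name crop_between_points, the example file name, and the choice of Pillow are assumptions, not part of the claimed method.

    # Illustrative sketch only; Pillow and the function name are assumptions.
    from PIL import Image

    def crop_between_points(image, first_point, second_point):
        """Crop `image` to the rectangle whose opposite corners are the two inputs.

        The two points may arrive in either order (e.g., lower-left then
        upper-right), so the box is normalized with min/max before cropping.
        """
        (x1, y1), (x2, y2) = first_point, second_point
        box = (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))
        return image.crop(box)  # returns a new image; the original is left untouched

    # Hypothetical usage:
    # cropped = crop_between_points(Image.open("plant.jpg"), (40, 300), (260, 80))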

Another embodiment of the present invention is the method described above also including the step of displaying on the image a location of the first point.

Another embodiment of the present invention is the method described above also including the step of displaying a rectangle corresponding to the crop rectangle of the image before cropping the image.

Another embodiment of the present invention is the method described above where if a user does not immediately release the second input, allowing the user to drag the second point to visually edit a shape and a size of the crop rectangle.

Another embodiment of the present invention is the method described above where if a user drags the second point near an edge of the displayed image, and the image is larger than the displayed portion, then scrolling the displayed portion to show the area of the image in a direction of the dragged point.
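
A minimal sketch of this scrolling behavior follows; the function name, the margin, and the scroll step are assumptions chosen only to make the idea concrete.

    # Illustrative sketch only; the names, margin, and step size are assumptions.
    def scroll_toward_drag(offset, viewport, image_size, drag_point, margin=20, step=10):
        """Nudge the scroll offset toward whichever edge the dragged point is near.

        offset     -- current top-left of the displayed portion within the full image
        viewport   -- (width, height) of the display area
        image_size -- (width, height) of the full image
        drag_point -- current drag position in viewport coordinates
        """
        ox, oy = offset
        vw, vh = viewport
        iw, ih = image_size
        dx, dy = drag_point
        if dx < margin:
            ox -= step               # near the left edge: reveal image to the left
        elif dx > vw - margin:
            ox += step               # near the right edge: reveal image to the right
        if dy < margin:
            oy -= step
        elif dy > vh - margin:
            oy += step
        # Clamp so the displayed portion never leaves the image; if the image is
        # not larger than the viewport, the offset stays at (0, 0).
        ox = max(0, min(ox, iw - vw))
        oy = max(0, min(oy, ih - vh))
        return ox, oy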

Another embodiment of the present invention is the method described above also including the step of displaying the cropped image in the display area in place of the original image.

Another embodiment of the present invention is the method described above also including the step of scaling the cropped image to fill the entire display area.
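
A one-line sketch of this scaling step, again using Pillow as an assumed illustration, might look like the following; an implementation that preserves aspect ratio would letterbox instead of stretching.

    # Illustrative sketch only; Pillow and the function name are assumptions.
    from PIL import Image

    def scale_to_display(cropped, display_size):
        """Scale the cropped image so it fills the entire display area."""
        return cropped.resize(display_size)  # stretches to the exact display size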

Yet another embodiment of the present invention is a method of annotating an image (where annotating an image includes superimposing one or more geometrical shapes on top of the image), the method including the steps of (a) displaying an image of the image file to be annotated; (b) receiving a first input from a user designating a first point in the image defining a corner of an annotation rectangle, e.g. the lower left hand corner; (c) receiving a second input from the user designating an opposite corner of the annotation rectangle, e.g., the upper right hand corner; and (d) annotating the image in the annotation rectangle defined by the two corners when the second input is released.
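
As with cropping, the annotation step (d) can be sketched with Pillow for illustration; the helper name, the default color, and the line width below are assumptions.

    # Illustrative sketch only; Pillow, the defaults, and the name are assumptions.
    from PIL import Image, ImageDraw

    def annotate_rectangle(image, first_point, second_point,
                           color=(255, 0, 0), line_width=3):
        """Superimpose a rectangle whose opposite corners are the two designated points."""
        annotated = image.copy()     # draw on a copy so the original stays unmodified
        (x1, y1), (x2, y2) = first_point, second_point
        box = (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))
        ImageDraw.Draw(annotated).rectangle(box, outline=color, width=line_width)
        return annotated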

Another embodiment of the present invention is the method described above also including the step of displaying on the image a location of the first point.

Another embodiment of the present invention is the method described above also including the step of displaying a shape corresponding to the annotation of the image before annotating the image.

Another embodiment of the present invention is the method described above where if a user does not immediately release the second input, allowing the user to drag the second point to visually show a shape and a size of the annotation area.

Another embodiment of the present invention is the method described above where if a user drags the second point near an edge of the displayed image, and the image is larger than the displayed portion, then scrolling the displayed portion to show the area of the image in a direction of the dragged point.

Another embodiment of the present invention is the method described above also including the step of displaying the annotated image in the display area in place of the original image.

Another embodiment of the present invention is the method described above also including the step of receiving a third input representing a type of shape and a characteristic of the shape corresponding to the annotation.

Another embodiment of the present invention is the method described above where the shape is, but is not limited to, a line, a rectangle, an ellipse, or a circle. Another embodiment of the present invention is the method described above where the characteristics of the shape include, but are not limited to, a line type, a line width, and a line color.
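
One way to read this embodiment is as a dispatch from the chosen shape type and characteristics to a drawing primitive. The sketch below is an assumption for illustration only; it uses Pillow, whose ImageDraw module has no built-in dashed-line support, so the line-type characteristic is omitted here.

    # Illustrative sketch only; the drawing calls and defaults are assumptions.
    from PIL import ImageDraw

    def draw_annotation(image, shape, box, color=(255, 0, 0), width=3):
        """Draw `shape` ('line', 'rectangle', 'ellipse', or 'circle') inside `box`.

        `box` is (x0, y0, x1, y1) with x0 <= x1 and y0 <= y1.
        """
        draw = ImageDraw.Draw(image)
        x0, y0, x1, y1 = box
        if shape == "line":
            draw.line(box, fill=color, width=width)
        elif shape == "rectangle":
            draw.rectangle(box, outline=color, width=width)
        elif shape == "ellipse":
            draw.ellipse(box, outline=color, width=width)
        elif shape == "circle":
            side = min(x1 - x0, y1 - y0)   # inscribe a circle in the bounding box
            draw.ellipse((x0, y0, x0 + side, y0 + side), outline=color, width=width)
        else:
            raise ValueError(f"unknown annotation shape: {shape}")
        return image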

The present invention also includes a related system by which the method of capturing, cropping, and annotating an image could be carried out. Such a system could be implemented as a computer system, embodied in a handheld device. The system may include integrated or separate hardware components for taking media samples and means for receiving touch input.

BRIEF DESCRIPTION OF THE DRAWINGS

The invention will be readily understood by the following detailed description in conjunction with the accompanying drawings, wherein like reference numerals designate like structural elements, and in which:

FIG. 1 shows a prior art method of cropping an image in a desktop or other like environment using a mouse or other like peripheral device;

FIG. 2 shows a flowchart for cropping an image using a simplified process on a handheld device, in accordance with one embodiment of the present invention;

FIG. 3A shows a first step in the process for cropping the image on the handheld device, in accordance with one embodiment of the present invention;

FIG. 3B shows a second step in the process for cropping the image on the handheld device, in accordance with one embodiment of the present invention;

FIG. 3C shows a third step in the process for cropping the image on the handheld device, in accordance with one embodiment of the present invention;

FIG. 4 shows a prior art method of annotating an image in a desktop or other like environment using a mouse or other like peripheral device;

FIG. 5 shows a flowchart for annotating an image using a simplified process on a handheld device, in accordance with another embodiment of the present invention;

FIG. 6A shows a first step in the process for annotating the image on the handheld device, in accordance with one embodiment of the present invention;

FIG. 6B shows a second step in the process for annotating the image on the handheld device, in accordance with one embodiment of the present invention;

FIG. 6C shows a third step in the process for annotating the image on the handheld device, in accordance with one embodiment of the present invention;

FIG. 7 shows a flowchart of another process of another embodiment of the present invention showing one aspect of one possible workflow using the principles of the present invention;

FIG. 8 is an illustration of a multi-functional handheld device, in which some of the software and hardware components of the present invention could reside, in accordance with yet another embodiment of the present invention;

FIG. 9 is an illustration of one of many possible use-cases of the present invention in relation to carrying out an industrial inspection operation on a wind-farm or other energy project or other like site;

FIG. 10 is an illustration of yet another possible use-case of the present invention in relation to a hand-held device with a camera and a touch-sensitive display, such as an Apple iPhone® or other like device; and

FIG. 11 is an illustration of yet another possible use-case of the present invention in relation to a hand-held camera with a touch-sensitive display.

DETAILED DESCRIPTION OF THE INVENTION

The present invention generally pertains to a system and method for capturing, cropping, and annotating images on a touch sensitive display or other handheld device.

The interface according to the principles of the present invention could have, but is not limited to, the following components. Any subset of the following components is also within the scope of this invention. After a user captures an initial image, it is stored and displayed. No actions of the user will modify the initial image, allowing all edits to be undone or re-applied against the original image.
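
A minimal sketch of this non-destructive design follows; the class name and method names are assumptions, and each edit is modeled as a callable that takes an image and returns a new or updated image.

    # Illustrative sketch only; the class and its method names are assumptions.
    class NonDestructiveEditor:
        def __init__(self, original):
            self.original = original   # the captured image; never modified
            self.edits = []            # ordered crop/annotation operations
            self.redo_stack = []

        def apply(self, edit):
            self.edits.append(edit)
            self.redo_stack.clear()

        def undo(self):
            if self.edits:
                self.redo_stack.append(self.edits.pop())

        def redo(self):
            if self.redo_stack:
                self.edits.append(self.redo_stack.pop())

        def render(self):
            """Re-derive the displayed image by replaying every edit on a copy."""
            image = self.original.copy()
            for edit in self.edits:
                image = edit(image)    # each edit returns a new or updated image
            return image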

The user can choose to crop the image as follows (an illustrative code sketch of this flow appears after the list):

    1. The user can click once on a point in the image, displaying a point where the click occurred, and then click again at another point in the image;
    2. A rectangle with the two clicks at opposite corners is displayed. When the user releases the second click, including immediately releasing it, this rectangle becomes the new crop rectangle;
    3. If the user does not immediately release the second click, they can drag the point to visually edit the shape and size of the rectangle;
    4. If the point is dragged near the edge of the displayed image and the image is larger than the displayed portion, then the displayed portion will scroll to show the areas in the direction of the dragged point; and
    5. Once selected, the new crop rectangle becomes the area displayed. The image within the selected rectangle can be scaled to the size of the viewport.
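
The sketch below, continuing the Python illustrations above, traces steps 1 through 3 and step 5 as a small tool object; the CropTool class and its handler names are assumptions, and the edge scrolling of step 4 was sketched separately earlier.

    # Illustrative sketch only; the CropTool class and handler names are assumptions.
    class CropTool:
        def __init__(self, editor, viewport):
            self.editor = editor       # e.g., the NonDestructiveEditor sketched above
            self.viewport = viewport   # (width, height) of the display area
            self.first = None
            self.second = None

        def tap_down(self, point):
            if self.first is None:
                self.first = point     # step 1: first click marks one corner
            else:
                self.second = point    # step 2: rectangle between the two clicks is shown

        def drag(self, point):
            if self.second is not None:
                self.second = point    # step 3: dragging edits the rectangle's shape/size

        def release(self):
            if self.first is None or self.second is None:
                return
            (x1, y1), (x2, y2) = self.first, self.second
            box = (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))
            viewport = self.viewport
            # Step 5: the crop rectangle becomes the displayed, viewport-scaled image.
            self.editor.apply(lambda img: img.crop(box).resize(viewport))
            self.first = self.second = None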

The user can choose to annotate the image as follows (an illustrative code sketch of this flow appears after the list):

    1. The user can choose a type of an annotation shape (e.g., line, rectangle, ellipse), and characteristics of the annotation shape such as line type (e.g., dashed), line width, and line color, etc.;
    2. The user can click once on a point in the image, displaying a point where the click occurred, and then click again at another point in the image;
    3. An annotation shape of the appropriate type is displayed over the image with the two clicks at opposite corners of the shape's bounding rectangle. When the user releases the second click, including immediately releasing it, this shape and its location on the image are saved;
    4. If the user does not immediately release the second click, they can drag the point to visually edit the shape and its size; and
    5. If the point is dragged near the edge of the displayed image and the image is larger than the displayed portion, then the displayed portion will scroll to show the areas in the direction of the dragged point, and the portion of the shape in the displayed area will be shown.
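
An analogous sketch for the annotation flow follows; it reuses the draw_annotation() helper sketched earlier, and the AnnotateTool class and its handler names are again assumptions.

    # Illustrative sketch only; the AnnotateTool class and handler names are assumptions.
    class AnnotateTool:
        def __init__(self, editor, shape="rectangle", color=(255, 0, 0), width=3):
            self.editor = editor
            self.shape, self.color, self.width = shape, color, width  # step 1: chosen style
            self.first = None
            self.second = None

        def tap_down(self, point):
            if self.first is None:
                self.first = point     # step 2: first click marks one corner
            else:
                self.second = point    # step 3: shape previewed between the two clicks

        def drag(self, point):
            if self.second is not None:
                self.second = point    # step 4: dragging edits the shape and its size

        def release(self):
            if self.first is None or self.second is None:
                return
            (x1, y1), (x2, y2) = self.first, self.second
            box = (min(x1, x2), min(y1, y2), max(x1, x2), max(y1, y2))
            shape, color, width = self.shape, self.color, self.width
            self.editor.apply(lambda img: draw_annotation(img, shape, box, color, width))
            self.first = self.second = None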

The invention may be used in an industrial inspection compliance system with which various methods can be carried out to the effect of assisting in an inspection and providing the means for compliance verification of a proper inspection. For the purposes of the text describing this invention, an inspection may represent the process of checking a physical component for safety, security or business reasons, doing the same for compliance with industrial standards and guidelines, or a maintenance operation on a physical component for those same reasons. These methods can generally be best executed by a multi-function handheld device, carried to and used in the physical proximity of an inspected component by the inspector. Examples of multi-function handheld devices include the Apple iPhone®, the Psion Teklogix Workabout Pro®, the Motorola MC-75®, and the like, but the present invention is not limited to such devices as shown or described here. One embodiment of the inspection compliance method includes the steps of scanning unique machine-readable tags deployed at logical inspection points defined by the inspector, and assigning a timestamp to the scanning operation; taking media samples of logical inspection points defined by the inspector, and assigning a timestamp to the media sample capturing operation; reporting sub-optimal conditions of the unique machine-readable tags deployed at logical inspection points if their condition warrants such a declaration; associating a media sample with a corresponding scan of a unique machine-readable tag; and annotating a media sample in such ways that substantiate statements of an industrial component passing inspection, or in such ways that substantiate statements of problems found with the industrial component. See U.S. Ser. No. 12/489,313 for more details of an example of an industrial inspection compliance system to which the present invention may be applied.
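
The inspection record kept by such a system might be organized along the following lines; this is a sketch only, and every field name here is an assumption rather than the actual schema of the referenced application.

    # Illustrative sketch only; all field names are assumptions.
    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Optional

    @dataclass
    class InspectionPointRecord:
        tag_id: str                                   # unique machine-readable tag that was scanned
        scanned_at: datetime                          # timestamp assigned to the scanning operation
        media_path: Optional[str] = None              # media sample taken at the inspection point
        media_captured_at: Optional[datetime] = None  # timestamp assigned to the media capture
        tag_condition_report: Optional[str] = None    # sub-optimal tag condition, if reported
        annotations: list = field(default_factory=list)  # crop/annotation edits substantiating findings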

The invention is discussed below with reference to FIGS. 1-11. However, those skilled in the art will readily appreciate that the detailed description given herein with respect to these figures is for explanatory purposes as the invention extends beyond these limited embodiments.

FIG. 1 shows a prior art method of cropping an image in a desktop or other like environment using a mouse or other like peripheral device. Window 1 (101) shows an image of a plant 108, which could be any image or photograph, previously stored in memory or taken live right before the cropping operation. A mouse 103 is used to click at a point on the screen 102 (shown as dashed cross hair 102), to perform a drag operation 110 (shown as dashed line 110) while holding down the mouse button to another point on the screen 106 (shown as solid cross hair 106), and then to release the mouse button at the point 106. Points 102 and 106 represent the lower-left-hand (LLH) corner and the upper-right-hand (URH) corner, respectively, of the rectangular bounding box representing the crop area. In the prior art method, after the crop area has been selected, a user hits, selects, or otherwise operates a menu bar, selecting the crop operation, which then crops the image using the LLH 102 and URH 106 points to define the bounding box. This operation is cumbersome and requires multiple mouse operations, and furthermore is generally only useable in the desktop environment with the availability of a peripheral device such as a mouse. It is generally not suitable for use in a handheld or other field device, or in locations or situations where greater flexibility and fewer interactions with the GUI are highly desirable.

In order to solve the inherent limitations in the prior art method described in FIG. 1, the inventors have invented a novel method, system, and apparatus to facilitate on-site image cropping. FIG. 2 shows a flowchart for cropping an image using a simplified process on a handheld device, in accordance with one embodiment of the present invention. Process 200 begins at step 202, where an image is either retrieved from memory, captured in real-time via an image capture device, or in some other way provided to the process 200. In step 204, the image is displayed on the touch sensitive display or other display of the handheld device. In step 206, the user may click or tap (using a finger, a stylus, a mouse, or other device) at a LLH location where the crop is to begin. In step 208, the user may click or tap (using the finger, the stylus, the mouse, or other device) at a URH location where the crop is to end. In step 210, the image is cropped between the LLH location and the URH location. Finally, in step 212, the cropped image is displayed for the user's confirmation. At this point (not shown), the user may cancel, undo, or accept the crop. The process ends in step 214.

The process described in FIG. 2 is more particularly illustrated in relation to FIGS. 3A-3C. FIG. 3A shows the first step in the process for cropping the image on the handheld device, in accordance with the embodiment described in relation to FIG. 2. Screen 302 shows a screen or touch-sensitive area of a handheld device. Window 2 (304) shows one of many windows showing the image the user desires to crop. The user uses his or her hand 308 (or stylus, mouse, or other device) to click or tap at point 306 (shown as solid cross hair 306). The position of the click or tap 306 represents a LLH corner of the crop boundary.

FIG. 3B shows the second step in the process for cropping the image on the handheld device. After clicking or tapping on point 306 (now shown as a dashed cross hair 306), the user may move his or her hand 308 (or stylus, mouse, or other device) shown as dashed motion line 310, to another location 312 (shown as solid cross hair 312) and click or tap a second time. The second tap at location 312 represents an URH corner of the crop boundary.

FIG. 3C shows the third and final step in the process for cropping the image on the handheld device. After indicating a completion of a crop operation, such as by removing hand 308, the crop operation is performed in the background, and an updated or cropped image is displayed for the user's confirmation. Point 306 (shown as a dashed cross hair 306) and point 312 (now also shown as dashed cross hair 312) represent a LLH corner and an URH corner, respectively, of the crop boundary.

Therefore, as shown in FIGS. 2 and 3A-3C, a user of the present invention may implement a crop operation with very little hand motion and very little input into the device, which is highly desirable, or even mandatory, when operating in the field, for example, during an inspection operation.

Now turning to annotation of images, FIG. 4 shows a prior art method of annotating an image in a desktop or other like environment using a mouse or other like peripheral device. Window 1 (401) shows an image of a plant 408, which could be any image or photograph, previously stored in memory or taken live right before the cropping/annotating operation. A mouse 403 is used to click at a point on the screen 402 (shown as dashed cross hair 402), to perform a drag operation 410 (shown as dashed line 410) while holding down the mouse button to another point on the screen 406 (shown as solid cross hair 406), and finally to release the mouse button at the point 406. Points 402 and 406 represent the lower-left-hand (LLH) corner and the upper-right-hand (URH) corner, respectively, of the rectangular bounding box representing the annotation area. In the prior art method, after the annotation area has been selected, a user hits, selects, or otherwise operates a menu bar, selecting the proper annotate operation (circle, oval, rectangle, arrow, line, etc.), which then annotates the image using the LLH 402 and URH 406 points to define the bounding box. This operation is cumbersome and requires multiple mouse operations, and furthermore is generally only useable in the desktop environment with the availability of a peripheral device such as a mouse. It is generally not suitable for use in handheld or other field devices, or in locations or situations where greater flexibility and fewer interactions with the GUI are highly desirable.

In order to solve the inherent limitations in the prior art method described in FIG. 4, the inventors have invented a novel method, system, and apparatus to facilitate on-site image annotation. FIG. 5 shows a flowchart for annotating an image using a simplified process on a handheld device, in accordance with another embodiment of the present invention. Process 500 begins at step 502, where an image is either retrieved from memory, captured in real-time via an image capture device, or in some other way provided to the process 500. In step 504, the image is displayed on the touch sensitive display or other display of the handheld device. In step 506, the user may click or tap (using a finger, a stylus, a mouse, or other device) at a LLH location where the annotation is to begin. In step 508, the user may click or tap (using the finger, the stylus, the mouse, or other device) at an URH location where the annotation is to end. In step 510, the image is annotated between the LLH location and the URH location. Finally, in step 512, the annotated image is displayed for the user's confirmation. At this point (not shown), the user may cancel, undo, or accept the annotation. The process ends in step 514.

The process described in FIG. 5 is more particularly illustrated in relation to FIGS. 6A-6C. FIG. 6A shows the first step in the process for annotating the image on the handheld device, in accordance with the embodiment described in relation to FIG. 5. Screen 602 shows a screen or touch-sensitive area of a handheld device. Window 2 (604) shows one of many windows showing the image the user desires to annotate. The user uses his or her hand 608 (or stylus, mouse, or other device) to click or tap at point 606 (shown as solid cross hair 606). The position of the click or tap 606 represents an LLH corner of the annotation boundary.

FIG. 6B shows the second step in the process for annotating the image on the handheld device. After clicking or tapping on point 606 (now shown as a dashed cross hair 606), the user may move his or her hand 608 (or stylus, mouse, or other device) shown as dashed motion line 610, to another location 612 (shown as solid cross hair 612) and click or tap a second time. The second tap at location 612 represents an URH corner of the annotation boundary.

FIG. 6C shows the third and final step in the process for annotating the image on the handheld device. After indicating a completion of an annotation operation, such as by removing hand 608, an updated or annotated image is displayed for the user's confirmation. Point 606 (shown as a dashed cross hair 606) and point 612 (now also shown as dashed cross hair 612) represent a LLH corner and an URH corner, respectively, of the annotation boundary.

Therefore, as shown in FIGS. 5 and 6A-6C, a user of the present invention may implement an annotation operation with very little hand motion and very little input into the device, which is highly desirable, or even mandatory, when operating in the field, for example, during an inspection operation.

FIG. 7 shows a flowchart of another process of another embodiment of the present invention showing one aspect of one possible workflow using the principles of the present invention. Process 700 begins at step 702, where the user of a handheld device edits an image on the device. First, at step 704 the user opens the image for viewing and then at step 706, the user makes an annotation on the image in the spirit of process 500 shown in FIG. 5. The user then proceeds to step 708, where he or she crops the image using steps described in process 200 shown in FIG. 2. Then, after cropping the image, the user now sees only a sub-area of the original image on the screen, as a result of step 212 of FIG. 2 whereby the cropped area is displayed to take up the full screen area of the device. At this point, the user decides that the annotation he or she made in step 706 was not correct, so he or she reverses that annotation with the click of an UNDO button in step 710. Then in step 712, the user reverses the crop he or she made in step 708, by which point the image shown to the user looks exactly as it did in step 704. Finally, the user crops the image in step 714 in a different fashion from the one the user made previously in step 708, but once again using the steps described in process 200 shown in FIG. 2. Then in steps 716 and 718, the user makes two consecutive annotations in the spirit of process 500 shown in FIG. 5. The user is then satisfied with the edits he or she has made and ends the process at step 720.

The result of the series of actions illustrated in FIG. 7 may be stored by storing a reference to the original image, storing a final crop rectangle by reference to a LLH and an URH corner, and storing a list of annotations, each also stored by reference to a LLH and an URH corner along with type information (such as annotation type, annotation color, etc.) to overlay on the cropped image.
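
A sketch of this storage scheme, reusing the Pillow-based draw_annotation() helper from an earlier sketch, could look like the following; the dataclass names and the convention that annotation corners are kept in original-image coordinates are assumptions.

    # Illustrative sketch only; names and coordinate conventions are assumptions.
    from dataclasses import dataclass, field
    from PIL import Image

    @dataclass
    class StoredAnnotation:
        llh: tuple                  # lower-left-hand corner (x, y)
        urh: tuple                  # upper-right-hand corner (x, y)
        shape: str = "rectangle"    # annotation type: line, rectangle, ellipse, circle
        color: tuple = (255, 0, 0)

    @dataclass
    class StoredEdits:
        image_path: str             # reference to the unmodified original image
        crop_llh: tuple
        crop_urh: tuple
        annotations: list = field(default_factory=list)

    def render_stored_edits(edits):
        """Recreate the edited view: crop the original, then overlay each annotation."""
        image = Image.open(edits.image_path)
        (cx1, cy1), (cx2, cy2) = edits.crop_llh, edits.crop_urh
        ox, oy = min(cx1, cx2), min(cy1, cy2)
        cropped = image.crop((ox, oy, max(cx1, cx2), max(cy1, cy2)))
        for note in edits.annotations:
            (ax1, ay1), (ax2, ay2) = note.llh, note.urh
            # Shift corners from original-image coordinates into the cropped frame.
            box = (min(ax1, ax2) - ox, min(ay1, ay2) - oy,
                   max(ax1, ax2) - ox, max(ay1, ay2) - oy)
            draw_annotation(cropped, note.shape, box, note.color)
        return cropped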

FIG. 8 is an illustration of a multi-functional handheld device 800, in which some of the software and hardware components of the present invention could reside, in accordance with yet another embodiment of the present invention. The handheld device 800 contains a screen or display 802, which may be a touch-sensitive display, for displaying an image to be cropped and/or annotated with overlaid objects. The handheld device 800 also contains a toolbar 806 that contains iconographic buttons for each function that a user may execute during a process of taking and editing an image. Some possible selectable actions include, but are not limited to, from top to bottom and left to right, “take a picture” 804 (first row, far left), undo, redo, zoom-in, zoom-out (first row, far right), delete/cancel (second row, far left), annotate with an arrow, annotate with a circle, annotate with a rectangle, annotate with a line, and crop (second row, far right). For example, if button 804 is pressed, the software activates the handheld device's digital camera and places the captured image in display screen 802.
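
One possible wiring of these toolbar buttons to editor operations is sketched below; the button identifiers and the editor methods they call (capture, undo, redo, select_tool) are hypothetical placeholders for illustration, not the actual interface of any device.

    # Illustrative sketch only; button identifiers and editor methods are hypothetical.
    TOOLBAR_ACTIONS = {
        "take_picture":       lambda editor: editor.capture(),
        "undo":               lambda editor: editor.undo(),
        "redo":               lambda editor: editor.redo(),
        "crop":               lambda editor: editor.select_tool("crop"),
        "annotate_arrow":     lambda editor: editor.select_tool("annotate", shape="arrow"),
        "annotate_circle":    lambda editor: editor.select_tool("annotate", shape="circle"),
        "annotate_rectangle": lambda editor: editor.select_tool("annotate", shape="rectangle"),
        "annotate_line":      lambda editor: editor.select_tool("annotate", shape="line"),
    }

    def on_toolbar_press(editor, button_id):
        """Dispatch a toolbar button press to the corresponding editor operation."""
        TOOLBAR_ACTIONS[button_id](editor)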

The illustrative user interface 800 is but one of many possible illustrative embodiments of the present invention. One of ordinary skill in the art would appreciate that any other configuration of objects in a user interface, as well as any possible extensions to the set of functions presented in the user interface 800, are all within the spirit and scope of the present invention.

FIG. 9 is an illustration of one of many possible use-cases of the present invention in relation to carrying out an industrial inspection operation on a wind-farm or other energy project or other like site. FIG. 9 shows an inspector carrying out an inspection of wind turbine 902 and wind turbine 904. The inspector 906 is standing next to the tower and foundation sections of wind turbine 904. The inspector 906 is using an industrial inspection handheld device 908. More specifically, inspector 906 is in the process of using the industrial inspection handheld device 908, which has an embedded RFID reader, to scan RFID tag 912 on the tower section of wind turbine 904, via radio frequency communication channel 910. Since inspector 906 is within proximity of the inspected component, he is able to successfully scan the RFID tag 912 because it is within the range of radio frequency communication channel 910. If the inspector recognizes a potential problem with the foundation section of the wind turbine 904, the inspector may take a picture of the potential problem area, and then proceed to crop and annotate the problem area using the methods described in the present application. Since the inspector is in the field, the present invention is particularly suitable for helping the inspector complete the inspection in a timely, accurate, and cost effective manner.

The illustration shown in FIG. 9 is but one of many possible illustrative embodiments of the usage of the present invention. One of ordinary skill in the art would appreciate that many possible uses of the present invention are all within the spirit and scope of the present invention, including, but not limited to, renewable energy systems and distributed energy systems, including wind turbines, solar photovoltaic, solar thermal plants, co-generation plants, biomass-fueled power plants, carbon sequestration projects, enhanced oil recovery systems, and the like.

FIG. 10 is an illustration of yet another possible use-case of the present invention in relation to a hand-held device with a camera and a touch-sensitive display, such as an Apple iPhone® 1000 or other like device. Users of an Apple iPhone® 1000 may wish to crop and/or annotate an image either taken by the iPhone® 1000 or received from another user, or in some other way obtained on the iPhone® 1000. The present invention is particularly suitable for use with an iPhone®, since an iPhone® as currently practiced does not contain a useful or easy mechanism for cropping or annotating images.

FIG. 11 is an illustration of yet another possible use-case of the present invention in relation to a hand-held camera with a touch-sensitive display 1100. Users of a digital camera 1100 may wish to crop and/or annotate an image taken by the digital camera 1100. The present invention is particularly suitable for use with a digital camera, especially a digital camera as shown in FIG. 11 with a touch-sensitive display device 1100, since digital cameras as currently practiced do not contain a useful or easy mechanism for cropping or annotating images on-site and instead require uploading the images to a computer for further desktop processing to crop and annotate the images.

While the methods disclosed herein have been described and shown with reference to particular operations performed in a particular order, it will be understood that these operations may be combined, sub-divided, or re-ordered to form equivalent methods without departing from the teachings of the present invention. Accordingly, unless specifically indicated herein, the order and grouping of the operations is not a limitation of the present invention.

While the invention has been particularly shown and described with reference to embodiments thereof, it will be understood by those skilled in the art that various other changes in the form and details may be made without departing from the spirit and scope of the invention.

Claims

1. A method for cropping an image file, comprising the steps of:

displaying an image of the image file to be cropped in a display area;
receiving a first input from a user designating a first point in the image defining a corner of a crop rectangle;
receiving a second input from the user designating a second point in the image defining an opposite corner of the crop rectangle; and
cropping the image to the crop rectangle defined by the first point and the second point to create a cropped image when the second input is released.

2. The method as recited in claim 1, further comprising:

displaying on the image a location of the first point.

3. The method as recited in claim 1, further comprising:

displaying a rectangle overlaid over the image corresponding to the crop rectangle before cropping the image.

4. The method as recited in claim 1, wherein if the user does not immediately release the second input, allowing the user to drag the second point to visually edit a shape and a size of the crop rectangle.

5. The method as recited in claim 1, wherein if the user drags the second point near an edge of a displayed portion and the image is larger than the displayed portion, then scrolling the displayed portion to show a portion of the image in a direction of the dragged point.

6. The method as recited in claim 1, further comprising:

displaying the cropped image in the display area in place of the original image.

7. The method as recited in claim 6, further comprising:

scaling the cropped image to fill the entire display area.

8. A method of annotating an image file, comprising the steps of:

displaying an image of the image file to be annotated in a display area;
receiving a first input from a user designating a first point in the image defining a corner of an annotation rectangle;
receiving a second input from the user designating a second point in the image defining an opposite corner of the annotation rectangle; and
annotating the image from the first point to the second point of the annotation rectangle to create an annotated image when the second input is released.

9. The method as recited in claim 8, further comprising:

displaying on the image a location of the first point.

10. The method as recited in claim 8, further comprising:

displaying a shape corresponding to the annotation rectangle of the image before annotating the image.

11. The method as recited in claim 8, wherein if the user does not immediately release the second input, allowing the user to drag the second point to visually show a shape and a size of the annotation rectangle.

12. The method as recited in claim 8, wherein if the user drags the second point near an edge of a displayed portion and the image is larger than the displayed portion, then scrolling the displayed portion to show a portion of the image in a direction of the dragged point.

13. The method as recited in claim 8, further comprising:

displaying the annotated image in the display area in place of the original image.

14. The method as recited in claim 8, further comprising:

receiving a third input representing a type of shape and a characteristic of the shape corresponding to the annotation rectangle.

15. The method as recited in claim 14, wherein the shape is selected from the group consisting of a line, a rectangle, an ellipse, and a circle.

16. The method as recited in claim 14, wherein the characteristic of the shape is selected from the group consisting of a line type, a line width, and a line color.

17. A touch-sensitive hand-held system having a capability of cropping and annotating an image file, comprising:

at least one processor; and
at least one or more memories, operatively coupled to the processor, and containing program code, which when executed causes the processor to execute a process comprising the steps of: displaying an image of the image file to be cropped or annotated in a display area; receiving a first input from a user designating a first point in the image defining a corner of a crop rectangle; receiving a second input from the user designating a second point in the image defining an opposite corner of the crop rectangle; cropping the image from the first point to the second point of the crop rectangle when the second input is released to form a cropped image; displaying the cropped image in the display area in place of the original image; receiving a third input from the user designating a third point in the cropped image defining a corner of an annotation rectangle; receiving a fourth input from the user designating a fourth point in the cropped image defining an opposite corner of the annotation rectangle; and annotating the cropped image from the third point to the fourth point of the annotation rectangle when the fourth input is released to form an annotated cropped image.

18. The system as recited in claim 17, further containing program code, which when executed causes the processor to execute a process further comprising the step of:

displaying on the image a location of the first point and a location of the third point.

19. The system as recited in claim 17, further containing program code, which when executed causes the processor to execute a process further comprising the steps of:

displaying a rectangle overlaid over the image corresponding to the crop rectangle before cropping the image; and
displaying a shape overlaid over the image corresponding to the annotation rectangle before annotating the image.

20. The system as recited in claim 17, wherein if the user does not immediately release the second input or the fourth input, allowing the user to drag the second point or the fourth point to visually show a shape and a size of the crop rectangle or the annotation rectangle.

21. The system as recited in claim 17, wherein if the user drags the second point or the fourth point near an edge of a displayed portion and the image is larger than the displayed portion, then scrolling the displayed portion to show a portion of the image in a direction of the dragged point.

22. The system as recited in claim 17, further containing program code, which when executed causes the processor to execute a process further comprising the step of:

displaying the annotated cropped image in the display area in place of the cropped image.
Patent History
Publication number: 20100149211
Type: Application
Filed: Jul 21, 2009
Publication Date: Jun 17, 2010
Inventors: Christopher Tossing (Waltham, MA), Marc Siegel (Boston, MA), Alberto Ho (Westford, MA)
Application Number: 12/507,039
Classifications
Current U.S. Class: Rectangular Region (345/628); Shape Generating (345/441); Scrolling (345/684); Touch Panel (345/173)
International Classification: G09G 5/00 (20060101); G06T 11/20 (20060101);