SCENARIO GENERATION METHOD IN WHICH VARIOUS DATA ARE ASSOCIATED WITH EACH OTHER, SCENARIO EXECUTION METHOD IN WHICH VARIOUS DATA ARE ASSOCIATED WITH EACH OTHER, SCENARIO GENERATION DEVICE, AND SCENARIO EXECUTION DEVICE
A scenario generation device includes a processor that executes a procedure. The procedure includes: detecting data representing a target object of an operation target and data representing a user operation on the target object based on objects displayed on a screen by application software operating on a computer; detecting data representing a peripheral object positioned at the periphery of the target object from among the objects displayed on the screen, and detecting a positional relationship on the screen between the target object and the peripheral object; and generating a scenario in which the data representing the user operation, the data representing the target object, the data representing the peripheral object, and data representing the positional relationship are associated with each other.
This application is a continuation of U.S. patent application Ser. No. 14/456,048, filed on Aug. 11, 2014, which is based upon and claims the benefit of priority of the prior Japanese Patent Application No. 2013-178417, filed on Aug. 29, 2013, the entire contents of which are incorporated herein by reference.
FIELD
The embodiments discussed herein are related to a scenario generation method, a scenario execution method, a scenario generation device, and a scenario execution device.
BACKGROUND
There are cases in which operation with a graphical user interface (GUI), performed with a pointing device such as a mouse, is employed for user operations in application software executed on a computer. In order to check the operation of application software employing a GUI during user operation, and to check its operation when there is a version upgrade, there are cases in which an operation sequence of a user is recorded in a file, and testing is performed by automatically executing operations in accordance with the recorded file.
For example, user operations with the GUI of application software are recorded by a computer, in operation sequence, in what is referred to as a scenario file. Then, by automatically executing the GUI operations on the computer in accordance with the recorded scenario, the operation of the application software during user operation is checked, as is its operation when there is a version upgrade.
As an example of technology to generate scenarios to automatically execute operations using a GUI, technology is known that generates a scenario, containing an operation position, an image of a range including the operation position, and data of the operation from a display image during input operation using a mouse or the like. In operation with a GUI according to the generated scenario, the image of the range including the operation position is then treated as an object of the operation target, and the mouse cursor or the like is moved to the display position of the object of the operation target, based on the recorded operation position, and the operation (for example, what is referred to as a “click”) is executed.
In order to identify the operation target displayed on a screen, technology is known to search for character data when the object of the operation target is character data. As an example of technology to generate a scenario by recording data representing a user operation and an image of a screen during user operation, technology is also known that stores in a scenario an image of a screen during operation of application software. Storing an image of a screen during user operation in a scenario enables a user to readily ascertain the operation contents, and enables easy editing of the scenario.
SUMMARY
According to an aspect of the embodiments, there is provided a computer-readable recording medium having stored therein a program for causing a computer to execute a scenario generation process, the process including:
detecting data representing a target object of an operation target and data representing a user operation on the target object based on objects displayed on a screen by application software operating on a computer;
detecting data representing a peripheral object positioned at the target object periphery from out of the objects displayed on the screen, and detecting a positional relationship on the screen between the target object and the peripheral object; and
generating a scenario in which the data representing the user operation, the data representing the target object, the data representing the peripheral object, and data representing the positional relationship are associated with each other.
The object and advantages of the invention will be realized and attained by means of the elements and combinations particularly pointed out in the claims.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory and are not restrictive of the invention.
Detailed explanation next follows regarding examples of exemplary embodiments of technology disclosed herein, with reference to the drawings. In the present exemplary embodiment, the technology disclosed herein is applied to a case in which testing is performed of user operation with a GUI.
First Exemplary Embodiment
The scenario device 10 includes video RAM (VRAM) 16, an input section 18, and a display section 20, such as a display. The VRAM 16 is memory that saves the contents displayed on the display section 20, and it saves images output from the user interface application section 26. The VRAM 16 supplies images to the operation recording section 22 and the automatic operation section 24. The input section 18 accepts user input, such as from a mouse, keyboard, or touch panel, and is connected to the operation recording section 22 and the user interface application section 26.
In order to generate a scenario, the scenario device 10 executes the application software 32 and starts scenario generation processing using the operation recording section 22, by the CPU 12 executing the operation recording program 28. The operation recording section 22 detects a user operation on an object, displayed on the screen of the display section 20 by execution of the application software 32, by reading input values of the input section 18. The operation recording section 22 detects the target object subject to the user operation by reading in an image of the objects displayed on the screen of the display section 20 from the VRAM 16. The operation recording section 22 then detects a marker to identify the target object in the image displayed on the screen of the display section 20. Namely, the operation recording section 22 detects an object displayed at the periphery of the target object to serve as a peripheral object that acts as a marker for the target object. The operation recording section 22 also detects the positional relationship between the target object and the peripheral object. Data representing the peripheral object associated with data representing the positional relationship may be employed as marker data. A scenario is then generated containing, as operation data, data representing the detected target object and the user operation, data representing the peripheral object, and data representing the positional relationship between the target object and the peripheral object. The scenario is recorded in a file 34 of the memory 14. Thus, for GUI operation in the application software 32, operation data that identifies the operation on the screen can be recorded as a series of operation-related data, without recording coordinates at the time of operation.
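For illustration only, the operation data described above might be held in a record along the lines of the following sketch; the field names and example values are hypothetical and are not taken from the embodiment.

```python
# A minimal sketch of the kind of record the operation recording section might
# produce; the field names are hypothetical and not taken from the patent text.
from dataclasses import dataclass

@dataclass
class OperationRecord:
    operation: str          # data representing the user operation, e.g. "click"
    target_object: str      # data representing the target object, e.g. an icon type or character string
    peripheral_object: str  # data representing the peripheral object used as a marker
    relation: str           # data representing the positional relationship on the screen

# Example: a click on a check box whose marker is the label "Enable logging"
# displayed to its right.
record = OperationRecord(
    operation="click",
    target_object="checkbox(ON)",
    peripheral_object="Enable logging",
    relation="right",
)
```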
In order to execute a recorded scenario, the scenario device 10 executes the application software 32 and starts scenario execution processing in the automatic operation section 24, by the CPU 12 executing the automatic operation program 30. The automatic operation section 24 executes an operation on an object that is being displayed on the screen of the display section 20 by execution of the application software 32, by reading in a scenario recorded in memory. First, image data is acquired of the region on the screen of the display section 20 that is capable of user operation during execution of the application software 32. For example, when the display region on the screen of the display section 20 during execution of the application software 32 is limited to the inside of a window frame, there are cases in which the region capable of user operation extends outside the window frame. The automatic operation section 24 accordingly acquires, as an image, the region capable of user operation inside or outside the window frame. Then, for a target object contained in the operation data recorded in the scenario, the automatic operation section 24 identifies, from the acquired image data, the position of the target object in the image of the region capable of user operation inside or outside the window frame. Then, based on data contained in the operation data representing the user operation on the target object, the automatic operation section 24 executes the operation on the target object at the identified position. This enables the operation target to be identified even in cases in which a portion of the application software 32 has been changed and the operation position has moved, enabling the scenario operation to be implemented without an operation error occurring.
The scenario device 10 is an example of a scenario device of technology disclosed herein, the scenario generation device 10A is an example of a scenario generation device of technology disclosed herein, the scenario execution device 10B is an example of a scenario execution device of technology disclosed herein. The operation recording program 28 is an example of a scenario generation program of technology disclosed herein, the automatic operation program 30 is an example of a scenario execution program of technology disclosed herein. The application software 32 is an example of application software of technology disclosed herein.
An Operating System (OS) 56 is stored on the storage section 52, together with an operation recording program 60 and an automatic operation program 68 to make the computer 40 function as the scenario device 10. A GUI application program 58 and a file 80 are also stored on the storage section 52. The CPU 42 causes the computer 40 to operate as the user interface application section 26 illustrated in
The operation recording program 60 stored on the storage section 52 includes an input monitoring process 61, an image capture process 62, an OCR character string acquisition process 63, an image clipping process 64, a scenario output process 65, an image search process 66, and a position relationship detection process 67. The CPU 42 reads the operation recording program 60 from the storage section 52, expands the operation recording program 60 in the memory 50, and causes the computer 40 to operate as the scenario device 10 and the scenario generation device 10A illustrated in
The automatic operation program 68 stored in the storage section 52 includes a scenario reading process 69, an image search process 70, a scroll bar detection process 71, an OCR specified character string search process 72, and a stitched image generation process 73. The automatic operation program 68 includes a position relationship detection process 74, an event emulation process 75, an image capture process 76, a scroll region detection process 77, and a rectangular region search process 78. The CPU 42 causes the computer 40 to operate as the scenario device 10 and the scenario execution device 10B illustrated in
A file 80, containing a test scenario 81, pre-recorded icons 82, and scroll bar arrow icons 83, is stored in the storage section 52. The file 80 corresponds to the file 34 illustrated in
Note that the scenario device 10 may be connectable to a computer network. Namely, the scenario device 10 is not limited to being connected to a computer network, or to not being connected to a computer network. The scenario device 10 may be implemented by a single computer 40 alone, as in the illustrated example of the scenario device 10, or may be implemented by plural computers.
In order to implement automatic operation according to a scenario in which user operation with a GUI is stored, an operation position on a screen, such as coordinates, is generally recorded in the scenario, and the operation is executed at the recorded operation position. However, in some cases, application software employing a GUI undergoes a content change or a version upgrade that affects the GUI. For example, there are cases in which the character size for screen display is changed, or the window size for screen display is changed.
When changes occur in the character size for screen display, or in the window size, there are cases in which the operation position recorded in the scenario by a mouse or the like, such as the coordinates on a screen, also moves according to the character size or the window size. For example, when the contents of application software are changed to give a larger or smaller character size than when the scenario was recorded, the operation position, such as the coordinates on a screen, moves according to the difference in character size before and after the change. When operation positions in the vicinity of the outline of a window are recorded in a scenario, and the contents of the application software are changed to give a window smaller than when the scenario was recorded, the recorded operation position sometimes ends up at coordinates outside of the window.
Consequently, when the operation position of the target object changes from that at scenario generation, operation errors occur because the operation target is no longer present at the operation position recorded during scenario generation, making continuation of operation according to the scenario difficult. When an operation error has occurred, the user has to regenerate or correct the already generated scenario, leading to a drop in the efficiency of operations performed using the scenario.
In this regard, technology that generates a scenario from a display image during an input operation, such as by a mouse, with an image of a range including the operation position treated as the object of the operation target, does enable the object to be identified from that image. However, because the operation on the target object is executed based on the recorded operation position, if the operation position has moved at scenario execution, then in some cases operation errors occur that make continuation of operation according to the scenario difficult, since the operation target is not present at the recorded position, such as the coordinates. It is also difficult to identify the object recorded in a scenario in cases in which there are plural objects present in a single image, and in cases in which the object is present outside of the window. This results in regeneration or correction of the already generated scenario, leading to a drop in the efficiency of operations performed using the scenario.
In cases in which the object of the operation target is character data, the characters displayed on the screen can be identified by searching for the character data. However, due to operation being executed based on the recorded operation position, an operation error occurs when the operation position at scenario execution has moved from the position at scenario generation, making continuation of operation according to the scenario difficult. In technology that records data representing user operation and an image of a screen during user operation, operation is executed based on the recorded operation position, and so operation errors occur when the operation position at the time of scenario execution has moved, making continuation of operation according to the scenario difficult.
An object of one aspect is, in cases in which operation on application software is executed by a computer, to easily identify an operation target at scenario reproduction.
Simple explanation follows here of a case in which user operation with respect to an object displayed on a screen of the display section 44 by the input section 46, such as a mouse, is recorded in a scenario.
In order to generate the scenario of the example in
In order to generate the scenario of the example in
To address this, in the present exemplary embodiment, a peripheral object to the target object is recorded in a scenario as a marker, without recording the position coordinates of the target object on the screen, and the target object is identified based on the marker.
Explanation next follows regarding operation of the present exemplary embodiment.
First, the scenario device 10 records a scenario based on an operation by a user (step 200). Namely, at step 200, by executing the application software 32, the CPU 42 records in the test scenario 81 a user operation performed on the screen on which an object is being displayed by the display section 20. Then the scenario device 10 reproduces the GUI operation recorded in the scenario (step 202). Namely, the CPU 42 reproduces the user operation according to the recorded test scenario 81. The scenario device 10 then determines whether or not the GUI installation is suitable by determining whether or not it has been possible to complete the user operations in the scenario. Namely, at step 204, the CPU 42 determines whether or not all user operations recorded in the scenario have been completed; when affirmative determination is made, the CPU 42 determines at step 206 that there is no problem with the GUI installation. However, when negative determination is made at step 204, the CPU 42 determines at step 208 that there is a problem with the GUI installation.
The processing of step 200 of
In the display section 44, the position at which the mouse cursor is displayed corresponds to movement of the mouse. The mouse 46M is equipped with a mouse button. A depressed state of the mouse button corresponds to mouse-down, and the state after the mouse button returns from being depressed corresponds to mouse-up. Detection of the position of the mouse cursor and of the mouse button state may employ an Application Program Interface (API) included as standard in the OS 56. The mouse button state can be obtained by hooking a mouse-down or mouse-up event. The user is able to perform a drag operation by using the mouse 46M. In the computer 40, determining whether or not a drag operation has been performed may be done by computing a separation distance, such as a Euclidean distance, between the coordinates of the mouse cursor position at mouse-button-down and at mouse-button-up, and determining that a drag has occurred when a specific threshold value has been exceeded.
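A minimal sketch of that drag determination follows; the threshold value is a placeholder, since the embodiment only refers to "a specific threshold value".

```python
import math

DRAG_THRESHOLD_PX = 5  # hypothetical threshold; the embodiment leaves the value unspecified

def is_drag(down_pos, up_pos, threshold=DRAG_THRESHOLD_PX):
    """Return True when the cursor moved far enough between mouse-down and
    mouse-up to be treated as a drag rather than a click."""
    dx = up_pos[0] - down_pos[0]
    dy = up_pos[1] - down_pos[1]
    return math.hypot(dx, dy) > threshold  # Euclidean separation distance

# Example: mouse-down at (100, 200), mouse-up at (130, 220) -> treated as a drag.
print(is_drag((100, 200), (130, 220)))  # True
```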
Of the various APIs, an API (GetCursorPos) that acquires the position of the mouse cursor is known as an example of a Microsoft Windows (registered trademark) API. An API (SendInput/mouse_event) that causes a mouse button event to occur is known as an example of an API related to the mouse. An API (called a global hook) that acquires mouse button events is also known. An API (BitBlt) that acquires a screen shot is also known.
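For illustration, the cursor position API named above can be called as in the following sketch, which assumes a Windows environment and the Python ctypes module; the embodiment itself only names the API.

```python
# Illustrative only: calling the Windows GetCursorPos API from Python via ctypes.
import ctypes
from ctypes import wintypes

def get_cursor_pos():
    point = wintypes.POINT()
    ctypes.windll.user32.GetCursorPos(ctypes.byref(point))
    return point.x, point.y

print(get_cursor_pos())  # e.g. (512, 384)
```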
First, the CPU 42 executes each of the processes included in the operation recording program 60. More specifically, at step 210 and step 212, the CPU 42 monitors input by the mouse 46M and the keyboard 46K by executing the input monitoring process 61. More precisely, at step 210, the CPU 42 reads in the input values by the mouse 46M and the keyboard 46K. The CPU 42 makes negative determination and returns to step 210 until input (input by the user) is performed with the mouse 46M or the keyboard 46K. When input (input by the user) has been performed with the mouse 46M or the keyboard 46K, the CPU 42 makes affirmative determination at step 212, and then at step 214 determines whether or not the input operation was from the mouse 46M. The CPU 42 makes affirmative determination at step 214 when the input operation was from the mouse 46M, and transitions processing to step 218. However, the CPU 42 makes negative determination at step 214 when input operation was by the keyboard 46K, saves key data at step 216, and transitions processing to step 232.
Namely, at step 210, the CPU 42 detects a position of the mouse cursor and a mouse button event, or a keyboard event. Then, when a keyboard event has been detected (affirmative determination at step 212 and negative determination at step 214), the CPU 42 then saves the key data (step 216), and transitions processing to step 232. However, when a mouse button event has been detected (affirmative determination at step 212 and step 214), the CPU 42 transitions processing to step 218.
Then, at step 218, the CPU 42 executes the image capture process 62 and the image search process 66. More precisely, an image of the screen of the display section 44 is acquired by executing the image capture process 62, and, by executing the image search process 66, an image matching the image of a region including the operation position by the mouse 46M is searched for in the pre-recorded icons 82 saved in the file 80. Namely, at step 218, the CPU 42 determines whether or not the image containing the operation position by the mouse 46M is an image of an icon already recorded in the file 80 as a pre-recorded icon 82. Determination as to whether or not there is an already recorded icon may be performed by determining whether or not there is a pre-recorded icon image that matches an image of a specific size containing the position of the mouse cursor when the mouse 46M was operated.
When the image containing the operation position by the mouse 46M is a pre-recorded icon image, the CPU 42 makes an affirmative determination at step 218, and, at step 220, temporarily stores data representing the type and state of the icon. The CPU 42 then, at step 222, executes the OCR character string acquisition process 63, searches for a character string present in the vicinity of the image of the icon, temporarily stores character string data of the search result at the next step 226, and then transitions processing to step 230.
The search for a character string by execution of the OCR character string acquisition process 63 may be implemented by processing of known technology, such as by an Optical Character Reader (OCR). When the CPU 42 performs the character string search, preferably a first candidate is taken as the nearest character string in a direction determined by the type of icon. When there is no character string present in the direction determined by the type of icon, the CPU 42 preferably searches for the nearest character string in another direction, and repeats searching until a character string is found. The priority sequence of direction when searching for character strings is preferably a sequence determined for each of the types of icon.
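The direction-prioritized search could look like the following sketch; the per-icon-type priority sequences and the bounding-box center fields are assumptions for illustration, not values defined by the embodiment.

```python
# Sketch of the direction-prioritized nearest-character-string search described
# above. The priority sequences per icon type are hypothetical examples.
DIRECTION_PRIORITY = {
    "checkbox": ["right", "left", "above", "below"],
    "textbox":  ["left", "above", "right", "below"],
}

def direction_of(candidate, icon):
    """Classify a candidate character string's position relative to the icon by
    comparing the centers (cx, cy) of their bounding boxes."""
    dx = candidate["cx"] - icon["cx"]
    dy = candidate["cy"] - icon["cy"]
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "below" if dy > 0 else "above"   # screen y increases downward

def nearest_marker(icon, icon_type, candidates):
    """Return the nearest character string in the highest-priority direction
    that contains at least one candidate, searching other directions otherwise."""
    for direction in DIRECTION_PRIORITY.get(icon_type, ["right", "left", "above", "below"]):
        in_dir = [c for c in candidates if direction_of(c, icon) == direction]
        if in_dir:
            return min(in_dir, key=lambda c: (c["cx"] - icon["cx"]) ** 2 + (c["cy"] - icon["cy"]) ** 2)
    return None

icon = {"cx": 200, "cy": 150}
labels = [{"text": "Enable logging", "cx": 320, "cy": 152},
          {"text": "Options", "cx": 200, "cy": 80}]
print(nearest_marker(icon, "checkbox", labels)["text"])  # "Enable logging" (right has priority)
```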
However, when the image containing the operation position by the mouse 46M is not one of the pre-recorded icon images, the CPU 42 makes a negative determination at step 218, and then, at step 224, determines whether or not the image containing the operation position by the mouse 46M contains a character string. When the image containing the operation position by the mouse 46M contains a character string, the CPU 42 makes an affirmative determination at step 224, and transitions processing to step 226. However, when the image containing the operation position by the mouse 46M does not contain a character string, then the CPU 42 makes a negative determination at step 224, temporarily stores the image containing the operation position by the mouse 46M at step 228, and transitions processing to step 230.
For example, when a check box is clicked by the mouse 46M, it is conceivable that the state of the check box transitions in a pattern of from the ON state to the OFF state, or transitions in a pattern of from the OFF state to the ON state. Thus, both the icon ON state (the check box image 84) and the icon OFF state (the check box image 85) are recorded, and these are then searched for in the screen by executing the image search process 66. When the screen search result is that the coordinates (click coordinates) when the mouse 46M was operated are present within a rectangular region occupied by a check box image, then an operation of a check box click is recorded (step 220). In the recording at step 220, the type and the state of the pre-recorded icon 82 is also recorded. In the example of
When the image containing the operation position by the mouse 46M is one of a pre-recorded icon, a search is made in the vicinity for a character string (step 222). In the check box example illustrated in
Then, at step 230 illustrated in
Then at step 232, the CPU 42 generates a scenario (described in detail below) by outputting a recorded scenario of operation data containing the temporarily stored data representing the character string or image (the peripheral object), and data representing the direction from the target object.
The CPU 42 then, at step 234, determines whether or not recording-stop has been requested. The CPU 42 returns to step 210 when negative determination is made, and ends the current processing routine when affirmative determination is made. An example of a recording-stop request may be detection by using a particular keyboard event with a low usage likelihood (for example, Ctrl+Alt+Q, or the like).
Detailed explanation follows regarding processing of step 232 illustrated in
First, the CPU 42 executes the scenario output process 65 at step 232 of
The CPU 42 then, at step 250, determines whether or not image data is contained in the stored data. The stored data employed at step 250 encompasses key data saved at step 216 (
When image data is not contained in the stored data, the CPU 42 makes negative determination at step 250, and transitions processing to step 254. When there is image data contained in the stored data, the CPU 42 makes affirmative determination at step 250, appends a unique file name to the image data contained in the stored data at step 252, and then records it in the file 80 (
The second row of the test scenario 81 illustrated in
The test scenario 81 illustrated in
The processing of step 210 executed by the CPU 42 is an example corresponding to processing of a first detection section when the computer 40 is operating as a scenario generation device. The processing of step 230 executed by the CPU 42 is an example corresponding to processing of a second detection section when the computer 40 is operating as a scenario generation device. The processing executed at step 232 by the CPU 42 is an example corresponding to processing of a generation section when the computer 40 is operating as a scenario generation device.
Explanation next follows regarding processing of automatic operation executed according to the test scenario 81.
First, the CPU 42 executes each of the processes contained in the automatic operation program 68. More specifically, at step 300, the CPU 42 reads in the test scenario 81 by executing the scenario reading process 69. The CPU 42 then, at step 302, determines whether or not there is data representing a command to operate the mouse 46M recorded in the test scenario 81. When a command stored in the test scenario 81 is data representing an operation of the keyboard 46K, the CPU 42 makes negative determination at step 302, and, at step 304, generates the key event recorded in the test scenario 81 and transitions to the processing of step 334. The CPU 42 performs the key event executed at step 304 as an emulation of a keyboard event, by executing the event emulation process 75.
When the command stored in the test scenario 81 is data representing operation of the mouse 46M, the CPU 42 makes affirmative determination at step 302, and, at step 306, acquires an image of the screen of the operation target (screen capture). The CPU 42 then, at step 308, determines whether or not there is an image representing a scroll target region present in the acquired image.
Namely, in order to estimate a scroll target region, the CPU 42 searches for a scroll bar in the acquired image of the screen of the operation target, and when a scroll bar is found, the CPU 42 identifies the handle position for scrolling in the image within the window. More specifically, the CPU 42 acquires the image of the window by executing the image capture process 76 (acquires a bit map image by taking a screen shot). The CPU 42 then executes the scroll bar detection process 71.
Icons present at both ends of a region occupied by a scroll bar (a top arrow icon and a bottom arrow icon pair, or a left arrow icon and a right arrow icon pair) are pre-recorded as the pre-recorded icons 82 of the file 80. In the scroll bar detection process 71, when the icons present at the two ends of a region occupied by a scroll bar are found at positions on the same straight line, the region between the recorded icons is determined to be a scroll bar region. When the graspable portion (the portion inside the buttons) belongs to a vertical scroll bar, it is conceivable that, at a given x coordinate of the movable portion, the handle for scrolling appears in the image as a vertically long region, one pixel wide, that is all of the same color or density. The position of the handle boundary line can be identified because continuity is lost at the boundary line between the movable portion and the handle portion.
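As an illustrative sketch of that handle detection, the following assumes a one-pixel-wide column of pixel values sampled from the movable portion of a vertical scroll bar and looks for the longest run of identical values; the data layout is an assumption, not something prescribed by the embodiment.

```python
# Sketch: within the movable portion of a vertical scroll bar, sample a
# one-pixel-wide column at a fixed x coordinate and find the rows at which the
# pixel value stops being continuous; the longest uniform run is taken here as
# the handle. `column` is assumed to be a list of pixel values (e.g. grayscale).

def handle_bounds(column):
    """Return (top, bottom) indices of the longest run of identical pixel
    values; the run boundaries are where continuity is lost."""
    best_start, best_len = 0, 1
    start, length = 0, 1
    for i in range(1, len(column)):
        if column[i] == column[i - 1]:
            length += 1
        else:
            if length > best_len:
                best_start, best_len = start, length
            start, length = i, 1
    if length > best_len:
        best_start, best_len = start, length
    return best_start, best_start + best_len - 1

print(handle_bounds([3, 3, 7, 7, 7, 7, 7, 3, 3, 3]))  # (2, 6): the handle occupies rows 2-6
```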
The CPU 42 makes an affirmative determination at step 308 when there is no image representing a scroll target region present in the acquired image, and processing transitions to step 314. However, when an image representing a scroll target region is present in the acquired image, the CPU 42 makes a negative determination at step 308, and, at step 310, identifies a rectangular region in the screen as the scroll target by executing the scroll region detection process 77. The CPU 42 then, at step 312, generates a stitched image representing the entire scroll region by executing the stitched image generation process 73. After completing generation of the stitched image, the CPU 42 then transitions to processing of step 314.
The processing of step 310 is capable of identifying a rectangular region in the screen as the scroll target from the scroll bar region in the acquired image.
In the first processing, the CPU 42 moves a handle 146 of a vertical scroll bar to the top end in a vertical scroll bar region 148. In the second processing, the CPU 42 acquires an image of the screen (screen shot). In the third processing, the CPU 42 depresses a lower button 150 in the vertical scroll bar region 148. In the fourth processing, the CPU 42 once again acquires an image of the screen (screen shot). In the fifth processing, the CPU 42 extracts all the changed points in the screen, and derives a changed region 152 containing all of the changed points and a non-changed region 158. In the sixth processing, the CPU 42 derives an x coordinate of a left edge 154 in the changed region 152, so as to determine the left edge of a rectangular region. In the seventh processing, the CPU 42 derives the right edge, top edge and bottom edge of the changed region 152. These can be derived from out of the changed region 152 derived by the fifth processing. In the eighth processing, the CPU 42 depresses a top button 156 in the vertical scroll bar region 148, and returns the handle 146 to its uppermost position, storing the position of the handle as the uppermost portion. The CPU 42 is able to estimate a scroll target region 160 (the scroll target region 144) by executing the processing of the first processing to the eighth processing.
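A sketch of the fifth to seventh processing steps follows, using two screenshots taken before and after depressing the lower button; Pillow and NumPy are used purely for illustration and are not prescribed by the embodiment.

```python
# Sketch: take two screenshots (before and after pressing the lower scroll
# button), extract all changed pixels, and derive the bounding box of the
# changed region, which estimates the scroll target region.
import numpy as np
from PIL import ImageGrab

before = np.asarray(ImageGrab.grab())   # screenshot before clicking the lower button
# ... the lower scroll button would be clicked here, e.g. via event emulation ...
after = np.asarray(ImageGrab.grab())    # screenshot after clicking

changed = np.any(before != after, axis=2)   # True wherever any color channel changed
ys, xs = np.nonzero(changed)
if xs.size:
    left, right = xs.min(), xs.max()        # sixth/seventh processing: edges of the
    top, bottom = ys.min(), ys.max()        # changed region = estimated scroll target region
    print("estimated scroll target region:", (left, top, right, bottom))
```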
The main data in a screen frequently has an upper portion with many changed points, which is why the handle 146 is moved to the top end in the first processing; however, there is no limitation to moving the handle 146 to the top end. The number of times depressing is performed in the third processing is also not limited to one time; precision is improved by repeating it plural times, as a tradeoff with processing speed. When depressing has been performed plural times in the third processing, the handle 146 is returned toward the uppermost position the same number of times in the eighth processing. Note that when the CPU 42 executes the first processing to the eighth processing, even if a region is internally a scroll target region, when an image of the same color and same density continues at the left end of the region, that left end is not treated as being part of the scroll target region. However, since it can be considered that a location where an image of the same color and density continues is not an operation target or an object to confirm, this does not impede operation.
Moreover, in the above, explanation has been given of a case in which the scroll target region is estimated when there is only the vertical scroll bar region 140 present as a scroll bar region in the screen, but similar estimation may be performed when there is only the horizontal scroll bar region 142 present.
Explanation next follows regarding processing of the stitched image generation process 73 executed by the CPU 42 at step 312. The processing that the CPU 42 executes at step 312 is broadly categorized into image acquisition processing by screen division, and image stitching processing to stitch plural acquired images together into a stitched image.
In the image acquisition processing by screen division, the CPU 42 determines the number of screen divisions for acquiring the image, and then acquires an image within a window each time the handle of the scroll bar is drag operated (screen shot acquisition).
At the first procedure, the CPU 42 moves the handle 146 to the top end (right end) and determines the coordinates of mouse down. The coordinates of mouse down may be anywhere within the region of the handle 146. The CPU 42 then, in the second procedure, determines an individual number n of stop points of the handle 146 on the scroll bar region 140 (the number of screen divisions in the vertical direction or the horizontal direction). The CPU 42 then, in the third procedure, derives a pixel number of a difference (separation distance Y2) between the size (distance Y1) of the handle 146 of the scroll bar, and the size of the vertical scroll bar region 148 that is a movable range, and divides by the individual number n. At the fourth procedure, the CPU 42 then derives the coordinates for each of the stop points in a mouse move by, for each stop point, adding integer times (0 times, 1 times, 2 times, and so on) the numerical value (the quotient) derived at the third procedure to the y coordinate (or the x coordinate) determined at the first procedure. Then, since the handle 146 moves by the mouse move difference value, at the fifth procedure, the CPU 42 takes the location for mouse down from the second time onwards as the position of the moved distance added to the coordinates of the first procedure.
Note that when determining the individual number n in the second procedure, in order for regions of the same image to overlap for stitching together in the image stitching processing described later, preferably at least a portion of each acquired image (screen shot) overlaps its neighbors so as to make a grid of the screen. It is possible to obtain a stitched image with high precision by making the individual number n a large value. However, since the time taken to acquire the images grows as the individual number n becomes larger, the individual number n is determined as a tradeoff. As an example of a determining method, there is a method of "dividing the size of the movable range by the size of the handle, and rounding up any decimal places". This determining method is preferably based on a scroll bar implementation in which the ratio of the length of the handle 146 to the length of the vertical scroll bar region 148, that is, the movable range, corresponds to the ratio of the size of the visible screen to the size of the entire scroll region.
A drag operation may be substituted by operation to click a button of the direction to be scrolled in regions of the movable range (the vertical scroll bar region 148) other than the handle 146. However, when using an operation of clicking the button in the direction to be scrolled, the scroll bar is re-detected every click operation in a similar manner to the processing of steps 306 and 308, to re-confirm the position of the handle 146.
If the individual number n is determined as value a for a vertical scroll bar, and the individual number n is determined as value b for a horizontal scroll bar, then the number of times to acquire an image (screen shot) is the multiple value (a·b) of value a and value b.
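A sketch of that division-count calculation follows; the variable names and example sizes are illustrative, and the stop-point formula follows the third and fourth procedures described above.

```python
import math

# Sketch of the division-count calculation. `movable_px` is the length of the
# scroll bar's movable range and `handle_px` the length of the handle; both
# values would come from scroll bar detection, and the names are illustrative.
def stop_points(movable_px, handle_px, y0):
    """Return the individual number n and the mouse-move coordinates of each
    stop point, starting from the mouse-down coordinate y0 at the top end."""
    n = math.ceil(movable_px / handle_px)            # "divide and round up any decimal places"
    step = (movable_px - handle_px) // n             # third procedure: difference divided by n
    return n, [y0 + i * step for i in range(n)]      # fourth procedure: integer multiples (0x, 1x, ...)

n_vert, ys = stop_points(movable_px=400, handle_px=90, y0=120)
print(n_vert, ys)   # 5 [120, 182, 244, 306, 368]; total screenshots = a * b when both bars exist
```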
Explanation next follows regarding the image stitching processing to connect the plural acquired images together into a stitched image. In the image stitching processing, in order to connect the plural acquired images together, the CPU 42 joins the images at the overlapping region with the maximum length match between the bitmap images. In order to stitch images together at the maximum-length overlapping region, the portion determined to be the overlapping region is removed from one or the other of the images, and the images are then joined.
The overlapping pixel number derived by the image stitching processing is preferably temporarily recorded to enable it to be employed in subsequently described processing (for example, the processing of step 332). The search for the stitching portion may be executed in parallel to the image acquisition processing. When it has not been possible to find an overlapping region in the stitching portion search, it is possible to increase the number of regions for connecting together in a stitched image by re-performing acquisition of images at slightly moved locations (screen shot re-capture).
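The overlap search could be sketched as follows for two vertically adjacent screenshots held as NumPy arrays; the library choice and the vertical-only case are assumptions for illustration.

```python
import numpy as np

# Sketch of the overlap search used when stitching two vertically adjacent
# screenshots: find the largest number of rows k such that the bottom k rows of
# the upper image equal the top k rows of the lower image, then join the images
# with that region removed from the lower one. Assumes arrays of equal width.
def stitch_vertical(upper, lower):
    max_k = min(len(upper), len(lower))
    for k in range(max_k, 0, -1):                      # try the longest match first
        if np.array_equal(upper[-k:], lower[:k]):
            return np.vstack([upper, lower[k:]]), k    # overlap removed from the lower image
    return np.vstack([upper, lower]), 0                # no overlapping region found

a = np.arange(12).reshape(6, 2)
b = np.vstack([a[3:], np.array([[90, 91], [92, 93]])])
stitched, overlap = stitch_vertical(a, b)
print(overlap)   # 3 overlapping rows were removed before joining
```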
The CPU 42 then, at step 314, searches for, and lists up, data representing a character string or an image specified in the current row of the test scenario 81, from a stitched image acquired at step 306, or from a stitched image of the entire scroll region stitched at step 312. Namely, at step 314, the character string specified in the current row of the test scenario 81 is extracted by performing character recognition processing using OCR, and all of the coordinates of character strings found by character recognition are listed up. At step 314, an image specified in the current row of the test scenario 81 is extracted by image matching processing, and all of the coordinates of extracted images are listed up. A degree of freedom may be applied to the character string, so as to pick up front portion matches, rear portion matches, and partial matches. The character strings or images listed up at step 314 are candidates for the target object.
Then, at step 315, the CPU 42 lists up data representing character strings described in the marker data, or data representing character strings described in the positional relationship data used during comparison determination. When an image is specified by the marker data, a search is made for the specified image. The character strings or images listed up at step 315 are candidates for the peripheral object. The processing at steps 314 and 315 may be implemented by the CPU 42 executing the OCR specified character string search process 72, which performs search processing by OCR, and the image search process 70, which provides functions for complete match search processing and fuzzy match search processing of images.
Then, at step 316, the CPU 42 determines whether or not a candidate for the target object has been discovered. The determination processing of step 316 may be determination by the CPU 42 determining whether or not a character string or image has been listed up by the processing of steps 314 and 315. When it has not been possible to list up a character string or image, the CPU 42 makes negative determination at step 316, and, at step 318, determines that GUI installation is unsuitable, including the possibility that there is a problem with the installation of the GUI. However, when it has been possible to list up a character string or image, the CPU 42 makes affirmative determination at step 316, and transitions processing to step 320.
Then, at step 320, the CPU 42 determines whether or not it is possible to uniquely identify a candidate for the target object. In the determination processing of step 320, a condition representing the positional relationship recorded in the test scenario 81 is applied to the candidates for the target object (step 314) and the candidates for the peripheral object (step 315), and processing is implemented to reduce the number of fitting candidates for the target object. When there is a candidate for the target object that fits the condition, the CPU 42 makes affirmative determination at step 320, and transitions processing to step 324. However, when no candidate for the target object fitting the condition is present, the CPU 42 makes a negative determination at step 320, and, at step 322, performs processing to notify the user with data representing that there is a high possibility of there being a problem in the test scenario 81, and then ends the current processing routine.
The condition applied to the objects may be a condition for comparison determination in place of a condition representing a positional relationship. However, it is difficult to generate a test scenario 81 to which only conditions for comparison determination are appended by executing the operation recording program 60. In order to execute more precise testing using the test scenario 81, a check point or the like may be added to the test scenario 81 manually by the user before automatic operation is executed.
The CPU 42 may detect the positional relationship between objects by executing the position relationship detection process 74. For example, determination may be performed as to whether or not there is a fit to a condition by employing coordinate values in a rectangular coordinate system related to the positional relationship as input, employing the results of a specific computation equation as output, and then determining whether or not the output values are a predetermined threshold value or lower. The data representing conditions appended to the objects may be embedded in program code, or recorded as condition data, such as in a database in an external recording device, and then read in.
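One possible form of such a fit check is sketched below; the computation equation (a Euclidean distance), the direction convention (taken here from the peripheral object toward the target object, as recorded in the scenario), and the threshold value are illustrative assumptions rather than values from the embodiment.

```python
# Sketch of a fit check of the kind described: coordinate values of a target
# candidate and of the peripheral object (marker) are the input, a simple
# computation produces an output value, and the candidate fits when the output
# is at or below a threshold.
def fits(target, marker, relation, threshold=200):
    """Return True when `target` lies in direction `relation` as seen from the
    peripheral object `marker` and within `threshold` pixels of it."""
    dx = target["cx"] - marker["cx"]
    dy = target["cy"] - marker["cy"]
    direction_ok = {
        "right": dx > 0, "left": dx < 0,
        "below": dy > 0, "above": dy < 0,   # screen y increases downward
    }.get(relation, True)
    distance = (dx * dx + dy * dy) ** 0.5   # output value compared against the threshold
    return direction_ok and distance <= threshold

marker = {"cx": 300, "cy": 120}
candidates = [{"cx": 340, "cy": 122}, {"cx": 900, "cy": 600}]
print([fits(c, marker, "right") for c in candidates])  # [True, False]
```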
When no degrees of freedom are given when listing up candidates for the target object at step 314, there is a high possibility that a single character string is the target object when there is only a single character string of candidates for the target object in the stitched image. Giving a degree of freedom refers to imparting a condition, such as having a front match, having a rear match, or having a partial match, to the character string. However, when there is not only a single character string of candidates for the target object present in the stitched image, the plural character strings are then reduced by using the conditions. The condition may be a combination of plural conditions, but preferably, reduction is eventually made to a single condition. In cases in which there is no reduction to a single candidate (such as when plural individual candidates remain, or when there are no candidates meeting the condition), preferably, at step 322, information such as a warning is notified to the user, the present routine is ended, and the test scenario 81 is manually changed by the user.
However, in cases in which candidates for the target object are listed up at step 314, and a search is performed while imparting a degree of freedom to the character string, such as having a front match, having a rear match, or having a partial match, it is difficult to uniquely identify a candidate as the target object. In cases in which it is difficult to uniquely identify a candidate as the target object, the processing of following step 324 onwards may be implemented for each of all of the listed up candidates for the target object.
The CPU 42 then, at step 324, determines whether or not the target object of the operation target recorded in the test scenario 81, namely the target object that is the operation target for the mouse 46M, matches the candidate for the target object identified at step 320. The CPU 42 makes negative determination at step 324 when, for example, the target object of the operation target is a pre-recorded icon or the like, and, at step 326, lists up all images (images such as icons) found by image searching for the target object of the operation target. The CPU 42 then, at step 328, determines whether or not it has been possible to find a candidate for the target object; when negative determination is made, the CPU 42 determines at step 318 that the GUI is unsuitable, and ends the current processing routine. However, when the CPU 42 makes affirmative determination at step 328, the CPU 42 transitions processing to step 330. At step 330, the CPU 42 uniquely identifies the candidate for the target object, and then transitions processing to step 332.
In the processing of steps 324 to 330, when the target object of the operation target is a pre-recorded icon, the CPU 42 lists up the candidates for the target object by executing the image search process 70 with a complete match search function, and a fuzzy search function employing a threshold value. When the target object of the operation target is a character string input region, the CPU 42 lists up a rectangular region by executing the rectangular region search process 78, since the length, height, and the like thereof are variable. The listing up of the rectangular region may be performed by detecting straight lines, such as by using known technology of a Hough transform, and then, after constricting the straight lines to vertical and horizontal straight lines, a method is employed, such as determining a rectangular region based on the shape of a closed region.
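A sketch of that rectangular-region listing using OpenCV follows; the file name, Canny parameters, and Hough thresholds are illustrative assumptions, and the embodiment only refers to the Hough transform as known technology.

```python
# Sketch: detect straight lines with a Hough transform, keep only near-vertical
# and near-horizontal ones, and use them as rectangle edge candidates for
# character string input regions of variable length and height.
import cv2
import numpy as np

img = cv2.imread("window.png", cv2.IMREAD_GRAYSCALE)   # hypothetical screenshot file
edges = cv2.Canny(img, 50, 150)
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=30, maxLineGap=3)

horizontal, vertical = [], []
for x1, y1, x2, y2 in (lines.reshape(-1, 4) if lines is not None else []):
    if abs(y1 - y2) <= 2:          # near-horizontal line
        horizontal.append((x1, y1, x2, y2))
    elif abs(x1 - x2) <= 2:        # near-vertical line
        vertical.append((x1, y1, x2, y2))
# Pairs of horizontal and vertical lines that close a region would then be
# combined into candidate rectangular regions.
```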
The CPU 42 then, at step 332, executes an emulation of the actual operation (detailed explanation follows) by executing the event emulation process 75. Then, the CPU 42 determines whether or not the test scenario 81 has progressed to the final row; affirmative determination is made at step 334 when the test scenario 81 is at the final row, and operation is ended with the determination at step 336 that there is no problem with installation of the GUI, and that the installation of the GUI is suitable. However, when the final row has not yet been reached in the test scenario 81, the CPU 42 makes negative determination at step 334, and returns to the processing of step 300.
Note that the CPU 42, at step 330, appends a priority ranking to the positional relationship to all the character strings and images found, and executes processing to reduce the target objects to those with high priority ranking.
Explanation next follows regarding the definition of the priority ranking considered at step 330. For a character string, the likely locations of placement relative to an object differ depending on the type of object. The priority ranking is therefore defined by classification for each of the types of object. Explanation next follows regarding an example of priority ranking definitions classified and defined for each of the types of object, as a first definition to a third definition.
Explanation follows regarding a priority ranking definition of check boxes (
Explanation follows regarding a priority ranking definition of textboxes (
Explanation follows regarding priority ranking definition of other objects as the third definition. Other objects are defined similarly to in the first definition and the second definition. For example, a radio button (
The first definition to the third definition have been explained as examples of definitions of priority ranking, but there is no limitation to the first definition to the third definition.
Explanation next follows regarding an example of condition definition. The condition definition example is an example of a coordinate condition definition. The condition definition example contains, for example, a relationship definition and a separation distance definition. The relationship definition contains, for example, a determination by direction, a determination as a table, a mesh coordinate determination, and another determination. Explanation of each follows.
An example of other determinations in relationship definitions is to compute the occupied surface area of a candidate character string, and to determine “relationship established” for the largest candidate character string for an object subject to determination. It should be noted that only the candidate character string that is furthest up, or the furthest to the left, may be appended with the condition “relationship established”.
Detailed explanation next follows regarding processing of step 332 illustrated in
First, the image width and image height of the scroll region are repeatedly subtracted from the image width and image height, in the stitched image, up to the coordinates of the position where the target object is displayed, and the number of repetitions is derived as the scroll number of times. The number of pixels of the overlap derived by the image stitching processing may also be subtracted (step 312).
More precisely, the CPU 42 substitutes values for variables (Xwhole, Xscroll, Xorder) at step 340 of
Then, at step 348, the CPU 42 substitutes values into variables (Ywhole, Yscroll, Yorder). The image height in the stitched image up to the coordinates of the position where the target object is displayed is substituted in the variable Ywhole. The image height of the scroll region is substituted in the variable Yscroll. The number of times of repeat calculation which is the scroll number of times on the image in the Y axis direction (initial value “0”) is substituted in the variable Yorder. Then the CPU 42 repeats processing (step 352) that subtracts the variable Yscroll from the variable Ywhole until the value of variable Ywhole is a value of variable Yscroll or lower (affirmative determination at step 350), to derive the scroll number of times in the Y axis direction (the value of the variable Yorder). The final value of the variable Ywhole in the subtraction processing corresponds to the Y coordinate where the target object is displayed in the scroll region, and so is substituted into a variable Ycoord (step 354).
The CPU 42 then, at step 356, converts the coordinates of the target object from the coordinates of the stitched image to screen coordinates at which the image is actually being displayed. Namely, the variables Xcoord, Ycoord which are the final values of the variables Xwhole, Ywhole from the subtraction processing are the X coordinate and Y coordinate of the target object in the scroll region, with the top left corner of the scroll region as the origin. This thereby enables the actual operation screen coordinates to be derived by adding the variables Xcoord, Ycoord to the screen coordinates of the top left corner of the scroll region.
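The repeated subtraction of steps 340 to 356 can be sketched as follows for the X axis (the Y axis is handled identically); the function name, the treatment of the stitching overlap, and the example values are assumptions for illustration.

```python
# Sketch of the repeated-subtraction logic: convert an X coordinate in the
# stitched image (Xwhole) into the scroll count (Xorder) and the on-screen X
# coordinate, given the scroll region width (Xscroll) and the screen X of the
# scroll region's top-left corner.
def to_screen_coords(x_whole, x_scroll, scroll_origin_x, overlap_px=0):
    x_order = 0
    while x_whole > x_scroll:                 # subtract until Xwhole <= Xscroll
        x_whole -= (x_scroll - overlap_px)    # one possible way to account for the stitching overlap
        x_order += 1
    x_coord = x_whole                         # X of the target inside the scroll region
    return x_order, scroll_origin_x + x_coord

# Example: the target is at x = 1350 in the stitched image, the scroll region is
# 600 px wide, and its top-left corner is at screen x = 40.
print(to_screen_coords(1350, 600, 40))        # (2, 190): scroll twice, then operate at x = 190
```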
The CPU 42 then, at step 358, moves the scroll region to the position where the target object of an operation target is displayed. Namely, the CPU 42 performs processing similar to the processing of step 312 illustrated in
Then, at step 360, the CPU 42 performs an operation at the screen coordinate derived at step 356. For example, by executing the event emulation process 75, the CPU 42 performs an operation using an API or the like coupled to a screen event generation of the OS 56. Namely, the CPU 42 applies the screen coordinates derived at step 356 as parameters in the API, and executes operation at the screen coordinates.
The operation with respect to the target object of the operation target depends on the type of the target object, such as a combination box (
As explained above, the present exemplary embodiment detects the target object of the operation target and the user operation when a GUI operation using application software is recorded in a scenario. Peripheral objects to the target object of the operation target, and the positional relationships on the screen between the peripheral objects and the target object, are also detected. A scenario is generated that includes operation data containing the detected user operation, the target object, the peripheral objects, and data representing the positional relationship between the target object and the peripheral objects, and the generated scenario is recorded in the test scenario 81. This enables the target object and the user operation to be identified using the test scenario 81, from the data representing the target object and the user operation, the data representing the peripheral objects, and the data representing the positional relationship between the target object and the peripheral objects. Thus, when operation in the application software is executed on a computer using the scenario, the position of the target object can still be identified even when, for example, the size of the characters has been made larger after the scenario was generated. The position of the target object can likewise be identified even when, for example, part of the application software has been modified, enabling the efficiency of work using scenarios to be improved.
In the present exemplary embodiment, image data representing objects may, for example, be stored in advance as an icon, enabling the target object to be easily identified by comparing an image region containing the operation position at the time of user operation against pre-recorded icons.
The present exemplary embodiment also employs character strings on images as objects, thereby enabling target objects to be easily identified. Executing image recognition processing, such as OCR, on an image acquired from a screen, enables a character string to be easily detected from a screen.
The present exemplary embodiment detects, and generates a scenario with, data representing directions from peripheral objects toward the target object as data representing positional relationships. The direction from a peripheral object toward the target object can thereby be identified from an image when an operation was executed by using the test scenario 81, enabling a target object to be readily identified.
The present exemplary embodiment enables plural peripheral objects to be associated with the target object. Data representing a priority ranking according to the separation distances between the target object and the peripheral objects is associated with the relationship between the target object and each of the plural peripheral objects. For example, data representing a priority ranking according to the separation distances between the target object and the peripheral objects is appended when plural peripheral objects are detected for a target object. This thereby enables the target object to be quickly identified in accordance with a priority ranking based on the separation distance relationships between the target object and the peripheral objects.
In place of the data representing the user operation, the data representing the target object and the peripheral object, and the data representing the positional relationship between the target object and the peripheral object detected using a computer, the present exemplary embodiment may acquire the data by reading-in data input by the input section 46. Namely, it is possible to generate a scenario by reading-in data input by the input section 46 as data representing the user operation, the target object, and the peripheral object, and as data representing the positional relationship between the target object and the peripheral object. Acquiring the data by reading in the data input by the input section 46 in this manner enables an increase in the degrees of freedom for data detection.
In the present exemplary embodiment, a scenario in which operation data is recorded is read in, then when the application software is executed, image data of the screen is acquired, the position of the target object is identified from the image data, and the user operation recorded in the scenario is executed. The target object can be identified before executing the user operation recorded in the scenario, enabling unsuitable operation states during scenario execution to be suppressed.
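In outline, the execution-side flow described above might look as follows; the scenario file format and the helpers capture_screen, locate_target, and perform are hypothetical stand-ins for the acquisition, identification, and execution processing, and are not part of the disclosure.

import json

def run_scenario(path):
    # capture_screen, locate_target, and perform are hypothetical helpers.
    with open(path, encoding="utf-8") as f:
        scenario = json.load(f)                       # read in the recorded scenario
    for entry in scenario:
        screen_image = capture_screen()               # acquire image data of the screen
        position = locate_target(screen_image, entry) # identify the target object first
        if position is None:
            raise RuntimeError("scenario is not suitable for this application")
        perform(entry["operation"], position)         # execute the recorded user operation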
In the present exemplary embodiment, when a portion of an image region of the operation target is displayed on the screen by a window, plural image data is acquired so as to make a grid of image regions of the operation target, and stitched image data is generated representing a stitched image of the plural acquired image data stitched together. This thereby enables the position of the target object to be identified from the stitched image data that is the grid of image regions of the operation target, even when the target object is present outside of the window view and not in a visible state.
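A minimal sketch of stitching vertically scrolled captures is shown below, assuming Pillow for image handling; acquiring the partial captures (scrolling the window and grabbing each view) is outside the sketch, and a real implementation would also need to account for overlap between captures.

from PIL import Image

def stitch_scroll_region(partial_captures):
    # Stack partial captures of the scroll region, from top to bottom,
    # into a single tall image covering the entire scroll region.
    width = partial_captures[0].width
    total_height = sum(img.height for img in partial_captures)
    stitched = Image.new("RGB", (width, total_height))
    y = 0
    for img in partial_captures:
        stitched.paste(img, (0, y))
        y += img.height
    return stitched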
The present exemplary embodiment is capable of determining that the read-in scenario is not suitable for execution on the application software in cases in which the position of the target object cannot be identified in the image of the user operable region. This enables early discovery of an unsuitable operation state during scenario execution, based on the scenario determination result, without continuing with processing to identify the target object, preventing the operation execution time during scenario execution from being unnecessarily extended.
Note that the processing of step 300 executed by the CPU 42 is an example corresponding to processing of a scenario read-in section in cases in which the computer 40 operates as the scenario execution device. The processing executed by the CPU 42 at steps 306 to 312 is an example corresponding to processing of an acquisition section in cases in which the computer 40 operates as the scenario execution device. Moreover, the processing executed by the CPU 42 at steps 314 to 330 is an example corresponding to processing of an identification section in cases in which the computer 40 operates as the scenario execution device. The processing executed by the CPU 42 at step 332 is an example corresponding to processing of an execution section in cases in which the computer 40 operates as the scenario execution device.
Second Exemplary Embodiment

Explanation next follows regarding a second exemplary embodiment. In the first exemplary embodiment, an example has been explained of a case in which the scenario device 10 is implemented by the computer 40, and user operation with a GUI is tested by the computer 40 as a local machine. In the second exemplary embodiment, an example will be explained of a case in which operation of the application software 32 executed on an external computer is tested with a GUI by the computer 40 as a local machine. The second exemplary embodiment is an application of the technology disclosed herein to a World Wide Web (WEB) application program as an example of the application software 32. The second exemplary embodiment is configured similarly to the first exemplary embodiment, and so the same reference numerals are appended to similar parts, and detailed explanation thereof is omitted.
The computer 40 functions as a WEB client, similarly to the first exemplary embodiment.
In the present exemplary embodiment, a chain of operations is executed in the computer 40 by execution of the WEB browser program 422. The WEB browser program 422 displays a result of communication with the external computer 404 functioning as a WEB server, and the test target is the WEB server program 414 executed by the external computer 404. A window region of a WEB browser is displayed on the display section 44 of the computer 40 by executing the WEB browser program 422, and data is displayed by the WEB server program 414 in the window region of the WEB browser.
As explained above, the second exemplary embodiment enables operation of the application software 32 executed by an external computer to be tested with the GUI by the computer 40. This enables the device on which a scenario is tested, and the execution target program, to be located externally to the computer 40. Thus, in addition to the advantageous effects of the first exemplary embodiment, the second exemplary embodiment also exhibits the advantageous effect of improving the degrees of freedom regarding the location of the target device and of the execution target program.
Third Exemplary Embodiment

Explanation next follows regarding a third exemplary embodiment. In the third exemplary embodiment, an example will be explained in which operation of the application software 32 executed on an external computer is tested by the computer 40 using a remote desktop (RDT) function. The third exemplary embodiment is configured similarly to the first exemplary embodiment and the second exemplary embodiment, and so the same reference numerals are appended to similar parts, and detailed explanation thereof is omitted.
In the present exemplary embodiment, the computer 40 employs the RDT function through the computer network 402, enabling remote operation of the external computer 404. This enables the application software 32 executed by the external computer 404 to be tested using the image displayed on the display section 44 of the computer 40.
Fourth Exemplary Embodiment

Explanation next follows regarding a fourth exemplary embodiment. In the fourth exemplary embodiment, an example will be explained of a case in which operation of the application software 32 executed by an external computer is tested with a GUI by the computer 40 acting as a local machine. The fourth exemplary embodiment is a case in which the technology disclosed herein is applied to a mobile terminal as an example of the external computer 404. The fourth exemplary embodiment is configured similarly to the first exemplary embodiment through the third exemplary embodiment, and so the same reference numerals are appended to similar parts, and detailed explanation thereof is omitted.
An Android terminal, namely a terminal installed with the software execution environment called Android, may be applied as an example of the external computer 404 functioning as a mobile terminal. Software called MonkeyRunner is employed to perform an Android Debug Bridge (adb) connection with the Android terminal, enabling screen capture and terminal operation. MonkeyRunner is an example of an API for operating an Android terminal or an Android emulator from outside of Android code.
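For illustration, a minimal MonkeyRunner script that captures the terminal screen and performs a touch operation over the adb connection might look as follows; the coordinates and file name are illustrative assumptions. The script is run with the monkeyrunner tool rather than a standard Python interpreter.

from com.android.monkeyrunner import MonkeyRunner, MonkeyDevice

device = MonkeyRunner.waitForConnection()          # adb connection to the Android terminal
snapshot = device.takeSnapshot()                   # screen capture taken on the terminal side
snapshot.writeToFile("terminal_screen.png", "png")
device.touch(240, 400, MonkeyDevice.DOWN_AND_UP)   # instruct a touch operation on the terminal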
In the present exemplary embodiment, the display image of the external computer 404 is acquired on the external computer 404 side, and operation instructions are transmitted to the external computer 404 side. Capturing a screen from a window region on the display section 44 of the computer 40 would also be possible; however, acquiring the screen capture on the external computer 404 side suppresses the processing load on the computer 40 side in comparison. Namely, processing to directly acquire an image of the screen (a screen capture image) is executed in the external computer 404, and the screen operation API transmits operation instructions directly to the external computer 404 side.
As explained above, the fourth exemplary embodiment enables operation of the application software 32 executed by the external computer 404 functioning as a mobile terminal to be tested with a GUI using the computer 40. This enables the processing load on the computer 40 to be reduced in cases in which scenario testing is performed on the external computer 404 functioning as a mobile terminal.
Explanation has been given above of examples in which the scenario device 10 is implemented by the computer 40. However, there is no limitation to such a configuration, and various improvements and modifications may be implemented within a range not departing from the spirit as explained above.
Although explanation has been given above of a mode in which a program is pre-stored (installed) in a storage section, there is no limitation thereto. For example, the program of the technology disclosed herein may be provided in a form recorded on a recording medium, such as a CD-ROM or a DVD-ROM.
An aspect enables easy identification of the operation target at the time of scenario reproduction in cases in which operation on application software is executed by a computer.
All examples and conditional language provided herein are intended for the pedagogical purposes of aiding the reader in understanding the invention and the concepts contributed by the inventor to further the art, and are not to be construed as limitations to such specifically recited examples and conditions, nor does the organization of such examples in the specification relate to a showing of the superiority and inferiority of the invention. Although one or more embodiments of the present invention have been described in detail, it should be understood that various changes, substitutions, and alterations could be made hereto without departing from the spirit and scope of the invention.
REFERENCE NUMERALS
- 10 scenario device
- 10a scenario generation device
- 10b scenario execution device
- 12 CPU
- 14 memory
- 18 input section
- 20 display section
- 22 operation recording section
- 24 automatic operation section
- 26 user interface application section
- 28 operation recording program
- 30 automatic operation program
- 32 application software
- 34 file
- 44 display section
- 46 input section
- 46k keyboard
- 46m mouse
- 49 recording medium
- 50 memory
- 52 storage section
- 58 GUI application program
- 60 operation recording program
- 68 automatic operation program
- 80 file
- 81 test scenario
Claims
1. A scenario execution device comprising:
- a processor, and
- a memory storing instructions, which when executed by the processor perform a procedure, the procedure including:
- reading-in a scenario including data representing a target object of an operation target displayed as an image on a screen by application software operating on a computer, data representing a user operation on the target object, data representing a peripheral object that is a detected image as an image positioned around the target object, and data representing a positional relationship on the screen of the peripheral object to the target object;
- detecting whether the scenario includes data representing a command to operate a mouse;
- in a case that the scenario includes data representing the command to operate the mouse, acquiring data representing an image of a region on the screen that is operable by the user operation;
- detecting whether the acquired image includes an image representing a scroll target region;
- in a case that the acquired image includes the image representing a scroll target region, generating image data representing an image of an entire scroll region by executing an image data generation process;
- identifying a position of the target object in the region operable by the user operation based on the data representing the target object, the data representing the peripheral object, and the data representing the positional relationship that are contained in the read-in scenario, and based on the data representing the image of the user operable region; and
- executing the operation on the target object at the identified position based on the data representing the user operation on the target object contained in the scenario.
2. The scenario execution device of claim 1, wherein the image data generation process includes
- acquiring, as the data representing an image of a region operable by the user operation, a plurality of data each representing an image of a display region on the screen at the time of execution of the application software; and
- generating data representing a stitched image of a user operation region operable by the user operation based on the acquired data representing the plurality of images.
3. The scenario execution device of claim 1, wherein at least a portion of the target object or a portion of the peripheral object is a character string.
4. The scenario execution device of claim 1, wherein the procedure further includes:
- reading-in a scenario in which data representing a priority ranking in accordance with a separation distance between the target object and the peripheral object is associated with each of the data representing the positional relationships between the target object and a plurality of peripheral objects; and
- identifying a position of the target object based on the data representing the priority ranking that was read-in.
5. The scenario execution device of claim 1, wherein the procedure further includes:
- determining that the scenario is not suitable for execution on the application software in a case in which a position of the target object cannot be identified.
6. The scenario execution device of claim 1, wherein the procedure further includes:
- identifying a candidate for the target object; and
- in a case that the identified candidate does not match the target object, identifying an image representing an icon as a candidate for the target object.
7. A non-transitory computer-readable recording medium having stored thereon a program for causing a computer to execute a scenario execution process, the process comprising:
- reading-in a scenario including data representing a target object of an operation target displayed as an image on a screen by application software operating on a computer, data representing a user operation on the target object, data representing a peripheral object that is a detected image as an image positioned around the target object, and data representing a positional relationship on the screen of the peripheral object to the target object;
- detecting whether the scenario includes data representing a command to operate a mouse;
- in a case that the scenario includes data representing the command to operate the mouse, acquiring data representing an image of a region on the screen that is operable by the user operation;
- detecting whether the acquired image includes an image representing a scroll target region;
- in a case that the acquired image includes the image representing a scroll target region, generating image data representing an image of an entire scroll region by executing an image data generation process;
- identifying a position of the target object in the region operable by the user operation based on the data representing the target object, the data representing the peripheral object, and the data representing the positional relationship that are contained in the read-in scenario, and based on the data representing the image of the user operable region; and
- executing the operation on the target object at the identified position based on the data representing the user operation on the target object contained in the scenario.
8. The non-transitory computer-readable recording medium of claim 7, wherein the image data generation process includes
- acquiring, as the data representing an image of a region operable by the user operation, a plurality of data each representing an image of a display region on the screen at the time of execution of the application software; and
- generating data representing a stitched image of a user operation region operable by the user operation based on the acquired data representing the plurality of images.
9. The non-transitory computer-readable recording medium of claim 7, wherein at least a portion of the target object or a portion of the peripheral object is a character string.
10. The non-transitory computer-readable recording medium of claim 7, wherein the scenario execution process further includes:
- reading-in a scenario in which data representing a priority ranking in accordance with a separation distance between the target object and the peripheral object is associated with each of the data representing the positional relationships between the target object and a plurality of peripheral objects; and
- identifying a position of the target object based on the data representing the priority ranking that was read-in.
11. The non-transitory computer-readable recording medium of claim 7, wherein the scenario execution process further includes:
- determining that the scenario is not suitable for execution on the application software in a case in which a position of the target object cannot be identified.
12. The non-transitory computer-readable recording medium of claim 7, wherein the scenario execution process further comprises:
- identifying a candidate for the target object; and
- in a case that the identified candidate does not match the target object, identifying an image representing an icon as a candidate for the target object.
13. A scenario execution method comprising:
- by a processor, reading-in a scenario including data representing a target object of an operation target displayed as an image on a screen by application software operating on a computer, data representing a user operation on the target object, data representing a peripheral object that is a detected image as an image positioned around the target object, and data representing a positional relationship on the screen of the peripheral object to the target object;
- detecting whether the scenario includes data representing a command to operate a mouse;
- in a case that the scenario includes data representing the command to operate the mouse, acquiring data representing an image of a region on the screen operable by the user operation;
- detecting whether the acquired image includes an image representing a scroll target region;
- in a case that the acquired image includes the image representing a scroll target region, generating image data representing an image of an entire scroll region by executing an image data generation process;
- identifying a position of the target object in the region operable by the user operation based on the data representing the target object, the data representing the peripheral object, and the data representing the positional relationship that are contained in the read-in scenario, and based on the data representing the image of the user operable region; and
- executing the operation on the target object at the identified position based on the data representing the user operation on the target object contained in the scenario.
14. The scenario execution method of claim 13, wherein the image data generation process includes
- acquiring, as the data representing an image of a region operable by the user operation, a plurality of data each representing an image of a display region on a screen at the time of application software execution; and
- generating data representing a stitched image of a user operation region operable by the user operation based on the acquired data representing the plurality of images.
15. The scenario execution method of claim 13, wherein at least a portion of the target object or a portion of the peripheral object is a character string.
16. The scenario execution method of claim 13, further comprising:
- reading-in a scenario in which data representing a priority ranking in accordance with a separation distance between the target object and the peripheral object is associated with each of the data representing the positional relationships between the target object and a plurality of peripheral objects; and
- identifying a position of the target object based on the data representing the priority ranking that was read-in.
17. The scenario execution method of claim 13, further comprising:
- determining that the scenario is not suitable for execution on the application software in a case in which a position of the target object cannot be identified.
18. The scenario execution method of claim 13, further comprising:
- identifying a candidate for the target object; and
- in a case that the identified candidate does not match the target object, identifying an image representing an icon as a candidate for the target object.
Type: Application
Filed: Jan 14, 2019
Publication Date: May 16, 2019
Applicant: FUJITSU LIMITED (Kawasaki-shi)
Inventors: Shingo Satou (Kawasaki), Toshihiro Morimoto (Atsugi), Yukitoshi Ishihara (Machida)
Application Number: 16/246,934