EMBEDDING USER SPECIFIC INFORMATION INTO USER SPECIFIC INFORMATION INPUT AREA OF DOCUMENT
An example image forming apparatus includes a user interface device, an image forming job operator, a processor, and a memory. The processor executes instructions to display a preview screen of a document through the user interface device, perform an embedding process of embedding user-specific information into a plurality of user-specific information input areas included in the document, display state information of the plurality of user-specific information input areas on the preview screen in synchronization with progress of the embedding process, and, when the embedding process is completed, perform an image forming job with respect to the document in which the user-specific information is embedded, through the image forming job operator.
A user interface device included in an image forming apparatus such as a printer, a photocopier, a fax machine, a multifunctional machine, or the like has been provided in various forms to enhance user convenience. User interface devices of image forming apparatuses are being developed to provide a convenient and user-friendly user interface (UI) or user experience (UX) to the user.
Hereinafter, various examples will be described with reference to the drawings. Like reference numerals in the drawings denote like elements, and thus a repetitive description may be omitted.
Referring to
The image forming apparatus 100 may be connected to an external device and may transmit or receive information to/from the external device. The external device may be a computer, a cloud-based device, a server, another image forming apparatus, a mobile device such as a smartphone, or the like.
A user of the image forming apparatus 100 may access the image forming apparatus 100 and execute functions of the image forming apparatus 100. The user may input user account information to the image forming apparatus 100 to log in and use the image forming apparatus 100. The user may operate the image forming apparatus 100 by editing a document in a user interface screen provided by the image forming apparatus 100 or by setting options related to an image forming job.
When a document includes a contract, an agreement between users, or the like, an input of user-specific information may be required in various places in the document. In this case, the user may want to input the user-specific information in the various places in an easy and convenient manner such that any place where the user-specific information is required to be input is not missed. Hereinafter, example operations of collectively inputting user-specific information in various places in a document by using a user interface screen provided by an image forming apparatus will be described.
Referring to
The processor 110 may control an operation of the image forming apparatus 100 and may include at least one processing unit such as a central processing unit (CPU) or the like. The processor 110 may control other components included in the image forming apparatus 100 to perform an operation corresponding to a user input received through the user interface device 120. The processor 110 may include a specialized processing unit corresponding to each function of the image forming apparatus 100, a single processing unit for processing all functions of the image forming apparatus 100, or a combination thereof. The processor 110 may execute a program stored in the memory 140, read data or files stored in the memory 140, or store new data in the memory 140.
The user interface device 120 may include an input unit and an output unit. In an example, the input unit may receive, from the user, an input for performing an image forming job and the output unit may display a result of performing the image forming job or information of a state of the image forming apparatus 100. For example, the user interface device 120 may be in the form of a touch screen including an operation panel to receive a user input and a display panel to display a screen.
The communication device 130 may perform wired or wireless communication with another device or a network. To this end, the communication device 130 may include a communication module (e.g., a transceiver) supporting at least one of various wired or wireless communication methods. The wireless communication may include, for example, wireless fidelity (Wi-Fi), Wi-Fi direct, Bluetooth, ultra wideband (UWB), near field communication (NFC), or the like. The wired communication may include, for example, Ethernet, universal serial bus (USB), high definition multimedia interface (HDMI), or the like.
The communication device 130 may be connected to an external device located outside the image forming apparatus 100 to transmit and receive signals or data. The communication device 130 may transmit signals or data received from the external device to the processor 110 or transmit signals or data generated by the processor 110 to the external device.
The memory 140 may store instructions executable by the processor 110. The memory 140 may store programs and files like applications corresponding to respective functions of the image forming apparatus 100. The memory 140 may store an operating system.
The image forming job operator 150 may perform an image forming job such as printing, copying, scanning, or faxing. The image forming job operator 150 may perform an image forming job according to a command received by a user input through the user interface device 120. The image forming job operator 150 may form an image on a recording medium by any of various printing methods such as an electrophotographic method, an inkjet method, a thermal transfer method, a direct thermal method, or the like, according to a printing function. The image forming job operator 150 may read a recorded image by irradiating light onto an original and receiving reflected light according to a scanning function. The image forming job operator 150 may scan an image and transmit a scan file to a destination or receive a file from an external source and print the received file, according to a faxing function.
The image forming apparatus 100 may use the user interface device 120 to communicate with a user such as by receiving a request from the user or providing information to the user. The image forming apparatus 100 may also communicate with the user by use of an external device such as a user terminal through the communication device 130.
The processor 110 may execute instructions stored in the memory 140 to display a preview screen of a document through the user interface device 120 and perform an embedding process of embedding user-specific information into a plurality of user-specific information input areas included in the document. The processor 110 may execute instructions stored in the memory 140 to display state information of the plurality of user-specific information input areas on the preview screen in synchronization with a progress of the embedding process. The state information of the plurality of user-specific information input areas may be adaptively changed in synchronization with selection of the user-specific information input areas or an embedding of the user-specific information. The processor 110 may execute instructions stored in the memory 140 to perform an image forming job with respect to the document in which the user-specific information is embedded through the image forming job operator 150 when the embedding process is completed. The user-specific information may be stored for each user in the memory 140, such as during performance of the embedding process, or may be deleted from the memory 140 when the image forming job is finished.
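As a rough illustration of how the per-user information and the input areas described above might be tracked, a minimal sketch follows; the class, field, and method names are assumptions made for illustration, not the apparatus's actual implementation.

```python
# Minimal sketch, assuming hypothetical names; it models per-user storage of
# user-specific information, the input areas of a document, collective
# embedding into the selected areas, and deletion of the stored information.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class InputArea:
    page: int
    x: int
    y: int
    width: int
    height: int
    selected: bool = False
    embedded: bool = False


@dataclass
class EmbeddingSession:
    areas: List[InputArea] = field(default_factory=list)
    user_info: Dict[str, bytes] = field(default_factory=dict)  # keyed per user

    def store_user_info(self, user_id: str, info: bytes) -> None:
        """Keep the received user-specific information for each user during
        the embedding process."""
        self.user_info[user_id] = info

    def embed_all(self, user_id: str) -> None:
        """Collectively mark every selected, not-yet-embedded input area as
        embedded; the actual rendering of the information is device-specific."""
        if user_id in self.user_info:
            for area in self.areas:
                if area.selected and not area.embedded:
                    area.embedded = True

    def clear_user_info(self) -> None:
        """Delete the stored user-specific information, for example once the
        image forming job has finished."""
        self.user_info.clear()
```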
The processor 110 may execute instructions stored in the memory 140 to receive an input of user-specific information from the user and store the input user-specific information. The processor 110 may execute instructions stored in the memory 140 to receive an input selecting the plurality of user-specific information input areas to be input with the user-specific information in the document displayed on the user interface device 120 and embed the stored user-specific information into the plurality of selected user-specific information input areas according to a request of the user.
The processor 110 may execute instructions stored in the memory 140 to receive an input selecting the plurality of user-specific information input areas to be input with the user-specific information in the document displayed on the user interface device 120. The processor 110 may execute instructions stored in the memory 140 to receive an input of user-specific information from the user and embed the received user-specific information into the plurality of selected user-specific information input areas according to a request of the user.
The processor 110 may execute instructions stored in the memory 140 to identify the plurality of user-specific information input areas in the document displayed on the user interface device 120 and receive an input selecting at least one user-specific information input area to be input with the user-specific information among the plurality of identified user-specific information input areas. The processor 110 may execute instructions stored in the memory 140 to receive an input of user-specific information from the user and embed the received user-specific information into at least one of the plurality of selected user-specific information input areas according to a request of the user.
The processor 110 may execute instructions stored in the memory 140 to edit the user-specific information embedded into the plurality of user-specific information input areas on the preview screen.
An example operation of an image forming apparatus will now be described. The above descriptions of the image forming apparatus 100 apply equally to the operation method of the image forming apparatus, even where they are not repeated hereinafter, and the descriptions of the operation method likewise apply to the image forming apparatus 100.
Referring to
In operation 320, the image forming apparatus 100 may perform an embedding process of embedding user-specific information into a plurality of user-specific information input areas included in a document. The user-specific information may include information having different contents for each user, such as a signature (e.g., a signed name, a graphic, etc.), a seal, a social registration number, or the like. In other examples, the user-specific information may include user personal information such as an email address, a phone number (e.g., a mobile phone number), or the like. The image forming apparatus 100 may designate the plurality of user-specific information input areas included in the document through selection by the user or automatic identification. As an example, optical character recognition (OCR) or intelligent character recognition (ICR) technology may be used to automatically identify the plurality of user-specific information input areas. The image forming apparatus 100 may receive an input of user-specific information through the user interface device 120 or an external apparatus connected to the image forming apparatus 100. The user-specific information may be stored for each user in the memory 140 of the image forming apparatus 100 during the embedding process. According to a request of the user, the received user-specific information may be collectively embedded into the plurality of designated user-specific information input areas.
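For the automatic identification mentioned above, one simple heuristic is sketched below. It assumes that an OCR/ICR step has already produced recognized words with bounding boxes; the keyword list, the field names, and the offset of the blank space next to each label are illustrative assumptions rather than the actual recognition logic.

```python
# Hedged sketch: locate candidate input areas next to signature-like labels.
# The OcrWord/Box types and SIGNATURE_KEYWORDS are assumptions for illustration.
from typing import List, NamedTuple


class Box(NamedTuple):
    page: int
    x: int
    y: int
    width: int
    height: int


class OcrWord(NamedTuple):
    text: str
    box: Box


SIGNATURE_KEYWORDS = ("signature", "sign", "seal")  # assumed trigger words


def find_input_areas(words: List[OcrWord]) -> List[Box]:
    """Designate candidate user-specific information input areas by finding
    OCR words that look like signature labels and reserving the blank space
    to the right of each label (a simple heuristic, not the patented method)."""
    areas = []
    for word in words:
        if word.text.strip(":").lower() in SIGNATURE_KEYWORDS:
            b = word.box
            # Assume the fill-in area sits immediately to the right of the label.
            areas.append(Box(b.page, b.x + b.width, b.y, b.width * 3, b.height))
    return areas
```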
The embedding process may further include an operation of editing the user-specific information embedded into the plurality of user-specific information input areas on the preview screen. For example, the user may perform editing of the user-specific information by actions such as deleting, color changing, position moving, size adjusting, or the like of the user-specific information in the image forming apparatus 100.
When embedding of the user-specific information of the same user is further required, the user may further designate user-specific information input areas through the image forming apparatus 100 to embed the user-specific information. On the other hand, when embedding of the user-specific information of another user is required, an embedding process including an input of that user's user-specific information, a designation of user-specific information input areas, and an embedding of the user-specific information into the designated areas may be performed.
In operation 330, the image forming apparatus 100 may display state information of the plurality of user-specific information input areas on the preview screen in synchronization with a progress of the embedding process. The state information of the plurality of user-specific information input areas may be adaptively changed in synchronization with selection of the user-specific information input areas or an embedding of the user-specific information. The state information of the plurality of user-specific information input areas may be at least one of a total number of the plurality of user-specific information input areas, a number of the user-specific information input areas selected by the user, a number of user-specific information input areas into which the user-specific information is embedded, a number of user-specific information input areas into which the user-specific information is not embedded, or the like.
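A minimal sketch of how these counters might be recomputed each time an input area is selected, deselected, or embedded is shown below; the field names, and the interpretation of the "not embedded" count as selected-but-pending areas, are assumptions made for the sketch.

```python
# Minimal sketch of the state information shown on the preview screen; names
# and the "not embedded" interpretation are assumptions, not the actual device UI.
from dataclasses import dataclass
from typing import List


@dataclass
class AreaState:
    selected: bool
    embedded: bool


@dataclass
class StateInfo:
    total: int
    selected: int
    embedded: int
    not_embedded: int


def compute_state(areas: List[AreaState]) -> StateInfo:
    """Recompute the counters so the preview screen stays in sync with the
    progress of the embedding process."""
    selected = sum(a.selected for a in areas)
    embedded = sum(a.embedded for a in areas)
    return StateInfo(
        total=len(areas),
        selected=selected,
        embedded=embedded,
        not_embedded=selected - embedded,  # areas still awaiting the information
    )
```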
In operation 340, the image forming apparatus 100 may perform an image forming job with respect to a document into which the user-specific information is embedded when the embedding process is completed. The image forming apparatus 100 may determine that the embedding process is completed when the user-specific information input areas to be embedded with the user-specific information no longer exist or there is a user's termination request with respect to the embedding process. For example, in a case where the user-specific information is applied to all of the user-specific information input areas or the user-specific information is applied to some of the user-specific information input areas and the remaining user-specific information input areas are deselected, or in a case where all of the user-specific information input areas are deselected, the image forming apparatus 100 may determine that no user-specific information input area omitting the user-specific information exists (e.g., that all user-specific information input areas have been considered).
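The completion determination described above could be sketched as follows, assuming an area no longer requires input once it is embedded or deselected; the function and parameter names are hypothetical.

```python
# Hedged sketch of the completion check; an area counts as resolved once it is
# embedded or deselected, matching the cases described in the text above.
from typing import List, NamedTuple


class Area(NamedTuple):
    selected: bool
    embedded: bool


def embedding_completed(areas: List[Area], termination_requested: bool) -> bool:
    """Complete when the user requests termination, or when no selected area
    still lacks the user-specific information (covering the all-embedded,
    partially-embedded-with-rest-deselected, and all-deselected cases)."""
    if termination_requested:
        return True
    return not any(area.selected and not area.embedded for area in areas)
```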
When the embedding process is completed, the image forming apparatus 100 may store or print the document into which the user-specific information is embedded. As an example, the image forming apparatus 100 may store the document in the memory 140 or in an external storage device. Also, the image forming apparatus 100 may transmit the document to an external apparatus through a service such as e-mail, File Transfer Protocol (FTP), Server Message Block (SMB), or the like. When the user attempts to perform the image forming job before the embedding process is completed, the image forming apparatus 100 may display, to the user, the user-specific information input areas into which the user-specific information is not embedded and ask again whether to perform the image forming job. When the user's approval is received, the image forming apparatus 100 may perform the image forming job with respect to the document in which the user-specific information is omitted. When the user's approval is not received, the image forming apparatus 100 may display the user-specific information input areas in which the user-specific information is not embedded or otherwise prompt the user to embed the user-specific information.
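The confirmation flow for a job requested before the embedding process finishes might look roughly like the sketch below; the three callbacks are hypothetical hooks standing in for device-specific behavior such as displaying the pending areas or running the print job.

```python
# Sketch of the pre-completion confirmation flow described above; the callback
# parameters are hypothetical stand-ins for device-specific UI and job logic.
from typing import Callable, List


def try_image_forming_job(
    pending_area_ids: List[int],
    ask_user_approval: Callable[[List[int]], bool],
    perform_job: Callable[[], None],
    highlight_pending_areas: Callable[[List[int]], None],
) -> None:
    """If areas without embedded user-specific information remain, show them
    and ask whether to proceed; run the image forming job otherwise or on
    the user's approval."""
    if pending_area_ids and not ask_user_approval(pending_area_ids):
        # No approval: point the user back to the areas still needing input.
        highlight_pending_areas(pending_area_ids)
        return
    perform_job()
```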
In an example, the user-specific information may be deleted from the memory 140 in the image forming apparatus 100 when the image forming job is completed.
Referring to
In operation 420, the image forming apparatus 100 may store the received user-specific information for each user. The user-specific information may be stored for each user as the user-specific information is being input, when the input of the user-specific information is completed, or the like.
In operation 430, the image forming apparatus 100 may receive an input selecting a plurality of user-specific information input areas into which the user-specific information may be input in the displayed document. The image forming apparatus 100 may designate the plurality of user-specific information input areas included in the document according to the selection by the user. For example, the user may select the user-specific information input areas, or release the selected user-specific information input areas, by using a management menu such as a floating menu, a sidebar menu, or the like on the preview screen displayed in the user interface device 120.
The image forming apparatus 100 may determine whether portions identical to the user-specific information input areas designated according to the selection by the user exist in other areas of the document. When such identical portions exist in other areas of the document, the image forming apparatus 100 may display the corresponding areas so that they are identifiable, or may designate the corresponding areas together.
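This matching step might be sketched as below, under the assumption that each candidate area carries the label text recognized next to it; the plain case-insensitive comparison is only an illustrative stand-in for the actual matching.

```python
# Illustrative sketch: propose other areas whose nearby label text matches the
# label of an area the user has already selected. Field names are assumptions.
from typing import List, NamedTuple


class LabeledArea(NamedTuple):
    area_id: int
    label: str       # text recognized next to the input area, e.g. "Signature:"
    selected: bool


def matching_areas(areas: List[LabeledArea]) -> List[int]:
    """Return ids of unselected areas whose label matches that of a selected
    area, so they can be displayed identifiably or designated together."""
    selected_labels = {a.label.strip().lower() for a in areas if a.selected}
    return [
        a.area_id
        for a in areas
        if not a.selected and a.label.strip().lower() in selected_labels
    ]
```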
In operation 440, the image forming apparatus 100 may embed the user-specific information stored in the image forming apparatus 100 into the plurality of user-specific information input areas selected according to a request of the user. The user may confirm the plurality of user-specific information input areas selected on the preview screen provided in the user interface device 120 and choose to apply the user-specific information to be embedded, such that the user-specific information is collectively applied to the plurality of user-specific information input areas. In an example, the selection to apply the user-specific information to be embedded may be made by pressing a button, touching an icon or menu item on a touch screen, or the like.
Referring to
The image forming apparatus 100 may determine whether portions identical to the user-specific information input areas designated according to the selection by the user exist in other areas of the document, and may display the corresponding areas so that they are identifiable or designate the corresponding areas together.
In operation 520, the image forming apparatus 100 may receive an input of user-specific information from the user. The image forming apparatus 100 may provide a user interface screen through which the user-specific information may be input.
In operation 530, the image forming apparatus 100 may embed the received user-specific information into the plurality of user-specific information input areas selected according to the request of the user. For example, the user may confirm the user-specific information to embed into the plurality of user-specific information input areas selected on the preview screen provided in the user interface device 120 and select to apply the user-specific information, such that the received user-specific information is collectively applied to the plurality of user-specific information input areas. In an example, the selection to apply the user-specific information may be made by pressing a button, touching an icon or menu item on a touch screen, or the like.
Referring to
When the input of a signature for each user is completed and an “Apply” button is pressed while a user's signature is selected, the image forming apparatus 100 may collectively apply the corresponding signature to a plurality of signature input areas included in the document. When the plurality of signature input areas have different ranges, the signature may be adjusted to the range of each signature input area and then applied. In addition, when only the corresponding locations are selected without specifying the ranges of the plurality of signature input areas, the signatures may be collectively applied with a fixed size.
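One possible sizing rule for this behavior is sketched below; the fixed default size and the aspect-ratio-preserving fit are assumptions about one reasonable implementation, not the apparatus's actual rendering logic.

```python
# Hedged sketch of the sizing rule: fit the signature into each area's range
# when a range is given, otherwise fall back to an assumed fixed default size.
from typing import Optional, Tuple

DEFAULT_SIZE = (180, 60)  # assumed fixed size (pixels) when only a location is given


def placed_size(
    signature_size: Tuple[int, int],
    area_range: Optional[Tuple[int, int]],
) -> Tuple[int, int]:
    """Scale the signature to fit the input area's range while preserving its
    aspect ratio; use the fixed default size when no range was specified."""
    if area_range is None:
        return DEFAULT_SIZE
    sig_w, sig_h = signature_size
    box_w, box_h = area_range
    scale = min(box_w / sig_w, box_h / sig_h)
    return int(sig_w * scale), int(sig_h * scale)
```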
Referring to
To this end, as shown in
The image forming apparatus 100 may further receive an input selecting other signature input areas, and may further receive an input selecting a plurality of signature input areas when a “Select” button is pressed. In addition, when a “Delete” button is pressed after any signature input area has been selected through the user interface screen through which the input selecting the signature input areas is received, the image forming apparatus 100 may release the selection of the corresponding signature input area or delete the signature input in the corresponding signature input area. The image forming apparatus 100 may release the selection of all the selected signature input areas when a “Delete All” button is selected. When the selection of the signature input areas is completed, the image forming apparatus 100 may provide the user interface screen to manage signatures such that the signatures may be collectively applied to the selected signature input areas when a “Sign” button is selected.
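The button handling described above might be dispatched roughly as in the sketch below; the button labels follow the description, while the handler names and bodies are placeholders for the device-specific UI code.

```python
# Sketch of the sidebar menu handling described above; handler names are
# assumptions, and the bodies stand in for device-specific UI behavior.
from dataclasses import dataclass, field
from typing import List, Set


@dataclass
class SignatureMenu:
    selected_ids: Set[int] = field(default_factory=set)
    embedded_ids: Set[int] = field(default_factory=set)

    def on_select(self, area_ids: List[int]) -> None:
        """'Select': add further signature input areas to the selection."""
        self.selected_ids.update(area_ids)

    def on_delete(self, area_id: int) -> None:
        """'Delete': release the selection of an area, or remove a signature
        already input into it."""
        self.embedded_ids.discard(area_id)
        self.selected_ids.discard(area_id)

    def on_delete_all(self) -> None:
        """'Delete All': release the selection of every selected area."""
        self.selected_ids.clear()

    def on_sign(self) -> None:
        """'Sign': open the signature management screen so a stored signature
        can be collectively applied to the selected areas (device-specific)."""
        pass
```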
Referring to
A case where the “Select” button, “Delete” button, “Delete All” button and “Sign” button included in the form of a sidebar menu in
Referring to
As illustrated in
Referring to
In operation 1020, the image forming apparatus 100 may receive an input selecting at least one user-specific information input area to be input with user-specific information among the plurality of identified user-specific information input areas. The image forming apparatus 100 may designate the plurality of user-specific information input areas included in the document according to the selection by the user.
In operation 1030, the image forming apparatus 100 may receive an input of user-specific information from the user. The image forming apparatus 100 may provide a user interface screen capable of receiving an input of the user-specific information.
In operation 1040, the image forming apparatus 100 may embed the received user-specific information into the at least one selected user-specific information input area according to the request of the user. For example, the user may confirm the user-specific information to be embedded into the plurality of user-specific information input areas selected on the preview screen provided in the user interface device 120 and may select application of the user-specific information, such that the received user-specific information is collectively applied to the plurality of user-specific information input areas. In an example, the selection to apply the user-specific information to be embedded may be made by pressing a button, touching an icon or menu item on a touch screen, or the like.
Referring to
As illustrated in
A case where the “Select” button, “Delete” button, “Delete All” button and “Sign” button included in the form of a sidebar menu in
The above-mentioned examples of operating the image forming apparatus 100 may be implemented in the form of a non-transitory computer-readable storage medium storing instructions or data executable by a computer or a processor. The examples may be written as computer programs and may be implemented in general-use digital computers that execute the programs by using a non-transitory computer-readable storage medium. Examples of the non-transitory computer-readable storage medium include read-only memory (ROM), random-access memory (RAM), flash memory, CD-ROMs, CD-Rs, CD+Rs, CD-RWs, CD+RWs, DVD-ROMs, DVD-Rs, DVD+Rs, DVD-RWs, DVD+RWs, DVD-RAMs, BD-ROMs, BD-Rs, BD-R LTHs, BD-REs, magnetic tapes, floppy disks, magneto-optical data storage devices, optical data storage devices, hard disks, solid-state drives (SSDs), and any other device capable of storing instructions or software, associated data, data files, and data structures, and of providing the instructions or software, associated data, data files, and data structures to a processor or a computer such that the processor or the computer may execute the instructions.
Claims
1. An image forming apparatus comprising:
- a user interface device;
- an image forming job operator;
- a processor; and
- a memory storing instructions executable by the processor,
- wherein the processor executes the instructions to: display a preview screen of a document through the user interface device, perform an embedding process of embedding user-specific information into a plurality of user-specific information input areas included in the document, display state information of the plurality of user-specific information input areas on the preview screen in synchronization with a progress of the embedding process, and when the embedding process is completed, perform an image forming job with respect to the document in which the user-specific information is embedded, through the image forming job operator.
2. The image forming apparatus of claim 1, wherein the state information of the plurality of user-specific information input areas is adaptively changed in synchronization with selection of the user-specific information input areas or an embedding of the user-specific information.
3. The image forming apparatus of claim 1, wherein the processor executes the instructions to:
- receive, from a user, an input of the user-specific information,
- store the received user-specific information,
- receive an input selecting the plurality of user-specific information input areas into which the user-specific information is to be input, and
- embed the stored user-specific information into the plurality of selected user-specific information input areas according to a request of the user.
4. The image forming apparatus of claim 1, wherein the processor executes the instructions to:
- receive an input selecting the plurality of user-specific information input areas into which the user-specific information is to be input, and
- embed the received user-specific information into the plurality of selected user-specific information input areas according to a request of a user when an input of the user-specific information is received from the user.
5. The image forming apparatus of claim 1, wherein the processor executes the instructions to:
- identify the plurality of user-specific information input areas included in the document,
- receive an input selecting at least one user-specific information input area into which the user-specific information is to be input from among the plurality of identified user-specific information input areas,
- receive an input of the user-specific information from a user, and
- embed the received user-specific information into the at least one selected user-specific information input area according to a request of the user.
6. The image forming apparatus of claim 1, wherein the processor executes the instructions to edit, on the preview screen, the user-specific information embedded into the plurality of user-specific information input areas.
7. The image forming apparatus of claim 1, wherein the user-specific information is stored for each user in the memory during the embedding process, and, when the image forming job is completed, the user-specific information is deleted from the memory.
8. An operation method of an image forming apparatus, the operation method comprising:
- displaying a preview screen of a document in a user interface device of an image forming apparatus;
- performing an embedding process of embedding user-specific information into a plurality of user-specific information input areas included in the document;
- displaying state information of the plurality of user-specific information input areas on the preview screen in synchronization with a progress of the embedding process; and
- when the embedding process is completed, performing an image forming job with respect to the document in which the user-specific information is input.
9. The operation method of claim 8, wherein the state information of the plurality of user-specific information input areas is adaptively changed in synchronization with selection of the user-specific information input areas or an embedding of the user-specific information.
10. The operation method of claim 8, wherein the performing of the embedding process comprises:
- receiving an input of the user-specific information from the user;
- storing the received user-specific information for each user;
- receiving an input selecting the plurality of user-specific information input areas into which the user-specific information is to be input in the displayed document; and
- embedding the stored user-specific information into the plurality of selected user-specific information input areas according to a request of the user.
11. The operation method of claim 8, wherein the performing of the embedding process comprises:
- receiving an input selecting the plurality of user-specific information input areas into which the user-specific information is to be input;
- receiving an input of the user-specific information from a user; and
- embedding the received user-specific information into the plurality of selected user-specific information input areas according to a request of the user.
12. The operation method of claim 8, wherein the performing of the embedding process comprises:
- identifying the plurality of user-specific information input areas included in the document;
- receiving an input selecting at least one user-specific information input area into which the user-specific information is to be input from among the plurality of identified user-specific information input areas;
- receiving an input of the user-specific information from a user; and
- embedding the received user-specific information into the at least one selected user-specific information input area according to a request of the user.
13. The operation method of claim 8, wherein the performing of the embedding process further comprises editing, on the preview screen, the user-specific information embedded into the plurality of user-specific information input areas.
14. The operation method of claim 8, further comprising:
- storing the user-specific information for each user in a memory during the embedding process; and
- when the image forming job is completed, deleting the user-specific information from the memory.
15. A non-transitory computer-readable storage medium storing instructions executable by a processor, the non-transitory computer-readable storage medium comprising:
- instructions to display a preview screen of a document in a user interface device of an image forming apparatus;
- instructions to perform an embedding process of embedding user-specific information into a plurality of user-specific information input areas included in the document;
- instructions to display state information of the plurality of user-specific information input areas on the preview screen in synchronization with a progress of the embedding process; and
- instructions to perform an image forming job with respect to the document in which the user-specific information is input, when the embedding process is completed.
Type: Application
Filed: Oct 18, 2019
Publication Date: Dec 2, 2021
Patent Grant number: 11523024
Inventors: Incheon PARK (Seongnam-si), Daehyun KIM (Seongnam-si), Jaein LEE (Seongnam-si)
Application Number: 17/281,643