
OCR Shop XTR/API Sample Usage

Two sample programs are available to demonstrate how to use the OCR Shop XTR™/API:

vvxtrSample.cc demonstrates basic usage of the OCR Shop XTR/API. vvxtrSample2.cc demonstrates how to load image data from memory directly into the OCR engine with the function vvEngAPI::vvReadImageData(const struct vvxtrImage * img).

The instructions here and in Basics of Usage refer specifically to vvxtrSample.cc, although vvxtrSample2.cc may be compiled and used in much the same manner. vvxtrSample2.cc is not distributed with the API, but it may be downloaded from this documentation and run with the same supporting files that are included with vvxtrSample.cc in the OCR Shop XTR/API distribution.
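
The sketch below is not part of the distribution; it only illustrates, under stated assumptions, the kind of preparation vvxtrSample2.cc performs before calling vvEngAPI::vvReadImageData: the image file is read into a memory buffer, and a vvxtrImage describing that buffer is then handed to the engine. The layout of struct vvxtrImage is defined by the API headers and is not reproduced here, so the API call itself appears only in a comment; see vvxtrSample2.cc for the real, complete sequence.

    // Sketch only: read an image file into memory, the form of input that
    // vvxtrSample2.cc passes to vvEngAPI::vvReadImageData().
    #include <cstdio>
    #include <vector>

    static std::vector<unsigned char> loadFile(const char *path)
    {
        std::vector<unsigned char> buf;
        std::FILE *fp = std::fopen(path, "rb");
        if (!fp)
            return buf;                       /* empty buffer signals failure */
        std::fseek(fp, 0, SEEK_END);
        long size = std::ftell(fp);
        std::fseek(fp, 0, SEEK_SET);
        if (size > 0) {
            buf.resize(static_cast<std::size_t>(size));
            if (std::fread(&buf[0], 1, buf.size(), fp) != buf.size())
                buf.clear();
        }
        std::fclose(fp);
        return buf;
    }

    int main()
    {
        std::vector<unsigned char> image = loadFile("letter.tif");
        if (image.empty()) {
            std::fprintf(stderr, "could not read letter.tif\n");
            return 1;
        }

        /* With the OCR Shop XTR/API headers available, a vvxtrImage
         * describing this buffer would be filled in (its layout is defined
         * by the API headers, not reproduced here) and handed to the
         * engine, e.g.:
         *
         *     struct vvxtrImage img;
         *     // ... describe the image buffer in img ...
         *     engine.vvReadImageData(&img);   // engine: a vvEngAPI instance
         *
         * vvxtrSample2.cc contains the complete, working version of this step.
         */
        std::printf("read %lu bytes of image data into memory\n",
                    (unsigned long)image.size());
        return 0;
    }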


After you have compiled the sample code (see section Basics of Usage), here is how to run the sample program vvxtrSample.cc and what to expect:

  1. Start the OCR Shop XTR™/API daemon in one shell:

    /opt/Vividata/bin/ocrxtrdaemon

    The daemon should start and then sit idle; we recommend leaving it in the foreground for debugging so you can watch its log and error messages.

  2. Run the sample program in another shell with the command:

    ./vvxtrSample localhost

  3. The sample program will prompt you for a file name; enter:

    letter.tif

  4. It will then prompt you to enter the page number; letter.tif only has one page, so enter:

    1

  5. The sample program runs preprocessing and recognition on the input image letter.tif, then asks you if you want to output a document. To write an output document, enter:

    y

  6. Now choose a format; we suggest initially writing an ASCII output document, so enter:

    a

  7. Both the API and the sample program let you acquire recognition output from the engine in two ways: the engine can write the output to a file, or it can pass the output to the client application through memory. For this example, send the output to memory by entering:

    m

  8. The sample program writes the output text to the screen. You should see all of the text contained within the letter.tif image; you can open letter.tif in an image viewer if you would like to confirm the expected output.

  9. The sample program now asks if you would like to output an image. Enter:

    y

  10. Choose an output format. For this example, we suggest you choose JPEG by entering:

    j

  11. The sample program now lists all regions in the recognized image. The regions were created when the sample program called vvEngAPI::vvPreprocess, because the dm_pp_auto_segment value was turned on (see the outline after this list). Enter any region number listed to output that region as an image; for example, enter:

    3

  12. As with document output, subimage output can be acquired either by having the engine write the data directly to a file or by having it pass the image data back through memory. For this example, choose to have the output written directly to a file by entering:

    f

  13. You can write out other regions as images if you like, or to finish, enter:

    n

  14. The sample program should then end with the final message:

    ******* Ending engine instance.

  15. You may view the region that you wrote as an image file by opening the file "outImg_3" in an image viewer.
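
For orientation, the hedged outline below summarizes the engine interaction that vvxtrSample.cc performs during the steps above. Only the names vvEngAPI, vvEngAPI::vvReadImageData, vvEngAPI::vvPreprocess, and dm_pp_auto_segment are taken from this documentation; everything else is a sketch of the flow, kept in comments because the exact calls are those made in vvxtrSample.cc itself.

    /* Hedged outline only: the comments summarize the flow of vvxtrSample.cc;
     * consult the sample source for the actual API calls, including
     * connecting to ocrxtrdaemon and shutting the engine down. */
    int main()
    {
        /* 1. Start an engine instance and connect to the ocrxtrdaemon running
         *    on the host named on the command line (here, "localhost").      */
        /* 2. Load the requested page of letter.tif into the engine; when the
         *    image data is already in memory, vvEngAPI::vvReadImageData() is
         *    the entry point for this (see vvxtrSample2.cc).                  */
        /* 3. With the dm_pp_auto_segment value turned on, call
         *    vvEngAPI::vvPreprocess(); this is what creates the numbered
         *    regions listed in step 11.                                       */
        /* 4. Run recognition on the preprocessed page.                        */
        /* 5. Output the recognized text, either to a file or through memory
         *    (step 7), and optionally output individual regions as images,
         *    again to a file or through memory (step 12).                     */
        /* 6. End the engine instance ("******* Ending engine instance.").     */
        return 0;
    }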

