PDFlib TET v5.1 x86 & x64 (Text and Image Extraction Toolkit)
What is PDFlib TET?
PDFlib TET (Text and Image Extraction Toolkit) reliably extracts text, images and metadata from PDF documents. TET makes available the text contents of a PDF as Unicode strings, plus detailed color, glyph and font information as well as the position on the page. Raster images are extracted in common image formats. TET optionally converts PDF documents to an XML-based format called TETML which contains text and metadata as well as resource information. TET contains advanced content analysis algorithms for determining word boundaries, grouping text into columns and removing redundant text.
With PDFlib TET you can:
- Implement the PDF indexer for a search engine
- Repurpose text and images in PDFs
- Convert the contents of PDFs to other formats
- Process PDFs based on their contents, e.g. splitting based on headings (requires PDFlib+PDI in addition to TET)
- Check whether an area on the page is empty or contains any text, images, or vector graphics
- Query details about a PDF document with the included pCOS interface, such as document information fields and XMP metadata, font lists, page size, and more (see the pCOS product description and pCOS Cookbook)
PDFlib TET 5 - Features:
The PDFlib Text and Image Extraction Toolkit (TET) is targeted at extracting text and images from PDF documents, but can also be used to retrieve other information from PDF.
PDFlib TET has been designed for stand-alone use and does not require any third-party software. It is robust and suitable for multi-threaded server use.
PDFlib TET provides the following powerful features for text and image extraction.
Accepted PDF Input
TET supports all relevant flavors of PDF input:
- All PDF versions up to Acrobat DC, including ISO 32000-1 and -2
- Protected PDFs which do not require a password for opening the document
- Damaged PDF documents will be repaired
All Writing Systems of the World
TET processes PDF documents in all writing systems of the world and implements special processing required for some scripts:
- Latin, Greek and Cyrillic scripts including dehyphenation
- Arabic and Hebrew including logical reordering of right-to-left and bidirectional text; normalization of Arabic presentation forms
- Simplified and Traditional Chinese, Japanese, and Korean regardless of encoding; horizontal and vertical text
- Indian scripts (without glyph reordering)
- All other languages and scripts supported in Unicode
Since text in PDF is usually not encoded in Unicode, PDFlib TET normalizes the text in a PDF document to Unicode:
- TET converts all text contents to Unicode, regardless of the encoding method used in the PDF document.
- Ligatures and other multi-character glyphs are decomposed into a sequence of the corresponding Unicode characters.
- Glyphs without appropriate Unicode mappings are identified as such, and are mapped to a configurable replacement character in order to avoid misinterpretation.
- TET implements various workarounds for problems with specific document creation packages, such as InDesign and TeX documents or PDFs generated on mainframe systems.
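The ligature decomposition described above corresponds to Unicode compatibility normalization. A minimal sketch with Python's standard unicodedata module (this illustrates the concept, not the TET API):

```python
import unicodedata

# The single ligature glyph "fi" (U+FB01) decomposes into the character
# sequence "f" + "i" under compatibility normalization (NFKC).
decomposed = unicodedata.normalize("NFKC", "\ufb01")
print(decomposed)  # fi

# A glyph without a Unicode mapping is reported as such; U+FFFD
# (REPLACEMENT CHARACTER) is the conventional replacement character.
replacement = "\ufffd"
```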
Content Analysis and Word Detection
TET includes patented content analysis algorithms:
- Determine word boundaries which are required to retrieve proper words
- Recombine the parts of hyphenated words (dehyphenation)
- Remove duplicate instances of text, e.g. shadow and artificially bolded text
- Recombine paragraphs in reading order
- Correctly order text which is scattered over the page
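To make the dehyphenation step concrete, here is a deliberately simplified Python sketch that rejoins words split across line breaks. TET's patented algorithms are far more sophisticated (for example, they must distinguish soft hyphens from genuine hyphenated compounds); this only illustrates the basic idea:

```python
def dehyphenate(lines):
    """Rejoin words split across line breaks by a trailing hyphen.

    Simplified illustration only: it treats every trailing hyphen as a
    soft hyphen, which real dehyphenation must not do.
    """
    out = []
    for line in lines:
        if out and out[-1].endswith("-"):
            # Drop the hyphen and merge with the following fragment.
            out[-1] = out[-1][:-1] + line
        else:
            out.append(line)
    return out

print(dehyphenate(["extrac-", "tion", "toolkit"]))  # ['extraction', 'toolkit']
```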
Page Layout, Table and List Detection
The page content is analyzed to determine text columns. Tables are detected, including cells which span multiple columns. This improves the ordering of the extracted text. Table rows and the contents of each table cell can be identified. Bulleted and numbered lists are identified.
TET provides precise metrics for the text, such as the position on the page, glyph widths, and text direction. Specific areas on the page can be excluded or included in the text extraction, e.g. to ignore headers and footers or margins.
TET analyzes color information in the PDF page description and returns precise color information for each glyph. This can be used, for example, to identify headings or other highlighted text.
Images on PDF pages can be extracted as TIFF, JPEG, JPEG 2000 or JBIG2 files. Precise geometric information (position, size, and angles) is reported for each image. Fragmented images are combined into larger images to facilitate repurposing. Since no downsampling or color conversion occurs, images are extracted at the highest possible fidelity.
The TET library includes the pCOS interface for querying details about a PDF document, such as document info and XMP metadata, font lists, page size, and many more.
Configuration Options for Problematic PDF Documents
TET contains special handling and workarounds for various kinds of PDF where the text cannot be extracted correctly with other products. In addition, it includes various configuration features to improve processing of problem documents:
- Unicode mapping can be customized via user-supplied tables for mapping character codes or glyph names to Unicode.
- PDFlib FontReporter is an auxiliary tool for analyzing fonts, encodings, and glyphs in PDF. It works as a plugin for Adobe Acrobat. This plugin is freely available for OS X/macOS and Windows.
- Embedded fonts are analyzed to find additional hints for Unicode mapping. External font files or system fonts are used to improve text extraction results if a font is not embedded.
TET supports various Unicode postprocessing steps which can be used to improve the extracted text:
- Foldings preserve, remove or replace characters, e.g. remove punctuation or characters from irrelevant scripts.
- Decompositions replace a character with an equivalent sequence of one or more other characters, e.g. replace narrow, wide or vertical Japanese characters or Latin superscript variants with their respective standard counterparts.
- Text can be converted to all four Unicode normalization forms, e.g. emit NFC form to meet the requirements for Web text or a database.
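The normalization forms mentioned above are standard Unicode operations; a short sketch with Python's unicodedata module shows the difference between canonical (NFC/NFD) and compatibility (NFKC/NFKD) forms (again illustrating the concept, not the TET option syntax):

```python
import unicodedata

# NFC and NFD are canonically equivalent forms of the same text:
nfd = unicodedata.normalize("NFD", "\u00e9")   # "e" + COMBINING ACUTE ACCENT
nfc = unicodedata.normalize("NFC", nfd)        # single precomposed code point
print(len(nfd), len(nfc))  # 2 1

# The compatibility forms additionally fold width and presentation
# variants, e.g. fullwidth Latin letters as used in CJK text:
folded = unicodedata.normalize("NFKC", "\uff30\uff24\uff26")  # fullwidth "PDF"
print(folded)  # PDF
```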
PDF documents may contain text in places other than the page contents. While most applications deal with the page contents only, in many situations other document domains are relevant as well. TET extracts text from all of the following document domains:
- page contents
- predefined and custom document info entries
- XMP metadata on document and image level
- file attachments and PDF portfolios can be processed recursively
- form fields
- comments (annotations)
- general PDF properties can be queried, such as page count, conformance to standards like PDF/A or PDF/X, etc.
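For the property queries in the last item, pCOS uses a path syntax. A few illustrative paths are shown below; treat these as examples of the general idea, not an exhaustive or authoritative reference (exact availability depends on the document, and the pCOS Path Reference is the definitive source):

```
length:pages        number of pages in the document
/Info/Title         the "Title" document info entry
pages[0]/width      width of the first page in points
pdfa                PDF/A conformance identifier, if the document claims any
```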
TET supports XMP metadata in several ways:
- Using the integrated pCOS interface, XMP metadata for the document, individual pages, images, or other parts of the document can be extracted programmatically.
- TETML output contains XMP document and image metadata if present in the PDF.
- Images extracted in the TIFF or JPEG formats contain image metadata if present in the PDF.
TETML represents PDF Contents as XML
TET optionally represents the PDF contents in an XML flavor called TETML. It contains a variety of PDF information in a form which can easily be processed with common XML tools. TETML contains the actual text plus optionally font and position information, resource details (fonts, images, colorspaces), and metadata.
TETML is governed by a corresponding XML schema to make sure that TET always creates consistent and reliable XML output. TETML can be processed with XSLT stylesheets, e.g. to apply certain filters or to convert TETML to other formats. Sample XSLT stylesheets for processing TETML are included in the TET distribution.
The following fragment shows TETML output with glyph details:
<Box llx="111.48" lly="636.33" urx="161.14" ury="654.33">
<Glyph font="F1" size="18" x="111.48" y="636.33" width="9.65">P</Glyph>
<Glyph font="F1" size="18" x="121.12" y="636.33" width="11.88">D</Glyph>
<Glyph font="F1" size="18" x="133.00" y="636.33" width="8.33">F</Glyph>
<Glyph font="F1" size="18" x="141.33" y="636.33" width="4.88">l</Glyph>
<Glyph font="F1" size="18" x="146.21" y="636.33" width="4.88">i</Glyph>
<Glyph font="F1" size="18" x="151.08" y="636.33" width="10.06">b</Glyph>
</Box>
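A TETML fragment like the one above can be processed with any standard XML tool. The following Python sketch reassembles the word from the Glyph elements (the closing </Box> tag is included here for well-formedness; note that complete TETML documents declare an XML namespace, which must be taken into account when addressing elements):

```python
import xml.etree.ElementTree as ET

tetml = """<Box llx="111.48" lly="636.33" urx="161.14" ury="654.33">
<Glyph font="F1" size="18" x="111.48" y="636.33" width="9.65">P</Glyph>
<Glyph font="F1" size="18" x="121.12" y="636.33" width="11.88">D</Glyph>
<Glyph font="F1" size="18" x="133.00" y="636.33" width="8.33">F</Glyph>
<Glyph font="F1" size="18" x="141.33" y="636.33" width="4.88">l</Glyph>
<Glyph font="F1" size="18" x="146.21" y="636.33" width="4.88">i</Glyph>
<Glyph font="F1" size="18" x="151.08" y="636.33" width="10.06">b</Glyph>
</Box>"""

box = ET.fromstring(tetml)
# Concatenate the text content of every Glyph element in document order.
word = "".join(g.text for g in box.iter("Glyph"))
print(word)  # PDFlib
```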
TET connectors provide the necessary glue code to interface TET with other software. The following TET connectors make PDF text extraction functionality available for various software environments:
- TET connector for the Lucene Search Engine
- TET connector for the Solr Search Server
- TET connector for the TIKA toolkit
- TET connector for Oracle Text
- TET connector for MediaWiki
- TET PDF IFilter for Microsoft products is available as a separate product. It extracts text and metadata from PDF documents and makes it available to search and retrieval software on Windows.
The TET Cookbook is a collection of programming examples which demonstrate the use of TET for various text and image extraction tasks. Several Cookbook samples show how to combine the TET and PDFlib+PDI products in order to enhance PDF documents, e.g. add bookmarks or links based on the text on the page.