Massive amounts of unstructured data are held in the form of PDF documents, but extracting key figures and words out of PDFs in a programmatic manner can be difficult and costly. This poses a challenge to public-interest groups, journalists and others who are interested in running large-scale analyses on PDF documents in order to uncover valuable insights.
In a hackathon set for this week, participants will work on ways to improve the open-source software tools available for PDF data extraction.
"Say, for example, you want to model student loan securitizations," wrote Marc Joffe, principal consultant at Public Sector Credit Solutions and an organizer along with the Sunlight Foundation and others of the PDF Liberation Hackathon, in a guest post on the Mathbabe blog. "A corporation or well funded research institution can purchase an expensive, enterprise-level ETL (Extract-Transform-Load) tool to migrate data from the PDFs into a database. But this is not much help to insurgent modelers who want to produce open source work."
"Data journalists face a similar challenge," he added. "They often need to extract bulk data from PDFs to support their reporting. Examples include IRS Form 990s filed by non-profits and budgets issued by governments at all levels."
Data journalists have developed open-source PDF harvesting tools such as Tabula, Joffe added.
"Unfortunately, the free and low cost tools available to modelers, data journalists and transparency advocates have limitations that hinder their ability to handle large scale tasks," he wrote. "If, like me, you want to submit hundreds of PDFs to a software tool, press 'Go' and see large volumes of cleanly formatted data, you are out of luck."
The hackathon runs from Friday through Sunday and will be held at six sites, including the Sunlight Foundation's headquarters in Washington, D.C., according to the event's website. Remote participation is also possible.
Contestants will be able to work on "a PDF extraction challenge provided by one of our sponsoring organizations," work on their own challenges, or "develop enhancements to an open source PDF extraction tool," according to the site.
While the use of open-source tools is encouraged, commercial tools are allowed as long as licensing costs less than US$1,000 and an unlimited trial is available.
It's true that some of the best tools for PDF extraction are proprietary and expensive, said analyst Curt Monash of Monash Research, who closely tracks the database and data-analysis market as well as public policy on technology.
"One of the leading filter/extraction libraries was bought by Verity, which was bought by Autonomy, which was bought by HP," he said via email on Thursday. "Another one, with a somewhat different orientation, was developed by Xerox, which spun it out as Inxight, which was bought by Business Objects, which was bought by SAP."
"It's worth remembering that there's a multi-stage process here," Monash added. "For example, a PDF can be converted to text (and image) data. (Name, value) pairs can be extracted. Those can have their spelling corrected. Then the company names can be regularized. In real life, there can be tens of steps."
As for the hackathon's potential value, "a large fraction of the world's interesting information is on paper, or in paper-like formats such as PDF," he added. "Of course it's worthwhile to make all that more accessible."
Chris Kanaracus covers enterprise software and general technology breaking news for The IDG News Service. Chris' email address is Chris_Kanaracus@idg.com
This story, "Hackathon Geared Toward the 'liberation' of Data From Public PDF Documents" was originally published by IDG News Service Boston Bureau.