Print Share helps readers find digital versions of print articles they’ve read, simply by taking a picture of the story.
What it does
Over the past few years news consumers have become accustomed to sharing articles they’ve read online — sometimes by copying and pasting the URL and sometimes via a dedicated “Share” button.
Print Share makes sharing articles readers find in print nearly as easy. With Print Share, a reader can take out her phone, open the app, snap a photo of the story she wants to share, and upload the photo.
Print Share automatically returns a link to the corresponding web article. If it fails to find the right link on the first try, the user can crop her picture so that only the headline or a single block of text is selected, which helps Print Share return accurate results.
If further refinement is needed, the user can enter additional details such as the publication, article author, or date. Once she has the right link, she can copy and paste it or send it directly to Twitter.
How it works
Print Share uses Google’s Tesseract optical character recognition (OCR) engine. When a user takes or uploads a picture, a jQuery plugin called Jcrop lets her select a portion of the image, which is sent to the Print Share server and passed to Tesseract. Tesseract converts the picture into text, which is combined with any search refinements the user specified and sent to the Google Custom Search API. The first five search results are presented to the user, who can then either visit the links or send them to Twitter.
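The pipeline above can be sketched in a few lines of Python. This is an illustrative outline, not the actual Print Share code: it assumes the `pytesseract` bindings for Tesseract and the `requests` library, and the API key and search-engine ID are placeholders the caller would supply.

```python
"""Illustrative sketch of the Print Share lookup flow: OCR a cropped
photo, build a query with optional user refinements, and fetch the top
five links from the Google Custom Search JSON API."""

CSE_ENDPOINT = "https://www.googleapis.com/customsearch/v1"


def build_query(ocr_text, publication=None, author=None, date=None):
    """Collapse noisy OCR whitespace and append user-supplied refinements."""
    terms = " ".join(ocr_text.split())
    for refinement in (publication, author, date):
        if refinement:
            terms += f' "{refinement}"'
    return terms


def find_article_links(image_path, api_key, engine_id, **refinements):
    """OCR the cropped photo and return the first five matching links."""
    # Imported here so the sketch runs even where OCR deps are absent.
    import requests
    import pytesseract
    from PIL import Image

    text = pytesseract.image_to_string(Image.open(image_path))
    params = {
        "key": api_key,        # placeholder Google API key
        "cx": engine_id,       # placeholder Custom Search engine ID
        "q": build_query(text, **refinements),
        "num": 5,              # Print Share shows the first five results
    }
    response = requests.get(CSE_ENDPOINT, params=params, timeout=10)
    response.raise_for_status()
    return [item["link"] for item in response.json().get("items", [])]
```

Keeping the refinements as quoted phrases appended to the OCR text mirrors how a user narrows results by publication, author, or date after a failed first attempt.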
Possible next steps

- OCR improvement. More intelligent image processing could break users’ photos into sections containing headlines, authors, dates, and body text.
- Introduction of reverse image searching could allow users to find articles by taking photographs of the actual pictures in the articles themselves.
- Development of a true mobile app.
- Integration of more social media and sharing options.
- A more refined Google search function focused specifically on the most commonly used sites.
Student team: Yuxi He, Richard Herndon, and David Ryan
Faculty guidance: Larry Birnbaum, Rich Gordon