{"id":2262,"date":"2010-10-30T21:08:52","date_gmt":"2010-10-30T21:08:52","guid":{"rendered":"http:\/\/conference.journalists.org\/2010conference\/?p=2262"},"modified":"2017-01-10T16:33:08","modified_gmt":"2017-01-10T16:33:08","slug":"news-app-developers-share-their-tips-and-tricks","status":"publish","type":"post","link":"https:\/\/ona10.journalists.org\/2010\/10\/30\/news-app-developers-share-their-tips-and-tricks\/","title":{"rendered":"News app developers share tips and tricks"},"content":{"rendered":"
Three news app developers spoke at ONA10 Saturday afternoon, each showcasing a unique approach to displaying news in nontraditional ways.<\/p>\n
USA Today\u2019s Jarmul: Teamwork leads to ‘really cool things’<\/strong><\/p>\n For Katharine Jarmul, the sky is the limit when she works on a new multimedia story.<\/p>\n Jarmul, a designer and developer for USA Today<\/a>, showcased an elaborate package<\/a> she recently worked on to commemorate the five-year anniversary of Hurricane Katrina.<\/p>\n USA Today\u2019s Five Years Later: Hurricane Katrina employs Django and HTML5 and is optimized for mobile devices.<\/p><\/div>\n Taking an unconventional approach is essential in online journalism, she said.<\/p>\n \u201cThe first thing we need to do as journalists is we have to start thinking outside of the box,\u201d she said. \u201cAnd this means \u2026 the box of what is a story. This also means the box of what technologies do we use.\u201d<\/p>\n The project was built in Django<\/a> and HTML5<\/a>, among other technologies, and took roughly two months to complete, she said. It\u2019s optimized for mobile devices, including the iPad, and incorporates numerous video and interactive map elements.<\/p>\n Jarmul also said developers should not be intimidated by the size or scope of a project.<\/p>\n \u201cDon\u2019t be afraid to take on a large project, and figure it out as you go,\u201d she said. \u201cI think we need to challenge ourselves to kind of figure it out as we go along.\u201d<\/p>\n USA Today, Jarmul said, emphasizes teamwork. She echoed the words of Juan Thomassie, a senior designer at the paper, who spoke at a Saturday morning session on data visualization.<\/p>\n \u201cAs long as you have the developer and the designer working together in unison, then you can do really cool things,\u201d she said in a post-session interview.<\/p>\n PolitiFact\u2019s Waite: It\u2019s all in the reporting<\/strong><\/p>\n PolitiFact<\/a>, the Pulitzer Prize-winning<\/a> political fact-checking website of the St. 
Petersburg Times<\/a>, is nothing without solid reporting, said Matt Waite, the site\u2019s developer.<\/p>\n The simplicity of PolitiFact\u2019s website belies the strong verification of its claims.<\/p><\/div>\n \u201cIf you\u2019re going to put the full faith and reputation of the St. Petersburg Times up for question by calling a liar a liar, you better have the goods,\u201d he said. \u201cYou better be right.\u201d<\/p>\n PolitiFact is accountability journalism, which is hardly a new form, Waite said. What sets the site apart is the new format in which his team presents it.<\/p>\n \u201cWe approached what we were doing not as stories \u2026\u201d he said. \u201cWe approached it as a structured data problem of taking a type of story and extracting the structure out of it and rebuilding a content app out of that structure.\u201d<\/p>\n The content app succeeds, Waite said, because of its ability to present the public with the source materials that PolitiFact\u2019s editors use to rate a politician\u2019s claims. That is achieved by heavy use of linking and by using services such as DocumentCloud<\/a> to serve primary source documents.<\/p>\n Waite said more journalists and news app developers should be concerned with structured data problems rather than content problems.<\/p>\n \u201cWe as an industry are not doing nearly enough with this structured data approach, of viewing what we do as a structured data problem more than a content problem,\u201d he said.<\/p>\n ScraperWiki\u2019s quest to liberate the Web\u2019s data<\/strong><\/p>\n The London Metropolitan Police Service<\/a> provides the public with a high level of transparency: Each unit posts on its website what it\u2019s focusing on at the moment.<\/p>\n But there\u2019s a catch. All that information is embedded in HTML, and to get a snapshot of what each unit is doing at once, one would have to go to 620 websites and find the data.<\/p>\n Enter ScraperWiki<\/a>. 
The entirely Web-based tool allows developers to automate the scraping process, said Richard Pope, one of the tool\u2019s developers.<\/p>\n Not only does ScraperWiki scrape data that\u2019s embedded in HTML, but it also stores and visualizes the data. The tool is available in three popular programming languages: PHP<\/a>, Python<\/a> and Ruby<\/a>.<\/p>\n In true wiki fashion, completed scrapers are posted on its website, allowing developers to repurpose and improve them.<\/p>\n ScraperWiki is to data what Wikipedia<\/a> is to knowledge, Pope said.<\/p>\n","protected":false},"excerpt":{"rendered":"<\/a>