As announced by US Federal CTO Aneesh Chopra during the Gov 2.0 Summit, the US government is about to release an Open Government Directive, further to President Obama's executive memo dated January 21st. According to a recent NextGov article, "the directive will lay out a structured schedule for the release of data in a machine-readable format and institute reporting requirements for agencies to describe how they will involve the public in open government initiatives."
While the directive is expected to provide agencies with some much-awaited guidance on open data, a number of important questions still need to be answered.
The first two, raised by another NextGov article, concern what agencies are supposed to do about data stored in PDF format, which is not machine readable, and how far back in time they should go in providing open data (i.e., what about historical archives?).
Here are some additional questions:
- Will agencies be required to make all open (and machine-readable) data available through data.gov, or will they be able to choose alternative or complementary avenues?
- Who is responsible for the quality, accuracy and timeliness of data on websites – such as data.gov – that store copies rather than references to data?
- Is government responsible for monitoring whether open data are being used properly (see previous post)? If so, does that responsibility lie with data.gov or with each agency?
- Is government responsible for monitoring whether and where open data are being copied, and how those copies are being used?
Answers to questions like these may already be available, or may be addressed in some other document. But they are needed if the vision of realizing value from open government data is to scale up from a proof of concept to a basis for government evolution and transformation.