Open Government – An Application of Collective Intelligence?

This is one of those posts where I have very little to add. I think it is a brilliant idea and would like to see how it develops. From the Open Government site:

Some questions to consider in formulating ideas include:

  • How might the operations of government be made more transparent and accountable?
  • How might federal advisory committees, rulemaking or electronic rulemaking be better used to drive greater expertise into decisionmaking?
  • What alternative models exist to improve the quality of decisionmaking and increase opportunities for citizen participation?
  • What strategies might be employed to adopt greater use of Web 2.0 in agencies?
  • What policy impediments to innovation in government currently exist?
  • What is the best way to change the culture of government to embrace collaboration?
  • What changes in training or hiring of personnel would enhance innovation?
  • What performance measures are necessary to determine the effectiveness of open government policies?

A few comments:

1. Many open questions

2. A mention of Open Linked Data in addition to Web 2.0 would have been better

3. Combined with other interesting initiatives like Apps for America, this is one of the cool ways of harvesting the collective intelligence of the people.

LinkLog: InfoStreams and Embarrassingly Parallel Data Analysis Tasks

I have been interested (but have not really done anything useful yet) in large-scale data analysis. Here are some personal interests:

  1. Analyze the InfoStreams I track from Twitter, blogs and our own customized feeds on programming, multi-core and semantic web topics
  2. Explore Open Linked Data, visualization, connections and analysis
  3. Apply machine intelligence to understand raw data and notifications of change, and to track the velocity of change

This leads to dabbling in the semantic encoding of data (RDF/OWL), visualization techniques (Processing), data analysis (the R language) and large-scale streaming data (MapReduce, Hadoop).

So when I stumbled across Ben Lorica’s Big Data: SSD’s, R and Linked Data Streams, I could not resist reading it. A few comments and some links below:

This is how I landed on this strangely named platform called Pig, a sub-project of Apache Hadoop. From the wiki:

Pig is a platform for analyzing large data sets that consists of a high-level language for expressing data analysis programs, coupled with infrastructure for evaluating these programs. The salient property of Pig programs is that their structure is amenable to substantial parallelization, which in turn enables them to handle very large data sets.

At the present time, Pig’s infrastructure layer consists of a compiler that produces sequences of Map-Reduce programs, for which large-scale parallel implementations already exist (e.g., the Hadoop subproject). Pig’s language layer currently consists of a textual language called Pig Latin, which has the following key properties:

  • Ease of programming. It is trivial to achieve parallel execution of simple, “embarrassingly parallel” data analysis tasks. Complex tasks comprised of multiple interrelated data transformations are explicitly encoded as data flow sequences, making them easy to write, understand, and maintain.
  • Optimization opportunities. The way in which tasks are encoded permits the system to optimize their execution automatically, allowing the user to focus on semantics rather than efficiency.
  • Extensibility. Users can create their own functions to do special-purpose processing.
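To get a feel for what those data flow sequences look like, here is a minimal word-count style sketch in Pig Latin. The input file infostream.txt and the output path wordcounts are made up for illustration; the point is just how each step is a named transformation that Pig can compile into Map-Reduce jobs:

    -- Load raw text; 'infostream.txt' is a hypothetical file with one entry per line
    lines  = LOAD 'infostream.txt' AS (line:chararray);

    -- Split each line into words and flatten the bag so each word is its own record
    words  = FOREACH lines GENERATE FLATTEN(TOKENIZE(line)) AS word;

    -- Group identical words and count each group
    groupd = GROUP words BY word;
    counts = FOREACH groupd GENERATE group AS word, COUNT(words) AS total;

    -- Sort by frequency and write the result out
    sorted = ORDER counts BY total DESC;
    STORE sorted INTO 'wordcounts';

In local mode (pig -x local) this runs on a single machine; the same script, unchanged, compiles into Map-Reduce jobs on a Hadoop cluster, which is exactly the “substantial parallelization” the wiki is talking about.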

I hope to give it a spin and see whether I can manage a drink from my InfoStreams firehose.