Online Voting for Public Policy vs. Consensus Building: Is That a Choice?


Although I haven’t seen much discussion of this so far, the emergence of interactive tools for online citizen engagement poses interesting questions for the future of public policy consensus building. It’s still early days for the underlying web technology, and experiments so far have been spotty. Despite the high orbit of ideas about revolutionizing democracy, the Obama Administration’s Open Government Initiative, for example, which sought ideas about transparency and collaboration, has come under criticism for its choice of online tools, inadequate management of content, and mixed results.

If, for example, you had the near-impossible patience to read through the hundreds of pages of raw data from the first phase of that process, you might eventually find the relevant, substantive ideas buried among thousands of demands for immediate action on this or that favorite complaint, as well as the now infamous extremist and crackpot rants.

Fortunately, the process worked well in pulling out those substantive ideas for further development in the next two phases. White House staff are now evaluating those methods while additional federal initiatives proceed for health care, homeland security, defense and other issues.

The immediate focus of these efforts is soliciting ideas from the public without any attempt to build consensus or initiate dialogue among groups or individuals of differing values. All the software systems used for these projects employ various methods for rating the ideas and proposals. Often touted by their developers as the key for letting the best ideas “rise to the top,” rating and voting systems have nothing to do with careful evaluation of competing proposals, let alone dialogue or deliberation.

It’s hard to imagine that this approach, while able to identify winners for further consideration, could possibly substitute for intensive face-to-face negotiation on high stakes issues. Given the current state of technology (remembering that the entire history of a widely accessible web platform covers only 15 years), that seems to be quite obvious.

But I’d like to take a closer look at these voting systems, crude as they are today. I think of most voting as the polar opposite of consensus building. Online or off, it sets up a competition to see which proposals come out on top, and opposing interests do not communicate at all. Systems of this sort invite gaming, as separate constituencies pump more and more votes into favored ideas. During the Open Government Initiative, for example, every group concerned about online democracy alerted its members to this opportunity and urged them to register at the site and support specific proposals by commenting and voting. Organized groups of every political persuasion likely did the same thing.

The use of a simple up-or-down vote left no room for subtle distinctions. Each interest group had to wind up with enough votes for its favorite ideas to get to the next stage of the process. It’s essentially a numbers game. What could be worse from the point of view of a collaborative leader or practitioner who is trying to build agreement around policies that meet the needs of all affected interests?

But let’s turn this view around and look at online voting from the perspective of a public agency decision-maker. The process used by the California city of Santa Cruz is an interesting example. As explained in an earlier post, the city was faced with its worst budget crisis in memory and needed to make major changes. The Mayor and City Manager wanted to develop policies on the sensitive issues of how to spend and raise money that would be broadly supported by the residents. The typical public meetings tended to be dominated by the loudest voices, and city staff felt that approach would not give the results they needed.

So they turned to the idea of an online process for generating public proposals, but a much simpler one than that used by the Open Government Initiative. The public had the opportunity to offer proposals in a process of one phase instead of three and did so through a single user interface rather than three completely different ones. Each participant received 10 votes to distribute as desired, instead of a thumbs up or down indicator, and a high minimum threshold of votes was set for a proposal to merit possible consideration.
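The mechanics of that cumulative-voting scheme are simple enough to sketch. The following Python fragment is only an illustration of the general idea described above, not the actual software Santa Cruz used; the proposal names, the 25-vote threshold, and the ballot-rejection rule are my own assumptions.

```python
# Hypothetical sketch of a Santa Cruz-style cumulative vote tally.
# Each participant distributes up to 10 votes across proposals, and only
# proposals meeting a minimum vote threshold advance for consideration.
from collections import Counter

VOTES_PER_PARTICIPANT = 10   # assumed per-participant vote budget
MIN_THRESHOLD = 25           # assumed minimum votes to merit consideration


def tally(ballots, votes_per_participant=VOTES_PER_PARTICIPANT,
          threshold=MIN_THRESHOLD):
    """Sum cumulative votes per proposal, discarding over-budget ballots."""
    totals = Counter()
    for ballot in ballots:               # ballot: {proposal_id: votes}
        if sum(ballot.values()) > votes_per_participant:
            continue                     # reject ballots exceeding the budget
        totals.update(ballot)
    # Only proposals at or above the threshold advance for consideration
    return {p: v for p, v in totals.items() if v >= threshold}


ballots = [
    {"fix-roads": 6, "library-hours": 4},
    {"fix-roads": 10},
    {"library-hours": 8, "park-fees": 2},
    {"fix-roads": 9, "park-fees": 1},
]
print(tally(ballots, threshold=20))  # → {'fix-roads': 25}
```

The design contrast with an up-or-down vote is visible in the ballot structure: a participant can split support across several proposals or concentrate it on one, which expresses intensity of preference rather than a bare yes/no.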

Materials about the budget were available on the site, and updates were regularly posted. The interface used a single series of tabs for viewing the proposals and indicating which had been accepted for implementation and which completed. (One of the interesting features of the software is its scalability for everything from a single organization to a small city to a national effort.)

What I find interesting about this process – apart from the fact that it provided a better user experience than the federal experiment did – is that the crisis facing the city was one that might also have been approached by using collaborative, face-to-face methods.

The city could have called on a consultant to develop a series of community-wide assemblies to develop a shared vision and from that derive action steps and priorities. Or they could have formed a consensus building group of stakeholders representing the city’s varied constituencies to negotiate policies. They could also have used a budget priority-setting process that focused on participation by agency staff and elected leaders while seeking public input as well.

They didn’t take that route, but from the viewpoint of city decision-makers, did they achieve a similar result? True, there was no deliberation among the citizens, no exchange of views, no negotiation to come to agreement. What they needed, however, was a set of proposals backed by an indication of broad support from the public, proposals the city could immediately use as a basis for developing solutions to the crisis. And they needed to identify those proposals in a short period of time so that the citizens of Santa Cruz could be assured that the budget crisis was being addressed in a timely manner.

Clearly, there is little basis for comparison between the two approaches. On the one hand, you have a live process that generates group cohesion and enthusiasm as well as specific proposals for action. If successful, that produces results that are far less likely to be challenged and therefore can be adopted and implemented on a tight schedule. On the other, you have online participation relying on contributions from individuals – though many of these are doubtless contributing proposals developed by organized interest groups.

I doubt that public decision-makers would be overly concerned about how they got the result they were looking for. They’d look at cost and the end product. If the policies and action steps they wind up with pass the political test of public scrutiny, elected leadership has what it needs.

I’m not really expecting the demand for collaborative policy processes to disappear in favor of online activities. It’s time, though, for practitioners and leaders in this field to look closely at how they can make use of these tools as a regular part of their services. It will be much easier to work with them rather than against them, especially since the technologies will become much more sophisticated and adaptable to consensus building than they may now appear.
