When the discussion on Professor Jay Rosen's PressThink post "Grokking Woodward" bogged down yesterday into an epistemological duel over whether the "Bush Bubble" meme was based on credible facts or leftwing media bias, it gave me a chance to trot out one of my favorite questions:
Point-by-point, context-free deconstruction of credibility doesn't add anything to civic discourse. It just bogs it down. So this is where we wind up: Two sides, each speaking a different language, based on completely independent ideas of what's going on, shouting at each other ... while a few people in the middle haggle over arcane points about credibility, all of which require so much research to confirm independently that normal people just give up and go watch "Survivor."
We need to create some kind of new information tool that helps us manage these situations, so that basic facts can be established and stipulated. If we don't trust the government and we don't trust the media and we don't trust each other, how can we get anywhere? We know how to build websites and blogs and news wires ... but how do (we) build trust in the 21st century?
A few months ago I changed the slogan at my media blog to "Fight the FUD (Fear, Uncertainty and Doubt)", because I was appalled by the FUD campaigns I was witnessing on topics like global climate change and terrorism. In media, FUD is the ultimate fallback of the status quo -- not because it advances a position, but because it hinders the ability of dissenters to present a cohesive challenge to those in power. You can't win a media war by using FUD, but its deployment can prevent your opponent from defeating you. FUD is the weapon of last resort for anyone who benefits from stalemate.
Consequently, the culture war in America is today's equivalent of the Western Front, circa 1916-17: Two opposing armies hunkered down in great bloody trenches, neither able to maneuver or advance. Hundreds of thousands of young men were mowed down for meaningless gains because of two easy-to-spot flaws that the leaders on the ground simply could not see: 1. All the industrial-age technologies of 1914-1917 favored the defender; and 2. Generals on both sides refused to abandon romantic doctrines from the previous century.
Two things ended World War I: The entry of the United States on the side of the Allies (the most profound mistake in American history, IMHO -- but that's neither here nor there) and the invention of the tank. The first shifted the balance of power by infusing new energy into the equation, and the second broke the stalemate by restoring maneuver to the battlefield.
Which brings us to the present, and three questions:
- Which group will enter the fight and influence the outcome?
- What technological tools will break the FUD stalemate?
- Can we hope for a better outcome than what we saw in 1919?
Maybe the first question ought to be, who (or what) is in conflict? Because if the issue is just a clash between the Democratic and Republican parties, then who really cares? Political parties bicker. We're used to that, and we tend to shift our allegiances as new generations come into their majority. If this is just a partisan food fight, then why bother to think too deeply about it?
I propose that the conflict is far broader and much more misleading, that the subject isn't traditional politics at all, and that the media isn't just a weapon in the fight -- it's the battleground.
So let me answer the last question first: Can we hope for a better outcome than what we saw in 1919? And I say no -- unless we understand what we're fighting. If the culture war is really a battle between Americans for control over each others' lives, then there can be no victory. On the other hand, if we see the culture war as a struggle between entrenched and emerging economic interests, then a meaningful and positive outcome is possible.
Americans have competing interests, but we also have cooperative interests -- and I propose that the culture war is a trumped-up conflict intended to keep us from recognizing them. The enemy, therefore, is not the soldier in the opposing trench, but those who manipulate his or her passions.
So, which group can enter this fray and shift the balance of power? I say it's us: bloggers, networkers, programmers, code monkeys, grassroots journalists, activists who are more devoted to their communities than to political parties. If we attempt to resolve the conflict by backing one side against the other, then the war continues and any victory will be hollow. It's up to us to recognize that the enemy isn't liberalism or conservatism -- two great western traditions that have long balanced each other -- but those who manipulate information against the cause of reason.
We can shift the conversation in our culture by focusing our considerable talents on the thing that literally defines us as a species: We can build better tools.
In the 20th century, mass media defined and informed us. In the 21st century, the emergence of networked media -- basically, blogs like this one and blog-writing readers like you -- upsets that apple cart. We are at the beginning of the Web 2.0 revolution.
So what might a Web 2.0-style solution to the problem demonstrated on PressThink look like? Some possible models:
- The board-monitored non-profit: Our modern civic life has been shaped by public-interest foundation grants for so long that an entire sub-economy has grown up in their shadow. Consequently, this is the standard response from the grant-based sector: If you want to build a non-partisan knowledge base that everyone in the culture can use and trust, then you start by getting various foundations to pitch in, ask a bunch of respectable people from "both sides" to serve on the board of directors, and then create some sort of mechanism for reviewing factual claims and presenting the results online. This is what you'd do in the 20th century -- and it won't work. In the end, such an approach uses a 21st century medium to provide a 20th century solution -- a non-transparent, mediated, appeal-to-authority mish-mash.
- The Super-Wiki: Rather than trust in any outside authority, you tackle the problem the Jimmy Wales/Wikipedia way -- trust that an empowered network of users and contributors will work out the differences if given the right collaborative tools and the proper organizational culture. You'd collect factual claims in a wiki format and -- by establishing the proper transparent rules at the beginning -- trust that the outcome would provide a useful resource for everyone. In a sense, one could argue that the Wikimedia Foundation is already involved in just such a project... but to do so would require acknowledging that Wikipedia's greatest weakness is its inability to handle controversial material. Furthermore, since one of the most significant foundations of conservative thought is that human nature is not perfectible, Wikipedia's appeal to the better angels of our nature will be a non-starter for too many users. Remember: The proper solution is one on which we can all agree.
- Credibility grading: I've been talking about this for years -- the notion that our new media capabilities give us the tools to begin grading the credibility of information in dynamic, real-time ways. The difficulty many people have with this concept is that I want it applied not only to media, but to the institutions and information sources that media cover. While plenty of media critics would have been glad to see a green-to-red "credibility rating bar" floating over Dan Rather's head as he read the news, few of them would have liked the results had the same standard been applied to their favorite political candidate. In fact, one problem with credibility ratings is that they raise the epistemological question: How do we know what we know? One possible solution might be to focus credibility grading more on the process (sources cited, levels of expertise, etc.) and less on track record (WMDs, yes or no? Tawana Brawley raped or not?), conflicts of interest (Global warming critics funded by oil industry groups?) or levels of competency. It might also be possible to design credibility grades that track all sorts of factors -- so that a politician might score poorly on his scientific claims but have high credibility on his statements about national security (a minimal sketch of such per-topic grading follows this list). Whatever the system, the key to credibility grading is that it must be clickable and transparent all the way back to each individual story, claim or source document. If it's based on my opinion of a source's credibility, it's worthless as a reference guide.
- Discovery Informatics: Imagine for a moment a computer program -- an intelligent agent, if you will -- that resides in the media stream, constantly searching the world's daily flood of information for patterns. What if that agent's mission were to spot only those patterns that relate to the validity of claims and statements -- no matter where they occur -- and the results of its work were instantly correlated with each individual writer, source, group or institution? The outcome could be a real-time credibility grade that changes with the news -- an open-source, pattern-seeking robot with an infinite memory and the ability to see connections and patterns across enormous bandwidth. Such a robot might have given Donald Rumsfeld a relatively healthy credibility grade in February of 2003, but his trend line would be pretty much buried by now (a toy trend-line is sketched after this list). Such a system has enormous potential -- but it requires that we place some degree of confidence in the accuracy and fairness of the source code. Such a system might be moderated by a board of DI experts, with feedback from various parties.
- The Provisional Super-Library: Given the right funding, supervision and transparency, it might be possible to combine several of the approaches listed here into an ongoing database project that breaks subjects down by degrees of confidence in their component parts, connects them via hyperlinks and stays value-neutral in terms of its presentation. Such an approach might not offer general, combined credibility grades, but it would keep track of the truthfulness of statements. George W. Bush's Aug. 9, 2001 statement that there were "more than 60 genetically diverse" embryonic stem cell lines available to researchers is demonstrably not true according to scientists with knowledge of the subject. If it were possible to involve experts in the factual policing of the subjects in which they have expertise, then a provisional super-library could collect and cross-reference such reports. Consequently, the President's intentionally misleading statement about stem cells would be deemed "Not true," with all relevant facts and related claims threaded together. However, on other topics -- say, Sen. James Inhofe's 2003 claim that global warming might be "the greatest hoax ever perpetrated on the American people" -- no such concrete conclusion would be possible. A provisional super-library might note the circumstances and evaluate the factual claims Inhofe used to support his statement, but in circumstances where proof is not yet available or possible, it would settle for comment on relative degrees of confidence. Being provisional, it could always update those records as new bits of information became available (see the sketch after this list). Such a system wouldn't allow users the instant gratification of graphic credibility grades, but it could be used as a common point of reference in online writing. A careful writer would be sure to link to the library to support her points. The question remains, though: How would you write the rules to make all sides feel confident that the subject matter has been treated fairly? How would you conduct oversight and quality control?
- Non-Obvious Relationship Networking: Undisclosed conflicts of interest and conspiratorial connections lie at the heart of most modern cynical attitudes toward media, government and power. But what if you could create an online, open-source system that tracks and reveals such hidden relationships? Such systems already exist -- they're used extensively by national security officials, law enforcement and intelligence agencies to search for clues and areas for further investigation. Most are presented as graphic webs, in which proximity, position, size, color and thickness of lines communicate the relative level of interaction between individual nodes in a network. Such systems can be transparent, clickable back to the original record or source, and they're great for anyone who wants to understand the interests and credibility of an information source (a bare-bones version is sketched below). By making them public, you'd be giving the public the kind of insight that very few people enjoy today. But such systems have one obvious, glaring drawback: Making such networks public threatens personal privacy in unprecedented ways. Even if all the elements of the network are available from public sources, their connection -- and the ease with which this networking discloses relationships -- changes the nature of public records.
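For the credibility-grading idea above, here is a minimal sketch in Python. Everything in it is hypothetical -- the Claim and SourceProfile names, the scores, the URLs -- but it illustrates the two properties argued for: grades that can be computed per topic, and grades that are nothing more than an average over claims a reader can click back to.

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    source_url: str   # link back to the original story or document
    topic: str        # e.g. "science", "national security"
    support: float    # 0.0 (contradicted) .. 1.0 (well supported) -- invented scale

@dataclass
class SourceProfile:
    name: str
    claims: list = field(default_factory=list)

    def grade(self, topic=None):
        """Average support across claims, optionally restricted to one topic.

        The grade is transparent because the claims list *is* the evidence:
        a reader can follow each source_url back to the underlying document."""
        relevant = [c for c in self.claims if topic is None or c.topic == topic]
        if not relevant:
            return None  # no data: show "ungraded" rather than a fake score
        return sum(c.support for c in relevant) / len(relevant)

# A politician might grade poorly on science but well on national security:
pol = SourceProfile("Hypothetical Senator")
pol.claims += [
    Claim("Stem cell lines claim", "https://example.org/a", "science", 0.1),
    Claim("Troop readiness claim", "https://example.org/b", "national security", 0.9),
]
print(pol.grade("science"), pol.grade("national security"))
```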
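For the Discovery Informatics agent, here is a toy version of the trend line, under the assumption that some upstream process has already scored individual claims as later borne out (+1) or contradicted (-1). The source name, dates and scores below are invented for illustration.

```python
from collections import defaultdict
from datetime import date

# Claim checks as they arrive from the media stream:
# (date of verification, source, +1 borne out / -1 contradicted)
checks = [
    (date(2003, 2, 1), "official_x", +1),
    (date(2003, 9, 1), "official_x", -1),
    (date(2004, 6, 1), "official_x", -1),
]

def trend_lines(checks):
    """Running credibility score per source, recomputed as the news arrives."""
    history = defaultdict(list)
    score = defaultdict(int)
    for when, source, delta in sorted(checks):
        score[source] += delta
        history[source].append((when, score[source]))
    return history

for source, line in trend_lines(checks).items():
    print(source, line)   # a trend line that sinks as claims fail to hold up
```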
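For the provisional super-library, a rough sketch of what a provisional record might look like: a claim id, a degree-of-confidence verdict drawn from a fixed scale, and evidence links that accumulate rather than get overwritten, so a record can be revised as new information becomes available. The identifiers, verdict scale and URLs are placeholders, not a real schema.

```python
# Hypothetical degree-of-confidence scale, from refuted to well established.
VERDICTS = ("not true", "disputed", "unverified", "provisionally true", "true")

library = {}  # claim id -> record

def record_claim(claim_id, statement, verdict, evidence, related=()):
    """File or update a claim with its current degree of confidence.

    Being provisional, a record can be revised later; earlier evidence stays
    attached so readers can see why the verdict changed."""
    assert verdict in VERDICTS
    entry = library.setdefault(claim_id, {"statement": statement,
                                          "evidence": [], "related": set()})
    entry["verdict"] = verdict
    entry["evidence"].extend(evidence)
    entry["related"].update(related)
    return entry

record_claim("stemcell-2001",
             "More than 60 genetically diverse stem cell lines are available",
             "not true",
             ["https://example.org/nih-count"],          # placeholder link
             related=["stemcell-policy-2001"])           # placeholder claim id
```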
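For non-obvious relationship networking, a bare-bones version: relationships drawn from (hypothetical) public records go into an undirected graph, and a breadth-first search surfaces the shortest chain connecting any two entities. A real system would add the visual weighting described above and the click-through to source records; this only shows the disclosure mechanics.

```python
from collections import deque, defaultdict

# Hypothetical public-record links: (entity, entity, relationship)
edges = [
    ("Think Tank A", "Oil Group B", "funded by"),
    ("Pundit C", "Think Tank A", "fellow at"),
    ("Pundit C", "Network D", "paid contributor"),
]

graph = defaultdict(list)
for a, b, rel in edges:
    graph[a].append((b, rel))
    graph[b].append((a, rel))  # treat each relationship as a two-way link

def connection(start, goal):
    """Breadth-first search for the shortest chain of public-record links."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for nxt, rel in graph[node]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [(node, rel, nxt)]))
    return None  # no chain found in the records we have

print(connection("Pundit C", "Oil Group B"))
```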
I don't have the answers to these questions, but I'm convinced that we should begin talking about this subject. The mass media can only be improved by a finite degree -- beyond that, we must focus on improving the ability of each end-user (that would be us) to evaluate the information as presented. I think the solution is to build 21st century tools that make use of 21st century media and 21st century thinking.
How could we bring these approaches together? What other tools are we missing? What approaches have I overlooked?
I haven't had enough time to think through these options, but I think you're right - for us to have any hope, we have to find a way out of the non-debate of contemporary culture.
Also, I think that underlying all this is a fundamental optimism about human nature: it assumes that we really WANT the truth, that we're open to having our prejudices and convictions overturned if we find a source we deem credible. I hope that's true -- I'm just not sure it is.
This is pessimistic and not really helpful, but I had another thought reading the "Grokking Woodward" article. From a different angle, Rosen was making the same point Bill Moyers made months ago: "the delusional is no longer marginal."
Empiricism, Rosen assumes, used to help us weed out delusion. I think he's right and I think it did. I know there are all sorts of philosophical arguments about the lack of real objectivity in things like empiricism, but I think they are overstated and overblown and mostly good for getting tenure. We need to find some way to recognize what seems obvious: while there may be no such thing as objectivity, some things get much closer to it than others. A blanket "there is no objectivity" just isn't good enough for us to talk to each other meaningfully.
Maybe this is where credibility ratings come in. I confess to having no clue how to make that work, and I think it assumes a more tech savvy world than actually exists. For all the blogosphere bluster, taking blogs seriously is still kind of cutting edge. No one I know -- and I'm under 30 -- reads them as much as I do.
Here's a guess: our affluence allows us to live in a way I call delusional. Eventually, our delusions will lead us into some sort of devastating collision with reality -- environmental, economic, military, or other -- which will end our affluence. Then, perhaps, sheer necessity and desperation will force people to cut the crap, ditch their ideological shells, and deal with whatever reality seems most empirically true. With our protective cushion of affluence gone, rhetoric will change.
So my pessimistic -- extremely pessimistic -- take? It'll take a disaster of some sort to end the self-interested posturing. And even then, it might not work. We might only get finger pointing.
I could ramble further, but that's it for now.
Posted by: benbrazil | Wednesday, December 14, 2005 at 21:59
Awesome.
Posted by: Jay Rosen | Thursday, December 15, 2005 at 00:57
Ben, very nice. We are going to have a collision with reality (eventually) and the only question is how bad will it be? People who sense that something will have to happen to jerk us out of the current bitter malaise are both dreading it and looking forward to it. I would venture to say that the interest in a flu pandemic, dramatic climate change and other extreme events comes tinged with a little morbid hope.
Not that we want the miserable reality of such a dramatic event, but we are so ready for change.
Kinda like labor. There are few other options if you want a baby, but you know it ain't gonna be fun. But after nine months of waddling around and not being able to breathe, you're ready to go through whatever it takes. And, you hope, there will be drugs ...
Maybe we won't have to birth a brave new world. Maybe we can adopt. But regardless, it's going to be a lot of work.
Trust boils down to a relationship between me and you. Certain sectors of our world have been actively working to destroy our willingness, or even ability, to trust anyone. But disinformation always comes full circle on the perpetrator. It's inevitable. Teach me to trust no one and eventually I don't trust you.
The Web, more than any other medium, allows us to build trust at our own pace, with our own standards. I suspect that ideas and great journalism are not what is going to power the Web to the next level.
It will be vacuum cleaners. And toasters and coffee makers and all the things I can find out about online. I can read reviews of products and stores. I can vent my bad experiences in a way that is meaningful and far more effective than a letter to the company president.
Commerce is going to be the thing that speeds the net revolution. And we can take those lessons and apply them to the world of information. Here are my suggestions for making me trust you.
1. Let me see what others think about you.
2. Don't sell me crap.
3. Deliver on your promises.
4. Give me products/ideas that improve my life.
That's it.
Posted by: Janet Edens | Thursday, December 15, 2005 at 10:42
one more.
5. Make good on your mistakes.
Posted by: Janet Edens | Thursday, December 15, 2005 at 13:44
For a model of "authority" that's built for Twitter but could generalize easily to other directed social graphs, consider http://tunkrank.com/. It's all about attention economics as a way of getting a handle on whom we trust.
Daniel Tunkelang, chief scientist at Endeca, conceived of the algorithm and explains it here:
http://thenoisychannel.com/2009/01/13/a-twitter-analog-to-pagerank/
PS. I now remember reading this post and loving it more than a year ago. Awesome.
Posted by: Josh Young | Friday, March 20, 2009 at 11:25