I’m not a particularly active dreamer, so when I get strong messages in my sleep I tend to pay attention. On Saturday morning I woke up with an odd idea in my head, which made me take notice. On Sunday I woke up with more of it in place, as if my dreaming self had been installing the idea in segments.
It’s a Singularity idea, although I don’t think it’s necessarily just a post-Singularity idea. And here’s the way I think I’m supposed to introduce it:
We understand cyberspace to be the virtual space between all the nodes on all our computer networks. And I’ve defined my concept of Spookworld as being everything that exists between the nodes of organized deception.
This new concept is called The Construct, defined as everything that exists between nodes of intent. And since I’m really introducing two ideas here (The Construct and “nodes of intent”), I’d better start by explaining the foundational idea: scaling humanity to the Law of Accelerating Returns.
If we accept the idea of The Singularity (and for the purposes of this essay, we will accept its outline without arguing its many details) and Kurzweil’s proposed date for its arrival (2045 A.D.), then one obvious consequence is that the rate of change in human civilization over the coming decades will expand at a pace for which we have no historical precedent. That's going to make it difficult for most people to keep up with -- much less participate in -- our evolving culture.
Consider my life as an example. As a member of Generation Jones, I grew up with department stores, three TV channels, weekly news magazines, expensive long distance land-line phones, typewriters, Whiteout, handwritten letters and a crappy local newspaper that seldom gave me more than the box score from the Chicago Bears game.
Today I shop online, run a blog, use an RSS reader to keep up with my multiple information sources, watch video on the Web, use Twitter and Facebook, and pay for NFL Sunday Ticket’s Superfan package so I can watch all the Sunday games, including The Red Zone Channel. It's a wonderful explosion of resources, and yet as an "information worker" I feel horribly behind most of the time. With new resources come new competitive expectations, and most human beings aren't even keeping pace with the accelerating capacity. With bandwidth expanding so rapidly that it already defies our ability to grasp the consequences, what kind of world can I expect in my 80s? How will I adapt?
To put it another way, our global information flow is already superhuman. This causes us excitement and discomfort. And this process is not going to slow down, which will cause our culture to fracture and morph into something that internalizes the reality of this great acceleration in research, commerce and communication.
Which means we’ll have to build tools that scale to the problem: New, robotic methods of dealing with a culture that is animated by teraflops of data flowing through nonlinear networks. Most of these tools will fall under the general heading of informatics. The more advanced tools will likely arrive via the emerging field of discovery informatics, in which machines not only find the answers, they come up with the questions.
Informatics may help us find the signal in the static, but to really receive the benefit of informatics we will be forced to make compromises with some important values. Example: I’ve recently reset my Google Reader to collect only my top-priority sites and blogs. This keeps me up-to-date with lots of information sources, but doing so filters out all sorts of potentially useful and interesting sites. Not only that, but I still have to wade through a bunch of items that don't interest me.
A smart informatics bot could help me by searching EVERYTHING for those bits of content that would likely interest me, but to make that valuable I would have to compromise on my privacy (I would have to tell the bot what I like, and more ominously, give it access to my actual choices, so that it can LEARN what I REALLY like). Truly expanding my intentions and desires to the full scale of the wired world requires that I relinquish a certain level of control.
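To make that trade concrete, here's a toy sketch (my own illustration, not any real product's code) of how such a bot might learn from my choices: it counts the words in headlines I click versus headlines I skip, then ranks new items accordingly. All the headlines and the scoring rule are invented for the example.

```python
from collections import Counter

class InterestFilter:
    """Toy agent that learns what I like from what I actually click."""

    def __init__(self):
        self.liked = Counter()    # words from headlines I clicked
        self.skipped = Counter()  # words from headlines I ignored

    def record(self, headline, clicked):
        words = headline.lower().split()
        (self.liked if clicked else self.skipped).update(words)

    def score(self, headline):
        # Higher score = more like what I've clicked before.
        return sum(self.liked[w] - self.skipped[w]
                   for w in headline.lower().split())

bot = InterestFilter()
bot.record("Bears beat Packers in overtime", clicked=True)
bot.record("Celebrity gossip roundup", clicked=False)

items = ["More celebrity gossip", "Bears sign new quarterback"]
ranked = sorted(items, key=bot.score, reverse=True)
# The Bears item rises to the top, because the bot now knows me.
```

Notice that everything the bot needs in order to be useful -- my actual clicks -- is exactly the private data I'd be surrendering.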
Nevertheless, we are making these choices. The new iGoogle suite includes useful tools that apply informatics to deliver me content and options, all while collecting valuable information about me. Same with Amazon and A9. I give up something to get something, and if I like what I get and I don’t feel abused, I’ll probably experiment with the next one to come along.
But I’m quite aware of where this is leading me: Eventually, I’ll have created something that acts as an automated extension of all my intentions, a robotic, multi-faceted proxy that will collect information, select products, make purchases and solve problems without requiring any physical act or conscious choice on my part. It will be constructed of multiple parts, and bits of it will "belong" to multiple companies. But the end result will be an automated expansion of my will, operating constantly in the background.
We have small examples of this now. Automated sell orders carry out the wishes of stock traders. I never have to think about my electric bill because I authorized the power company to withdraw the amount from my online banking account. But these are basically just dumb bots executing instructions.
What I expect to encounter, however, are intelligent agents that I will trust to select and organize information on my behalf. Eventually I'll ask these agents to take care of more complex tasks, such as analyzing vast pools of data based on what they understand about my desires and finances, and then actually making decisions and acting upon them. Right now I can instruct a bot to shoot me an e-mail when a flight to a particular destination is available at a particular price. Someday I’ll have a bot that analyzes my finances, my calendar, my vacation balance and my reading habits, and some happy day it will book us a cycling trip to Ireland because it spotted the perfect opportunity at a fantastic buy-today discount.
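That vacation bot is speculative, but its decision step is easy to sketch. Here's a hypothetical version (all field names, prices and dates are invented) that checks a flagged deal against budget, calendar and vacation balance before acting:

```python
def should_book(deal, travel_budget, vacation_days, calendar):
    """Return True only if the flagged deal clears every constraint
    I handed the agent. All thresholds here are illustrative."""
    conflicts = any(deal["start"] <= event <= deal["end"]
                    for event in calendar)   # ISO dates compare as strings
    return (deal["price"] <= travel_budget
            and deal["days"] <= vacation_days
            and not conflicts)

deal = {"dest": "Ireland", "price": 1400, "days": 7,
        "start": "2008-06-02", "end": "2008-06-09"}

ok = should_book(deal,
                 travel_budget=2000,
                 vacation_days=10,
                 calendar=["2008-07-15"])  # nothing booked during the trip
# ok is True, so the bot would pull the trigger on the Ireland trip.
```

The hard part, of course, isn't this final check -- it's everything upstream that fills in those numbers.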
That’s an extreme example, but there you have it: What we’re really discussing are nodes of intent, all these automated extensions of ourselves crawling around in cyberspace, acting on our behalf. Which raises the question: What will be the mass effect of millions of virtual extensions of millions of human intentions?
The Construct
Since the term “the Media” still holds value today, let’s start there, because it’s the easiest consequence to imagine. If I have an informatic “news agent” bopping around the global flow of news and analysis and it’s learning my personality and interests from my actions, then over enough time it will become increasingly proficient at creating a timely, nuanced flow of content to meet my needs without wasting my time. It may not be able to pass a Turing Test, but its choices will seem highly intelligent.
Which means I’ll become very knowledgeable about what I want to know about, but utterly uninformed about other stuff. My intent will shape my mediascape, my intelligence, my biases, my language. My intent will create my experience of the world, because my intentions – disembodied and ubiquitous – will mediate the world that is revealed to me (which, come to think of it, sounds an awful lot like what the mystical traditions say about consciousness on the non-physical plane).
To the extent that there are others with similar intentions and values, those intersections will provide opportunities for relationships, cooperation and – in some cases – direct action. It’s not too hard to imagine some kind of super-Facebook that connects such people, recognizing and anticipating opportunities for meaningful or entertaining new connections.
So there you have the first level of The Construct: Politics, information and culture.
Will there be others? Sure. Commercial ones, obviously. But I suspect we’ll have elements of The Construct that represent every area of human desire and intention.
To clarify: I’m not talking about something that replaces existing institutions. I’m talking about something that expands our projection of “self” to fit the scale of the global information stream, followed by the inevitable interaction of millions of projected virtual selves, at a level that exists above our conscious awareness and without our conscious approval or mediation.
The Construct in conflict
If I’m thinking clearly on this idea, then it stands to reason that eventually our intentions will run afoul of each other. There are X hot new game consoles for sale and X times Y consumers who wish to purchase them. Just as human beings compete in meatspace for scarce items, so will our virtual selves – our agents of intention – enter into conflict.
If that’s a market function, maybe it creates fantastic opportunities – maybe the totality of expressed intention surrounding a product not only helps set the perfect price point for it, but enables the manufacturer to anticipate the proper product run. Because inherent in this concept is the notion that our intelligent agents will function within ranges, essentially negotiating on our behalf.
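Negotiating within ranges can be sketched very simply: a buyer agent with a private ceiling meets a seller agent with a private floor, and a deal exists only where the ranges overlap. The split-the-difference rule below is just one illustrative settlement strategy, not a claim about how real agents would settle:

```python
def negotiate(buyer_max, seller_min):
    """Each agent holds a private limit; a sale exists only where the
    ranges overlap. Splitting the difference is one possible rule."""
    if buyer_max < seller_min:
        return None  # intentions in conflict: no deal
    return round((buyer_max + seller_min) / 2, 2)

price = negotiate(buyer_max=450.00, seller_min=400.00)    # deal at 425.0
no_deal = negotiate(buyer_max=300.00, seller_min=400.00)  # ranges miss: None
```

Aggregate the buyer ceilings across a whole market of agents and you have exactly the demand curve a manufacturer would love to see before setting a product run.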
It’s easy to imagine how this would work in a market setting. It's just as easy to imagine that the same people who run Nigerian banking scams today will ply their trade in the future, setting little traps that target our virtual agents. But what happens when the intentions in conflict aren’t about purchasing game consoles or getting a discount flight to Ireland?
But what happens when the intentions at play are political?
Yes, like-minded political leaders will recruit our support through The Construct, but those who oppose our intentions will take an interest in our virtual selves as well. Every trick that can be played upon humans will eventually be played upon the restlessly prowling extensions of humanity. But in addition to the usual disinformation, F.U.D., propaganda and annoyances that are part of political campaigns, we can also look forward to more direct attacks on communities of intention.
Which brings me, as always, to bacteria. Colonies succeed in part by out-competing other colonies for finite resources. But successful colonies become chillingly anti-competitive once they’ve acquired an advantage, releasing chemical signals that deny other bacteria the chance to compete with them on even terms. Human beings are more like bacteria than we like to admit.
To me it stands to reason that political groups will use their own informatics tools to identify the intentions of supporters and opponents, then use that information to craft responses to political situations that reward supporters while – at the very least – denying opportunities to their opponents.
How will this play out? I could probably think up scenarios, but let’s cut to the chase: Political professionals get paid money to create and exploit advantages. Give them a subtle tool and they will design a dirty trick around it. Give them anything that allows for anonymity and their creative impulses will kick into overdrive.
And what about nations? Will China analyze the cloud of US intentions in search of ways to extend its own intent? And vice versa? Certainly. Same with corporations. If a thing can be defined as a distinct identity, with outcomes that are relatively good and relatively not-good, then that identity has intention. If those intentions can be expressed, inferred and detected, then that intent can be thwarted.
Which brings us to deception.
Hiding our intentions
Today, digital security is largely an issue of encryption, anonymity and protection of a few key pieces of private data. In the future, with data streams that surpass the utility of tools like “Click ‘Yes’ to allow” buttons, security will be about all of those things PLUS obscuring the impressions our individual intentions make upon The Construct.
With the right tools and with access to the right data streams, it will be possible to know practically everything about an individual and still not know his or her name. Which I suppose will be a likely outcome: Companies will go to great lengths to hide names and addresses and account numbers, but pattern-seeking robots prowling The Construct will be able to discern interests, habits and connections. And if you can pick my pattern out of the flow and focus in on what I do and what I want, my actual name becomes almost irrelevant.
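Here's a minimal illustration of that kind of pattern matching, using cosine similarity over counts of where a user goes (the sites and the numbers are made up). No names appear anywhere, yet the "anonymous" session is trivially matched to the known fingerprint:

```python
import math

def cosine(a, b):
    """Similarity between two behavioral fingerprints (visit counts)."""
    dot = sum(a.get(k, 0) * b.get(k, 0) for k in set(a) | set(b))
    norm = lambda v: math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm(a) * norm(b)) if a and b else 0.0

known_me  = {"nfl.com": 40, "xark.typepad.com": 25, "rss_reader": 60}
session_a = {"nfl.com": 8, "xark.typepad.com": 5, "rss_reader": 11}
session_b = {"fashion_sites": 30, "recipe_sites": 22}

# session_a matches my fingerprint almost perfectly; session_b doesn't.
```

Strip out every account number and the pattern still says "that's him."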
The obvious answer is the wrong one. Sure, we could disconnect these tools from the Web, use other means to pick out what we buy, what we watch, what we read. We could pick our friends and lovers and collaborators the old fashioned way: geography and luck. Then again, we could protect our computers from viruses right now by disconnecting them from the Internet.
Instead, we’ll enter a new security arms race, only in this case, the key will be the ability to randomize the source of an intention. It does a pattern-seeker little good to know that I want a discount ticket to Ireland. It does considerable good to be able to link that intent to a larger set of my intentions.
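One crude way to randomize the source of an intention: bury the real query in a batch of plausible decoys, so no single request can be pinned to me with confidence. This is a sketch of the idea (the queries are invented), not a serious privacy scheme:

```python
import random

def camouflaged_queries(real_query, decoy_pool, n_decoys=4):
    """Bury one genuine intention in a shuffled batch of plausible
    decoys, so a pattern-seeker can't tell which query is really mine."""
    batch = random.sample(decoy_pool, n_decoys) + [real_query]
    random.shuffle(batch)
    return batch

decoys = ["flights to Iceland", "hotels in Boston", "flights to Peru",
          "train to Montreal", "cruise deals"]
batch = camouflaged_queries("discount flights to Ireland", decoys)
# One of the five queries is real; an observer can't tell which.
```

The arms race, naturally, would be pattern-seekers learning to spot which decoys I'd never actually generate.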
But why should anyone expect that such collections of intentions would be available to the public? Well, for starters, an intelligent agent that cannot connect to the proper data streams is useless, and an agent that cannot hop multiple streams and sources and formats isn’t going to be all that helpful, either. It stands to reason that “modern” data streams will eventually find their way into a few standard protocols as providers and consumers search for each other in the info-deluge. And you have to design these protocols with enough openness that they can be adapted to quickly changing demands.
A “secure, proprietary” network sounds great, but future networks that can’t communicate with other networks and the various nodes of intention will be like a computer without an Internet connection today: Useful only for word processing and playing obsolete games.
The virtual secretary on the global stage
Humans have often passed along their decision-making sovereignty to others: secretaries, house servants, personal assistants, diplomats. We train them, impart our wishes, and send them out to perform their tasks while we attend to other business. Will we trust a robot to do the same? I think so – since it’s likely the robots will, in time, become indistinguishable from “us.”
What’s really different isn’t the trust or the agent: It’s the way that all these actions become observable (The Construct). Like public health officials who drug-test sewage to see whether cocaine use in their city is rising or falling, so too will informatics applications study the vast array of shifting interactions on the Web to see what intentions are being expressed by the people. Sophisticated and nefarious users will likely drill down into private areas we’d rather they left alone.
I used to believe that the human race, via blogs and other forms of expression and communication, was on its way to becoming self-aware, a concept I thought of as a global, holographic brain. Now I think it’s more likely that our self-awareness will come not so much from the quantification of what we publicly express, but from the quantification of what we privately, anonymously desire.
We understand, to varying degrees, how to activate desire and intention via rhetoric and advertising. But how will we “advertise” to purchasing robots? How will we quantify attraction? Hatred? Nationalism? Love?
I don’t have the answers, but I understand that it will be far more efficient to sell, persuade and connect large numbers of people robotically via The Construct than by traditional mass-mediated methods. Just as radio and television made “walking the district” impractical for most political candidates, so too will The Construct make media-to-human appeals expensive, inefficient on a per-unit basis, and – ultimately – rather quaint.
My bot will contact your bot and work out the details. Whether we care to know those details or not is really the question.
When this is read with the knowledge of the number of hits Xark receives for santa p*rn, the idea of private, anonymous desire becomes somewhat sinister. I probably jump to that conclusion because I'm in a very protective phase of my life.
Apparently I have always been fascinated by apocalyptic scenarios. Your thoughts reminded me of an argument I had with a teacher when I was in fourth or fifth grade. I was trying to explain to her, probably not very well, that I could see more than one kind of knowledge. There was higher learning and then survival skills. I was trying to convey the idea that everyone should at least have rudimentary knowledge of growing food, making shelter, etc.
She told me I was being absolutely ridiculous when I tried to explain the value of that knowledge. Today I look around and see people who are, on the whole, quite helpless. We have people and machines to do everything for us. The Construct will choose what we know.
The idea of consuming only what is chosen for me is unsettling. I understand the Paradox of Choice, and am staggered by the amount of knowledge that I'll never be able to learn. I like the illusion of controlling my filters.
Posted by: Heather | Thursday, December 27, 2007 at 06:15
Yes, and I've come to understand that growing up "country" -- at least in the sense of having some familiarity with making, fixing and growing things -- is a real advantage no matter where you live and what you do as an adult. Agreed.
And maybe that's why I also view The Construct in some ways as just another tool. I might prefer an economy in which farmers could be successful using mule-drawn plows and ox-team combines, but these tools just don't scale to the 21st century economy. Same with our information tools.
So I don't see The Construct so much as "choosing for me" as I see it as an extension of myself. A pair of pliers extends the ability of my fingers to grasp physical things. An intelligent agent alertly prowling information streams extends my ability to grasp intangible things.
A tool is neither good nor bad. The intention behind the tool is the issue. And tools can be turned against us if we don't mind them and understand them.
Honestly, my biggest concern right now is the fracturing of culture, which I am beginning to suspect we're already witnessing.
Posted by: Daniel | Thursday, December 27, 2007 at 08:49
Just talking with Janet and it occurred to me: The original Construct -- disembodied agents of intention negotiating and acting -- is representative democracy. It's a much less efficient transmission of intention, but we invest our intent in our representatives via elections.
Posted by: Daniel | Thursday, December 27, 2007 at 08:55
Dan,
Very interesting post. While I would enjoy jibber jabbering about this with you over a long conversation and a pitcher of beer, I only want to provide one brief thought here. Since my thinking on these topics is always heavily influenced by McLuhan, I'll simply bring him into the conversation below. As you will see, this is pertinent to your first response to Heather. McLuhan would suggest that since every medium is indeed an "extension of self," you need to see it less as a tool and more as a prosthetic that ontologically becomes part of you and you it (i.e., it is not a tool; once it becomes dominant in a culture, it affects everyone, whether they use it or not. The logic of TV culture, or internet culture, drives thinking so that even those who don't watch TV or don't use the internet are forced to operate under a logic shared by others). The point, in short, is that you cannot separate the tool from the user after a while.
The following is from p. 11 of McLuhan's "Understanding Media: The Extensions of Man." Any google search of "McLuhan Sarnoff" will bring it up:
In accepting an honorary degree from the University of Notre Dame a few years ago, General David Sarnoff made this statement: "We are too prone to make technological instruments the scapegoats for the sins of those who wield them. The products of modern science are not in themselves good or bad; it is the way they are used that determines their value." That is the voice of the current somnambulism. Suppose we were to say, "Apple pie is in itself neither good nor bad; it is the way it is used that determines its value." Or, "The smallpox virus is in itself neither good nor bad; it is the way it is used that determines its value." Again, "Firearms are in themselves neither good nor bad; it is the way they are used that determines their value." That is, if the slugs reach the right people firearms are good. If the TV tube fires the right ammunition at the right people it is good. I am not being perverse. There is simply nothing in the Sarnoff statement that will bear scrutiny, for it ignores the nature of the medium, of any and all media, in the true Narcissus style of one hypnotized by the amputation and extension of his own being in a new technical form. General Sarnoff went on to explain his attitude to the technology of print, saying that it was true that print caused much trash to circulate, but it had also disseminated the Bible and the thoughts of seers and philosophers. It has never occurred to General Sarnoff that any technology could do anything but add itself on to what we already are.
Posted by: jmsloop | Thursday, December 27, 2007 at 10:29
Yes, brilliant. I absolutely agree. Thank you.
Posted by: Daniel | Thursday, December 27, 2007 at 11:23
Along those lines, these are Dave Weinberger's selections for video of the year. This first one, made by a Kansas State anthropology professor, makes some excellent points that connect to this post: "...with every post or photo we tag... we are teaching the machine... the machine... is us..."
Link.
Posted by: Daniel | Thursday, December 27, 2007 at 13:42
A good chance to post one of my heroes, Terrence McKenna speaking about intelligent machines and shamanism. This dude has made me smile for years and is a real inspiration. This is an art video with Terrence speaking in the background.
http://deoxy.org/video/-493424393639525435
Posted by: Mitchell Davis | Thursday, December 27, 2007 at 23:09
Dan: "What I expect to encounter, however, are intelligent agents that I will trust to select and organize information on my behalf."
Cool, can we call them journalists? I mean, how intelligent would they really have to be to qualify?
Posted by: Tim | Friday, December 28, 2007 at 17:02
How intelligent do you have to be to qualify as a journalist? Oh, about as intelligent as your average commissioned officer.
All witty banter aside, the joke misses the point: Journalists never really did work for YOU, individually. If they ever worked for you, it was the collective "YOU", and since you didn't sign their paychecks, that was never more than abstract, anyway.
The evolution of search moves from the ability to find links when you're thinking about a subject to the ability to have things that would interest you brought to you without your feedback. If you don't like the outcome, you reprogram the agent or acquire a new one.
Which means no more kvetching about "the media" because you'll control the mediation between yourself and the information.
Or something like that.
Posted by: Daniel | Saturday, December 29, 2007 at 15:15
Except that the agent won't be "working" for you either. Unless you can write your own intelligent agent with the knowledge needed to access the proprietary information infrastructure, the intelligent agent will actually be a Google agent or Amazon agent or some such, to which you'll provide attributes to personalize it.
In that sense, the intelligent agent (a cyber-prosthetic) is not much different than your cell phone or GPS and subject to the same infrastructural bias.
Being able to acquire a new agent is no different than switching stations or subscriptions.
re: journalist v. "average commissioned officer"
Who Do You Trust More: G.I. Joe or A.I. Joe?
If that's all it takes, journalists are screwed and people will still complain about their agents.

Posted by: Tim | Saturday, December 29, 2007 at 21:52
The notion that "journalists are screwed" is pretty much stipulated around here now. At least "journalist" as formerly constituted.
And good point about the agent working for the maker. This is where we're already experiencing friction. However, as informatics becomes more widely understood and the Semantic Web becomes more standardized, I expect to see more options emerge. Who would have predicted Open Source would become what it is? Hell, if I were the Electronic Frontier Foundation, I'd be interested in developing these kinds of tools with various privacy protections as a way to raise cash for the organization.
Just musing.
Posted by: Daniel | Sunday, December 30, 2007 at 11:36
re: "Journalists never really did work for YOU, individually. If they ever worked for you, it was the collective "YOU", and since you didn't sign their paychecks, that was never more than abstract, anyway."
I refer you back to your previous comment and Jay Rosen's essay: Bush to Press: "You're Assuming That You Represent the Public. I Don't Accept That."
re: "However, as informatics becomes more widely understood and the Semantic Web becomes more standardized, I expect to see more options emerge."
OK. Can I ask you to consider not only what it means to be able to more directly and efficiently impose your "self" onto your cyber-representative within your construct, but what effect that has on representation as you scale up?
If you're willing to think about that, would you re-read Andy's Field Theory?
You might also enjoy reading Mind Versus Computer: Were Dreyfus and Winograd Right?
Posted by: Tim | Monday, December 31, 2007 at 11:28
re: McLuhan ... adding without contradicting
"Things I Used to Teach That I No Longer Believe" Was the Title of the Panel...
Posted by: Tim | Wednesday, January 02, 2008 at 11:57
William Gibson, Dec. 29.
Posted by: Daniel | Wednesday, January 02, 2008 at 20:51
I so love comments on your posts, darling. Absolutely revelatory.
Posted by: twitter.com/XarkGirl | Sunday, April 01, 2012 at 20:55