I’m not a particularly active dreamer, so when I get strong messages in my sleep I tend to pay attention. On Saturday morning I woke up with an odd idea in my head, which made me take notice. On Sunday I woke up with more of it in place, as if my dreaming self had been installing the idea in segments.
It’s a Singularity idea, although I don’t think it’s necessarily just a post-Singularity idea. And here’s the way I think I’m supposed to introduce it:
We understand cyberspace to be the virtual space between all the nodes on all our computer networks. And I’ve defined my concept of Spookworld as being everything that exists between the nodes of organized deception.
This new concept is called The Construct, defined as everything that exists between nodes of intent. And since I’m really introducing two ideas here (The Construct and “nodes of intent”), I’d better start by explaining the foundational idea: scaling humanity to the Law of Accelerating Returns.
If we accept the idea of The Singularity (and for the purposes of this essay, we will accept its outline without arguing its many details) and Kurzweil’s proposed date for its arrival (2045 A.D.), then one obvious consequence is that the rate of change in human civilization over the coming decades will expand at a pace for which we have no historical precedent. That's going to make it difficult for most people to keep up with -- much less participate in -- our evolving culture.
Consider my life as an example. As a member of Generation Jones, I grew up with department stores, three TV channels, weekly news magazines, expensive long distance land-line phones, typewriters, Whiteout, handwritten letters and a crappy local newspaper that seldom gave me more than the box score from the Chicago Bears game.
Today I shop online, run a blog, use an RSS reader to keep up with my multiple information sources, watch video on the Web, use Twitter and Facebook, and pay for NFL Sunday Ticket’s Superfan package so I can watch all the Sunday games, including The Red Zone Channel. It's a wonderful explosion of resources, and yet as an "information worker" I feel horribly behind most of the time. With new resources come new competitive expectations, and most human beings aren't even keeping pace with the accelerating capacity. With bandwidth expanding so rapidly that it already defies our ability to grasp the consequences, what kind of world can I expect in my 80s? How will I adapt?
To put it another way, our global information flow is already superhuman. This causes us excitement and discomfort. And this process is not going to slow down, which will cause our culture to fracture and morph into something that internalizes the reality of this great acceleration in research, commerce and communication.
Which means we’ll have to build tools that scale to the problem: New, robotic methods of dealing with a culture that is animated by teraflops of data flowing through nonlinear networks. Most of these tools will fall under the general heading of informatics. The more advanced tools will likely arrive via the emerging field of discovery informatics, in which machines not only find the answers, they come up with the questions.
Informatics may help us find the signal in the static, but to really receive the benefit of informatics we will be forced to make compromises with some important values. Example: I’ve recently reset my Google Reader to collect only my top-priority sites and blogs. This keeps me up-to-date with lots of information sources, but doing so filters out all sorts of potentially useful and interesting sites. Not only that, but I still have to wade through a bunch of items that don't interest me.
A smart informatics bot could help me by searching EVERYTHING for those bits of content that would likely interest me, but to make that valuable I would have to compromise on my privacy (I would have to tell the bot what I like, and more ominously, give it access to my actual choices, so that it can LEARN what I REALLY like). Truly expanding my intentions and desires to the full scale of the wired world requires that I relinquish a certain level of control.
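To make that trade-off concrete, here is a toy sketch of such a bot (in Python, with invented names; a crude word-count stands in for real machine learning): it can only learn what I REALLY like by watching what I actually open.

```python
from collections import Counter

class PreferenceBot:
    """Toy agent that learns its owner's interests from observed choices."""

    def __init__(self):
        # Word weights accumulated from items the owner actually opened.
        self.weights = Counter()

    def record_click(self, item_words):
        # Every word in an opened item counts as evidence of interest.
        for w in item_words:
            self.weights[w] += 1

    def score(self, item_words):
        # Sum of learned weights for the words in a candidate item.
        return sum(self.weights[w] for w in item_words)

    def filter(self, items, threshold=1):
        # Surface only items whose learned score clears the threshold.
        return [i for i in items if self.score(i.split()) >= threshold]

bot = PreferenceBot()
bot.record_click("cycling trip ireland".split())
print(bot.filter(["cheap cycling gear", "celebrity gossip roundup"]))
```

Note that the bot is only useful because `record_click` watches everything I open: the learning and the surveillance are the same act.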
Nevertheless, we are making these choices. The new iGoogle suite includes useful tools that apply informatics to deliver me content and options, all while collecting valuable information about me. Same with Amazon and A9. I give up something to get something, and if I like what I get and I don’t feel abused, I’ll probably experiment with the next one to come along.
But I’m quite aware of where this is leading me: Eventually, I’ll have created something that acts as an automated extension of all my intentions, a robotic, multi-faceted proxy that will collect information, select products, make purchases and solve problems without requiring any physical act or conscious choice on my part. It will be constructed of multiple parts, and bits of it will "belong" to multiple companies. But the end result will be an automated expansion of my will, operating constantly in the background.
We have small examples of this now. Automated sell orders carry out the wishes of stock traders. I never have to think about my electric bill because I authorized the power company to withdraw the amount from my online banking account. But these are basically just dumb bots executing instructions.
What I expect to encounter, however, are intelligent agents that I will trust to select and organize information on my behalf. Eventually I'll ask these agents to take care of more complex tasks, such as analyzing vast pools of data based on what they understand about my desires and finances, and then actually making decisions and acting upon them. Right now I can instruct a bot to shoot me an e-mail when a flight to a particular destination is available at a particular price. Someday I’ll have a bot that analyzes my finances, my calendar, my vacation balance and my reading habits, and some happy day it will book us a cycling trip to Ireland because it spotted the perfect opportunity at a fantastic buy-today discount.
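Today's version really is just a fixed rule. A sketch of the flight-alert bot as it exists now (data shapes invented for illustration):

```python
def price_alert(flights, destination, max_price):
    """Dumb bot: a fixed rule, no learning, no judgment.
    Returns the flights that match the owner's standing instruction."""
    return [f for f in flights
            if f["dest"] == destination and f["price"] <= max_price]

flights = [
    {"dest": "Dublin", "price": 380},
    {"dest": "Dublin", "price": 610},
    {"dest": "Lisbon", "price": 290},
]
hits = price_alert(flights, "Dublin", 400)
print(hits)
```

The future agent replaces the fixed `max_price` with a judgment drawn from my finances, calendar and habits, and replaces the alert e-mail with an actual purchase.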
That’s an extreme example, but there you have it: What we’re really discussing are nodes of intent, all these automated extensions of ourselves crawling around in cyberspace, acting on our behalf. Which raises the question: What will be the mass effect of millions of virtual extensions of millions of human intentions?
Since the term “the Media” still holds value today, let’s start there, because it’s the easiest consequence to imagine. If I have an informatic “news agent” bopping around the global flow of news and analysis and it’s learning my personality and interests from my actions, then over enough time it will become increasingly proficient at creating a timely, nuanced flow of content to meet my needs without wasting my time. It may not be able to pass a Turing Test, but its choices will seem highly intelligent.
Which means I’ll become very knowledgeable about what I want to know about, but utterly uninformed about other stuff. My intent will shape my mediascape, my intelligence, my biases, my language. My intent will create my experience of the world, because my intentions – disembodied and ubiquitous – will mediate the world that is revealed to me (which, come to think of it, sounds an awful lot like what the mystical traditions say about consciousness on the non-physical plane).
To the extent that there are others with similar intentions and values, those intersections will provide opportunities for relationships, cooperation and – in some cases – direct action. It’s not too hard to imagine some kind of super-Facebook that connects such people, recognizing and anticipating opportunities for meaningful or entertaining new connections.
So there you have the first level of The Construct: Politics, information and culture.
Will there be others? Sure. Commercial ones, obviously. But I suspect we’ll have elements of The Construct that represent every area of human desire and intention.
To clarify: I’m not talking about something that replaces existing institutions. I’m talking about something that expands our projection of “self” to fit the scale of the global information stream, followed by the inevitable interaction of millions of projected virtual selves, at a level that exists above our conscious awareness and without our conscious approval or mediation.
The Construct in conflict
If I’m thinking clearly on this idea, then it stands to reason that eventually our intentions will run afoul of each other. There are X hot new game consoles for sale and X times Y consumers who wish to purchase them. Just as human beings compete in meatspace for scarce items, so will our virtual selves – our agents of intention – enter into conflict.
If that’s a market function, maybe it creates fantastic opportunities – maybe the totality of expressed intention surrounding a product not only helps set the perfect price point for it, but enables the manufacturer to anticipate the proper product run. Because inherent in this concept is the notion that our intelligent agents will function within ranges, essentially negotiating on our behalf.
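A cartoon of that negotiation, assuming each agent carries a ceiling or floor set by its owner (the midpoint rule is just one arbitrary way to split the overlap, not a claim about how real markets would clear):

```python
def negotiate(buyer_max, seller_min):
    """Agents negotiate within owner-set ranges: the deal clears at the
    midpoint if the ranges overlap, otherwise there is no deal."""
    if buyer_max < seller_min:
        return None
    return (buyer_max + seller_min) / 2

# Hypothetical ceilings expressed by a swarm of buyer agents.
bids = [450, 500, 380, 520]
floor = 400  # the seller's minimum
deals = [p for p in (negotiate(b, floor) for b in bids) if p is not None]
print(len(deals), deals)
```

Note that the seller learns something even from the failed negotiation: the totality of expressed ceilings, cleared or not, is exactly the data needed to set the price point and size the product run.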
It’s easy to imagine how this would work in a market setting. But what happens when the intentions in conflict aren’t about purchasing game consoles or getting a discount flight to Ireland? It’s also easy to imagine that the same people who run Nigerian banking scams today will ply their trade tomorrow with little traps that target our virtual agents.
But what happens when the intentions at play are political?
Yes, like-minded political leaders will recruit our support through The Construct, but those who oppose our intentions will take an interest in our virtual selves as well. Every trick that can be played upon humans will eventually be played upon the restlessly prowling extensions of humanity. But in addition to the usual disinformation, F.U.D., propaganda and annoyances that are part of political campaigns, we can also look forward to more direct attacks on communities of intention.
Which brings me, as always, to bacteria. Colonies succeed in part by out-competing other colonies for finite resources. But successful colonies become chillingly anti-competitive once they’ve acquired an advantage, releasing chemical signals that deny other bacteria a chance to compete with them on even terms. Human beings are more like bacteria than we like to admit.
To me it stands to reason that political groups will use their own informatics tools to identify the intentions of supporters and opponents, then use that information to craft responses to political situations that reward supporters while – at the very least – denying opportunities to their opponents.
How will this play out? I could probably think up scenarios, but let’s cut to the chase: Political professionals get paid money to create and exploit advantages. Give them a subtle tool and they will design a dirty trick around it. Give them anything that allows for anonymity and their creative impulses will kick into overdrive.
And what about nations? Will China analyze the cloud of US intentions in search of ways to extend its own intent? And vice versa? Certainly. Same with corporations. If a thing can be defined as a distinct identity, with outcomes that are relatively good and relatively not-good, then that identity has intention. If those intentions can be expressed, inferred and detected, then its intent can be thwarted.
Which brings us to deception.
Hiding our intentions
Today, digital security is largely an issue of encryption, anonymity and protection of a few key pieces of private data. In the future, with data streams that surpass the utility of tools like “Click ‘Yes’ to allow” buttons, security will be about all of those things PLUS obscuring the impressions our individual intentions make upon The Construct.
With the right tools and with access to the right data streams, it will be possible to know practically everything about an individual and still not know his or her name. Which I suppose will be a likely outcome: Companies will go to great lengths to hide names and addresses and account numbers, but pattern-seeking robots prowling The Construct will be able to discern interests, habits and connections. And if you can pick my pattern out of the flow and focus in on what I do and what I want, my actual name becomes almost irrelevant.
The obvious answer is the wrong one. Sure, we could disconnect these tools from the Web, use other means to pick out what we buy, what we watch, what we read. We could pick our friends and lovers and collaborators the old fashioned way: geography and luck. Then again, we could protect our computers from viruses right now by disconnecting them from the Internet.
Instead, we’ll enter a new security arms race, only in this case, the key will be the ability to randomize the source of an intention. It does a pattern-seeker little good to know that I want a discount ticket to Ireland. It might do it considerable good to be able to link that intent to a larger set of intentions.
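One way to randomize the source of an intention is to bury the real query in a batch of decoys, so the pattern-seeker sees a set rather than a signal (names and decoy list invented for illustration; real camouflage would need decoys statistically indistinguishable from genuine intent):

```python
import random

REAL_QUERY = "discount flight ireland"
DECOYS = ["hotel lisbon", "used kayak", "tax software", "concert tickets"]

def camouflaged_queries(real, decoys, k=3, rng=random):
    """Mix the real intention into k randomly chosen decoys and shuffle,
    so no single query can be linked back to the larger set of intentions."""
    batch = rng.sample(decoys, k) + [real]
    rng.shuffle(batch)
    return batch

print(camouflaged_queries(REAL_QUERY, DECOYS))
```

The agent still gets its answer about Ireland; the observer gets four intentions of equal apparent weight and no way to tell which one was mine.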
But why should anyone expect that such collections of intentions would be available to the public? Well, for starters, an intelligent agent that cannot connect to the proper data streams is useless, and an agent that cannot hop multiple streams and sources and formats isn’t going to be all that helpful, either. It stands to reason that “modern” data streams will eventually find their way into a few standard protocols as providers and consumers search for each other in the info-deluge. And you have to design these protocols with enough openness that they can be adapted to quickly changing demands.
A “secure, proprietary” network sounds great, but future networks that can’t communicate with other networks and the various nodes of intention will be like a computer without an Internet connection today: Useful only for word processing and playing obsolete games.
The virtual secretary on the global stage
Humans have often passed along their decision-making sovereignty to others: secretaries, house servants, personal assistants, diplomats. We train them, impart our wishes, and send them out to perform their tasks while we attend to other business. Will we trust a robot to do the same? I think so – since it’s likely the robots will, in time, become indistinguishable from “us.”
What’s really different isn’t the trust or the agent: It’s the way that all these actions become observable (The Construct). Like public health officials who drug-test sewage to see whether cocaine use in their city is rising or falling, so too will informatics applications study the vast array of shifting interactions on the Web to see what intentions are being expressed by the people. Sophisticated and nefarious users will likely drill down into private areas we’d rather they left alone.
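The sewage-testing analogy suggests the mechanics: tally what the whole swarm of agents is asking for, never who asked (a toy sketch with invented data):

```python
from collections import Counter

def aggregate_intent(agent_query_logs):
    """Epidemiology for desire: count expressed intentions across all
    agents without tracking which agent expressed which."""
    tally = Counter()
    for log in agent_query_logs:
        tally.update(log)
    return tally

logs = [
    ["discount flight ireland", "cycling gear"],
    ["discount flight ireland", "tax software"],
    ["hotel lisbon"],
]
print(aggregate_intent(logs).most_common(1))
```

Like the sewage test, the aggregate reveals what the city wants without naming a single resident, which is precisely why the private areas it measures are so tempting to drill into.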
I used to believe that the human race, via blogs and other forms of expression and communication, was on its way to becoming self-aware, a concept I thought of as a global, holographic brain. Now I think it’s more likely that our self-awareness will come not so much from the quantification of what we publicly express, but from the quantification of what we privately, anonymously desire.
We understand, to varying degrees, how to activate desire and intention via rhetoric and advertising. But how will we “advertise” to purchasing robots? How will we quantify attraction? Hatred? Nationalism? Love?
I don’t have the answers, but I understand that it will be far more efficient to sell, persuade and connect large numbers of people robotically via The Construct than by traditional mass-mediated methods. Just as radio and television made “walking the district” impractical for most political candidates, so too will The Construct make media-to-human appeals expensive, inefficient on a per-unit basis, and – ultimately – rather quaint.
My bot will contact your bot and work out the details. Whether we care to know those details or not is really the question.