Roadkill on the Information Highway lecture by Nathan Myhrvold

This is a transcript of the Nathan Myhrvold video here:

Hi. I’m Nathan Myhrvold, and I’m gonna talk to you today about roadkill on the information highway. Now, any sufficiently complex and interesting topic is always reduced to a series of silly cliches. And so it is with a set of technology that winds up being referred to in the press as the information highway.

When you’re presented with a choice, you either have to completely eschew the silly cliché or wallow in it, and you’ll see we’ll probably wallow in it a little bit today. But there’s a very serious issue here, which is how do computing and communication come together to change our world? How will that change the landscape for the people involved competitively there? How will it change the technology? And ultimately, how will it change society itself?

Now, the foundations of this information highway phenomenon really rest on two fundamental technologies: VLSI, the chip technology that gives the raw power to computing, and software, which harnesses that raw power for end users’ needs. I’m primarily a software guy, but we’ll talk a bunch about hardware today because it’s very important to understand what capabilities the hardware is gonna provide for us.

Over the last 20 years, there’s been an explosion in the price/performance ratio. Meaning at a constant price, the performance of computers has gone up enormously. At a constant level of performance, the price has dropped precipitously. It’s been about a factor of a million increase in the last 20 years, and from all we can tell, the next 20 years will have another factor of a million. And with any luck, the 20 years thereafter will have another factor of a million.

In tossing factors of a million around, it’s hard to get a grasp on what that really means. For reference, a factor of a million turns a year into 30 seconds. That says a computer 20 years from now will do in 30 seconds what today’s computers would take a year to do. 40 years hence, computers will do in 30 seconds what one of today’s machines, at comparable cost, would take a million years to do.
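As a quick sanity check on that arithmetic, here's a short sketch (the only inputs are the talk's own figures):

```python
# A factor of a million in speed turns a year-long computation into ~30 seconds.
SECONDS_PER_YEAR = 365 * 24 * 3600  # 31,536,000 seconds

speedup = 1_000_000
one_year_job = SECONDS_PER_YEAR / speedup
print(one_year_job)  # ≈ 31.5 seconds

# Two factors of a million (40 years out): a million-year job in ~30 seconds.
million_year_job = 1_000_000 * SECONDS_PER_YEAR / speedup**2
print(million_year_job)  # ≈ 31.5 seconds
```

A year is about 31.5 million seconds, so dividing by a million lands almost exactly on the 30 seconds quoted.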

Now that rate of increase in performance is so large that it stretches credulity. It’s almost ridiculous. People’s eyes glaze over and they say, “Oh no. That couldn’t be. Something’s gonna happen.” But I’m basically here today to say I don’t think that something unusual will happen. I think we’ll get those factors.

That’s what’s changed the computing world today. That’s why we have microprocessors and digital electronics and computers increasingly in our lives. Over the next 20-40 years, that’s going to change even more so.

The ability to store information has also gone up. RAM, or semiconductor memory, increases by about 4X in density every 18 months. And that has happened historically for a very long time. The price of RAM drops at about 30% per year in a very steady fashion. The price of hard disks, or magnetic storage, drops even faster, at about 60% per year.

I have a general rule of thumb. The size of the hard disk you may have on your computer today, that’s how much RAM you’ll have in between 3-5 years. And the size of your hard disk will expand accordingly. That rule has been true for me for as long as I’ve had a computer or been involved in them, and all of these technologies are moving in a direction such that it will remain true. That’s even without breakthroughs in optical storage technology, which could revolutionize both fast main memory, with things like holographic memory, and mass storage, with new kinds of CD-ROM and writeable optical media. Not only are we able to compute more and more, we’re able to store more and more. This is gonna be a fundamental piece of what happens with the information highway.
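The rule of thumb is consistent with the growth rates just quoted. A rough sketch, assuming a typical disk-to-RAM ratio of about 100x (that ratio is my assumption, not a figure from the talk):

```python
import math

# If RAM density quadruples every 18 months, that's one doubling every 9 months
# (0.75 years). How long until today's RAM grows by the disk-to-RAM ratio?
disk_to_ram_ratio = 100  # ASSUMPTION: disk is ~100x the size of RAM today

doublings_needed = math.log2(disk_to_ram_ratio)  # ≈ 6.6 doublings
years = doublings_needed * 0.75
print(years)  # ≈ 5 years -- consistent with the "3-5 years" rule of thumb
```

With a smaller ratio, say 30x, the same arithmetic gives about 3.7 years, so the whole 3-5 year window falls out of the 4X-per-18-months rate.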

Here’s a chart that shows what I’ve been talking about. This shows the number of bytes you get of dynamic RAM memory per dollar. This is a semi-log chart, so it’s a logarithmic scale. You can see it’s a nice straight line. The line goes back to 1970. That’s almost 25 years that we’ve had remarkably steady, exponential decreases in the price per bit of memory. The last few data points in the chart are extrapolated out to the year 2000. I think there’s every reason to believe that this phenomenon is gonna continue.

In fact, if you look at the solid state physics that’s involved, you’ll find that people already have a very good idea as to how they’re gonna continue to improve the density of RAM, how they’ll continue to improve the price/performance ratio of processors. The fundamental physics is there. What people need to do is learn how to make it more cost-effective, manufacture in volume, make it reliable and cheap. People are very good at doing that once the fundamental capability is there.

The real lesson behind all of this is the importance of being exponential. If you know anything about exponential growth, what you know is that it’s the asymptotic scaling that matters. Anything which has a fixed threshold of performance, a fixed amount of computing power, is rapidly overwhelmed. Even things that themselves grow exponentially are overwhelmed if their growth rate winds up being slower. That’s why mainframe computers lost to microprocessors. They had exponential increases in performance too, just not at the same growth rate the microprocessor-based systems had.
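The mainframe-versus-micro point can be made concrete with two toy exponentials. The starting head start and doubling times below are illustrative assumptions, not historical figures:

```python
# Two exponentials: a "mainframe" starting 1000x faster but doubling every
# 3 years, vs a "micro" doubling every 18 months. (Illustrative rates only.)
def perf(start, doubling_years, t):
    """Performance after t years, starting at `start`, doubling every `doubling_years`."""
    return start * 2 ** (t / doubling_years)

for year in range(0, 41, 5):
    mainframe = perf(1000.0, 3.0, year)
    micro = perf(1.0, 1.5, year)
    print(f"year {year:2d}: mainframe {mainframe:12.0f}  micro {micro:12.0f}  micro ahead: {micro > mainframe}")
```

With these numbers the micro overtakes at about year 30; the point is that it always overtakes, no matter how large the initial gap, because the faster growth rate dominates asymptotically.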

This leads to a fascinating phenomenon. This unimaginable performance is going to go and blow by any fixed thresholds. On the other hand, there are still some problems that are gonna be very hard, that no amount of computing power, 40 years, 100 years, 1,000 years hence, will be able to solve. An example is a class of problems called NP-hard problems in computer science. Consider a simple example where you have n objects and you take all combinations of those n objects, all different orderings. Well, the number is n!. For three objects, the number is six.

But it grows very rapidly with the number. Take 59 objects and put them in all possible orderings. That really isn’t all that many objects. It’s only a little larger than the number of cards in a deck of playing cards. The total number of combinations is about 10^80. Cosmologists estimate that’s the number of baryons … that’s heavy particles, protons, neutrons, things like that … in the entire universe. If you did manage to enumerate all those combinations, you couldn’t print them out unless you used all the matter and all the energy in the universe to actually make the printout.
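The 10^80 figure checks out directly:

```python
import math

n = 59  # a little more than a deck of playing cards
orderings = math.factorial(n)  # number of distinct orderings of n objects

print(math.log10(orderings))  # ≈ 80.1, i.e. 59! is about 10^80
# Roughly the estimated count of baryons in the observable universe.
```

For contrast, `math.factorial(3)` gives the 6 orderings mentioned above, and each added object multiplies the count by the new total, which is what makes the growth so explosive.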

Clearly that problem is not gonna be solved in any finite period of time, and that’s only 59 objects. If you make the number larger, it gets worse. The trick going forward is gonna be to figure out which problems will fall to the exponential rise of computing and communications, and which will remain. That’s the real challenge for the coming decades.

Here’s another interesting chart. It’s a chart of Microsoft stock price. And like the other chart, this is a semi-log chart. Exponential growth. Here’s the arithmetic version. Why am I showing you this? Well, whenever we talk to stock market analysts, people say, “Gee, Microsoft stock has been going up a lot. How come that’s so? Isn’t that a very unusual circumstance?” Well, it’s really very unusual if you take it from a first-principles perspective. But not if you recognize that we’re surfing on a wave. A wave of computing created by the increase in the price/performance of semiconductors.

Basically, every time a new processor comes out and is twice as fast, we have more opportunity to add value by creating new products. Every time RAM gets larger, we have an opportunity to develop more and larger programs and sell them to people. To show this effect, I took this Microsoft stock price and I divided it out by that previous graph of the memory price, and you get this. It’s almost flat.

Now, this only takes the price of dynamic RAMs into account. If I also took the CPU price and the hard disk price and made an overall index, the curve would be absolutely flat. I have a conclusion that I draw from this. Software is a gas. It expands to fit the container it’s in. In our case, the container is the VLSI technology. The CPU cycles software gets to burn, the memory that we get to store things in. And god bless those guys making the containers. As long as they keep making them larger, we’re gonna keep having an ability to add value with software.

I used to be a cosmologist, actually, and I have another way of viewing this. It’s like selling real estate in an inflationary universe. You keep selling stuff, but the universe keeps expanding exponentially.

There’s a specific business strategy you have to follow if you want to keep surfing on this wave of exponential growth. And that’s to measure your success not by the traditional measures of revenue and profit or market share. You should measure your success by the percentage of CPU cycles you consume, by the percentage of RAM that you occupy. And so our strategy at Microsoft has been to say, “Let’s follow the microprocessor.” And we have had to change the mix of our products to do that. We had to move from being a company that first wrote a programming language, BASIC, to developing operating systems, DOS and Windows, then developing graphical applications. More recently you may have seen announcements that we’re doing multimedia titles, encyclopedias and titles on baseball and dinosaurs and a variety of other things.

Finally, in my group, we’re working on a variety of new platforms. Intelligent televisions, servers that sit on these broadband networks I’ll talk about in a little bit. Tiny computers that will fit in your pocket. Wherever there are microprocessors and memory, there’s a job for software. And if you want to maintain your share of the world’s cycles, you have to change your software product mix in order to follow that VLSI wherever it goes.

There are a few bottlenecks. We talked about enormous exponential growth. Turns out there’s a key network that is not gonna grow fast enough. It will become a major bottleneck for some things. It won’t be a bottleneck for others. The network I’m talking about isn’t the phone network. It isn’t the cable network. It’s the human nervous system.

You see, our input and output is limited, and we’re not growing our capabilities exponentially. Human beings only take a certain amount of information in and put a certain amount of information out, and that’s a fixed number. It’s one of those fixed thresholds that computing is just gonna blow by.

I don’t know how to build the peripherals that will be used in this system, how you can get touch and sense and other things built. But you can estimate what you would do if you had those peripherals. How hard is the pure computing task? What do we have to do? How far away are the ultimate data types, the complete, perfect human interface that mimics reality (or unreality) as much as possible, but that manages to saturate our I/O bus, that gets the maximum amount of information in and out?

It’s interesting to actually look at a couple of the senses and figure out how hard that would be and what sort of limits might come up, so let’s take a look at them. Taste and smell are not appropriate for many programs. Using them as an I/O means for a computer program is gonna be somewhat specialized. And I don’t know how on earth we’re gonna connect those up to a computer, whether we jack into our central nervous system or we have some weird peripheral that puts little drops of chemicals on our tongues.

But we can calculate what the fundamental data type is and estimate how much it would take to compute that, synthesize and manipulate it. It turns out people have done a variety of physiological experiments to see how many unique tastes we can actually taste. And they’ve dropped little drops of stuff on people’s tongues and asked them to fill out questionnaires and so forth.

It turns out that the range of taste and also smell is quite limited. Something on the order of 50,000 unique different taste and smell elements. Some people actually break it down to smaller than that, but conservatively, let’s say 50,000 different elements. It turns out the time resolution of smell and taste is very low. You don’t have thousands of tastes and smells per second. You have on the order of a few tastes and smells per second.

If we compare this with, say, CD audio: CD audio has two 16-bit samples 44,000 times a second. Here we’re talking about one 16-bit sample to cover 64,000 different tastes and smells. We’ve got a few bits of amplitude on top of that. We probably only sample it 10 times or 100 times a second. The total bandwidth is far less than audio. Presumably, it’s far easier to synthesize, calculate, and store.
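Putting rough numbers on that comparison (the amplitude bit count and sample rate for taste are loose assumptions in the spirit of the talk):

```python
# CD audio: two channels of 16-bit samples at 44,100 Hz.
cd_bps = 2 * 16 * 44100  # 1,411,200 bits/sec

# Taste/smell sketch (assumptions): a 16-bit sample picks among ~50,000
# elements, plus a few bits of amplitude, sampled ~100 times a second.
taste_bps = (16 + 4) * 100  # 2,000 bits/sec

print(cd_bps / taste_bps)  # ≈ 700x -- taste needs far less bandwidth than audio
```

Even taking the generous end of every assumption, the taste channel stays two to three orders of magnitude below CD audio.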

On the day that you can jack in and get taste and smell, we’ll discover it’s really not all that hard. We have all the computing power necessary to do it today. It doesn’t require any great breakthroughs in terms of the computing aspect of the problem.

Touch is another great one. Obviously video is something that’s quite common in computers these days. On a video screen, you tend to divide the screen up into a bunch of pixels or picture elements. Well, let’s estimate how many touch elements or “touchels” you’ll need. Again, we assume we have little discrete elements. Well, once again, there have been some physiological tests that have been done where people try to estimate the touch resolution people have in various parts of their bodies. They poke people with rods of various sizes and shapes.

The surprising conclusion is, we have very poor touch resolution everywhere except our hands, our lips and a couple other places. Otherwise, the resolution is quite low. I was replicating some of these experiments, poking myself with these various rods to see if I could tell the difference. Somebody walked in the room and I had to explain, really, this is research. This is for work.

It turns out that the total area of your body that has this high resolution stuff is also quite limited. In fact, to do an experiment there, I took some paper towels and cut them to the size of the monitor that I use for a computer. That’s got about 100 dots per inch resolution for a decent quality computer monitor these days. That’s also about the same resolution you have on the high-sensitivity parts of your body. About 100 touchels per inch would be about the maximum density.

Then the question is, does the screen have greater area than your body? And of course you can do that by cutting that paper towel out and applying it to the sensitive parts of your body. You really don’t wanna get caught doing that experiment. But it turns out that in fact it’s about the same.

If you assume you have somewhere between 8 and 24 bits of resolution per touchel, you have about the same total number of touchels as you have in a high-res computer screen. The bandwidth is only about the same as video. Now, maybe I’m wrong. Maybe there’s some additional factors in there. Suppose it was 10X video? Remember, if you double every year, the factor of 10 only takes you about three and a half years before you’re there.
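A back-of-the-envelope version of the touchel estimate (the monitor dimensions and the 10x margin are assumptions for illustration):

```python
import math

# Touch sketch: ~100 touchels/inch over roughly a monitor's worth of area.
dpi = 100
width_in, height_in = 14, 10  # ASSUMPTION: viewable area of a large monitor
touchels = (dpi * width_in) * (dpi * height_in)
print(touchels)  # 1,400,000 -- in the neighborhood of high-res pixel counts

# Even if touch turned out to need 10x the bandwidth of video, performance
# doubling every year closes a 10x gap in log2(10) years:
print(math.log2(10))  # ≈ 3.3 years
```

That last line is the general trick with exponentials: any constant factor you got wrong costs only its logarithm in years.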

The story here is that although, again, I don’t know how we’ll get touch sensitivity with computers, the total data feed isn’t all that big a deal. It’s not gonna be any harder to synthesize. It won’t be any harder to ship around or store than video is. You put this together with the taste … We already know how to do video reasonably well, and they’re making it stereo. Completely saturating humans’ ability to do input and output is gonna be solved within a few years. What it means is that computing is gonna have to move to other challenges, that providing the ultimate user interface is a desirable but temporary state, hardly a final one.

Now, we’ve talked a bunch about computing, storing information, about calculating stuff. But what about communicating it? Well, the world of communication is one that hasn’t followed any of these laws of exponential growth. And, in fact, you can make a very strong analogy between a central office telephone switch and a mainframe. Both are giant systems. They have a similar kind of culture. They have very similar sorts of margins and costs, et cetera.

You can make an analogy that the PBX that people have inside a company, which is a smaller-scale switch, is a lot like a minicomputer. Literally, it’s based on minicomputer technology, but again, the aspects of that industry are very similar to the aspects of the minicomputer world.

Well, minicomputers and mainframes ruled the world, computing-wise, until microprocessor-based systems came in. Starting with personal computers and workstations and now large servers, the microprocessor has decimated the ranks of the mainframe and minicomputer world. And I think a similar thing is gonna be happening in communications because of two key technologies. The first is ATM switching. The other is fiber optics.

For many years, fiber optics has had the ability to pipe huge amounts of information over long distances. You can modulate these lasers that are used in the fiber very, very well, so getting information from one point to another via fiber is commonplace. Essentially, all long-distance phone calls go that way today.

The problem has been that you couldn’t get that high speed switched or delivered to the right place. You could move the bits, and if it was point to point you were okay, but you couldn’t actually have a network that would get the information from one point to another. That’s where ATM switches come in. And I believe that ATM switches and that whole technology area are the equivalent for the communications world of what the microprocessor was for computing. ATM switches follow VLSI price/performance curves. They are based on a small number of replicated, cheap pieces of VLSI.

ATM switching allows new entrants to come into the market. Just as a variety of start-up companies came in and revolutionized the world of personal computing, we’re gonna find dozens of start-up companies coming in in the ATM switch area. I think that a variety of the existing switch people are gonna also be making great switches. I don’t mean it’s limited to that. But we’re gonna see a change here where people are very happy to get their 56-kilobit or 64-kilobit ISDN lines today. That’s high-tech in wide-area networking, whereas that’s gonna be ridiculous in just a few years. And that industry’s gonna restructure completely as a result.

But that’s the technology level. There are also some interesting service aspects to this. We go to what I call the communications rollercoaster. Your phone bill hasn’t followed an exponential price curve. It hasn’t dropped by a factor of two every year. Nor has the amount of data that you can send at the same cost expanded by a factor of two. It’s basically been static.

Well, now we have ATM technology. We have fiber optics. And we have a third factor, competition, coming in. Those three things are gonna combine to make the communications world change overnight. Now overnight may take five years, may take 10 years to do, but in the historical context we’re gonna go from voice being a very expensive sort of a service to voice essentially being free.

In fact, you can calculate the numbers. A lot of people in the communications world are gearing up for video on demand service where they say, “We’ll offer you a pay per view movie in your home.” They’ll have to charge … Nobody knows exactly what they’ll charge, but they’ll have to charge something like $3, $4 for that. If they charged more, they wouldn’t be competitive with the existing Blockbuster store.

And out of that $3 or $4, they have to pay Tom Cruise and the guys in Hollywood, whoever the stars are. Those guys have to get some money and distributors have to get some money. The raw communications cost is probably only about $1 or 50 cents per hour. 50 cents per hour for 4 megabits per second. Compare that to what you have today for voice: you get 9600-baud service, which costs, for most long-distance calls, between 30 and 60 cents a minute. That’s a factor of 10,000 difference in price.
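The factor of 10,000 is a price-per-bit comparison. Running the talk's own figures (taking the midpoint of the 30-60 cent voice range):

```python
# Price per bit: movie-on-demand vs long-distance voice.
vod_dollars_per_hour = 0.50
vod_bps = 4_000_000  # 4 megabits/second
vod_per_bit = vod_dollars_per_hour / (vod_bps * 3600)

voice_dollars_per_min = 0.45  # midpoint of 30-60 cents/minute
voice_bps = 9600
voice_per_bit = voice_dollars_per_min / (voice_bps * 60)

ratio = voice_per_bit / vod_per_bit
print(ratio)  # ≈ 22,500 -- on the order of the 10,000x quoted
```

Depending on which ends of the price ranges you pick, the ratio lands between roughly 10,000x and 50,000x, so "a factor of 10,000" is the conservative statement.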

I believe that we’ll see a time when voice calls, even long-distance voice calls, are free. Not free by themselves, but someone will say, “Hey, if you sign up for our video on demand service and our video telephone service and you sign up for all of that, we’ll let you have the voice side for free,” betting that you’ll move across.

One of the other factors to consider here is that the economics of the communications business is gonna be turned on its head. The way that public utility commissions and the networking companies today think is in terms of the enormous value of their installed equipment. Well, it is valuable, but you have to remember that the new equipment will probably be a factor of two better for the same price every year.

Whoever is operating these networks has to go on a very intense schedule of upgrading them. They also have to worry that if they don’t upgrade, some new guy’s going to come in, pay a fraction of what they paid to put the things in originally, and have much better service. It’s going to be a hell of a ride. But ultimately, I think both for the companies in that business and for the consumers, it’s going to be a real thrill too.

What sort of network are we talking about? We’ve sort of talked around the edges. I think the overall system that we foresee is a switched digital network that offers point to point high bandwidth digital communications, and on which you hang a wide variety of different devices. There’s an interesting analogy to the electrical system. When Thomas Edison invented the light bulb, it became the killer app, the key thing, that focused people’s minds on electrification. When electricity was first installed in American homes, it was installed as a dedicated lighting system. In fact, in large cities, it replaced an earlier dedicated lighting system based on gas, gas lights. Now, we don’t think of electricity as a dedicated lighting system anymore. Sure, we have lights, but we also plug in our Cuisinart and our stereo and our computer and our electric razor. It’s a general utility.

The same thing has happened in the communications world. Today, you have two dedicated networks. You have a cable TV network, dedicated to the notion that it’ll deliver you video. You have a telephone network, dedicated to the notion that it delivers you point to point communications. Those are going to evolve as we look forward into a general information utility. You’ll have a bit socket, like the RJ11 jack you have today. Into that bit socket, you’ll plug your personal computer, and you’ll plug your camcorder when you want to send pictures of the kids to grandma. You’ll have your smart TV, your smart cable box. You’ll have some dumb cable boxes. You’ll have wireless phones and smart phones, and you’ll have a wide variety of servers and other systems that are set up in order to supply information.

This isn’t about the telephone taking over the world. It’s not about the TV or the set top box taking over the world. It’s not about the personal computer taking over the world. What we’re talking about here is a general information utility. People like to ask, “Will the TV win over the PC?” They’ll both win. Not only those, your water heater will be connected. Every electrical device will ultimately be connected to this information utility, and offer you the ability to do demand-side power management, security, a wide variety of different kinds of information usage. In fact, we’ll think of information as just as fundamental a utility as we think of electricity today.

Now, in looking at how this world is going to evolve, there’s a variety of aspects of this information. What do you mean information? What kinds of information? How will it alter? I think one of the interesting ways to look at it is to divide things along two axes. First, the pure information addressing aspects. Are you sending something to one person or to many people? Is it point to point, one to one, or one to many? Second, the temporal aspects. Is it synchronous, like a telephone call where both parties have to be on the line at the same time, or is it asynchronous, or offline, so that the two parties can be completely decoupled in time?

Well, you can make a list of these things. Examples of an online one to many service would include things like television and radio. We all have to be there when The Simpsons starts, and if not, it starts without us. We’re all synced up. Telephones and most computer networks are examples of point to point communications. We’re sending something from one place to another place. Telephone is certainly a synchronous example, or online example. On the offline side, a book or a magazine is a classic one to many offline thing. You don’t care when the book was written, it could have been written a hundred years before you were born. It fundamentally was written for a wide audience, not just for you. Finally, there is point to point offline: electronic mail, fax, ordinary postal service. Again, you have a decoupling in time, but you have a point to point address.

Now, within each of these categories, I’ve described a variety of different information utilities, each of which has very different characteristics today. That’s going away. Because once you have this kind of information transmittal means, storage means in your hand, you wind up finding that everything within a box winds up becoming quite similar. The difference between say a record album, which is one kind of one to many offline thing and a book, well that’s just different kinds of data. Once you’re storing them all digitally, what does it matter? Fundamentally, you see the world collapsing into two kinds of services. There’s digital data that’s online, a digital phone call, a digital video call, et cetera. There’s digital data which is offline, either via a store and forward system, or perhaps it’s on an optical storage disc.

I think we’ll see a lot of things move from the online category to offline. Why should we all have to wait for The Simpsons to come on at a particular time? We’ve made ourselves slaves to the machines, slaves to the system. You should be able to watch a TV show, a movie, anytime you like. Doesn’t mean everything’s offline of course. There’ll still be late breaking news stories that’ll come on that you’re going to want to watch at that point in time, but by and large, many of the things that are multicast and online will move offline. Similarly, many of the things which you had a very long time constant for, you’re going to be able to get instantaneously. Ultimately, as we look forward to these kinds of information, we discover that the factors which survive the best are those that are the most generic, the addressing capabilities and the temporal aspects.

To get more information on how this is going to happen, we have to look for analogies. It’s hard to find something that has the same characteristics as this information highway explosion will have, unless you go very far back, back to the first information revolution, when Johannes Gutenberg invented the printing press and completely changed the way people thought about information. I’ve got an analysis of that, based on what I call document demographics.

Consider the total number of documents, say, published each year, versus the total number of readers that that information was dedicated to. In the zero column are notes to yourself. People take notes, they’re not intended for any reader other than the author, it’s not for any kind of distribution really, it’s a memory aid. Then you’ve got letters, personal letters to a person, business letters, et cetera. Those exist in one to a small number of copies. Once you get up to a higher volume, above a hundred, you’re probably not sending letters. It’s probably things like ads, brochures, newsletters. Finally, when you get up above about 10,000, you have books, magazines, newspapers, things of that sort. You can estimate what the shape of that curve is. You can do that by figuring out how much notebook paper is sold, how many Post-It Notes are sold, what’s the combined circulation of newspapers, magazines, books, et cetera. I’ve got a schematic curve there sort of illustrating this.

The thing that’s fascinating to me about this curve is that print media, which is basically what we’re talking about, is very mature. Print media is driven more by the fundamental desire of the people who are using it, than by the technology, although technology’s played an important role. It gives us an interesting way of looking at what might happen, I believe will happen, for online digital information. Now, within each of these different ranges of documents, there’s a characteristic technology used for reproduction, for making the copies, for actually getting the copies out to people.

For the zero case, it’s pen or pencil. That’s how a document that gets no distribution other than the author is written. From one copy up to a hundred copies, you’re in another realm, the realm of the photocopier, the Xerox machine. That’s revolutionized that area. From 100 to 10,000 copies, you’re really in the realm of desktop publishing. Laser printers, they’re important in the smaller range too, but laser printers, and small offset presses and desktop publishing, really come into their own between 100 and 10,000 copies. Finally, when you get above say 5 or 10,000 copies and up, you’re in the realm of commercial printing. I say around 10,000 because that’s the minimum number you do really serious commercial printing for. Most books, regardless of whether they’re some very popular book or they’re some very obscure scientific tome, aren’t printed in less than about 5 or 10,000 copies. It’s just not worth starting the presses if you don’t do that.

Now, in addition to reproduction, there’s a characteristic distribution technology. How do you take those copies you’ve made for people, and physically get them to the people who need to see them? Well, once again, distribution’s not a problem when you’re in the zero case. From one to a hundred, you’re probably either delivering by hand, physically handing people copies, or your interoffice mail is taking it. Perhaps you’re mailing them. Between 100 and 10,000 copies, there’s kind of an awkward phase. How do you send 1,000 copies of something? It’s too small a number to go into commercial distribution. Instead, what you have to do, pretty much, is use the mail. There’s no good way of getting it out other than that. Most of the documents in that range are either given away free, they’re ads, or if they’re sold, they’re usually fairly expensive. If you subscribe to an industry newsletter, it usually costs $100 to $1,000 a year. It’s quite expensive compared to, say, a popular magazine. Once you get above 10,000 copies, you have the commercial world of distribution, retail, et cetera, where people either use the mail in the case of magazines, or they use newsstands, bookstores, paperboys. There’s a specialty distribution system, all set up for that domain.

Now, there’s a fundamental lesson to learn here. Each technology has characteristic economics, and that economics is what shapes the whole field. You may think of it in other terms, but in fact, the price per copy was an enormous barrier to people making photocopies at one time. That was changed enormously by Xerox. In addition to direct economic costs, there’s convenience: the ability to go up to a machine and press a button changed things enormously. In fact, we can go back and look at what the effects of each of these kinds of information delivery would have been before the technology changed the economics. The lesson is, when you change the economics of information distribution, you change the world.

Here’s a chart where I’ve superimposed on the original one what you used to do before Xerox, before desktop publishing, and before Gutenberg. Well, before Xerox, you could make a photostat or a mimeograph. There were ways of making copies. You could use carbon paper, but you had to hit those keys awfully hard to make more than about two copies. In fact, hugely fewer copies were made. You can estimate this by looking at the sales of copiers and copier paper. People basically did without large numbers of copies. As soon as Xerox made them feasible, they exploded. People found a need for all of this. If you see the use of Xerox machines today, it’s hard to imagine how on earth we could have survived without them.

The same thing was true, qualitatively, if you look at the next phase up, desktop publishing. Before desktop publishing and cheap offset printing were possible, small-distribution documents either weren’t done, or they were done very carefully, because they had to be hand set in lead type. One of these typesetting machines would melt hot lead, and it was a huge amount of effort. It was very expensive. It could cost up to $1,000 a page to get something typeset and camera-ready copy prepared. Desktop publishing enormously changed the documents in that range, both in quality and in number, by making them cheap and easy to do.

Finally, commercial printing was utterly revolutionized by Gutenberg. Prior to Gutenberg, there were some monks who would carefully copy a small number of documents, but that was fundamentally a very different thing. It wasn’t a distribution means. A book was an object, a beautiful thing. They did these wonderful illuminated manuscripts, but a book was no different from a sculpture or a painting. It wasn’t something that large numbers of people got. It was something you’d come and venerate in a museum or a monastery. Gutenberg changed that, and in changing it created the first information revolution. The lesson we learn is that every time you make it easier, either more convenient or cheaper or both, it creates a whole new industry. Billions of dollars change hands. But even more than that, the world changes. The world after Gutenberg was a literate world. It was a world where information would flow, where people had to learn to read. Similarly, we’re going to see this kind of change happening again, because I believe there’s a fundamental need here. These existing technologies in the print world have tapped something very fundamental.

Now, we can see that by looking at the distribution of what happens with consumer information today. Let’s take videotape. Millions of people have camcorders and take pictures of the kids or their vacation or the dog or whatever. That’s very much like notes to one’s self. But after that, the curve drops off like a rock. There are some wedding videos, where maybe you make 10 copies, and there are training videos, but from there you get this huge desert from three copies out to 10,000 copies. What do you do? Video is extremely expensive to produce, a lot like typesetting used to be. It’s very hard to distribute. What do you do? You mail people tapes, you have some mail-order thing. There’s no good way of getting it out. Once you get above 10,000 copies, you discover that there is a market, a commercial market, in videocassette rental, cable television, broadcast television, et cetera. But it’s a very funny curve.

In fact, qualitatively speaking, it’s exactly like the curve before Gutenberg, before desktop publishing, and before Xerox. Consumer information today, from a technological perspective, is far behind where print is. Now, I believe there’s a fundamental need expressed here, a thirst that people have for information. With electronic distribution, we have the chance to fundamentally raise that curve. Now, this is a radical view in many ways. If you live in the current world of, say, video information, you think that the world is all about having a small number of people transmit information to many. It’s a few-to-many phenomenon, so it’s Steven Spielberg, and TV producers, and the anchor people on CNN. Those are the ones who need to communicate to all of us. We don’t need to communicate to each other in this medium. Wrong.

The thing that’s constant, the lesson you learn consistently from the print world, is that people want information at all scales. In fact, there’s far more information distributed in small volumes than in large. Sure, there are going to be Steven Spielbergs making a Jurassic Park 4 that 100,000,000 people have to see. But there are also going to be communications from your mother. Communications from my mother aren’t of interest to anybody else, but we all have a mom. We all have jobs. We all have purchase orders and forms and memos and a variety of pieces of written information that we use, and that will transfer to the digital world.

In fact, the general lesson here is that authors are everywhere. You have to have a scalable system, a system that supports everything from the person making notes to themselves all the way up to the Steven Spielbergs, or somebody else making a document or a creation that millions or billions of people will see. If you sit there and think, “Yeah, this is just about Hollywood entertainment,” one to very, very many, you’ll miss out. Of course, by the same token, if you think it’s only about the other end of the curve, you’ll miss out. It’s really about the full gamut. Now, this vision is based on the fundamental belief that there is that thirst for information, that we all do want and need to be authors at various levels. I think the print history is going to bear us out. It’ll be interesting to see if that’s true.

There’ll be a variety of false starts along this information highway. If you listen at a very high level, everyone seems to be saying the same thing: “Wow, it’s going to change your life. It’s going to be great. It’s going to be wonderful. We’re all into it.” When you really look in detail, you discover almost everyone is doing something different. Different in the details, some in crucial details. The other thing you’ll find is that there are going to be many more experiments than successes. In the early days of the PC industry, dozens of machines came and went. This is a natural and healthy part of rapid evolution. When you have people trying to apply their creativity to the maximum, this happens. But it also means there are going to be a lot of things that look really great that turn out not to be.

Data processing is an example of an area that’s going to be enormously changed. Today, if you look at a large data processing system, a traditional mainframe center, you might think about the SABRE system that American Airlines uses. It’s a huge system by any measure: four terabytes of data, about 3,600 transactions per second, a whole series of large IBM mainframes with a set of disk farms. But if you replicated that system today with multiple PCs, you’d discover that you could build the whole thing for maybe $650,000. It would require about 10 large PCs running NT, with databases and disks. If you look at the year 2000, you discover that the whole system will fit on a single PC.

Now, the fascinating thing about this is that there’s no room for it to grow. It’s an example of something that will be blown past by exponential growth, because there are only so many travel agents who only type so fast. They’re not breeding like rabbits. Neither is the human population, at least at these kinds of rates. There are only so many airplanes. No matter what happens, the size of that data can never grow fast enough to beat the exponential growth of computing. This is a problem destined to fall to that growth, to go from being a giant problem, where it’s a miracle they can keep it together, to something that anybody with a PC can set up. That doesn’t mean SABRE or the service will go away, but anyone who’s betting on that giant data center being a barrier to entry will be surprised.
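As a rough back-of-the-envelope check on that claim, here’s a small sketch. The growth rate, a factor of a million every 20 years, is from the talk; the starting point of about 10 large PCs for the whole SABRE workload is the estimate quoted above; the rest is illustrative, not a real capacity model.

```python
# A factor of a million every 20 years means performance roughly
# doubles every year (1e6 ** (1/20) is about 1.995).
GROWTH_PER_20_YEARS = 1_000_000
annual_growth = GROWTH_PER_20_YEARS ** (1 / 20)

def years_until_one_pc(pcs_needed_today: float) -> int:
    """Years until exponential growth shrinks a fixed workload onto one PC."""
    years = 0
    while pcs_needed_today > 1:
        pcs_needed_today /= annual_growth
        years += 1
    return years

# ~10 large PCs' worth of work shrinks onto a single PC in only a few
# years, because the workload itself stays roughly fixed in size.
print(years_until_one_pc(10))  # 4
```

The exact year doesn’t matter; the point is that a fixed-size workload loses to exponential hardware growth no matter where you draw the starting line.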

In fact, we’ll see an increasing number of these things happen. As exponentially increasing computing and communications come into play, you’ll discover that it’s a very dangerous combination if you’re not fast on your feet. If people in an information business can’t adapt to this technology, can’t figure out which parts of it will succumb to exponential expansion and which parts will not, they’ll discover that they rapidly become obsolete. Conversely, this will offer tremendous opportunity for those who do realize those advantages and wind up changing the status quo by offering new goods and services.

There are a lot of people who are going to be out of luck, but there are a lot of people who are going to be in luck as well. The author, the creative person who is creating information, is going to find better tools and a better way of getting to customers than ever before. Nerds like me, programmers, are going to have a terrific time, because this will be the age of the nerd, the golden age. The people involved in building the networks and the equipment are going to find there’s a $100 billion bill just in the US, and they’re going to have to rise to the challenge of providing those services. So although there are going to be some tremendous problems, there are going to be tremendous opportunities as well.

Now, in many ways, the technology we’re talking about is the greatest mass extinction event the planet has seen in 65 million years. The old-fashioned stock market ticker, the typewriter, doing a spreadsheet with pencil and paper, the analog record turntable: they’re all extinct. I have five-year-old twins, and I took them to Tower Books the other day, with a Tower Records next door, and explained how Tower Books was where they sold books. So one of them said, “Daddy, do they sell records at Tower Records?” I said, “No, it’s just an expression.” The record’s gone. You go into Tower Records, and it’s nothing but CDs. The extinctions we’ve seen so far are just the barest tip of the iceberg, because the endangered species list is very large, whether that’s things in the office, the way we file things and communicate them, or things in our own personal lives. There’s a tremendous number of things on the brink of extinction that we’re going to see go by.

The same thing occurred in the early days of the personal computer. We saw many companies come and go. Even companies that are still around and haven’t gone broke have usually gone through two or even three generations of computers to get where they are today, the earlier ones proving obsolete, not being able to grow fast enough, leading to new opportunities. Now, for all of this concern about extinction, there’s also a terrific amount of greed.

People are thinking, “Wow, this is the opportunity we’re gonna get rich with.” The amusing fact is that nobody knows where the profit will be, and I think they’ll try all kinds of variations looking for it. Is this going to be a question of metering things by the bit, or will it be a value-based charge, where some things are charged on the value they deliver, not the communications cost? Will it be driven by advertising, the way radio and TV are today, or the way magazines and newspapers are to a lesser extent? Or is it going to be more like books or movies, which aren’t advertising driven at all?

It’s very difficult to figure those questions out, and when you see people plunking billions of dollars down, or hundreds of millions a year in the case of my company, they’re doing so largely on faith, faith that somehow they’re going to find a way through this puzzle. I don’t know what the answer’s going to be, but I think there are some initial conclusions you can draw.

The people who win at this are going to be the ones who have the most open, the most flexible business model, that allows the largest number of variations to occur. Without that, you’re lost. There’s a fascinating issue of time involved here. How long will this take? What’s going to occur along the way? Who wins? Who loses?

I like to use an example in entertainment here. The play, the theater, was in Shakespeare’s day a tremendous means of popular entertainment. Groups of troubadours and players would go around playing music, producing small theatrical performances. It was a populist medium. Well, the movie came along, and movies changed that to some degree. Movies were much cheaper to distribute, a little more expensive to make than a play, but you could play them many times over, more easily than moving the people around. It was cheaper, reached a larger audience.

Then television came, and of course, when television came, people predicted the death of the movies, just as when people saw movies, they predicted the death of the theater. Well, we’ve gone from ordinary television, the network broadcast variety, to cable TV, with a proliferation of new channels. We’ve gone from that to home video.

Now, the fascinating thing is that at every step along the way, we changed the distribution means for entertainment. At every step, we wound up having all kinds of people predicting, “Oh my god, everything else is going to die. The video cassette’s going to kill the movie business,” and of course, what happened at every step is the market got a lot bigger and all of the existing things up to that point continued. The VCR didn’t kill cable, cable didn’t kill broadcast, broadcast didn’t kill movies, and movies haven’t killed the play.

Now, when you go see a play today, it’s not as broad-scale a mainstream populist form of entertainment as it was in Shakespeare’s day, when it was about the only thing going, but plays still exist. In the same way, when people talk about newspapers dying, or say traditional television will die, or this or that will die, I think they’re exaggerating enormously. What will happen is the market will expand, new things will come in, and older forms of media may be relegated to smaller and smaller segments, much as the theater is today, but I think they’re still going to be there, because they satisfy very unique points in people’s information needs. There are unique experiences involved, so it’s not going to happen overnight.

Another interesting question is, “What will the killer application be?” For the personal computer, the killer apps were things like word processors, spreadsheets, and databases. That’s really what drove the personal computer’s initial expansion. But, that isn’t the end of it. In fact, the reason that people buy PCs today has as much to do with multimedia titles, and games, and presentation graphics, and desktop publishing, things that didn’t exist at all in the early days of the PC industry.

The same sort of thing is true with this information highway. Don’t think of one killer app. Think of many killer apps. I like to use an example from the cable TV world. The killer app for cable in the early days was better TV reception. People in outlying areas got all the static. They could only get one or two channels, so people put the first cable systems in. But that’s not why 60% of American homes have cable today. They have cable because they want to get MTV. They want CNN, The Weather Channel, HBO, Discovery, all kinds of TV programming that you cannot get any other way. That was the killer app in the ’80s and ’90s. It has very little to do with the original one. The original one’s actually pretty boring.

But that’s the nature of these systems. The start is always boring things, which are easy to accept. They’re a small step up, but that isn’t what makes it really popular. It becomes popular because of the new and unique things you can do that you can’t get any other way. In the case of interactive TV, I think it’s pretty clear that the early applications will also be relatively boring, things like video on demand, online TV guides, et cetera. But that’s not what this system’s about.

What it’s really about is going to be new forms of interactive programming, things that we can only guess at today. They will be as remote to our thinking today as MTV would have been back in the late ’50s and early ’60s, when the first cable systems came in. You know, if you’d said, “Yeah, I’m going to have this program where all these little music shots are shown one after another,” people would have thought you were crazy. I’m betting the same thing’s going to happen here.

Now, video on demand is a fascinating thing. It is both very exciting and very unexciting. The unexciting part is it’s really just storing a bunch of files, from a technical perspective. Although there are some challenges, they’re fairly limited. On the other hand, from a social perspective, there’s really a very large benefit, because what you wind up doing is breaking the constraint of opportunity cost.

Today, with prime-time TV, we all share the same limited number of broadcasts. There are only about 21 hours of prime time a week. There are only a couple of channels, so there’s a very small number of slots. Those slots are hugely valuable; the opportunity cost is something like a million dollars an hour. So they only put on things that they think will appeal to the broadest possible audience, and they often make a bad decision.
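The arithmetic behind that scarcity is simple enough to sketch. The 21 hours of prime time and the million-dollar-an-hour opportunity cost are from the talk; the count of three broadcast networks is an assumption for illustration.

```python
PRIME_TIME_HOURS_PER_WEEK = 21
NETWORKS = 3                        # assumed: the major broadcast networks
OPPORTUNITY_COST_PER_HOUR = 1_000_000

slot_hours_per_week = PRIME_TIME_HOURS_PER_WEEK * NETWORKS
value_at_stake_per_week = slot_hours_per_week * OPPORTUNITY_COST_PER_HOUR

# Only 63 programmable hours a week, worth about $63 million, which is
# why everything gets aimed at the broadest possible audience.
print(slot_hours_per_week, value_at_stake_per_week)
```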

Now, contrast that with the case of a bookstore. A bookstore at the airport may only have a couple dozen books, you know, the New York Times bestseller list. What if you restricted all bookstores to only have that amount? Well, I think that would be a tragedy two ways. First, you’d lose the richness of the world’s literature, but second, you’d wind up making books themselves less popular. Many of the books on the New York Times bestseller list weren’t built to be there. They were happy accidents. They exist because the barrier to entry is low.

Now, imagine if publishing executives only had a couple of slots to fill, and they had to make a decision for each and every book. I used to work with Stephen Hawking when I was a physicist, and he wrote a book called A Brief History of Time. Madonna created a book called Sex, with lots of pictures of her without any clothes on. So suppose you had some executive at a publishing company who had one slot left, and he had to sit there and say, “Well, it’s Madonna selling sex, or this guy talking about the origin of the universe. I’ll go with the physicist.”

Not very likely, but it turns out that would have been the right decision, not just on some moral grounds, but on a business basis. Sex sold less than a million copies. Stephen’s book has sold over five and a half million copies, and the total revenue is much larger. So it turns out that the knee-jerk response of pandering to the lowest common denominator would be the wrong decision, but it’d be very hard for someone to make that call. That’s not just A Brief History of Time. The Bridges of Madison County and a zillion other books have become popular by accident. They’re there because a publisher said, “Yeah, sure, I’ll try it,” and then it turned out to be very, very successful.

Video on demand has a chance to change that for video entertainment. That’s a very important thing. On the other hand, video on demand is a fairly boring thing. I like to call it the terminal emulator of the 1990s. Terminal emulators were a great way to use a personal computer in the early days, a very important application, one of the killer apps. If you had a mainframe or a minicomputer and you wanted to connect to it, a piece of software on a PC was a great way to do it. These days, it’s ridiculous. No one, or very few people, use them, because the mainframes and minis themselves have shrunk in importance. And that’s really not what personal computing’s about.

I think the opportunity, from an entertainment perspective, is distributed programming: people creating new kinds of applications that mix computing, communication, and data storage. It won’t just be video on demand. It’ll be far richer. If that weren’t the case, Microsoft would not be betting the $200 million a year we’re betting in R&D in this area, because there wouldn’t be enough technical depth for a software company to make the kind of difference we think we’ll make if there is a rich distributed programming environment.

Distributing information, and the economics therein, is hugely important, but it’s also important to be able to create the information in the first place, because if you don’t have it, you can’t distribute it. And this is an area where I think we’ll see an enormous amount of change. If you’re involved in video production, you see this huge amount of expensive, special-purpose equipment that’s very difficult to actually use. Specialists are required at many stages of the process.

In the future, this work is all going to be done on a small number of general purpose computers and pieces of software. It’s very much like the situation with desktop publishing. All those great typesetting machines that would melt the hot lead and cast the type, that’s all on a couple diskettes now. In the same way, creating audio, creating video, creating animations, that’ll be on a few diskettes in a couple years.

And the opportunities go beyond just creating video as we know it today, or animation, or multimedia as we know it today. We’ll have the opportunity to synthesize actors. It may sound a little bit crazy, but of course, that’s how the special effects in Terminator are done. That’s how various high-tech special effects happen. You can’t say, “Oh, order me up a Tyrannosaurus rex from central casting,” or, “Excuse me, stuntman, I’d like you to melt through that wall.” All that stuff is done today with a synthetic actor, but over time, there’s no reason not to have that more broadly. If you can bring T. rex back to life, why not Elvis? It’s going to completely change not only the creative scope of what people can do, but the amount of access people get.

Authoring isn’t just about the Steven Spielbergs or the would-be Steven Spielbergs. It’s about everybody. Given the exponential increase in computing, in less than 10 years a child’s toy will have the same power as the computers used in Jurassic Park. They’ll say, “Mommy, look at my velociraptor. Look what I made it do.” This is really about allowing people to create information, literally at all scales, whether it’s children or businessmen, as well as the would-be auteurs creating their multimedia magnum opus.

The whole name “information highway” implies that this is something about information: information businesses, communications, entertainment. It turns out many things are information businesses, one way or another. Consider the food chain of distributors that takes things from the original manufacturer, to warehouses, to local stores, or the food chain in the financial world. You deposit a dollar in the bank. The bank takes that money, aggregates it, and goes off to world financial markets to buy or sell various financial instruments, make loans, et cetera. There’s a huge food chain built up in each of these areas. Fundamentally, though, those are information businesses.

There’s been a big trend in retailing, where we’ve seen a move away from the small, local store, which is fed by the distributor, which is fed by the nationwide warehouse, which is fed by the manufacturer. Instead, we have outfits like Price Costco or Walmart, that create a big warehouse and have everybody come in, and they offer cut-rate prices and a lot of value on that basis.

Well, the information highway is going to be like Costco or Walmart on steroids. Instead of going to the big warehouse, why not browse it electronically? In fact, why warehouse things at all? Why not have manufacturing on demand, so that when you ask for something, you order it, it’s created on the spot, and it sets off an entire chain of messages flowing around to the various suppliers and their suppliers, so that people don’t have to maintain expensive inventories?

Or take the banking system. Instead of depositing a dollar into my account and having the bank aggregate it and take it off to world financial markets, when I walk up to an ATM and enter the dollar I’m going to deposit, that’s like a bid or an ask on the world financial market. It’s like saying, “Hey, I want to invest a dollar.” Why can’t there be competitive bidding for that, with bidders who, by and large, are not people at all but computer programs?

I think we’ll see a huge trend where middlemen, people whose only role has been either to warehouse things or to be a cog between the contact with the customer and the actual creation, are going to be squeezed. Fundamentally, middlemen are in the information business, and once we can all communicate directly, from the largest financial traders to the smallest individuals, from the people who want goods or services to the people who create them, once that communication happens directly, it completely restructures what the role of a middleman is.

It may take decades for this to really roll out. I don’t think it’s going to cause the existing series of things to die overnight. I mean, after all, I can still go have a suit custom made if I really want, and in the future, you’ll say, “Yeah, I could go to a store to get a suit if I really wanted, but it’s so much cheaper and easier to have one manufactured on demand to my size.” It’s going to be a huge change for the world of retailing.

In a very general sense, man has been bound by physical proximity. We’ve been prisoners of the world’s geography, and the tyranny of geography has ruled our lives. Originally, this meant we could only communicate in a very small, local area. A series of things, transportation, the steamship, and railroads, shrank the world. After that, there was enormous talk about the telegraph, and the telephone, and communication satellites shrinking the world. Well, this technology is going to shrink the world even more. And not only does it expand on the existing modes of communication, it allows fundamentally new things.

I can call anyone in the world, if I know their phone number, even in Albania, and more or less it’s going to go through. But I can’t meet people with similar interests. I can’t form virtual communities. I can’t say, “Gee, anyone who’s out there who’s interested in buying a microscope, let’s get together and talk about it,” or, “Everyone who’s a real physics nut, let’s communicate.” Unless you know their specific numbers, you can’t get to them. The new modes of communication and the ability to shrink the world are going to change the way we think of things. They’re going to remove the tyranny of geography from human society and change our society enormously as a result.

Politics is a terrific example of an information-oriented phenomenon that I think will be utterly changed by this technology. We have a system of representative democracy here in the US that’s based entirely on geography. It’s an example of the tyranny of geography. My representatives are elected from a specific geographic region. It doesn’t matter whether I have a lot in common with my neighbor or not; we share the same Congressman. On another level, we share the same Senators. In fact, you might ask, why have representatives at all? The whole notion of representative democracy is based on the presumption that it’s not feasible to poll us all the time, that we have to have a representative who goes someplace.

But when you think about it, I have more in common with a technology person who might live in Silicon Valley, or the Route 128 area, or some other part of the country than I do with the people who live next door to me. In fact, I know my neighbors, and I actually have almost nothing in common with them. If there’s a local issue, if they’re going to tear up the street and run a new sewer in, sure, the neighbors can get together on something, but why don’t we have something that allows the fundamental interest groups to come together? The ultimate minority is the individual.

I think a variety of things on the information highway will enable that. Grassroots political movements will find bulletin boards, electronic mail, and the new communications facilities invaluable in organizing themselves. The ability to directly poll people is going to change the way we think of representatives. It might not do away with representative democracy; I think there are actually some very good reasons for it. But if a representative can get an instant poll of all or a quorum of his constituents, every day if he needs to, it’s going to change the way representatives vote, change the way they react. Ultimately, I think this is all for the positive, because it means enabling and empowering people more than the current system does. On the other hand, it’s hard to predict exactly what twists and changes are going to happen as a result.

Information is an enormously valuable thing, valuable in both a positive and a negative sense. Information is used for great good. It’s also used for great evil. The government has become quite used to tapping our phones, whether on an international basis or inside the country. The trouble is, those alligator clips you see James Bond putting on the phone lines don’t work on fiber optics. In fact, they don’t work well on packet-switched networks.

There are some really fundamental questions we’re going to come up against, because this technology has the capability of either enabling privacy to a much greater degree than we’ve ever seen or destroying it utterly. It’s hard to say which it will be. It’s clear that the discussions that have gone on so far are all over the map. On one hand, the FBI was behind legislation that would effectively outlaw packet switching; it had a variety of technical requirements saying there had to be a single, predictable path from any point to any other point so they could intercept it. There have been proposals to make cryptography illegal.

On the other hand, look at the flip side and ask, “If we don’t do something about privacy and encryption, how will we be able to use this?” We all have our car keys, right? Keys to our offices, keys to our houses. You couldn’t build a society today that didn’t give us the ability to have some privacy and to protect our physical property somehow. Yet that’s exactly what happens in a computer system today. There’s no way to prove who you are. There’s no way to sign your name. There’s no way to protect your property directly, unless you use cryptography.

I don’t know which way this will come out, but it’s going to be a fascinating debate. If we’re not careful, on one hand we’ll cripple or ruin this entire thing by not enabling the security to be there, or we’ll turn the information highway into the ultimate digital jackboot of Big Brother. On the other hand, I have sympathy for the people on the other side who say, “My god, this is going to be how people plan terrorist attacks and all new forms of crime.” It’s a fascinating issue, and we’ll see how it turns out.

So, what will the future bring? I don’t know, and that’s part of the fun of it to be honest. I’m enormously hopeful, because information, the ability for us to communicate, to learn, to record, is what I think shows humans off at their best. The first information revolution, with Gutenberg, changed our lives for the better. I think this one will too, although the forms it takes, the variety, who will win and who will lose, that’s far harder to predict, but I think we all have our work cut out for us in figuring out how we do fit into the information highway, how to avoid being roadkill, and trying to realize the incredible possibilities that this enables. Thanks very much.