Locking the Web Open: A Call for a Distributed Web


Over the last 25 years, millions of people have poured creativity and knowledge into the World Wide Web. New features have been added to its original simple design, and dramatic flaws have emerged. I would like to suggest we could now build a new Web on top of the existing Web, one that secures what we want most out of an expressive communication tool without giving up its inclusiveness. I believe we can do something quite counter-intuitive: We can lock the Web open.

One of my heroes, Larry Lessig, famously said “Code is Law.” The way we code the web will determine the way we live online. So we need to bake our values into our code. Freedom of expression needs to be baked into our code. Privacy should be baked into our code. Universal access to all knowledge. But right now, those values are not embedded in the Web.
It turns out that the World Wide Web is quite fragile. But it is huge. At the Internet Archive we collect one billion pages a week. We now know that Web pages only last about 100 days on average before they change or disappear. They blink on and off in their servers.

And the Web is massively accessible– unless you live in China. The Chinese government has blocked the Internet Archive, the New York Times, and other sites from its citizens. And other countries block their citizens’ access every once in a while as well. So the Web is not reliably accessible.

And the Web isn’t private. People, corporations, and countries can spy on what you are reading. And they do. We now know, thanks to Edward Snowden, that Wikileaks readers were selected for targeting by the National Security Agency and its UK equivalent, GCHQ, simply because those agencies could identify the Web browsers that visited the site and the people likely to be using those browsers. In the library world, we know how important it is to protect reader privacy. Rounding people up for the things that they’ve read has a long and dreadful history. So we need a Web that is better than it is now in order to protect reader privacy.

But the Web is fun. The Web is so easy to use and inviting that millions of people are putting interesting things online, in many ways pouring a digital representation of their lives into the Web. New features are being invented and added to the technology because no one needs permission to create in this system. All in all, the openness of the Web has led to the participation of many.

We got one of the three things right. But we need a Web that is reliable and a Web that is private, while keeping the Web fun. I believe it is time to take that next step: I believe we can now build a Web that is reliable, private, and fun all at the same time. To get these features, we need to build a “Distributed Web.”

Imagine “Distributed Web” sites that are as easy to set up and use as WordPress blogs, Wikimedia sites, or even Facebook pages, but that also have these properties of reliability and privacy. But how? First, a bit about what is meant by a “distributed system.”

Contrast the current Web to the Internet—the network of pipes on top of which the World Wide Web sits. The Internet was designed so that if any one piece goes out, it will still function. If some of the routers that sort and transmit packets are knocked out, then the system is designed to automatically reroute the packets through the working parts of the system. While it is possible to knock out so much that you create a chokepoint in the Internet fabric, for most circumstances it is designed to survive hardware faults and slowdowns. Therefore, the Internet can be described as a “distributed system” because it routes around problems and automatically rebalances loads.

The Web is not distributed in this way. While different websites are located all over the world, in most cases, any particular website has only one physical location. Therefore, if the hardware in that particular location is down then no one can see that website. In this way, the Web is centralized: if someone controls the hardware of a website or the communication line to a website, then they control all the uses of that website.

In this way, the Internet is a truly distributed system, while the Web is not.

Distributed systems are typically more difficult to design than centralized ones. At a recent talk sponsored by the California Academy of Sciences, Vint Cerf said that he spent much of 1974 in an office with two other engineers, working on the protocols to support a distributed Internet, making it such that there are no central points of control.

Here’s another way of thinking about distributed systems: take the Amazon Cloud. The Amazon Cloud is made up of computers in Amazon.com datacenters all over the world. The data stored in this cloud can be copied from computer to computer in these different places, avoiding machines that are not working, as well as getting the data closer to users and replicating it as it is increasingly used. This has turned out to be a great idea. What if we could make the next generation Web work like that, but across the entire Internet, like an enormous Amazon Cloud?

In part, it would be based on peer-to-peer technology—a system that isn’t dependent on a central host or the policies of one particular country. In a peer-to-peer model, those who are using the distributed Web are also providing some of the bandwidth and storage to run it.

Instead of one Web server per website we would have many. The more people or organizations that are involved in the distributed Web, the more redundant, safe, and fast it will become.

And it also needs to be private—so no one knows what you are reading. The bits will be distributed—across the net—so no one can track the readers of a site from a single point or connection. Absolute privacy may be difficult to achieve, but we can make the next Web much more secure.

The next generation Web also needs a distributed authentication system without centralized usernames and passwords. That’s where encryption comes in to provide a robust but private identity system.

We’d also want to bring in some other features if we’re going to redo this Web.

This time the Web should have a memory. We would like to build in a form of versioning, so the Web is archived through time. The Web would no longer exist in a land of the perpetual present.

On library shelves, we have past editions of books, but on the Web, you don’t have past editions of websites. Every day is a new day, unless you know to use the Internet Archive’s Wayback Machine, which may have copies of previous versions. Whereas the Wayback Machine was created after the fact to solve this problem for the current Web, in this next iteration we can build versions into the basic fabric of the Distributed Web to provide history and reliability to our growing digital heritage.

We could also add a feature that has long been missing from the Web: easy mechanisms for readers to pay writers. With the support of easy payments, the Distributed Web could evolve richer business models than the current advertising and large-scale ecommerce systems.

Adding redundancy based on distributed copies, storing versions, and a payment system could reinforce the reliability and longevity of a new Web infrastructure.

Plus it needs to be fun—malleable enough to spur the imaginations of millions of inventors. This new Web could be an inviting system that welcomes people to share their stories and ideas, as well as be a technology platform that one can add to and change without having to ask permission– allowing technological change just for the fun of it.

How can we build this new Distributed Web? There have been many advances since the start of the Web in 1992 that will be helpful.

We have computers that are a thousand times faster. We have JavaScript that allows us to run sophisticated code in the browser. So now, many more people can help to build it.

Public key encryption systems were illegal to distribute in the early 1990s, but they are now legal, so we can use them for authentication and privacy. With strong cryptography, communications can be made safe in transit and can be signed so that forgery is much more difficult.

We have Block Chain technology that enables the Bitcoin community to have a global database with no central point of control.

And we have virtual currencies such as Bitcoin, which could make micropayments work in a distributed environment. Many other projects have pushed the limits of distributed systems giving us building blocks for a Distributed Web.

I’ve seen each of the necessary pieces work independently, but never pulled together into a new Web.

I suggest we need a bold goal, one that is understandable and achievable. Something that we might be able to rally around, and have multiple groups contribute to, in order to build an easy to use Distributed Web.

What about WordPress, but distributed? WordPress is a very popular toolkit that millions have used to build websites. My blog, brewster.kahle.org, for instance, is built on the open source WordPress software installed on a server at the Internet Archive. Free to use, and free to host, this toolkit enables anyone to select from a set of template designs and modify them to give a site a unique look. The original creator can then appoint users to play roles such as administrator, editor, or commenter. Those with these privileges can, in turn, grant privileges to others as appropriate. And then the writers can post articles or images to the site’s pages or change its look and feel.

A WordPress website, traditionally, would then be hosted on a computer of the creator’s choice: on WordPress.com, on other sites offering hosting, or even on their own computer, because the underlying software is available open source as well. This is where WordPress is not “distributed” in the sense we were talking about earlier. If the organization hosting the site does not like the material, or it is blocked in another country, or goes out of business, then the website will not be available. Even major companies, such as Apple, Google, and Yahoo, have taken down whole systems hosting millions of users’ websites, often with little notice.

We would like to allow anyone to build a WordPress website—one that has themes, that has different people in different roles, that is fun to visit and add to, and that is free to create—which is also distributed in a way that is private and reliable.

We would want it to work in all browsers with no add-ons or modifications. We would want to refer to a distributed website with a simple name like brewster.kahle.org, and it needs to be fast.

We would need users to be able to log in without having to have many websites know their usernames and passwords, or have a central site, like Facebook or Google, control their online credentials. In other words, we need a distributed identity system.

Additionally, we would like to have payments work in the Distributed Web. We would like to enable anyone to pay anyone else, akin to leaving a tip, or paying a suggested amount for reading an article or watching a movie. Thus people could get paid for publishing on this Distributed Web.

In addition, we would want to have saved versions of websites, and dependable archives, to make these distributed websites reliable.

How can we build this system?
A Way to Build the Distributed Web: an Example

Please bear with me as I try to argue that this is possible using an amalgam of existing or near-existing technologies.

A piece of this system could be a peer-to-peer system such as Bittorrent. Storing and retrieving files in a distributed way has been commonplace for years with Bittorrent. While downloading custom software is not ideal, it shows this function can be done, and done for millions of people. Bittorrent is kind of magic: typing a long number that is a unique identifier for a file or set of files will cause it to appear on your machine. Pieces of the desired file will come from other computers that had previously retrieved those files and therefore store copies of them. In this way, the readers of files become the servers of those files. There are millions of users of Bittorrent sharing everything from commercial movies, to free software, to library materials. The Internet Archive, for instance, offers petabytes of files to the public using the Bittorrent protocol so that users have the option to retrieve files from the Internet Archive or from other users who might be closer.
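
To make this idea concrete, here is a minimal sketch of that retrieve-by-number magic using WebTorrent, an open source Bittorrent client written in JavaScript that runs inside a Web page. The long number is the example identifier that appears later in this article; treat the details as illustrative rather than definitive.

```javascript
// A minimal sketch, assuming the open source WebTorrent library is loaded.
// The info hash is the example identifier used elsewhere in this article.
const client = new WebTorrent()

client.add('88f775eea02293b407e4b22c69d387cb9bbf50b8', (torrent) => {
  // Pieces arrive from other readers who already fetched these files;
  // the readers of the site are also its servers.
  torrent.files.forEach((file) => {
    file.getBlob((err, blob) => {
      if (err) return console.error(err)
      console.log('retrieved', file.name, '-', blob.size, 'bytes')
    })
  })
})
```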

Using Bittorrent as part of the Distributed Web to share the files is working in prototype form now. Bittorrent Incorporated’s peer-to-peer powered Web browser, Maelstrom, is now in alpha release. With this browser, a set of files can be distributed using Bittorrent. Using this early version, I demonstrated at a conference last month a static version of my blog, brewster.kahle.org, being served by people around the Internet.

Notice in this image that the Web address starts with bittorrent:// and then a long number. This is how the website was retrieved from the Bittorrent network.

Another system, IPFS, designed and implemented by Juan Benet, is open source and has some of the same peer-to-peer characteristics, with some added enhancements. Juan took my blog and in a few minutes put it into his system, showing that his system also works. One of the major features this system offers over Bittorrent is that updates to the blog can be discovered and distributed naturally through the system. Therefore, as people add comments and posts to a blog, these can be retrieved without having to get a new identifier.

Notice that in this case the Web address refers to localhost, meaning that the pages are being retrieved by a computer program running on my laptop, which is operating the peer-to-peer functionality.
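
In other words, the browser talks to a small local program that does the distributed work. As a minimal sketch, fetching a page through such a local gateway looks like any other Web request; the port here is an assumption, while the /ipfs/ path matches the address shown above.

```javascript
// A minimal sketch, assuming a local peer-to-peer gateway is listening on
// port 8080 (an assumption; the /ipfs/<hash> path matches the screenshot).
fetch('http://localhost:8080/ipfs/QmavE42xtK1VovJFVTVkCR5Jdf761QWtxmvak9Zx718TVr/')
  .then((response) => response.text())
  .then((html) => console.log('page retrieved:', html.length, 'characters'))
  .catch((err) => console.error('is the local peer-to-peer program running?', err))
```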

Other distributed systems are in different stages of development, which will certainly be useful. Many of these systems are listed at the end of this paper.

Therefore the idea of storing and retrieving files that are part of a distributed website is now a reality in prototype form. But there are still some pieces missing.

Building Seamlessly on Top of the Existing Web

One feature that would greatly ease adoption would be to have distributed websites work seamlessly in readers’ browsers without any add-ons, plug-ins, or downloads–just click and see.

This is important because software on phones, tablets, and laptops is becoming more difficult to install without the permission of a company such as Apple. Fortunately, it is easy to distribute JavaScript as part of Web pages, and this will likely be supported for a long time because it is important to sites such as Google Docs and Google Maps.

JavaScript running in users’ browsers as a kind of application platform is now possible and usable. I was surprised to find that JavaScript is now powerful enough to emulate older computers in the browser. For instance, you can now run an IBM PC emulator running MS-DOS 3.1 running a computer game just by clicking on a weblink. The game Oregon Trail, or Prince of Persia, or old arcade games are now available on the Internet Archive and have been played by millions of people. The way this works is that others have made emulators of the underlying machines in the programming language C, and that code is then cross-compiled into JavaScript. So, when a user goes to Archive.org and clicks to run a game, the site downloads a JavaScript program that boots an emulator of an old IBM PC or an Apple II in the browser. The emulator then reads a floppy, in this case a virtual floppy, and runs that program, so that you are basically experiencing that old computer interface. It was a strange mind twist for me to download and run a whole machine emulator in a browser. Since JavaScript is capable enough to do that, we can build the mechanism we need for the Distributed Web in JavaScript.

To run a distributed system in the browser, we need one more feature. The code running in the browser must be able to connect to other browsers that are running the same system. Basically, we need to make it so that a browser can contact another browser instead of going to a server. This is now achievable based on a new standard, WebRTC, that was created to allow video conferencing and multiplayer games.
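
A minimal sketch of that browser-contacts-browser step, using the WebRTC APIs browsers expose. The initial exchange of connection offers (called signaling) still needs some rendezvous channel; sendToOtherBrowser below is a hypothetical helper standing in for it.

```javascript
// A minimal sketch of a browser-to-browser channel using the WebRTC standard.
// Exchanging the offer/answer ("signaling") needs some rendezvous step;
// sendToOtherBrowser is a hypothetical helper standing in for it.
const peer = new RTCPeerConnection()
const channel = peer.createDataChannel('dweb')

channel.onopen = () => {
  // Once connected, one browser can ask another for pieces of a website.
  channel.send(JSON.stringify({ want: '88f775eea02293b407e4b22c69d387cb9bbf50b8' }))
}
channel.onmessage = (event) => console.log('received piece:', event.data)

peer.createOffer()
  .then((offer) => peer.setLocalDescription(offer))
  .then(() => sendToOtherBrowser(peer.localDescription))  // hypothetical helper
```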

With the underlying speed of modern machines, the maturity of a coding system like JavaScript, and the peer-to-peer features supported in browsers, we seem to have all the pieces we need to support a Distributed Web on top of the current Web without any downloads, plug-ins, or add-ons.

There is an additional advantage to building the Distributed Web in JavaScript: it can be changed and added to by many people independently. In fact, different websites might use different Distributed Web systems, all interoperating on the Internet at the same time. It does not require coordination or relationships with the browser manufacturers to make changes to how the Distributed Web works. Features can be added, subtracted, and experimented with in parallel, without permission. The Distributed Web could evolve much faster than current Web technologies and yet still be interoperable.

Distributed Websites that have Search Engines and Databases

Since WordPress sites have search and database functions for selecting posts from particular months or with particular tags, our distributed websites need these features as well to be fully functional. In the current Web, programs running on a server support these features: when the user types a few words into a search box, the words are sent to the server, and a program runs on the server to create the page that is then transmitted back to the browser. In the Distributed Web there are no servers; there are only static files that are retrieved from a peer-to-peer network. Luckily, some of the files of the website can themselves be computer code in the form of JavaScript. All of the computation then happens in the browser based on those files.

Fortunately, this is possible because a search engine and its index can be packaged as files that can be downloaded to a browser and run there. This feature has been achieved in the demonstrations based on Bittorrent and IPFS mentioned before; the programmer, Marcel van der Peijl, used an open source tool, js-search, on the pages of my blog to create an index plus search engine in JavaScript. For my site, the resulting JavaScript page was one megabyte, which is large, but not too large for broadband users. To make this more usable, he loads this code only after the page the user requested has been displayed, so in most cases the user would not notice the delay.
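
As a sketch of the approach, assuming the js-search library named above; the post data here is hypothetical, standing in for documents that would ship as one of the site’s static files. Everything runs in the browser; no server is consulted.

```javascript
// A minimal sketch using the open source js-search library. The posts are
// hypothetical examples; a real site would ship them as a static file.
const index = new JsSearch.Search('url')   // 'url' is the unique id field
index.addIndex('title')
index.addIndex('body')

index.addDocuments([
  { url: '/2015/02/locking-the-web-open', title: 'Locking the Web Open', body: '...' },
  { url: '/2012/02/distributed-web', title: 'Want to help build a Distributed Web?', body: '...' },
])

console.log(index.search('distributed'))   // returns the matching posts
```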

This approach will work for most blogs, but maybe the largest ones will need more sophistication.

Therefore, we can have distributed websites that include dynamic elements such as search engines and databases.

Adding New Posts and Other Changes to a Distributed Website

A key feature of a WordPress site is adding comments or posts. This is trickier in a distributed setting than in centralized systems because updates have to be made in many places. In the WordPress application we do not need the website to be up-to-the-second for every reader, but we need to propagate changes quickly.

Bittorrent has a facility called “mutable torrents” which allows updates, but currently this requires a centralized facility to keep track of the newest version. This has the disadvantage of making the user contact a central server to find the most up-to-date version. This central server could be watched or controlled by a third party.

Another peer-to-peer file sharing system, IPFS, on the other hand, has a truly distributed facility for supporting updates and versions. IPFS is a very clever system with some of the features of a Unix file system, but one that supports versions; how this works is some of the genius of the system. Since we have seen that a distributed WordPress site can be made out of files, which may contain images and text as well as code that can be retrieved and run in the browser, a distributed file system like this could hold and transmit all the required files.
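
The idea underneath this, content addressing, can be sketched in a few lines. This is the concept only, not IPFS’s actual API: each version of a file is named by the cryptographic hash of its bytes, so every update yields a new address while old versions remain retrievable at their old ones.

```javascript
// A conceptual sketch of content addressing, the idea underneath
// IPFS-style versioning -- not IPFS's actual API.
async function contentAddress(text) {
  const bytes = new TextEncoder().encode(text)
  const digest = await crypto.subtle.digest('SHA-256', bytes)  // built-in Web Crypto
  return Array.from(new Uint8Array(digest))
    .map((b) => b.toString(16).padStart(2, '0'))
    .join('')
}

async function demo() {
  const v1 = await contentAddress('My first post')
  const v2 = await contentAddress('My first post, now with a comment')
  // Different content yields a different address, so updating a site
  // creates a new version while the old address still names the old one.
  console.log(v1 !== v2)  // true
}
demo()
```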

So there are distributed solutions for handling millions of updates without resorting to central control or central notification that could undermine our goal of protecting reader privacy.

The Wayback Machine of the Distributed Web

The Wayback Machine is a free service of the Internet Archive that allows people to see past versions of websites. We do this by periodically visiting a website and downloading and storing the webpages being offered.

A Wayback Machine for the Distributed Web could store versions as they are created because it is easy to recognize what has changed and store it. This way, the Wayback Machine would have some significant advantages over the current one—it could always be up to date and it could help serve the current website and past versions to users without their even knowing it. This way the user would not need to go to the archive.org website to access the Wayback Machine. It would just start serving versions of the website on request, including the current version. If it did not have those files, then it could find them from other servers to add to the archive. Therefore, the Wayback Machine would be a form of host for the current version of the website, since it would participate in offering files to the readers. The Wayback Machine would therefore make the Distributed Web more reliable.
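
To illustrate, here is a conceptual sketch, not the Wayback Machine’s actual code, of an archive that doubles as a host: it records the address of each version of a site as it appears, and serves the latest version or any past one on request.

```javascript
// A conceptual sketch of an archive that doubles as a host
// (not the Wayback Machine's actual code).
const archive = new Map()  // site name -> [{ time, address }, ...]

function recordVersion(site, address) {
  const versions = archive.get(site) || []
  versions.push({ time: Date.now(), address })
  archive.set(site, versions)
}

function lookup(site, asOf = Infinity) {
  const versions = archive.get(site) || []
  // Serve the newest version no later than the requested time. A miss (null)
  // means: fetch from peers, then record it, so archiving happens as a
  // side effect of serving.
  const candidates = versions.filter((v) => v.time <= asOf)
  return candidates.length ? candidates[candidates.length - 1].address : null
}
```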

If someone referred to a past version of a website, and if the Wayback Machine had those files, it would serve those as well. In this way, the Wayback Machine would become more tightly integrated into the Distributed Web.

Many Wayback Machines could be run by many different organizations, working together smoothly. The more groups that participate, the more reliable and robust this system would become.

There is another significant advantage to the Wayback Machine application in the Distributed Web: it would archive and serve fully functional websites, not just snapshots of how they looked through time. All of the functionality would be served, so a site’s search and database functions would be supportable forever, and in past versions. This way, distributed websites would live on in time and space even if there were a disruption in hosting or authorship.

In this way, a library, such as the Internet Archive, could preserve and provide access to websites that are no longer maintained, or whose authors have moved on to other projects. This is similar to what libraries have done with professors’ research papers—offering enduring access to past works so that people can learn from them.

Therefore the Distributed Web would have a major advantage because it could be easily archived and served in a distributed and enduring way.

Fast Performance

By having institutions such as the Internet Archive offer access to distributed websites, users will get a more reliable service, and it could also help provide better performance. Since there are other organizations that are also motivated to provide fast and reliable access for their users, they could help replicate the data and make the Distributed Web more robust. Internet Service Providers (ISPs), for example, want their users to have a good Web experience and would be likely to serve as close and fast hosts for their users. This would also save those companies on bandwidth bills because more of their traffic would be local. In this way, both cultural institutions and commercial organizations would have an incentive to replicate parts of the Distributed Web, increasing reliability and performance for users.

Surveillance and Censorship

Since the Distributed Web would have users and repositories all over the world, both hosted by institutions and by other readers of the Distributed Web, some of the techniques for surveillance and censorship would become more difficult. For instance, the so-called Great Firewall of China blocks access to some websites outside of China by watching all traffic on its borders and filtering based on which websites are being accessed. Since a distributed website does not have a single location it would be more difficult to monitor or block its use. Furthermore, if one copy gets behind a firewall of this kind, then it can be replicated inside, making censorship more difficult.

The encryption used in this traffic may make it difficult to even know which files are being requested in the first place. Therefore, some of the existing systems of surveillance and censorship will not be as easy to conduct in the Distributed Web.

Easy Names of Distributed Websites

We also want easy-to-remember names for distributed websites. When the Internet was first designed, there were IP addresses that were strings of numbers such as 207.241.224.2. These were not easy to remember, so a naming system called the Domain Name System (DNS) was created, which allowed someone to remember a name such as “archive.org” instead of an address. The Web, being built on the Internet, used these names in its uniform resource locators, such as http://archive.org.

In the Distributed Web, we have a similar problem with long, hard-to-remember numbers. In the implementations described above for both Bittorrent and IPFS, a webpage is named by a unique, incomprehensible string such as 88f775eea02293b407e4b22c69d387cb9bbf50b8 or /ipfs/QmavE42xtK1VovJFVTVkCR5Jdf761QWtxmvak9Zx718TVr. It would be much more convenient if we had a string such as https://brewstersblog.arc.

The Domain Name System could be used for this purpose and would probably be a good starting point, because it would leverage society’s large investment in the technology and in regulating who gets which names. The Distributed Web could also incorporate new naming systems that would exist alongside the DNS to support new approaches to naming and the technologies behind them.

One distributed naming system that currently exists is called Namecoin; it is an open source system built on a Bitcoin-like Blockchain, which is in itself a distributed system. To understand Namecoin, let’s start with some of the characteristics of the Blockchain technology.

The Blockchain is a form of distributed database that is used to store the ledger underneath Bitcoin and similar systems. It is very clever in how it maintains consistency even when none of the participants trust each other. People submit “transactions” by signing them with their private cryptographic keys, and offer a financial tip to those who compete to operate the Blockchain consistency system, the so-called “miners.” The Blockchain, then, is a way to register transactions that everyone can see and everyone agrees to. In the case of Namecoin, the Blockchain is used to register a claim on a name and the long number with which it will be associated.

In this way, people can register a name and address pair in the Blockchain, and others can look it up in a distributed manner. Unfortunately, looking up a name this way is a time-consuming process, but at least it is certain who registered a name first. Increasing lookup performance is a separate task.
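
The first-claim-wins core of such a system can be sketched as an append-only log that everyone can replay to the same conclusion. This is the concept only, not Namecoin’s actual protocol, and the name, owner, and .arc suffix below are hypothetical examples.

```javascript
// A conceptual sketch of first-claim-wins name registration, the core idea
// behind Namecoin -- not Namecoin's actual protocol.
const chain = []  // append-only log: [{ name, address, owner }, ...]

function register(name, address, owner) {
  // Whoever appears first in the log owns the name; later claims are ignored.
  if (chain.some((entry) => entry.name === name)) return false
  chain.push({ name, address, owner })
  return true
}

function resolve(name) {
  const entry = chain.find((e) => e.name === name)
  return entry ? entry.address : null
}

// The name and owner are hypothetical examples.
register('brewstersblog.arc', '/ipfs/QmavE42xtK1VovJFVTVkCR5Jdf761QWtxmvak9Zx718TVr', 'brewster')
console.log(resolve('brewstersblog.arc'))  // the long address, found by its name
```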

Another system that could be used for this is the Distributed Hash Table, or DHT, which is central to the way Bittorrent works. This is another distributed system for looking up a name.

So if this is done correctly, we can have easy-to-remember names resolve to distributed websites quickly, securely, and privately.

Furthermore, there could be registrars that charge for new names, and in return offer services such as fast servers and permanent archives. This could be a new business model that helps support the system.

To have a distributed naming system work in current browsers, without modification, we need a way to resolve a name to an address in JavaScript without contacting the server. Fortunately, there is a mechanism to do this using the anchor portion of a URL, such as https://dweb.org/#brewstersblog.arc; the part after the # is handled by the browser alone and is never sent to the server.
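
A minimal sketch of that trick; resolve and loadSiteFromPeers are hypothetical helpers standing in for a distributed lookup (such as the registry sketched earlier) and a peer-to-peer fetch.

```javascript
// A minimal sketch: everything after '#' is handled by the browser alone
// and never transmitted to the server, so the lookup stays private.
// resolve() and loadSiteFromPeers() are hypothetical helpers.
const name = window.location.hash.slice(1)  // e.g. 'brewstersblog.arc'
if (name) {
  const address = resolve(name)
  if (address) loadSiteFromPeers(address)
}
```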

Therefore we can have a simple system for naming distributed websites without losing privacy or reliability.

Distributed Identity

To know who is allowed to update a blog, we need a system to register administrators and then to authenticate someone as being that person. On current WordPress sites, this is achieved by having a user create an account with a username and password on a Web page, which is then kept in a database on the server. If a similar system could be implemented with a distributed website operating the database, we could make the system more secure and easier for people to use.

Another common approach is to log in using one’s Google, Facebook, or Twitter account. This way a user does not have to give a password to many different sites, but it has the disadvantage that large corporations come to know a great deal about one’s behavior online.

A better system might be one that uses cryptography to allow users to create multiple account credentials and use them without necessarily tying them back to their real-world identities. That way people would have control over who knows what about them, and if they wanted to walk away from an account, that would work as well.

This could use what is called public key encryption, which uses special math functions to create pairs of public and private keys. The private key is used to sign documents in such a way that anyone using the public key, which is publicly known, can verify that the document was correctly signed; no one without the private key can forge a signature. Thus, if posts were signed on a Distributed Web, readers could verify that a post came from a user with the authority to perform that action, and the website would never need to know a user’s password or private keys.
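
As a minimal sketch, modern browsers ship this capability in the Web Crypto API; the ECDSA P-256 curve below is an assumption, since any public key signature scheme illustrates the point.

```javascript
// A minimal sketch using the Web Crypto API built into modern browsers.
// The ECDSA P-256 curve choice is an assumption.
async function demo() {
  const keys = await crypto.subtle.generateKey(
    { name: 'ECDSA', namedCurve: 'P-256' },
    true,               // extractable, so the public key can be published
    ['sign', 'verify']
  )

  const post = new TextEncoder().encode('A new blog post')
  const signature = await crypto.subtle.sign(
    { name: 'ECDSA', hash: 'SHA-256' }, keys.privateKey, post
  )

  // Any reader holding only the public key can check the signature;
  // the site never learns a password or a private key.
  const valid = await crypto.subtle.verify(
    { name: 'ECDSA', hash: 'SHA-256' }, keys.publicKey, post, signature
  )
  console.log(valid)  // true
}
demo()
```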

Making Money by Publishing on the Distributed Web

Public-private key pairs are central to how Bitcoin works, and this fact can be useful. In Bitcoin, an account name is derived from a public key, such as 1KAHLE1taA85EXaVm1XuVYtbGp839MyEzB. With Bitcoin, people can create as many accounts as they want. An account really has an effect only when someone has created a transaction using it, thereby depositing bitcoins into that account. Anyone can deposit money (bitcoins) into an account, but only the holder of the private key can transfer the money out to another account.

If the Distributed Web uses the same math function for creating public and private keys that Bitcoin does, then the Distributed Web’s identity system will be compatible with Bitcoin accounts. This has an interesting advantage: anyone could leave a tip for any writer on the Distributed Web, because a writer’s public key would double as a Bitcoin account. In this way, we could make it easy for payments, even very small ones, to be made on the Distributed Web.
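
As a small sketch, a site could attach a standard bitcoin: payment link (the BIP 21 URI format) to each post; the address is the example from the paragraph above and the amount is arbitrary.

```javascript
// A small sketch of a tip link using the standard bitcoin: payment URI
// (BIP 21). The address is the example from the paragraph above; the
// amount is arbitrary.
function tipLink(address, amountBtc) {
  const label = encodeURIComponent('Tip for the author')
  return `bitcoin:${address}?amount=${amountBtc}&label=${label}`
}

// A distributed site could attach this to every post, derived from the
// author's published key, so any reader's wallet can pay in one click.
console.log(tipLink('1KAHLE1taA85EXaVm1XuVYtbGp839MyEzB', 0.0001))
```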

I believe it would even be possible to use Bitcoin-like technology to require a payment before a reader can decode a file, say a movie. In this way, we may have a distributed way to sell digital files on the Internet without any central clearinghouse. It would still be possible to rip someone off by buying a file, decoding it, and then redistributing it, but that is true now. What would be different is that it would be easy to make micropayments and full purchases on the Distributed Web without third parties getting involved or taking a slice. Automated tipping could even be installed to make micropayments a default behavior.

Locking the Web Open

In conclusion, through the last 25 years, people have poured their lives and dreams into the World Wide Web, yielding a library and communication tool that is unprecedented in scale. We can now build a stronger tool on top of the current Web to offer added reliability, privacy, and fun.

Our new Web would be reliable because it would be hosted in many places and in multiple versions. Also, people could even make money, so there would be extra incentive to publish on the Distributed Web.

It would be more private because it would be more difficult to monitor who is reading a particular website. Using cryptography for the identity system makes identity less tied to one’s person, so there is an ability to walk away without being personally targeted.

And it could be fun, since it would be malleable and extendable. With no central entities regulating the evolution of the Distributed Web, the possibilities are much broader.

Fortunately, the needed technologies are now available in JavaScript, Bitcoin, IPFS/Bittorrent, Namecoin, and others. We do not need to wait for Apple, Microsoft or Google to allow us to build this.

What we need to do now is bring together technologists, visionaries, and philanthropists to build such a system that has no central points of control. Building this as a truly open project could in itself be done in a distributed way, allowing many people and many projects to participate toward a shared goal of a Distributed Web.

Together we can lock the Web open.

We can make openness irrevocable.

We can bake the First Amendment into the code itself, for the benefit of all.

We can build this.

We can build it together.


Previous writings on this subject:

https://blog.archive.org/2012/10/22/siteless-website-possible-if-bittorrent-is-a-fileserver-without-a-server-what-about-a-website-without-a-site/

https://blog.archive.org/2012/02/15/want-to-help-build-a-distributed-web/

https://blog.archive.org/2015/02/11/locking-the-web-open-a-call-for-a-distributed-web/

http://thenextweb.com/insider/2015/04/10/locking-the-web-open-why-we-need-to-rethink-the-world-wide-web/

Distributed systems and communities: Maelstrom by Bittorrent, MaidSafe, Namecoin / Ethereum, Bitcoin for payments, Proof of Storage (blockchain), Oceanstore, I2p, IPFS, Storj, Peer5, Tahoe-LAFS, Twister, Peerjs / Web RTC, BitcoinJS, Redecentralize.org, get-d.net.


How about 3 billion people, all living the good life?

We talk about rolling back our pollution production to 1990 levels, we talk about solving world poverty, we talk about reducing water use. Why don’t we talk about rolling back our population explosion? What if we rolled back the human population to the 1960s level of 3 billion? Might this not help us solve all of this? 1960 did not feel like an uninhabited world– what if we went back to that population level? I am not suggesting a war or anything, just a decrease in the same way we increased– naturally, but with determination.

It is possible that the world could sustain 3 billion people, with all of them living the good life. Could we have the courage and the resolve to find a peaceful way to achieve this? First of all, it takes discussing it.

So, for a moment, let’s explore what the earth would be like with modern technology, but with 3 billion people. Let’s explore what would happen if most or all of these 3 billion got to live “the good life,” or more exactly, got to live the life they wanted to live. What is the good life? Healthy food, clean water and air, good medical care, shelter, desirable professions and reasonable workdays. Time off for family and fun. I find the upper middle class life in the United States pretty desirable. What if everyone had the opportunity and choice to live that way? Could we make it work?

Reuters reports that the world’s middle class has not grown much since 1960, but the number and percentage of people in poverty has grown greatly. What if we could have the 3 billion people on earth be middle class and above? This would mean we would replace the haves-and-have-nots with the haves-and-have-mores.

With current technology developments, we soon will no longer need poor people to make the gadgets for the haves– we have robots coming online at a very rapid pace. I visited the Tesla factory a month ago and it is a sea of robots. There were lots of people there, but they were not doing the robot-like work– they were talking with each other and watching over the consoles. Soon our phones will also be assembled by robots, especially if the price of labor goes up. People will work, but they will not have to work like robots.

We do not need poor people to toil on farms to feed us either. Some people will like to farm, which is terrific, but we will not need to import people far from their homes, pay exploitative wages, and give them few citizens’ rights, just to grow the food for the good life. Already corn, wheat, and soy are tended mostly by machines. Fruits and vegetables can also be tended by machines if there are not people willing to pay a reasonable wage to those who choose to do this work. With better technology we could eliminate most of the need for pesticides and destructive fertilizers. With better sensors we don’t need to use as much water. We can take care of the basics with machines, and leave the fun work for people, the work they choose to do. What we need is to not have to keep growing exponentially to feed a surging population, each person expecting better and better food.

Other low-paid jobs can be replaced and are being replaced by machines: customer service, retail, taxi driving, garbage removal, and the like. Good riddance. So people can live a good life without an underclass to serve them. If people want to have those jobs, they can, and they might, if others are willing to pay them enough that they could have a good life. The key here is that they do not have to.

Why 3 billion? It is a pretty arbitrary number, but it might have some things going for it. For instance, if we take the top-earning 3 billion of the current 7 billion people, and look at the water, energy, and land they use, that could be an approximation of what we might use if there were 3 billion people living the good life. We need to develop ways to conserve water and energy, and I am assuming these efforts will continue.

World energy consumption has tripled or quadrupled since 1960. Since the population has about doubled since 1960, and the well-off use most of the energy, we may not bring our energy use down much from current levels, but it would not rise above them.

World food production has grown since 1960, some of it unsustainably; returning to the 1960 population level means we would all be fed, and fed well. And we must do this with less strain on the Earth, and having a smaller population would help.

The earth might be able to sustain 3 billion people living the good life.

How would we reduce our population?

First we have to want to. It starts with a goal, and an explanation. This essay is my attempt to at least bring up the subject.

Each country could strive to get their population back to where it was in 1960. Each would come up with different ideas on how.

Then we need to have a way to do it. China implemented the ‘one child policy’ in 1980, which has helped it bend its population curve. Indonesia ran a program called “2 is enough,” which has helped somewhat. We will need to continue these programs and spread them. Waiting for infant mortality or disease to sweep through is not a good future; we need to commit ourselves to a one-child policy until we get the population back to where we think it can be.

This requires popular will and political will. Currently, most countries are not strong enough to even have the conversation, much less implement it. China is the exception to the rule, not the rule. I think of what would happen to an American politician who proposed a one-child policy, or even a “two is enough” policy– probably nothing good.

So we need to start at the popular level and bring up the discussion. Get the meme rolling. Maybe 3 billion is not the right number, maybe it is 2 billion, maybe it is 4– but let’s have the discussion.

Starting now.

Locking the Web Open, a Call for a Distributed Web

Presentation by Brewster Kahle, Internet Archive Digital Librarian, at the Ford Foundation NetGain gathering: a call from 5 top foundations to think big about prospects for our digital future. (via archive.org)


Hi, I’m Brewster Kahle, Founder of the Internet Archive. For 25 years we’ve been building this fabulous thing—the Web. I want to talk to you today about how can we Lock the Web Open.


One of my heroes, Larry Lessig, famously said that “Code is Law.” The way we code the Web will determine the way we live online. So we need to bake our values into our code.

Freedom of expression needs to be baked into our code. Privacy should be baked into our code. Universal access to all knowledge. But right now, those values are not embedded in the Web.


It turns out that the World Wide Web is very fragile. But it is huge. At the Internet Archive we collect 1 billion pages a week. We now know that Web pages only last about 100 days on average before they change or disappear. They blink on and off in their servers.


And the Web is massively accessible, unless you live in China. The Chinese government has blocked the Internet Archive, the New York Times, and other sites from its citizens. And so do other countries every once in a while.


So the Web is not reliable. And the Web isn’t private. People, corporations, countries can spy on what you are reading. And they do. We now know that Wikileaks readers were targeted by the NSA and the UK’s equivalent. We, in the library world, know the value of reader privacy.


But the Web is fun. We got one of the three things right. So we need a Web that is Reliable, Private, but still Fun. I believe it is time to take that next step. And it’s within our reach.

Imagine “Distributed Web” sites that are as functional as WordPress blogs, Wikimedia sites, or even Facebook. But how?


Contrast the current Web to the internet—the network of pipes that the World Wide Web sits on top of. The internet was designed so that if any one piece goes out, it will still function. The internet is a truly distributed system. What we need is a Next Generation Web; a truly distributed Web.


Here’s a way of thinking about it: Take the Amazon Cloud. The Amazon Cloud works by distributing your data. Moving it from computer to computer—shifting machines in case things go down, getting it closer to users, and replicating it as it is used more. That’s a great idea. What if we could make the Next Generation Web work like that, but across the entire internet, like an enormous Amazon Cloud?

In part, it would be based on Peer-to-peer technology—systems that aren’t dependent on a central host or the policies of one particular country. In peer-to-peer models, those who are using the distributed Web are also providing some of the bandwidth and storage to run it.

Instead of one web server per website we would have many. The more people or organizations that are involved in the distributed Web, the safer and faster it will become. The next generation Web also needs a distributed authentication system without centralized log-in and passwords. That’s where encryption comes in.


And it also needs to be Private—so no one knows what you are reading. The bits will be distributed—across the Net—so no one can track you from a central portal.


And this time the Web should have a memory. We’d build in a form of versioning, so the Web is archived through time. The Web would no longer exist in a land of the perpetual present.

Plus it still needs to be Fun—malleable enough to spur the imaginations of millions of inventors. How do we know that it can work? There have been many advances since the birth of the Web in 1992.


We have computers that are 1000 times faster. We have JavaScript that allows us to run sophisticated code in the browser. So now readers of the distributed web could help build it. Public key encryption is now legal, so we can use it for authentication and privacy. And we have Block Chain technology that enables the Bitcoin community to have a global database with no central point of control.


I’ve seen each of these pieces work independently, but never pulled together into a new Web. That is what I am challenging us to do.

Funders, leaders, and visionaries: this can be a Big Deal. And it’s not being done yet! By understanding where we are headed, we can pave the path.


Larry Lessig’s equation was Code = Law. We could bake the First Amendment into the code of a next generation Web.

We can lock the web open.
Making openness irrevocable.
We can build this.
We can do it together.


Delivered February 11, 2015 at the Ford Foundation-hosted gathering: NetGain, Working Together for a Stronger Digital Society


The Myth of the Self-Made Man, or maybe better said: Thank You.

My dear sons, Caslon and Logan–

Those who proclaim themselves “Self Made,” I am convinced, are either ignorant or marketers. Ignorant of the many people that help us along, push us along, often without recognition. Or marketers who are purposefully simplifying the story so as to sell something, in this case, themselves.

One proverb is “we make our own luck,” which, like all clichés, has some truth to it but misses the point: circumstances and people outside of our control make up the vast majority of the “luck” that lands us where we are. Gender, the economic cycle, “being at the right place at the right time,” an unseen opportunity becoming available because someone else offended yet another person at just the right time, at least the right time for you.

For myself, I feel lucky. And in my case, because I was lucky. Born white, male, upper-middle class, to a functional family, in a country whose economy was soaring (because the rest of the world had been bombed flat 15 years before), I was off to an easy start. Being a math geek at the fortunate time when computers were on the rise let me learn from a high school friend, Rob Bedichek, who hand-wired his own computer out of logic chips and switches. Then the combined miracle of supportive teachers and an unseen college admissions officer who let a barely top-20%-of-his-class guy into MIT gave me a leg up that I have not been able to live down completely.

Top career advice came a few times, once from Professor Gerry Sussman when I called to ask him to hire me into his lab so I could learn to make chips to protect the privacy of all phone callers. He said, “I don’t know you, why should I hire you? Just come in, start working, and if you are any good then someone will hire you.” A little rough, kicked me in the butt, and his lab delivered– best advice ever.

Another was from Marvin Minsky and Danny Hillis: try to do your big idea, maybe in steps, but with the big idea always in mind– you may not achieve it, but achieving a goal is overrated. It is the journey and your fellow travelers that are the point. As Laurie Anderson pointed out from Moby Dick, it might not turn out so well for those who get their whale.

Outdated implicit advice from my parents: keep your head down. They lived this advice, having grown up in the McCarthy-era United States, when trying new things could get you blacklisted, and did tank many people’s possible careers. But I found being bold to have worked well in my era: the more ‘open’ the better; the more giving and straightforward, the better. People then understood what I wanted to do, and could more easily help. (This approach reached a logical, but tragic, conclusion with Aaron Swartz, someone who lived a completely open-source life and worked for the public good, but who was crushed by the institutions around him and driven to suicide.)

Why do I get a beautiful view overlooking the San Francisco Bay this February morning? Why do I get to go sailing with a beautiful wife most weekends? Thank you.

So what is the point? I have been invested in; I have been made by my communities. At most we can be worth investing in, and appreciative of the support we receive, at the same time as we invest in and help others along their paths. Few will win the lottery, but we can take satisfaction, and happiness, in the successes around us.

Thank you, all.


When I First Realized I was being Lied to…

When I first realized I was being lied to, systematically lied to, and by the government, I felt upset, then duped, and it started me thinking: how far does this lying go?

It was at the beginning of college, and it was the government’s messages about the Vietnam war and marijuana. I found out that it was not just a matter of point of view, of older-wiser people teaching lessons I was resisting. No, it was flat-out lies. Things they knew were wrong, but were saying were true. Lies. It was hard to take.

This would have been 1978, and I was 18 years old, when it felt like a light was turned on in the room. It may sound like I was naive or unusually sheltered, but I don’t think I was that abnormal. I was taught that police were to be feared and respected; their tactics might be harsh, but it was a grownup world and their motivations were mature.

But it was the stinging realization that these lies made a big difference in people’s lives, my life, that stabbed me, then made me doubt, question, and distrust the powerful– shifting my thinking toward power rather than maturity.

The drug messages of the 70’s were pervasive… if you start with marijuana you will end up on heroin and in a gutter. Marijuana made you crazy and would lead to birth defects. It was in the movies we were shown, it was on the TV news and in newspapers, it was in underground books that were circulated, like “Go Ask Alice,” that were lies all the way through. And that was not all.

The Vietnam police action (it was not a war, they said) was necessary to stop a domino strategy of communist world domination. Resisting was un-American, ungrateful, a failure to fulfill our social contract– not doing the Right Thing. Vietnam was a puppet of China, we were always about to win, we don’t want our boys to have died in vain. As I found out, I was being lied to. Systematically, knowingly, and with grave consequences.

I started to, as they say, “Question Authority,” and I found more and more holes in the logic and rottenness in the motivations. I looked for answers in philosophy classes; maybe they could help me figure out if I should register for the draft. I took a class on social contracts studying Hobbes, Locke, and Rousseau, but the Leviathan seemed to be a justification, and an ahistorical justification, for absolute monarchs. I studied western religions at a divinity school, but those approaches did not seem to pass the logic tests. Only Zen Buddhist practice seemed to avoid obvious shortfalls, but it gave only general guidance. Maybe that was the best we could do.

Reading real scientific studies of the effects of drugs on the brain was a way to find very different answers from those in Time magazine and on the evening news that purported to be built on the same evidence. Scientific writing, and the scientists behind it, while limited, seemed to at least not just be making things up to justify the agenda of the powerful.

Maybe this is just growing up, but I don’t think it has to be this way. We do not need to teach our children from a young age that they are being consistently lied to by powerful entities, that they are being sold things that are bad for them, that they should fear the police and not believe them because the police are encouraged to lie to get confessions.

We can do better than this; we can build and live in societies where we do not have to constantly question secret motives. We can dis-empower the institutional structures that profit through deception. Large corporations and governments seem to have incentives to take shortcuts and deceive. Maybe we could replace their functions with responsive and local organizations that are transparent and straightforward. Invest in those we trust, and teach our children that they do not need to accept deceit as “just the way it is.”

As a kid, being caught lying was a big deal that came with consequences. Let’s have that apply to grownups too.

Tower of Babel Story Celebrating Monoculture or Diversity?

I started to question what I was taught about the Tower of Babel story, and it turns out I was not the only one. Oxford Biblical Studies Online analyzes it and comes up with a non-traditional and, from my point of view, much more positive view of it. I will get to their argument after mine.

I was taught that the building of the Tower of Babel showed human hubris– people trying to build to the heavens. Then God punished them by giving them many languages so that they could not get anything great done. Lesson– having many languages is punishment. One language makes a coherent, productive society; just don’t use it to try to challenge God.

Where others saw this as a positive lesson of coherence and humility, I saw it as celebrating monoculture, celebrating racism. In the historical context, I think of Babylon (Babel) as a cosmopolitan city, and not dominated by the Hebrews. Hebrews were outsiders, even a rebel movement, and could use this argument against a polyglot, diverse environment. So this point of view was understandable, but not a positive lesson for me: be all the same and we will succeed; being different is a punishment to reduce productivity. This is not only a racist argument, it does not reflect productive societies.

But what if this is not what is meant by the Tower of Babel story at all? Well, an Oxford project seems to agree.

The King James Version is only 9 lines long, and please, please just take a minute and read it. It is shockingly short, and worth it.

The Tower of Babel
1 And the whole earth was of one language, and of one speech.
2 And it came to pass, as they journeyed from the east, that they found a plain in the land of Shinar; and they dwelt there.
3 And they said one to another, Go to, let us make brick, and burn them thoroughly. And they had brick for stone, and slime had they for mortar.
4 And they said, Go to, let us build us a city, and a tower, whose top may reach unto heaven; and let us make us a name, lest we be scattered abroad upon the face of the whole earth.
5 And the LORD came down to see the city and the tower, which the children of men builded.
6 And the LORD said, Behold, the people is one, and they have all one language; and this they begin to do: and now nothing will be restrained from them, which they have imagined to do.
7 Go to, let us go down, and there confound their language, that they may not understand one another’s speech.
8 So the LORD scattered them abroad from thence upon the face of all the earth: and they left off to build the city.
9 Therefore is the name of it called Babel; because the LORD did there confound the language of all the earth: and from thence did the LORD scatter them abroad upon the face of all the earth.

The Oxford Biblical Studies Online project has a very interesting reading of both the text and people’s readings of it over the years.

“The Oxford Biblical Studies Online provides a comprehensive resource for the study of the Bible and biblical history. The integration of authoritative scholarly texts and reference works with tools that provide ease of research into the background, context, and issues related to the Bible make Oxford Biblical Studies Online a valuable resource not only for college students, scholars, and clergy, but also anyone in need of an authoritative, ecumenical, and up-to-date resource.”

This analysis shows that, over the years, the “hubris” interpretation was emphasized and expanded on through translations.

But a reading of the text seems to them to yield a different interpretation altogether: that people gathered together into a city in fear, “lest we be scattered abroad upon the face of the whole earth” (verse 4). So it was fear, not hubris. God’s reaction was to observe that they had only one language (v. 6), to give them more languages, and “thence did the LORD scatter them abroad upon the face of all the earth” (v. 9). So, was this punishment, or trying to get “Genesis” to happen– which was spreading people all over the world? The Lord was certainly contradicting what those people wanted, turning a cloister of homogeneity into something else, but was it a punishment or His intention to get the human race to spread out?

So the Lord’s reactions could be interpreted quite differently depending on whether you think He saw hubris and wanted to crush them, or saw fear of dealing with the world and wanted to help people spread out across it.     If you thought your early ancestors were the original people, and that their descendants created the many languages and peoples of the earth, then the latter interpretation, where the Lord gave people the nudge they needed to spread out, reads as a good one.     And one celebrating diversity.

If, on the other hand, one thinks the rightful language did not get its due, then having a wrathful God punish the hubris innate in those building productive cities would make sense.   This interpretation celebrates monoculturalism.

What I like about this musing is that it may fit with historical trends: the text may have been interpreted as celebrating diversity long ago, when the Hebrews thought of themselves as starting things off and leading to all other peoples.    But maybe as time went on the priority shifted to defining a people in a populated environment, and an environment of adversity, reinforcing the traditions and language that would lead to the longevity of the group.

Personally, I will take the “diversity is good” interpretation, and it has been interesting to find that others have as well.

http://global.oup.com/obso/focus/focus_on_towerbabel/

Posted in Uncategorized | Comments Off on Tower of Babel Story Celebrating Monoculture or Diversity?

Are ‘Road-Trains’ Coming?

I think they could be a very good thing, and one that could evolve incrementally from existing technologies.

I imagine “Road-Trains” as made up of cars on highways that automatically adapt to what the cars in front of them are doing, and that can follow more safely, and at closer distances, than when people are in control.

A Road-Train is a column of cars flying down the highway, at very high speed and close together.    Because each car could “draft” the others, as bicycle teams riding in close formation do, they would get much improved gas mileage: only the first car would face the inefficiency of breaking the wind that is now the blight of every car on the road.

Drivers would have to explicitly engage and disengage from the “train,” but their cars would signal the other cars, and their drivers, that this is what they intend to do, so that wider spaces could be opened and signals given to help the driver match speed.

These actions could be made as safe as the lane changes we presently make on highways all the time, but in this case they would happen much more rarely, since once you are in a Road-Train your car can help you drive.

We could think of merging into a Road-Train like using a highway entrance ramp: we have to speed up, match speed, and merge in, alerting the drivers in the cars ahead and behind that something is going on so they can pay more attention.

Google has said that people engaging manual controls unpredictably was a problem:  “Google said its testing had suggested it was safer to remove conventional controls altogether because the results of a human having to take over suddenly and unexpectedly were unpredictable and potentially dangerous.”  A Road-Train would therefore make engaging manual controls a predictable and supported maneuver, and one that would only be needed when joining or leaving the train.

Leaving a Road-Train would be done by a signal, like a turn signal, and the cars would again alert each other, create a wider space, and let the driver safely engage manual control and leave.

Cars have almost all the technology needed:  they know how to stay in a lane; they now apply brakes automatically to avoid collisions; and cars can talk to each other to send alerts.     So the Road-Train is almost ready from a technology perspective.   And a Road-Train would apply Google’s lesson by minimizing the number of times people have to take or release control of the car.
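
To make this join-and-leave choreography concrete, here is a minimal sketch in Python of the handshake described above. Everything in it (the class, the states, the events) is hypothetical, an illustration of the idea rather than any real vehicle software:

    # Hypothetical sketch of the explicit join/leave handshake for a Road-Train.
    # States: MANUAL -> REQUESTING -> MATCHING -> IN_TRAIN -> LEAVING -> MANUAL
    from enum import Enum, auto

    class State(Enum):
        MANUAL = auto()      # driver in full control
        REQUESTING = auto()  # driver signaled intent to join; train alerted
        MATCHING = auto()    # train opens a gap; car helps driver match speed
        IN_TRAIN = auto()    # car follows automatically at close distance
        LEAVING = auto()     # train opens a gap; driver resumes control

    class PlatoonCar:
        def __init__(self):
            self.state = State.MANUAL

        def request_join(self):
            # Like signaling before a lane change: broadcast intent to the train.
            if self.state is State.MANUAL:
                self.state = State.REQUESTING

        def gap_opened(self):
            # The train acknowledged us and widened a space.
            if self.state is State.REQUESTING:
                self.state = State.MATCHING

        def speed_matched(self):
            if self.state is State.MATCHING:
                self.state = State.IN_TRAIN   # automation takes over following

        def request_leave(self):
            # Like a turn signal: neighbors open space and alert their drivers.
            if self.state is State.IN_TRAIN:
                self.state = State.LEAVING

        def driver_took_control(self):
            if self.state is State.LEAVING:
                self.state = State.MANUAL

    car = PlatoonCar()
    car.request_join(); car.gap_opened(); car.speed_matched()
    print(car.state)   # State.IN_TRAIN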

The result could be much greater gas efficiency because of drafting.   Further, we could safely increase the density of cars on the road.   Since these cars would be both closer together and going faster, road throughput would be much higher.     We would therefore have far fewer traffic jams; or, thought of another way, our traffic jams would be moving at full speed.
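
A rough back-of-the-envelope for that throughput claim (the numbers here are mine, purely illustrative): a lane carries roughly one car per headway, where the headway is the following gap plus a car length.

    # Illustrative throughput arithmetic: cars per hour ~= speed / headway.
    def cars_per_hour(speed_mph, gap_seconds, car_length_ft=15.0):
        speed_fps = speed_mph * 5280 / 3600             # mph -> feet per second
        headway_ft = speed_fps * gap_seconds + car_length_ft
        return 3600 * speed_fps / headway_ft

    print(cars_per_hour(65, 2.0))   # human driving, ~2 s gap: ~1,700 cars/hour
    print(cars_per_hour(80, 0.3))   # close-coupled Road-Train: ~8,400 cars/hour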

Another advantage is that the time we spend in our cars could be more productive, because we would not need to be as attentive to the road.   We could do many of the things people are already doing, but safely: texting, reading, talking.   It could be like having a custom railway car, going very fast, and delivering you right to your door.

Car-pool lanes could become dedicated Road-Train lanes for all the right reasons: better fuel efficiency and higher density.   This would reward those who have Road-Train-ready cars without requiring a cut-over from non-equipped cars.   Thus a smooth transition could be made, while also giving those of us who like manual controls a way to stay in control when we want to be.

I would much prefer being in a Road-Train for a long drive, or even commuting.

All aboard the Road-Train!

Posted in Health, Uncategorized | 3 Comments

“Change the World”, “Do No Evil”, what about something else– “Amplify Good” or ?

Maybe we should question mottoes and think of new ones.

Steve Jobs would often say he was “Changing the World.”   Google’s founders repeated a motto of “Do No Evil.”   Mottoes may seem trite, but they form part of the mythology of how we see our roles in the world.

“Changing the World” is a forceful, and frankly arrogant, statement.   Corporations can magnify the power of a few leaders, mass production repeats one design, the tools of computers made dominant by companies shape how end users work and play.   So yes, an Apple Computer can “Change the World”.      But should one company decide how to “change the world”?

“Do no evil” seems to me to set the bar a little low.    Should we strive to “do no evil” or to do something actively good?    I understand that not doing evil may not have been the goal of Google’s founders so much as a gut check that employees should perform on their projects.    But it became a motto to the outside population, and a test they are held to by outsiders.    It has become part of the myth of purpose put upon Google.    And in that light, setting “not doing evil” as a goal seems like a pretty low vision.

So, what could new mottoes be?   I would hope they would reflect values we want to encourage.   I would suggest emphasizing the values of being inclusive, selfless, positive, flexible, respectful, and growth oriented.    The possibilities suddenly feel wide open.

Maybe I am growing older, but I am finding that maybe I am not the center of the world, to be trusted to “change the world” even given the opportunity.   More of my role is to teach, foster, and reinforce those actions I have found worthwhile.

I imagine all of us have been trying to find these themes in our actions and guiding ideas.    I have no conclusion here, but a puzzle that might be worth thinking about in the car or the shower.

A few nominations to start:   “Amplify Good”, “Path for a Fruitful Life”.   Any ideas?

Posted in Education, Uncategorized | 7 Comments

Foundation Housing Talk at SoCap 2014

I had the opportunity to introduce Foundation Housing at the Social Capital Conference yesterday.    Fun.  This is a conference for people trying to figure out how to invest in things that have humanitarian benefits.     Foundation Housing may be a new idea, but it seems to fit.   Here is the video:

Posted in Housing | Comments Off on Foundation Housing Talk at SoCap 2014

Creating Free and Open Societies

Creating Free and Open Societies

From Free and Open Source Software, to

Non-profit High-tech Organizations, to

Debt-free Affordable Housing

Brewster Kahle, the club, May 12, 2014, cc-by (v3: msword, pdf)

tl;dr summary:  Public Licenses rule (in software and creative works); but if extended further, Public Licenses could be a better law structure for creating Free and Open Societies within restrictive legal systems.

Most of us participate in a few different societies at the same time, say, our family and our country. Thought of this way, a society has its own rules and patterns, its own forms of participation and success, its own myths about how it is supposed to work. American society is a political and economic one that is sold as democratic and based on equal opportunity.

Unfortunately, growing evidence shows that American society is barely democratic and that economic advancement is denied to most who work hard. A recent paper from Princeton evaluated over 1,700 federal government policy decisions made in the last 30 years and found that “economic elites and organized groups representing business interests have substantial independent impacts on U.S. government policy, while average citizens and mass-based interest groups have little or no independent influence.”   Therefore, according to this research, the vast majority of the population has little or no say in how the federal government makes policy decisions. Similarly discouraging is the economic analysis of the last 30 years finding that the increase in American wealth went only to the wealthiest 1% of the population, with the other 99% essentially staying even. Therefore, there has not been equal opportunity for economic success in the United States for the vast majority of the population.

To try to correct this divergence between the ideals of our society and reality, some are working on political solutions. But, interestingly, others are building alternative societies within the broader society: societies that operate with different rules that seem more fair, more participatory, more generally beneficial. These societies do not have the power of law to help regulate them, so they have had to be creative about how to make themselves function.

This paper will explore some of these approaches to building societies with more participation and more opportunities for individual success. These societies are still embedded in a broader legal regime, one that does not share their values, but that can nevertheless be used to protect the new societies’ rules if done cleverly. Specifically, the laws that create “property interests” in land and ideas can be used to defend subsets of people who want to share anyway. In this way, there have been attempts to create a balance for the public interest by creating sub-societies that operate with different rules and for different ends. While difficult to set up, we will see that some have been successful, and if government policy continues to be dictated by small minorities, it may be worth building more systems based on this model.

An early example, again from the United States: after the government declared most of the lands of the newly expanded United States to be private property in the 19th century, this did not sit well with many. In reaction to what was seen as “over-propertization” of land, the United States government started the National Park Service in 1916 to somewhat counter this enclosure. Many individuals went on to donate or sell lands to the government to be protected for the common good as parks. But a new system of land control for the common good also evolved, I imagine as a reaction to government not serving that goal as well as desired:  the growth of the Nature Conservancy, founded in 1951, a non-profit that buys and accepts donations of land in order to protect it. It is interesting that it now protects 119 million acres of land, which is substantial considering the US National Park system contains 85 million acres.

The donors were voluntarily limiting their ownership privileges, or just giving them up, in order to build a public commons that served the public interest. Thus a non-governmental approach was created, in this case leveraging a centralized organization, to benefit the broad public.

When another problem arose, this time with copyright, a distributed solution was created to fantastic effect: Free and Open Source Software. This voluntary releasing of property interests has gone on to be the foundation of many successful industries. This paper will discuss how this came about and how it might be used to go further, to create other systems that benefit many more people than the system created by our government and corporations. While this approach is difficult to implement, it seems to be a novel technique to counter the expansion of property interests.

Free Software Ideas

Richard Stallman created the idea of Free Software in 1984 as a reaction to a problem of ‘propertization’ brought about by a major change in copyright law. When this propertization was applied to software it caused major problems, which led Stallman to create a new and powerful tool: the Free Software License, or GNU General Public License. Using licenses in a new way, he sought to create an environment that encouraged sharing, a kind of voluntary law structure for when laws had caused unwanted division. This creative use of licenses, which he called “public licenses,” was a major innovation that built a decentralized law structure that people could voluntarily join.

Thus a society, within but different from the larger governing and corporate society, was deemed better by enough people that it went on to make a difference in the important industries of high technology. In some important way, people opted out of the laws passed by the government to make a new, more free and open society, by using public licenses to govern themselves. Taking lawmaking into their own hands in this way was only necessary because the laws passed by the government were not serving many people very well.

Public License

Usually licenses are between two people or entities, but a “public license” is between an owner and the general public. So it is quite an unusual license, and one that may not have had a precedent before Richard Stallman. Also, a public license sticks: once it is legally affixed to a copyrighted work, or potentially to other owned things, it does not expire, and the public can continue to use the licensed work. A feature of works under the GNU General Public License is that they can be used in combination with other works under that License. In this way, a piece of software under the GPL can be linked and mixed with other GPL code, whereas it cannot be mixed with proprietary code. This caused Microsoft to worry that this license could be a threat to its ongoing business, and as we have seen as GNU/Linux has expanded, they had reason to worry.
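
As a toy model of that “stickiness” (my simplification, with a two-license universe, and of course not legal advice), the combination rule can be written down in a few lines:

    # Toy model of the copyleft combination rule described above.
    def combine(license_a, license_b):
        """License of the combined work, or None if it cannot be distributed."""
        if "GPL" in (license_a, license_b):
            if "PROPRIETARY" in (license_a, license_b):
                return None       # GPL code cannot be folded into closed code
            return "GPL"          # the share-alike term sticks to the result
        return "PROPRIETARY"

    print(combine("GPL", "GPL"))          # GPL: freely mixable
    print(combine("GPL", "PROPRIETARY"))  # None: the mix is not distributable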

A Public License is different from a protocol, such as the World Wide Web protocols or the Internet Protocols in that a License carries responsibilities. Richard Stallman, and his lawyer Eben Moglen, thought about this carefully to try to avoid the works being used to undermine the freedoms he set out to ensure. The Internet protocols do not have any such responsibilities engendered in their use, so there was nothing to guarantee ongoing “net neutrality” from corporations or from government. Therefore the Public License has a defense structure built in.

Public Licenses are also different from non-profit ownership, such as the Nature Conservancy’s, in that they are decentralized and have no provision for after-the-fact changes. Some have objected to the Nature Conservancy selling some lands, for instance, in order to protect others. The distributed nature of the public license means there are no change provisions, which has its own issues, but control stays with the original owner.

Public licenses can evolve, and indeed the GNU General Public License is now on version 3, but any owner of a copyrighted work is free to use an older version of the license. The debates and process around the version changes illustrate how mature the whole process has become, but that is beyond the scope of this paper.

Creation of Free Software

How the GNU General Public License and Free Software came about might let us see how to apply this idea more broadly. (While I did not participate in the founding of the Free Software movement, I worked with many of the founders at that time and engaged with the technologies involved.)

In 1976, US copyright law changed from opt-in to opt-out. Before 1976, to get copyright protection for a work you had to put a little c in a circle on something you had written and send a copy to the Library of Congress. After 1976, you got copyright protection whether you wanted it or not, for anything expressed in fixed form. This is an astonishing change. Suddenly everything written had an owner, became property, and could not be copied without permission until the author had been dead for 50 years. The first major problem with this new system came with software.

Starting in 1973, a group of staff members and graduate students at MIT wrote a new operating system called the Lisp Machine operating system. As a user of it, I can say it was like nothing before and very powerful.

We were excited about building something significant and new. The source code in this project was not signed by anyone, because that would have been seen as arrogant:  someone may have started a file with computer code in it as an initial version, but everyone was free to work on the software and make it better. It was a project in common; it was a commons.

But MIT, I imagine emboldened by the new copyright law, seemed to say to itself: if everything is owned by someone, then MIT owns this operating system, and therefore it can sell it to companies. Which it did, licensing it to Symbolics Corporation in 1979. This “forked” the code, making a copy that would be worked on independently. Even though there was still a copy that could be developed at MIT and at another company called Lisp Machines Incorporated, it meant that the developments at Symbolics were proprietary and separate. Users had to choose between versions which became incompatible.

Richard Stallman tried to keep the MIT version at feature parity, but it proved too hard, even for a programmer as prodigious as he is. The open version lost. To Stallman’s credit, he could see the future, and it was dark.

Symbolics grew and later died, and with it the Lisp Machine operating system. The system that could have been a major step forward effectively became the sole property of a single company, and with that company it died. The shame is not that a company died; they all do. It is that it brought down a community effort that was more important than a company or some licensing fees for MIT.

Richard decided to stop working on the Lisp Machine operating system and build a new system that would not have this problem. This new system would be built on a legal framework that would not allow the software to be forked into a proprietary version. The innovation was to create the GNU General Public License and build software using it. The idea of using this license to rebuild an old and simple operating system, Unix, was just convenience, and in fact a technical sacrifice, in order to be able to get software going again without the property problem that brought down the Lisp Machine operating system.

Again, I was not active in the early GNU project, but I was at MIT at the time and helped in small ways. What I learned from this was how a creative use of licenses could solve a societal problem brought about by bad laws.

The problem with the law was “over-propertization,” which caused division and loss. Over-propertization made it difficult to work together, made it difficult to share, and created conflict where none was needed.

Richard Stallman’s solution was ingenious:  use copyright law to try to recreate some of what we had before the law was massively expanded. He created a legal territory, a commons, in which people could write code that could be shared with one another, and would continue to be shared. Larry Lessig called it “Share and share alike.” It works by saying, in effect:  I will share this work with everyone, and anyone is free to use it for whatever they please; but if anyone adds to my work, they have to share the result in the same way. Almost childlike in its simplicity, it seemed to express what people intended, and in fact what people were used to doing. It took massive effort by Richard Stallman to get this going, not just by writing fantastic code but by spending years on the road explaining and recruiting people to use this system.

Interestingly, he did this by creating a distributed system with no centralized points of control. Unlike the Nature Conservancy, there was no one entity that controlled all the software. Rather, it was an “opt-in” system in which people used this “General Public License,” a license for re-use requiring no coordination between parties.

And Free Software took off. By the mid-1990’s a usable Unix operating system, GNU/Linux, was used in production. The new Free Software license, later joined by related but different Open Source licenses, was the glue that allowed thousands of programmers and companies to cooperate to build a complex operating system. Without these Public Licenses, all code would be proprietary and would require one-to-one licenses that could not scale to the cooperation needed.

Key ideas in Free Software were that there were no rents and no hoarding:  everyone was free to use the software and build on it, and if one did build on free software, the results had to be shared in the same way. In this way, the commons would grow and stay protected from restrictions forever.

In general, Free and Open Source Software was a better system for many applications, perhaps because there can be many participants and many beneficiaries rather than just a few proprietary owners.

Stallman’s licensing system is the best ‘hack’ I have seen (‘hack’ in the sense used in the technical community):  it allows people to evolve a working system, where “working” is determined by the system working better than the alternatives and attracting adherents. If there is proof of work in one system, it gains value and momentum. As people participate more in one system than in others, the better system comes to be the most useful. It is a form of voting with participation and work. Other systems, evolving at the same time, would succeed or not by working better. It is a form of competition that lives underneath and within the rule of law; these systems are all perfectly legal, so it is an evolution from within. It is an evolution that became necessary because the rule of law did not benefit enough people, so many opted for different rules that would produce a more productive and attractive environment.

The first hurdle for the Free Software idea was convincing people to give up some rights in their work, or as Larry Lessig put it: “Some Rights Reserved.”   This was not easy, but it gained momentum. Then, interestingly, there came to be other licensing arrangements, often collectively called “Open Source Licenses.” These became so numerous that a non-profit organization, the Open Source Initiative, was formed to adjudicate them. So there started to be competition even within this realm. The GNU Public License is now on version 3, having succeeded version 2 after months of open debate. Thus an ecosystem of rule systems was born.

The law could have followed the same path and reflected the improvements tested by these systems, in which case these Herculean efforts would have been unnecessary, but it hasn’t. And as the study cited at the beginning of this paper has shown, the laws are responding to the forces of elites and special interests rather than to popular will.

While the Free Software movement could be an antidote to current problems in governance, the level of effort needed to make this type of system operate should not be underestimated. Society would evolve much more rapidly and universally if there were feedback from these experiments back into the legislature, but there is little evidence of this yet.

The reason Richard Stallman’s GNU General Public License was needed in the first place was that the change in copyright law put everything written under the burden and restrictions of copyright. A simple way to decrease the need for this new idea would be to revoke some of the massive expansion of “property rights” created in 1976. Since Congress and the Supreme Court do not seem to be going in this direction, we will see more approaches like Free Software, and indeed we have.

Creative Commons License

The next step in the evolution of public licenses was the Creative Commons License from Larry Lessig, who said he expanded on the ideas of the GNU project. These public licenses applied the approach to movies, music, and written works. Some projects, like Wikipedia, had previously used the GNU license for textual works, but it did not fit very well, so when the Creative Commons Licenses were created they adopted the version closest to the GNU one, Creative Commons Share-Alike.

Before the Creative Commons Licenses were available, the Internet Archive struggled with the legal fees and agreements with those who wanted to add materials to the Internet Archive for free distribution. When these licenses were released, this became easier, because the copyright holder could select the one they wanted, with no legal negotiation. A public license has proven understandable to the copyright holder and to the users of materials from the web site. This has led to over 1.5 million works being uploaded to the Internet Archive under these licenses.

Another major use of Creative Commons has been in academic publishing. As commercial entities acquired academic journals and raised the fees to view papers, a reaction entity, the Public Library of Science, was funded by foundations to build an alternative. The papers published through this system, now over 80,000 a year, are all Creative Commons licensed and may be freely read. The new Digital Public Library of America is another example: it requires all contributions of library catalog records to be Creative Commons licensed so that the holdings can be analyzed as a whole rather than having to track licensing agreements on a per-record basis.

While these licenses have been successful in reflecting many people’s interests, governments have not changed their laws. This means that the vast majority of works live under the default case:  fully copyrighted. To help increase the responsiveness of government to public interests, Creative Commons founder Larry Lessig has gone on to concentrate on eliminating the corrupting influence of campaign contributions in politics, to strike at the root cause of the problem.

Defensive Patent License

Patent laws create a monopoly on the use of ideas for a period of time, which has led to severe problems and expenses for companies: patent trolls, vague patents, and expensive prosecutions[1].

I proposed applying the GNU Public License idea to patents in 2009; the idea has since been pursued by several law professors and is being launched this year as the Defensive Patent License.

This system has the same interlocking nature as the GNU License: anyone who registers their patents as defensive can then use all other defensive patents royalty-free. All other uses remain in the existing, litigation-filled environment.
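
A toy model of that interlocking registry (my illustration, not the actual terms of the Defensive Patent License):

    # Toy model of the interlocking rule: committing your patents defensively
    # grants royalty-free use of every other committed portfolio.
    defensive_pool = set()

    def register(entity):
        defensive_pool.add(entity)

    def royalty_free(user, patent_owner):
        # Both sides must have opted in; otherwise ordinary patent law applies.
        return user in defensive_pool and patent_owner in defensive_pool

    register("SmallCo"); register("BigCo")
    print(royalty_free("SmallCo", "BigCo"))  # True: both are in the pool
    print(royalty_free("Troll", "BigCo"))    # False: outside the pool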

While it is too early to tell whether this will be widely adopted, it is being taken seriously by small and large corporations because of the flaws in the current patent system.

High-Tech Non-Profit Organizations

I have spent so long on public licenses because of what I see happening next: these ideas being applied to organizations, specifically the evolving high-tech non-profit sector. Organizations such as the Wikimedia Foundation, the Mozilla Foundation, the Internet Archive, and the Electronic Frontier Foundation are, I contend, interesting as a parallel, or maybe even a next step, in the evolution of the Free Software ideas of Richard Stallman. If we think of them in this way for a moment, we might see even further steps we can take in this direction.

High-tech non-profit organizations are also a reaction to a problem:  over-propertization in corporations. Corporations, with their seeming obsessions with copyrights, patents, contracts, mergers and acquisitions, and non-disclosure agreements, are not a good fit for the long-term commitments needed to foster the sharing environment of free software and its developers.

So a solution was created:  non-profit organizations that adopted free or open source principles as core values and sometimes as legal requirements.

Free Software Administrative Organizations

Some of the first of these were mostly administrative, such as the Free Software Foundation, again from Richard Stallman, established in 1985, followed by the Open Source Initiative (1998), the Apache Foundation (1999), the Linux Foundation (2007), and the Document Foundation (2010). These organizations help administer the licenses that are then used by a widely distributed set of programmers in many different companies and universities, as well as by individuals acting on their own behalf. By being non-profit organizations, they could not be bought, and they were not “property” as corporations are. They have no shareholders; rather, they have a mission and a board to support that mission. By being mission-driven and regulated under US law, they are incorporated to serve the public.

Interestingly, Free and Open Source projects that start out in companies sometimes find a non-profit a better structure. For instance, an open-source competitor to Microsoft Word, OpenOffice, started in a company as proprietary software and was acquired by Sun Microsystems in 1999, which then open-sourced it (2000), and it became very popular. When Sun Microsystems was purchased by Oracle Corporation (2010), a group forked the code (2010) into a non-profit named the Document Foundation, in case Oracle would not continue developing it, renamed it LibreOffice, and publicly asked Oracle to open up. Oracle did so in 2011 by moving the project to the Apache Foundation. So the project has come to live in two different non-profits, which is a bit confusing for the time being. As a side note, it is interesting that Free Software does sometimes get forked, which creates competitors. This is a dramatic action that is not taken lightly, but it has happened, and it provides a legal and legitimate way to make sure work continues in the public interest. Competition is supported even in the commons.

Support and Advocacy Non-Profits

Another stage in the evolution was the formation of non-profit organizations to support Free Software and related initiatives. Since many of these projects did not have an institutional structure with employees and a legal department, the Electronic Frontier Foundation, founded in 1990, has been a pro-bono law firm for those trying to assert freedoms in the digital world. The EFF has grown to be a major law firm and advocacy organization that owns its own building and is building an endowment, yet it gives away its core product:  legal advice and litigation. So in a way, the EFF is a “free software”-like organization in that it gives away its product, yet it has grown to be successful and important.

Interesting to me is how long these have lasted as compared to the generally short life of most high tech corporations. As an anecdote, at the 20th birthday party for the EFF in 2010, I asked a fellow board member, Mitch Kapor (who started Lotus Development Corporation (1982), famous for early spreadsheets), “have you ever been involved in something that has lasted 20 years?”, and he wittily replied “my marriage, I hope” and then recounted that the important and successful Lotus Development Corporation lasted only a few years before it was bought and became unrecognizable.

Other support organizations have been created such as Public Knowledge (2001) as a Washington DC public interest group to represent these interests to Congress and regulatory organizations.

Creative Commons (2001) started as a public-license-creating non-profit, but evolved into an advocacy organization as well, running projects and outreach. While similar to the Free Software Foundation, it successfully raised money to run projects with different communities to create new licenses and software all over the world. Bits of Freedom (2000), in Europe, has helped protect and expand these ideas, as has the Open Knowledge Foundation (2004) in the UK.

Hybrid open-source and for-profit structures have also been tried. WAIS, Incorporated (1992) was based around an open source system distributed in 1991; AOL acquired the company in 1995, but the open source software continued somewhat. Cygnus Solutions (1989) offered commercial support for Free Software and was sold to Red Hat ten years later. Red Hat itself has been a corporation that has supported free software to varying degrees by creating a distribution of Linux. For that matter, Ubuntu is also a hybrid that produces a free distribution of Linux but also sells services and support. WordPress, by the for-profit Automattic, maintains a popular open source version of its core software, which helped it outcompete a more proprietary system called LiveJournal.

One important distinction between for-profit and non-profit organizations is that non-profits are very difficult to acquire, perhaps because there is much less monetary incentive for the leaders of a non-profit to merge with other organizations. This has led to a continuation of mission, which is the primary purpose of non-profits under US law.

High-Tech Non-Profits Go Operational

Some new high-tech non-profits are full-fledged infrastructure organizations, such as Wikipedia and the Internet Archive. While there have long been member-based or subscription-based services, such as the ISPs of the NSFnet era or shared infrastructure groups like JSTOR, those groups offered services only to their subscribers and did not adopt Public Licenses. The new high-tech non-profits are more like open source software in that they give away their main product to anyone.

Where the open source administrative and advocacy organizations did not need large staffs, these operational organizations are growing staffs and facilities.

The Mozilla Foundation, for instance, gets some money through grants and donations, but its wholly owned subsidiary, the Mozilla Corporation, currently earns about $300 million a year from Google, basically for ads. With this income it employs over 1,000 people in what appears to operate like a for-profit company, with for-profit salaries and structure, but it gives away its product.

The Public Library of Science is an interesting example because it competes with for-profit scientific journal publishers like Elsevier. It operates on an author-pays model, where those submitting a paper pay $1,000-$2,000 and the resulting work is open and publicly available, even in bulk. They have convincing statistics on the advantages to authors, in citations and reuse, of publishing in open access journals over publishing with for-profit companies that limit who can read the articles. Having started with generous grants from the Moore Foundation and Herb and Marion Sandler, they now more than cover their expenses and enjoy a $45 million budget.

Wikipedia raises about $50 million a year, and climbing, through user donations and grants. Its model has proven successful in keeping the free encyclopedia growing and heavily accessed, making it the 6th most popular website in the world.

The Internet Archive gets most of its $12 million a year from libraries to digitize books and collect web pages that anyone can then access for free. But it has been evolving some new systems that I will touch on in a moment.

So these high-tech non-profit organizations, as well as others such as Public.Resource.Org and the Internet Software Consortium, have made business models based on giving away their core product, and they thrive while doing so. The models are evolving, but they have generally done well through both recession and boom times. These organizations are creating systems that have many beneficiaries, with low barriers to participation, and that support decentralized structures. I find this encouraging:  we might have a way to make free and open societies without having to wait for government or corporations to change their priorities.

The next question is whether this approach can be used more widely, outside the high-tech sphere, to solve other problems our property-focused countries have created.

Free and Open Ecosystem?

As I have been trying to make the Internet Archive financially sustainable, we are pursuing both raising more money and spending less. Spending less has invited some creative solutions that leverage the idea of public licenses.

We first focused on reducing external, non-salary costs. The Internet Archive has already bought its own buildings so that it does not have to pay rent, which has been a substantial cost savings, especially on datacenter costs. The Internet Archive also lowered its electricity use by about 50% by forgoing air conditioning and using outside air to cool our machines. This unconventional approach has proven successful and is now being adopted by others. The Internet Archive went on to heat our own buildings with our computers’ heat for additional savings. Leveraging low-cost machines and low-cost hard drives, and then using these machines for multiple tasks, has kept down the millions of dollars we spend each year on technology.

But we seem to be running out of ways to reduce our external costs. The Internet Archive now spends about 70% of its budget on staff costs. While the largest expense for some of our employees is private school fees for their children, the biggest cost for most is housing, at between 30-60% of their take-home pay. Indeed, some of our staff who scan books spend 60% of their income on rent for their families. It turns out that this range is not just a San Francisco phenomenon, but a nationwide average. Diving a level deeper to find out why rents are so high, it turns out the answer is debt service. About two thirds of the rent people pay goes to paying the mortgage on the apartment building, or on their own home, as few of our employees own their homes outright.

So approximately 25% of the money we raise at the Internet Archive goes to service debt on our employees’ housing. Another way to look at it: if we had debt-free housing, we would not have to raise this money and might even have a higher standard of living.
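
A quick check of that 25% figure, using the ranges above (the mid-range housing share is my assumption):

    # Rough arithmetic behind the ~25% claim.
    staff_share   = 0.70   # share of the Archive's budget spent on staff
    housing_share = 0.50   # mid-range of the 30-60% of pay spent on housing
    debt_share    = 2 / 3  # portion of rent that services mortgage debt

    print(staff_share * housing_share * debt_share)  # ~0.23, roughly 25%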

Free and Open Housing?

To try to come up with a “Free and Open” approach to housing, we are trying something we are calling Foundation Housing. We set up a separate non-profit that bought an apartment building using donated funds to create affordable housing for non-profit workers. In a way, this takes an endowment out of the banking system and endows housing for our employees:  a permanent benefit afforded to the staff of the Internet Archive by leveraging an up-front donation. This system turns out to work even without the up-front donation, with just a down payment that allows an apartment building to be purchased with a mortgage. Since market-based rents generally cover the mortgage, if we put no subsequent debt on the building, the mortgage would be paid off in 30 years. And because market-based rents tend to increase while fixed mortgage payments do not, most buildings are effectively paid off in 12 to 15 years. Therefore, if one does not have as much money up front but is patient, housing can be transitioned to debt-free, and then used to supply affordable housing to these non-profit employees.
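
A small amortization sketch of that payoff claim. All the numbers (loan size, rate, starting rent, rent growth) are assumptions for illustration, not figures from an actual Foundation Housing purchase; the point is the mechanism of rising rents prepaying a fixed-rate loan:

    # Illustrative: rising market rents retiring a mortgage well before 30 years.
    principal   = 1_000_000   # loan used to buy the apartment building
    annual_rate = 0.05        # fixed interest rate (a 30-yr payment is ~$65k/yr)
    rent        = 80_000      # year-1 market rent income, above the bare payment
    rent_growth = 0.04        # market rents rising ~4% per year

    year = 0
    while principal > 0:
        year += 1
        principal *= 1 + annual_rate   # interest accrues on the balance
        principal -= rent              # all rent goes to the loan; no new debt
        rent *= 1 + rent_growth
    print(year)                        # 14: paid off in about half the term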

Key to this approach is keeping new debt from being put on the buildings. This is where the Public License idea comes in. The GNU Public License, for instance, binds a software program to be forever free. If we could create a Foundation Public License, a building under it would similarly be forever bound not to take on new debt. Properly constructed, this would take away the incentive to sell or encumber properties with debt, and therefore transition apartment buildings into enduringly debt-free housing.
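
In the spirit of the toy copyleft model earlier, here is how such a binding rule might look. This is entirely hypothetical; no Foundation Public License text exists yet:

    # Toy model of a hypothetical Foundation Public License rule:
    # once attached, a building may never take on new debt, and the
    # restriction has no removal provision, so it survives any sale.
    class Building:
        def __init__(self):
            self.fpl_attached = False
            self.debt = 0

        def attach_fpl(self):
            self.fpl_attached = True   # sticks, like a public license

        def borrow(self, amount):
            if self.fpl_attached:
                raise PermissionError("FPL forbids new debt on this building")
            self.debt += amount

    b = Building()
    b.attach_fpl()
    try:
        b.borrow(100_000)
    except PermissionError as e:
        print(e)   # FPL forbids new debt on this building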

Another feature of the Public License is defining whom the building is to serve, and how. A Foundation House would offer residence to the employees of selected non-profits and charge only the base cost of the building. Base costs are made up of upkeep, taxes, and insurance. The cost of debt would not be borne by those employees, making the fee for living there only about one third of market rent. Also, those base costs do not fluctuate as rapidly as the market does, so the housing can be a stable base for operating the non-profit.

If this model were to do as well as Free and Open Source Software did, then maybe it could expand to 5% of all the housing units in a county. If we had easy-to-adopt Public Licenses and the incentives aligned, people might voluntarily contribute properties, or funding for down payments, toward this transition.

The interlocking nature of Public Licenses could also benefit Foundation Housing by facilitating a pool of houses that could be interchanged between non-profits. In this way, employees might have the choice of many different Foundation Houses, even ones started to serve other non-profits. This could also encourage the mixing of employees from different non-profits, building a wider community.

Free and Open Housing could be made using this model of Foundation Housing, which would take no rent over the base costs (more like a condominium fee) and would have no owners who could profit from selling the property out from underneath the tenants.[2]

Hopefully we would build a secure and enduring housing base that would be permanently affordable to those who work in the non-profit sector.

If this idea works, we will need to get financing for these buildings. As I found when starting a cooperative house in the Boston area in the 1980’s, banks are not inclined to lend to such organizations.

Free and Open Banking?

Building a lending institution that would support our “free and open” environment seemed like a natural step, so the Internet Archive founded the non-profit, member-run Internet Credit Union in 2012 for this purpose.

Credit unions are tightly regulated, and in fact they are mostly being shut down by the federal regulator, at about 300 a year, while only one or two are allowed to start each year. Having experienced the tight strictures put on credit unions, we understand this death-to-birth ratio.

It may take years before our new credit union is allowed to lend money to help the housing situation, but the experiment has started. How it might expand to fulfill a Free Software-like role is still to be explored. For now, the credit union is trying to navigate the regulatory environment.

Conclusion

All in all, there seem to be ways to leverage Public Licenses to build non-governmental, non-corporate systems that create free and open structures for producing our goods and services. Pioneered in reaction to the “over-propertization” of software in the early 1980’s, the approach has come to be a way of dealing with similar trends in books, music, video, and journal articles. Applying it further, to patents and even to organizational structures as an alternative to the joint stock company, is being tested. If we can extend it to housing, food, and even education, we might be able to create more free and open societies that serve many basic needs in a way that reflects the interests of the majority of people.

This creative approach to building organizations and societies that operate under different rules, implemented with public licenses, offers hope that we can live more productive and sharing lives.



[1] In one lawsuit in which I testified, Apple v. Samsung, there were 45 full-time lawyers on the Samsung side alone, on just the US version of the lawsuit.

[2] Rules being explored for Foundation Houses:

Owned by a special purpose non-profit (independent of the non-profits it serves),

Have 2 choices: market-based rent & debt-free, cost-based rent (like a condo fee),

Benefit particular non-profits; if there is extra money, it goes toward making more Foundation Houses.

Posted in Housing, Uncategorized | 10 Comments