OK, so I was recently reading about a lawsuit against Wikipedia. This is a bit troubling, because if the plaintiff wins, Wikipedia could get shut down. Shutting a site down is typically pretty easy: the plaintiff takes the court order to the hosting ISP, and down goes Wikipedia. This would be a bad thing.

So I started thinking about ways around this. If there were a decentralized kind of web server that mirrored the data across client machines, it would be nearly impossible to shut down.

The idea is that you have one or more tracker sites that store a copy of the site plus info on the participating nodes. Each node dedicates a certain amount of disk space that is claimed by the client program (say 100MB). This bucket is used to store encrypted, signed copies of content fragments. Each node might hold only 10% of a given piece of content, not the whole thing, and because the fragments are encrypted and signed, the nodes don't know what they're storing and can't tamper with it.
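
Just to make this concrete for myself, here's a rough sketch (in Java, since that's what I plan to write this in) of the kind of record the tracker might keep for each piece of content. The class and field names are placeholders, not a settled design:

    import java.util.List;

    // Hypothetical data model: what the tracker knows about one published page.
    public class ContentManifest {
        public String contentId;          // e.g. a URL or a hash of the page
        public String publisherSignature; // signs the manifest so nodes can't tamper with it
        public List<Fragment> fragments;  // the page split into ~10 encrypted pieces

        public static class Fragment {
            public int index;             // position of this piece in the original content
            public String sha256;         // hash of the encrypted fragment, for integrity checks
            public List<String> nodeUrls; // participating nodes currently holding this piece
        }
    }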

When a client requests a page, it actually ends up making something like 10 HTTP requests to different nodes, then assembling the fragments to display the web page/content.
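
Here's a hedged sketch of how the client side of that might look: pull all the fragments in parallel and stitch them back together in order. It uses Java's built-in java.net.http client (Java 11+), and the fragment URLs are assumed to come from the manifest sketch above; real code would also decrypt and verify each piece.

    import java.io.ByteArrayOutputStream;
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import java.util.stream.Collectors;

    public class FragmentFetcher {
        private final HttpClient http = HttpClient.newHttpClient();

        // Fetch every fragment concurrently, then concatenate them in order.
        public byte[] assemble(List<String> fragmentUrls) throws Exception {
            List<CompletableFuture<byte[]>> downloads = fragmentUrls.stream()
                .map(url -> http.sendAsync(
                         HttpRequest.newBuilder(URI.create(url)).GET().build(),
                         HttpResponse.BodyHandlers.ofByteArray())
                     .thenApply(HttpResponse::body))
                .collect(Collectors.toList());

            ByteArrayOutputStream page = new ByteArrayOutputStream();
            for (CompletableFuture<byte[]> download : downloads) {
                page.write(download.join()); // blocks until that fragment has arrived
            }
            return page.toByteArray();       // decryption + signature checks would go here too
        }
    }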

One cool thing about this is that bandwidth would no longer be consumed by the origin server, so really popular sites wouldn't have to pay as much for it. I don't suffer from this problem, but maybe one day. If a site gets a big traffic spike, that could be a couple thousand dollars' worth of bandwidth. With P2PWeb, the server's bandwidth stays low because clients make their requests against multiple participating nodes.

I'll need to write a few pieces of software (a rough sketch of the tracker's interface follows the list):
  • Server tracker software
  • Client node software
  • Client browser plug-in for P2PWeb
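
And here's that rough sketch of the tracker's job, mostly so I know what the server tracker software needs to expose. The method names are guesses, and it reuses the hypothetical ContentManifest from earlier:

    import java.util.List;

    // Rough, hypothetical interface for the tracker -- not a settled API.
    public interface Tracker {
        // A node joins and reports how much disk it's willing to dedicate (e.g. 100MB).
        void registerNode(String nodeUrl, long dedicatedBytes);

        // The publisher hands the tracker a manifest describing a page's encrypted fragments.
        void publish(ContentManifest manifest);

        // A client asks which nodes currently hold the fragments for a given page.
        ContentManifest lookup(String contentId);

        // Periodic check-in so dead nodes can be dropped and their fragments re-replicated.
        void heartbeat(String nodeUrl, List<String> storedFragmentHashes);
    }

The heartbeat is in there because nodes will come and go constantly, so the tracker has to notice when a fragment falls below some replication level and push a copy to another node.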


So the major benefits will be bandwidth savings + decentralization + privacy.

I guess I have something to work on over the winter break. I plan on writing the client in Java and the server in Java. It might suck, but it's the easiest language for me to write in.