<i>"... Just ordered a new laptop and I've decided to try to push as much of my data online as I can to make life more flexible. I'm just curious to see who here does this and what do you use if so? ..."</i><p>WHY<p>I do. Any post I make I have squared away. This allows me to make my site the definitive collection of my data. It also allows google to index it. I have control over my own content and if for some reason a third party site wants to exert control I still have my stuff. Unlike slashdot who didn't allow any tools to save posts. Hence I lost from about 1996 to 2002's worth of comments. [0]<p>HOW<p>There are 2 ways of looking at this. You can either generate you <i>"stuff"</i> from a single point [0] at your machine, save it and pump it out OR use the web apps as clients and suck the data back via RSS, Atom, JSON etc.<p>I've been doing a bit of both. Pushing stuff from my blog engine after I've cached it and now I'm beginning to suck up the various websites I frequent. So for pushing out I have:<p>- OUT: flickr (blog, tags, images) , twitter (snippets), hackerid (hackernews data), links (various links I save including links to hackernews & export to delicious)<p>- IN: hackernews (all posts every 15m, friends),<p>- IN TODO: wordy (words I use), spock (tags), colourlovers (colours), librarything (my library), amazon (new books), lastfm (what I'm currently listening to), delicious (new links I find), twitter (friends), flickr (friends, processed images, text, tags)<p>Now as you can see that's a lot of data. 
Some of the things I'm finding:<p>- It's easier to push than pull if you want an accurate copy, because you save before you export<p>- Pulling means you don't have to write an interface to capture the data first; you simply consume the RSS feed<p>- Not every site has an API or a good RSS feed<p>- Linking data together is not easy except by time, though you could try to match by friend (e.g. a friend is on twitter, flickr and hackernews)<p>- Displaying the data effectively is difficult, simply because of its volume and complexity. A good example of how to do this is <a href="http://friendfeed.com" rel="nofollow">http://friendfeed.com</a>. Clear, simple, and it pretty much allows for good reading.<p>I'm now at a stumbling block with the templating engine I'm using, so I'm pretty keen to just extract the data as Atom, RSS and JSON feeds, either individually or mashed together by date, and write a JavaScript-based website to avoid having to deal with heavyweight blog engines. Let the data go free and see how people use it.<p>The key thing to realise when compiling your data is to timestamp it at the DB level (if you are using one) in ISO 8601 format, and maybe add a tag layer over the top so you get the benefit of tagging across data layers.<p>It's turning out to be an interesting project.<p>[0] <a href="http://goonmail.customer.netspace.net.au/2005DEC131709.html" rel="nofollow">http://goonmail.customer.netspace.net.au/2005DEC131709.html</a>
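The timestamp-plus-tags idea can be sketched in a few lines of Python with sqlite3. Table and column names are illustrative, not anything from the actual project; the point is that every item gets an ISO 8601 timestamp at the DB level (sortable as plain text) and a separate tag table lets you query across data sources:

```python
# Sketch: items from any source share one table with an ISO 8601
# timestamp; a tag table on top allows cross-source queries and
# a "mashed feed" ordered by date.
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY, source TEXT, body TEXT, ts TEXT);
CREATE TABLE tags  (item_id INTEGER, tag TEXT);
""")

def add_item(source, body, tags=()):
    # ISO 8601 timestamps sort correctly as text, so ORDER BY just works.
    ts = datetime.now(timezone.utc).isoformat()
    cur = conn.execute(
        "INSERT INTO items (source, body, ts) VALUES (?, ?, ?)",
        (source, body, ts))
    for tag in tags:
        conn.execute("INSERT INTO tags VALUES (?, ?)", (cur.lastrowid, tag))

add_item("twitter", "a snippet", tags=["snippet"])
add_item("flickr", "a photo caption", tags=["photo", "snippet"])

# A mashed feed: everything tagged 'snippet', newest first, across sources.
rows = conn.execute("""
    SELECT i.source, i.body FROM items i
    JOIN tags t ON t.item_id = i.id
    WHERE t.tag = 'snippet' ORDER BY i.ts DESC
""").fetchall()
print(rows)
```

From here, dumping the query result as JSON or an Atom feed is straightforward, which fits the plan of letting a JavaScript front end do the display work.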