> $5 million dollar grant<p>For fiber? So I assume that they aren't going to be doing much digging, rather that they are going to string a few more lines along already-existing paths.<p>>will make it possible to move data at speeds of 10 gigabits to 100 gigabits.<p>Wow.<p>In all seriousness: Kudos to whoever wrote the grant application. 5mil will keep people in work. But this project won't come anywhere near what is necessary to stream the data from 1% of the LHC's detectors. A private high-speed network is all well and good, but this isn't anything remarkable.
This article is so bad you can't even tell which grant, out of many similar ones, it probably is. About fifteen years ago, researchers were already wrestling with the problem of moving large data stores: whether it's faster to do it over a network or with suitcases full of tapes, who pays for the storage, where it's kept, and who is allowed to access it. These grants are usually denied, so each group of researchers scrapes up its own little partial datastore and ships graduate students and postdocs back and forth from the uni to CERN/DESY/Fermilab/SLAC/wherever, sometimes with suitcases full of tapes on the return trips. I'm a little surprised that it's still a problem for them.
So are they talking about software or hardware?<p>They talk about the LHC, but the innovation there was using a different file system: GPFS (<a href="http://iopscience.iop.org/1742-6596/219/7/072030" rel="nofollow">http://iopscience.iop.org/1742-6596/219/7/072030</a>), which meant that data is sharded, managed by age, and transparently and intelligently cached<p>Are they instead talking about replacing TCP with something more designed for bulk data transfer?<p>Or are they talking about lighting up fibre with different transmitter pairs? (think DWDM x 10 <a href="http://www.webopedia.com/TERM/D/DWDM.html" rel="nofollow">http://www.webopedia.com/TERM/D/DWDM.html</a>)<p>For 5 million, I'd assume it's software. If that's the case, it's pretty much just copying what everyone else has been doing:<p><a href="http://filecatalyst.com/" rel="nofollow">http://filecatalyst.com/</a>
<a href="http://asperasoft.com/" rel="nofollow">http://asperasoft.com/</a><p>for opensource there is:
<a href="http://uftp-multicast.sourceforge.net/" rel="nofollow">http://uftp-multicast.sourceforge.net/</a>
<a href="http://tsunami-udp.sourceforge.net/" rel="nofollow">http://tsunami-udp.sourceforge.net/</a>
<a href="https://github.com/facebook/wdt" rel="nofollow">https://github.com/facebook/wdt</a><p>And a myriad of others. Multi-stream TCP is fairly simple, as the application doesn't have to deal with rate limiting or error correction.
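To make the multi-stream point concrete, here's a toy sketch in Python: split a buffer into N slices, push each slice over its own TCP connection, and reassemble by stream index. Everything here (payload size, one-byte framing, localhost) is made up for illustration; it's not how any of the tools above actually work.

```python
# Toy multi-stream TCP transfer over localhost. TCP handles ordering and
# retransmission per stream, so the app only splits and reassembles.
import socket
import threading

DATA = b"x" * (4 * 1024 * 1024)   # 4 MiB payload (arbitrary)
STREAMS = 4
CHUNK = len(DATA) // STREAMS      # assumes STREAMS divides the payload evenly

received = [None] * STREAMS
recv_threads = []

def recv_stream(conn):
    # First byte of each connection names the slice it carries.
    idx = conn.recv(1)[0]
    buf = bytearray()
    while len(buf) < CHUNK:
        part = conn.recv(65536)
        if not part:
            break
        buf += part
    received[idx] = bytes(buf)
    conn.close()

def serve(listener):
    # Accept one connection per stream, each drained by its own thread.
    for _ in range(STREAMS):
        conn, _ = listener.accept()
        t = threading.Thread(target=recv_stream, args=(conn,))
        t.start()
        recv_threads.append(t)

listener = socket.create_server(("127.0.0.1", 0))
port = listener.getsockname()[1]
server = threading.Thread(target=serve, args=(listener,))
server.start()

def send_stream(idx):
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(bytes([idx]) + DATA[idx * CHUNK:(idx + 1) * CHUNK])

senders = [threading.Thread(target=send_stream, args=(i,)) for i in range(STREAMS)]
for t in senders:
    t.start()
for t in senders:
    t.join()
server.join()
for t in recv_threads:
    t.join()

assert b"".join(received) == DATA
print("reassembled", len(DATA), "bytes over", STREAMS, "streams")
```

The win in practice isn't this reassembly logic; it's that N parallel congestion windows ramp up and recover from loss independently, which is why the commercial tools above go further and ditch TCP for UDP with their own rate control.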
Is this related to Internet2?<p><a href="http://www.internet2.edu/" rel="nofollow">http://www.internet2.edu/</a><p><a href="https://en.wikipedia.org/wiki/Internet2" rel="nofollow">https://en.wikipedia.org/wiki/Internet2</a><p>Edit: Maybe they are parallel projects? TFA says they have invested in about 100 campuses, but about 250 were already part of Internet2 in 2013.
I don't really see why this is NYT-worthy. State paper, sure, but NYT? This isn't particularly groundbreaking.<p>There are existing regional sci/edu networks doing 100G as well as field-specific networks (e.g. ESnet).
"FedEx is still faster than the Internet" <a href="https://what-if.xkcd.com/31/" rel="nofollow">https://what-if.xkcd.com/31/</a><p>Of course, the applications here are probably quite different -- this research grant may be more geared toward building a faster network to handle large amounts of streaming input.
> The challenge in moving large amounts of scientific data is that the open Internet is designed for transferring small amounts of data, like web pages<p>Isn't there FTP for large data transfers?
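FTP exists, but it moves files over a single TCP connection, and a single TCP stream on a long fat pipe is capped at roughly window_size / RTT. A quick bandwidth-delay-product calculation (round numbers assumed, not from the article) shows why that hurts:

```python
# Max throughput of one TCP stream is ~ window / RTT, so hitting a target
# rate needs a window of at least bandwidth * delay. Assumed round numbers:
rtt_s = 0.080                  # ~80 ms transcontinental round trip
target_bps = 10e9              # 10 Gb/s goal

bdp_bytes = target_bps / 8 * rtt_s
print(f"window needed: {bdp_bytes / 1e6:.0f} MB in flight")
```

A hundred megabytes in flight per stream, with any single packet loss collapsing the window, is the usual argument for parallel streams or UDP-based transfer tools rather than plain FTP.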
This is probably what this article is referring to.<p>NSF Gives Green Light to Pacific Research Platform- UC San Diego, UC Berkeley lead creation of West Coast big data freeway system.<p><a href="http://cenic.org/news/item/nsf-gives-green-light-to-pacific-research-platform-uc-san-diego-uc-berkeley" rel="nofollow">http://cenic.org/news/item/nsf-gives-green-light-to-pacific-...</a>
don't see how this is any different from the educational backbone infrastructure in the UK, like JANET.<p>JANET has been around since (and before) I was at school 15 years ago, and has kept pace; more info at <a href="https://www.jisc.ac.uk/janet" rel="nofollow">https://www.jisc.ac.uk/janet</a>
Unfortunately, due to net neutrality laws, this cannot be connected to the Internet. If these laws did not exist, "super networks" such as this could be defined in software and spun up at a moment's notice, like AWS boxes. I'm guessing that's why the price tag here seems so low: all the hardware is already installed and ready to go, and this project is just to set up a network that uses it to its full capacity.