Interesting idea, but I suspect if you tried it you'd quickly run into practical limitations. People I know who've tried running build farms on "development board" hardware have found that the stress of 24x7 100% compute hits kernel bugs or overheating issues that 'normal use' doesn't, and I suspect mobile phones would be similar. The pace of phone evolution means you'd have an ongoing effort to get new kinds of devices into your 'cluster' (figuring out how to root them, identifying how to network them, rigging up some kind of custom rack-mounting hardware, etc). And there'd be a higher turnover of 'replace expired device' work than with fewer, newer, designed-for-server nodes.<p>The paper also suggests running a hypervisor on these, but I suspect you'd find that the firmware/boot ROM locks you out of EL2 (hypervisor mode) on most hardware.
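There's a quick way to check that on any given device, by the way (a sketch, assuming a rooted phone running a Linux userland with a KVM-enabled kernel): on arm64, the kernel can only enable KVM if it was entered at EL2, so the presence of /dev/kvm tells you whether the boot chain locked you out.

    import os

    # On arm64 Linux, KVM only works if the kernel booted at EL2, so a
    # missing /dev/kvm suggests the firmware/boot ROM dropped us to EL1.
    if os.path.exists("/dev/kvm"):
        print("EL2 was available at boot: hypervisor mode usable")
    else:
        print("no /dev/kvm: likely locked out of EL2 (or KVM not built in)")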
My company is working on pocket-sized routers/firewalls/DNS servers.<p>These sit between the "smartphone" and the untrusted (e.g., public) wifi router, acting as a user-controlled gateway.<p>The user makes a one-time change to the settings on their smartphone to use the user-controlled gateway. This enables blocking ads and other unwanted outgoing connections without having to be physically at home or work, i.e. in a location where the user can "trust" the network.<p>The pocket-sized router runs open source software chosen and installed by the user. Users have a generous choice of operating systems, from Plan9 to BSD to Linux, as they do for the RPi. Baseband is either absent or physically disabled.<p>The main advantage I see to leveraging old "phones" for this is the power supply.<p>While I have seen small form factor routers made for travel, they are generally not rechargeable.
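The ad-blocking core of such a gateway can be surprisingly small. A minimal sketch of a blocking DNS forwarder in Python (the blocklist entry and upstream resolver are placeholders, and EDNS niceties are ignored):

    import socket

    BLOCKLIST = {"ads.example.com"}   # placeholder blocked domain
    UPSTREAM = ("9.9.9.9", 53)        # placeholder upstream resolver

    def qname(packet):
        # Extract the query name from the (uncompressed) question section.
        labels, i = [], 12            # the DNS header is 12 bytes
        while packet[i]:
            n = packet[i]
            labels.append(packet[i + 1:i + 1 + n].decode())
            i += n + 1
        return ".".join(labels)

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 53))        # binding port 53 needs root
    while True:
        query, client = sock.recvfrom(512)
        if qname(query) in BLOCKLIST:
            # Reply NXDOMAIN: copy the ID, set QR/RD/RA and RCODE=3,
            # zero the three answer counts, echo the question back.
            reply = query[:2] + b"\x81\x83" + query[4:6] + b"\x00" * 6 + query[12:]
            sock.sendto(reply, client)
        else:
            relay = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
            relay.sendto(query, UPSTREAM)
            response, _ = relay.recvfrom(4096)
            sock.sendto(response, client)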
It seems like you'd encounter so many challenges in taking this path, although I think it's awesome that people are seriously looking into this.<p>One point the article doesn't mention is that mobile devices don't have ECC RAM. I'm not very familiar with the server space, but I thought that was pretty much a standard requirement? e.g. if you're providing IaaS, isn't the risk of falling prey to rowhammer attacks a serious concern?<p>What I'd love to see is something similar but for home users. Instead of continuing to push stuff into third-party services, you could hook up a mix of devices to run services and applications from home. Bring back the distributed internet dream! Most home services don't need tons of power or high availability. The biggest risk is probably in handling data backups, which you can easily solve by encrypting assets and pushing them up to some cloud provider.
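On the backup point, the encrypt-before-upload step really is just a few lines. A sketch using the third-party cryptography package (file names are placeholders):

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()        # keep this offline, never with the backup
    with open("photos.tar", "rb") as f:
        blob = Fernet(key).encrypt(f.read())
    with open("photos.tar.enc", "wb") as f:
        f.write(blob)                  # opaque blob, safe to push to any cloud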
The problem with these devices is that they are often locked down, and even once unlocked/hacked open, getting them to run anything like mainline Linux is hard-ish.
Well, the cost of a Raspberry Pi for a home server is actually quite high when compared to using the busted or obsolete phone gathering dust in your drawer.<p>Since there are probably <i>millions</i> of these, why not recycle them into cheap servers, given the benefits of their low power use (and the possible ability to draw power to charge during cheap periods and run off the battery during expensive ones)?
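That charge-during-cheap-hours idea is scriptable on a rooted phone, though the sysfs node for charge control is vendor-specific; the path below is an assumption, as is the tariff window.

    import datetime
    from pathlib import Path

    # Hypothetical vendor-specific sysfs node (needs root); many kernels
    # expose something similar for toggling charging on and off.
    CTRL = Path("/sys/class/power_supply/battery/charging_enabled")
    CAPACITY = Path("/sys/class/power_supply/battery/capacity")
    CHEAP_HOURS = range(0, 7)   # assumed off-peak tariff window

    def tick():
        level = int(CAPACITY.read_text())
        hour = datetime.datetime.now().hour
        # Charge when electricity is cheap or the battery is nearly flat;
        # otherwise ride out the expensive hours on battery.
        CTRL.write_text("1" if hour in CHEAP_HOURS or level < 20 else "0")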
In practice, new CPUs have so far been so much more power-efficient than older ones that this approach makes little sense. But it is worth noting that one of the reasons this <i>might</i> remotely work is on slide 7: diminishing growth in CPU performance / the end of Moore's Law. I'm fairly optimistic about the 10nm node still providing a power-saving boost, but in the long run this sort of thinking could get interesting.
I have at least 4 Android phones that either have a broken screen (but are still usable) or are just too old to run a recent version of Android. Does anybody know how I can use them as computers or even servers? How do I connect them to the network without using wifi? They obviously don't come with a network port.
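One option, sketched below with assumed port numbers, is to skip the network stack entirely and tunnel over the USB cable with adb (this assumes USB debugging is enabled and something like Termux's sshd is listening on port 8022 on the phone):

    import subprocess

    # Forward a host port to the phone over USB; no wifi or ethernet needed.
    subprocess.run(["adb", "forward", "tcp:8022", "tcp:8022"], check=True)
    # Now `ssh -p 8022 localhost` on the host lands on the phone's sshd.

The other common route is a USB-OTG ethernet adapter, if the kernel on the phone has the driver for it.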
Imagine those in the attic, connected to an internet connection, forming a mesh net with all the houses nearby, activated when distributed computation power is needed.<p>Converting abundant power into computation.
At a previous company we had a 'farm' of test phones that we used for testing new versions of software. A large number of those tests were 'stress testing', which would probably be analogous to the sort of load you'd see if they were treated as a 'datacenter'.<p>Based on that, I'm pretty sure the biggest issue with managing the cluster would be the extremely high failure rate.<p>That's not to say this won't work, because I haven't done the maths, but it'd be a unique challenge.
Skimming through the article, I think it does not include the cost of labor. In a developed country, my guess is that the cost of having a human worker collect, clean, set up, connect, etc. all 84 phones would actually be higher than buying a new server.<p>In third-world countries, this could be a different story. I would be happy to send my old phones to Africa to be used at schools.
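A back-of-envelope version of that guess (every number here is an assumption, not data):

    phones = 84
    minutes_per_phone = 45   # assumed: collect, wipe, root, flash, cable up
    hourly_wage = 30.0       # assumed developed-country technician rate, $/h

    labor_cost = phones * (minutes_per_phone / 60) * hourly_wage
    print(f"setup labor alone: ${labor_cost:,.0f}")   # -> $1,890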
Add a factor of two to the material costs, because without ECC, at a large scale you will have errors, so you need to double-check every result.<p>Now add some more to their TCO because they don't take MTBF into account.<p>Suddenly the numbers do not look good anymore.
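The double-checking itself is cheap to express but expensive to run. A sketch of the run-twice-and-vote approach (the function name is a placeholder):

    def checked(job, *args):
        # Without ECC you can't trust a single run, so execute twice and
        # accept only matching results; a third run breaks any tie.
        a, b = job(*args), job(*args)
        if a == b:
            return a
        c = job(*args)
        if c in (a, b):
            return c
        raise RuntimeError("three divergent results; suspect a bad node")

That repeated execution is exactly where the factor of two (occasionally three) in material cost comes from.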
I'm not sure they really did a good comparison here. Cost-comparing an 8-socket system with 2 CPUs installed, bought new, against broken phones bought on eBay isn't really fair. If I were going for cheap compute, I'd look for used (or otherwise discounted) Chromeboxes: no distributed UPS, but small, with gigabit ethernet, reasonably current Intel processors for good IPC, and not terribly large.
I saw a talk at my university by a prolific robotics engineer who explained that for small drones designed to work by swarming and collaborating, it is much more cost-effective to use old Samsung Galaxy phones as the "brain" of a drone platform than to integrate standalone CPUs. As far as I know, they still use this as their primary design strategy.
I used to joke that phones would be the future in datacenters, because nobody is designing ARM SoCs for servers, and those that do make them perform worse than phones.<p>The paper completely ignores the lack of ECC, hardware reliability, and the lack of software support. Especially if you consider that the majority of mobile GPUs don't support OpenCL or don't even have drivers available for modern kernels. You would need to support your software for each device type or rewrite your software as an Android app. But since developer and sysadmin time is far more valuable, you're better off just buying an x86 server.
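You can see the driver problem directly by probing for OpenCL. A sketch using the third-party pyopencl package; on most phone SoCs this prints nothing or raises, because the vendor never shipped a usable driver:

    import pyopencl as cl

    try:
        for platform in cl.get_platforms():
            for device in platform.get_devices():
                print(platform.name, "->", device.name, device.version)
    except cl.LogicError as e:
        # Typical on mobile SoCs: no ICD / no driver for the GPU at all.
        print("no usable OpenCL platform:", e)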
This was an establishing plot point in Rudy Rucker's short story Hormiga Canyon, published in Asimov's August 2007 issue. He also included the benefit of voice recognition on any node :)<p>You can read it here:
<a href="http://www.rudyrucker.com/transrealbooks/completestories/#_Toc53" rel="nofollow">http://www.rudyrucker.com/transrealbooks/completestories/#_T...</a><p>(warning: Rudy's stories are... zany)
Energy efficiency at equal compute power has been a major reason not to use old computers for computation: newer devices do the same work for less power. Maybe that principle carries over to low-powered mobile devices too; certainly the basic economics of computations per unit of power consumed prevail at every order of magnitude.
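The break-even arithmetic is simple enough to sketch (all figures below are made-up placeholders, not measurements):

    # Free old hardware vs. buying a new, more efficient node of equal throughput.
    old_watts, new_watts = 60.0, 15.0   # assumed draw at the same workload
    price_per_kwh = 0.30                # assumed electricity tariff, $/kWh
    new_node_cost = 400.0               # assumed purchase price

    extra_kwh_per_year = (old_watts - new_watts) * 24 * 365 / 1000
    yearly_savings = extra_kwh_per_year * price_per_kwh
    print(f"new node pays for itself in {new_node_cost / yearly_savings:.1f} years")

With these placeholder numbers the new node wins in about 3.4 years; whether the old phones ever come out ahead depends entirely on the real wattage gap and your tariff.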