[PD] Super computer made of legos and Raspberry Pi computers

Andy Farnell padawan12 at obiwannabe.co.uk
Mon Sep 17 11:11:21 CEST 2012


Hearing it from the front line is really interesting, Chuck. I am
a little envious of the excitement a project like that must
produce.

Do you know of Joe Deken and the "suitcase supercomputer"
project? He is a big Pd proponent (and a friend of Miller, I believe),
and they are also looking at R-Pi boards for their next
portable cluster (I'm probably telling you stuff you already
know).

best
Andy



On Sun, Sep 16, 2012 at 10:26:56PM -0500, Charles Henry wrote:
> On Sun, Sep 16, 2012 at 3:26 PM, Andy Farnell
> <padawan12 at obiwannabe.co.uk> wrote:
> > On Sun, Sep 16, 2012 at 10:24:45AM -0300, Alexandre Torres Porres wrote:
> >> now my question is:
> >>
> >> can spending 4k to build a Pi supercomputer give you more power and
> >> possibilities than a top-of-the-line Mac, for example (which will cost
> >> just as much, and be a quad-core 2.7 GHz Intel i7, 1.6 GHz bus, 16 GB RAM)?
> >
> >
> > We keep using the word 'supercomputer', and maybe a bit of
> > perspective would help clarify matters of scale.
> ...
> 
> > A supercomputer is, by definition, that which is on the cutting edge of
> > feasible research. Most supercomputers are in a single location and not
> > distributed or opportunistic; they usually have a building dedicated to
> > them and a power supply suitable for a small town of a thousand homes
> > (a few MW). A team of full-time staff is needed to run them. They cost a
> > few hundred million to build and a few tens of millions per year to operate.
> > Current supercomputers are measured in tens of PFLOPS, ten to a hundred
> > times more powerful than the equivalent mainframe, and are primarily
> > used for scientific modelling.
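
To put very rough numbers on Alexandre's 4k question and on that point about
scale, here is a back-of-the-envelope sketch in Python; every per-device
figure in it is an assumed ballpark for 2012-era hardware, not a measurement:

    # Peak-FLOPS arithmetic only; sustained throughput on real workloads
    # (and over 100 Mbit Ethernet between Pis) will be far lower.
    BUDGET = 4000.0            # USD

    PI_COST = 35.0             # Raspberry Pi Model B, board only (assumed)
    PI_GFLOPS = 0.3            # ARM11 CPU, very rough assumed peak

    MAC_GFLOPS = 80.0          # quad-core 2.7 GHz i7, rough assumed peak
    SUPER_GFLOPS = 20e6        # ~20 PFLOPS, top machines circa 2012

    n_pis = int(BUDGET // PI_COST)   # ignores PSUs, SD cards, switches, cables
    pi_cluster_gflops = n_pis * PI_GFLOPS

    print(f"{n_pis} Pis       ~ {pi_cluster_gflops:.0f} GFLOPS peak, before network overhead")
    print(f"one Mac       ~ {MAC_GFLOPS:.0f} GFLOPS peak")
    print(f"supercomputer ~ {SUPER_GFLOPS / 1e6:.0f} PFLOPS, about "
          f"{SUPER_GFLOPS / MAC_GFLOPS:,.0f}x the Mac")

With those made-up figures the lego cluster comes out at a fraction of the
raw peak of the single Mac, and both are vanishingly small next to a PFLOPS
machine; the value of the Pi cluster is educational, not raw throughput.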
> 
> Yeah, but when I tell people what I do, do you think I say "cluster
> computing" or "symmetric multiprocessing" or "CUDA applications engineer"?
> No, I tell them I work with "supercomputers" -- it's not a term for
> practitioners, since there are more specific things to say, ... and it
> keeps people from thinking I'm going to waste time talking about nerdy
> shit that I don't want to talk about anyway :)
> 
> > The current guise of the 'mainframe' is what we would now see as a
> > Data Center, a floor of an industrial unit, probably much like
> > your ISP or hosting company with many rows of racked independent
> > units that can be linked into various cluster configurations
> > for virtual services, network presence and data storage.
> > Aggregate CPU power is in the region of 10 TFLOPS to 0.5 PFLOPS.
> 
> At the moment, I'm (the engineer) putting together the proposal for a
> grant for GPU computing resources (for the researchers and
> scientists).  We're looking to spend about $750,000 on hardware that
> will perform about 100 TFLOPS.  Mostly it will be made up of whatever
> NVIDIA Tesla is most cost/power effective, in servers that will hold 4
> GPUs each.  Altogether, we hope this fills up 5-10 racks (in our shiny new
> energy-efficient data center with 32 racks, which the f'ing fire
> marshal won't let us into for another month, when we've been
> postponed since June anyway).
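
For what it's worth, the shape of a proposal like that can be sanity-checked
with the same kind of arithmetic; in this sketch every constant is an
illustrative assumption, not the actual bill of materials:

    import math

    TARGET_TFLOPS = 100.0
    GPU_TFLOPS = 0.665        # assumed double-precision peak per 2012-era Tesla
    GPUS_PER_SERVER = 4
    SERVERS_PER_RACK = 8      # depends entirely on chassis, power and cooling
    BUDGET = 750_000.0        # USD

    n_gpus = math.ceil(TARGET_TFLOPS / GPU_TFLOPS)
    n_servers = math.ceil(n_gpus / GPUS_PER_SERVER)
    n_racks = math.ceil(n_servers / SERVERS_PER_RACK)

    print(f"{n_gpus} GPUs in {n_servers} four-GPU servers, roughly {n_racks} racks")
    print(f"budget per GPU slot: ${BUDGET / n_gpus:,.0f} "
          f"(GPU plus a share of host, network and installation)")

With those assumed numbers the five-rack end of the 5-10 range looks
plausible, which is about all a sketch like this can tell you.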
> 
> > Supercomputers are still supercomputers: by definition they are
> > beyond wildest imagination and schoolboy fantasies unless
> > you happen to be a scientist who gets to work with them.
> > A bunch of lego bricks networked together does not give you 20 PFLOPS,
> > so it does not a supercomputer make.
> >
> > However, there is a different point of view that has been emerging since
> > the mid-1990s, based on concentrated versus distributed models. Since the
> > clustering of cheap and power-efficient microcomputers is now
> > possible because of operating system and networking advances,
> > we often hear of amazing feats of collective CPU power obtained
> > by hooking together old Xboxes with GPUs (Beowulf - TFLOPS range)
> > or using opportunistic distributed networks to get amazing power
> > out of unused cycles (e.g. SETI at home/BOINC and other volunteer
> > arrays, or 'botnets' used by crackers) (tens to hundreds of TFLOPS).
> 
> Clustering is currently the most scalable model for supercomputers.
> Many expensive options exist for systems with large numbers of cores
> and shared memory--but year after year, more circuits get put on a
> single die.  Generally, when you think of supercomputers these days,
> it's a network of systems that each have a lot of x86_64 cores and
> maybe a nice co-processor (like the NVIDIA Teslas).
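
For anyone on the list who hasn't touched that model: the usual picture is
one process per core (or per GPU) on each node, all talking over MPI. A
minimal sketch using mpi4py (assuming an MPI stack and mpi4py are installed;
the workload here is just a placeholder):

    # Each rank does a chunk of work locally, then the partial results are
    # combined with a reduction over the network.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()      # which process this is
    size = comm.Get_size()      # how many processes are in the job

    # Placeholder "work": each rank sums its own strided slice of 0..N-1.
    N = 1_000_000
    local = sum(range(rank, N, size))

    total = comm.reduce(local, op=MPI.SUM, root=0)
    if rank == 0:
        print(f"{size} ranks computed total {total}")

You would launch it with something like 'mpirun -np 64 python sum.py' spread
across the nodes; a CUDA co-processor slots in where the placeholder work is.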
> 
> Some of the IBM machines (and Cray, still?) use pipelined multi-core
> processors of a different architecture and 1000s of cores on a single
> system, but I don't see that as a trend that will survive.
> 
> Chuck


