Supercomputers vs Clusters
Discussions focus on the differences between supercomputers and large HPC clusters, including interconnect speeds, architectures, energy efficiency, and suitability for parallel tasks like simulations versus embarrassingly parallel workloads.
Sample Comments
Looks like you need to use Scala for this. As for the comparison with supercomputing: this falls decidedly in the abstracted/virtualized/cloud category, which is basically the opposite of a supercomputer, where you're very close to the metal (i.e. C/C++ or Fortran). Efficient supercomputer usage requires hard thinking about communications (both between nodes with MPI and between the RAM and the CPU on each node), avoiding cache thrashing at all costs, and making use of all t
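As a rough illustration of the explicit inter-node communication this comment refers to, here is a minimal MPI halo-exchange sketch in C; the neighbour layout and buffer contents are hypothetical, not taken from any particular code.

```c
/* Minimal 1-D halo exchange: each rank swaps one boundary value with its
 * neighbours. Build with an MPI compiler wrapper, e.g. `mpicc halo.c -o halo`. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
    int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

    double local = (double)rank;               /* stand-in for a boundary cell */
    double from_left = -1.0, from_right = -1.0; /* sentinels when no neighbour */

    /* Exchange boundary values with both neighbours. */
    MPI_Sendrecv(&local, 1, MPI_DOUBLE, right, 0,
                 &from_left, 1, MPI_DOUBLE, left, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&local, 1, MPI_DOUBLE, left, 1,
                 &from_right, 1, MPI_DOUBLE, right, 1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    printf("rank %d received %.1f from left, %.1f from right\n",
           rank, from_left, from_right);

    MPI_Finalize();
    return 0;
}
```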
Supercomputers are exactly the wrong sort of tool to use for this: nearly every supercomputer has crappy disk IO and tons of fast CPU, RAM, and network. Using a supercomputer for this would leave the expensive elements like the GPUs nearly idle, and the IO subsystem would be the bottleneck.
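A back-of-the-envelope version of that bottleneck argument, with purely illustrative bandwidth figures (none of these numbers describe a real machine):

```c
/* Toy estimate: time to stream a hypothetical dataset off disk vs. time for
 * the GPUs to process it. All numbers are illustrative assumptions only. */
#include <stdio.h>

int main(void) {
    double dataset_gb    = 100000.0; /* 100 TB of input data                 */
    double disk_gb_per_s = 2.0;      /* assumed aggregate filesystem read rate */
    double gpu_gb_per_s  = 500.0;    /* assumed rate GPUs could consume data  */

    double io_hours  = dataset_gb / disk_gb_per_s / 3600.0;
    double gpu_hours = dataset_gb / gpu_gb_per_s / 3600.0;

    printf("I/O time:  %.1f h\n", io_hours);   /* ~13.9 h                      */
    printf("GPU time:  %.2f h\n", gpu_hours);  /* ~0.06 h -> GPUs mostly idle  */
    return 0;
}
```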
I see that you talk about "large clusters". In the HPC community a distinction is often made between clusters and supercomputers, where the latter implies a fast interconnect between the nodes, allowing data to be synchronized quickly between steps. Such a fast interconnect is often required for workloads like weather forecasting or the simulation of biomolecules. On clusters without such a fast interconnect, it is not possible to parallelize such problems beyond a dozen of
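A sketch of why the interconnect matters for such tightly coupled workloads: each time step ends with a collective operation, so every rank waits on the network before it can proceed. The step count and the trivial "compute" stand-in below are placeholders.

```c
/* Tightly coupled time-stepping loop: every step ends with a global
 * reduction, so per-step interconnect latency sets the overall pace. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    double local_energy  = 1.0;   /* stand-in for this rank's local state */
    double global_energy = 0.0;
    const int steps = 1000;

    for (int step = 0; step < steps; step++) {
        /* ... local computation for this time step would go here ... */
        local_energy *= 0.999;

        /* Every rank needs the global value before the next step can start.
         * On a slow interconnect these per-step reductions dominate runtime;
         * on a fast fabric each one costs microseconds. */
        MPI_Allreduce(&local_energy, &global_energy, 1, MPI_DOUBLE,
                      MPI_SUM, MPI_COMM_WORLD);
    }

    if (rank == 0)
        printf("final global energy after %d coupled steps: %g\n",
               steps, global_energy);

    MPI_Finalize();
    return 0;
}
```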
It's not entirely embarrassingly parallel because photons from nearby light sources interact, so you need to do a parameter exchange. That said, it is true that it doesn't need the low-latency fabric that supercomputers traditionally used (though these days the architecture isn't all that different from what you'd have in a traditional data center). We were actually approached by Microsoft to duplicate the demo on Azure, but they ended up being unable to find enough spare capacity.
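A rough sketch of that "mostly independent work plus occasional parameter exchange" pattern, assuming a hypothetical per-rank light parameter and an arbitrary exchange interval:

```c
/* Mostly embarrassingly parallel work with an occasional parameter exchange:
 * each rank works independently but periodically averages a shared parameter
 * (e.g. light-source contributions) with the other ranks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double light_param = (double)(rank + 1); /* hypothetical per-rank estimate */
    const int iterations = 100;
    const int exchange_every = 10;           /* exchanges are rare and cheap   */

    for (int i = 1; i <= iterations; i++) {
        /* ... independent per-rank work (e.g. tracing photons) ... */

        if (i % exchange_every == 0) {
            double sum = 0.0;
            MPI_Allreduce(&light_param, &sum, 1, MPI_DOUBLE,
                          MPI_SUM, MPI_COMM_WORLD);
            light_param = sum / size;        /* agree on the shared parameter */
        }
    }

    printf("rank %d converged on parameter %.3f\n", rank, light_param);
    MPI_Finalize();
    return 0;
}
```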
Yes, they seem to be using IBM's Power 7 architecture, which is quite energy efficient compared to most other architectures, and they are also planning to use UPC (Unified Parallel C) or similar languages for applications running on the system, which is a good step towards better usability. But that does not matter much, as a supercomputer is almost always used by a small group of highly qualified scientists.
Aren't most supercomputers clusters of racked machines?
Those supercomputers must be good for something...
Well, your impression is just wrong. Simple as that. No shame in it. Today what is called a supercomputer is usually just a cluster (i.e. multiple connected normal-spec computers). It is normally connected with a high-speed interconnect though (100 Gbit/sec and more), which is its most defining capability. Why are they using this cluster? My speculation is that it is probably because it is available and does not have much use for real scientific computing (because it is old
It's not a waste of resources, it's just a different approach to solving a problem. Hadoop / "big data" clusters make the problem harder to solve (and probably even restrict the types of problems that can be solved) in exchange for cheap hardware. Supercomputers give engineers the ability to solve problems in a traditional manner, while moving the costs over to the hardware.
Was wondering the same, but for HPC clusters :)