Supercomputers vs Clusters

Discussions focus on the differences between supercomputers and large HPC clusters, including interconnect speeds, architectures, energy efficiency, and suitability for tightly coupled parallel tasks such as simulations versus embarrassingly parallel workloads.

Category: Hardware
Trend: 📉 Falling (0.3x)
Comments: 3,653
Years Active: 20
Top Authors: 5
Topic ID: #4054

Activity Over Time (comments per year)

2007: 7     2008: 42    2009: 55    2010: 94    2011: 86
2012: 111   2013: 151   2014: 151   2015: 255   2016: 207
2017: 260   2018: 238   2019: 206   2020: 319   2021: 215
2022: 308   2023: 356   2024: 367   2025: 209   2026: 16

Keywords

RAM CPU AWS LLNL NASA BSP AI NERSC UPC IPC hpc supercomputer clusters parallel simulations nodes synchronization simulation cluster running

Sample Comments

semi-extrinsic Nov 12, 2015 View on HN

Looks like you need to use Scala for this. As for the comparison with supercomputing: this falls decidedly in the abstracted/virtualized/cloud category, which is basically the opposite of a supercomputer, where you're very close to the metal (i.e. C/C++ or Fortran). Efficient supercomputer usage requires hard thinking about communications (both between nodes with MPI and between the RAM and the CPU on each node), avoiding cache thrashing at all costs, and making use of all t
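
A minimal C sketch of the "close to the metal" concerns described above: explicit MPI message passing between nodes plus a unit-stride update loop that stays cache-friendly. The ring topology, array size, and stencil are illustrative assumptions, not taken from the comment.

    #include <mpi.h>
    #include <stdlib.h>

    #define N 1000000  /* local array size, illustrative only */

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double *a = malloc((N + 2) * sizeof *a);   /* +2 ghost cells */
        for (int i = 0; i < N + 2; i++) a[i] = rank;

        int left  = (rank - 1 + size) % size;
        int right = (rank + 1) % size;

        /* explicit inter-node communication: swap ghost cells with ring neighbours */
        MPI_Sendrecv(&a[N], 1, MPI_DOUBLE, right, 0,
                     &a[0], 1, MPI_DOUBLE, left,  0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        MPI_Sendrecv(&a[1],     1, MPI_DOUBLE, left,  1,
                     &a[N + 1], 1, MPI_DOUBLE, right, 1,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        /* unit-stride update: sequential access keeps the cache and prefetcher happy */
        for (int i = 1; i <= N; i++)
            a[i] = 0.5 * (a[i - 1] + a[i + 1]);

        free(a);
        MPI_Finalize();
        return 0;
    }

Each rank owns its slice of the array, and only the two ghost cells cross the network per step; the bulk of the work is local RAM-to-CPU traffic.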

dekhn Aug 13, 2019 View on HN

Supercomputers are exactly the wrong sort of tool to use for this: nearly every supercomputer has crappy disk IO and tons of fast CPU and RAM and network. Using a supercomputer for this would leave the expensive elements like the GPUs nearly idle, and the IO subsystem would be the bottleneck.

alephnil May 4, 2014 View on HN

I see that you talk about "large clusters". In the HPC community a distinction is often made between clusters and supercomputers, where the latter implies a fast interconnect between the nodes, allowing synchronization of data between the steps to be fast. Such a fast interconnect is often required for workloads like weather forecasting or simulation of biomolecules. On clusters without such fast interconnection, it is not possible to parallelize such problems beyond a dozen of
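
A rough illustration of why the interconnect becomes the defining feature for the tightly coupled workloads mentioned above: if every timestep ends in a global reduction (here, agreeing on a stable timestep), the network sits on the critical path thousands of times per run. The step count and the placeholder stability computation are assumptions for the sketch.

    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        double local_dt, global_dt = 1e-3;
        for (int step = 0; step < 10000; step++) {
            /* ... advance the local subdomain, compute a local stability limit ... */
            local_dt = 1e-3;  /* placeholder for a real per-rank computation */

            /* every rank must agree on the next timestep: one collective per step,
               so interconnect latency stalls the whole machine 10,000 times per run */
            MPI_Allreduce(&local_dt, &global_dt, 1, MPI_DOUBLE,
                          MPI_MIN, MPI_COMM_WORLD);
        }

        MPI_Finalize();
        return 0;
    }

On a low-latency HPC fabric each such collective costs on the order of microseconds; over commodity networking it can cost far more, which is roughly where the cluster/supercomputer line gets drawn.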

KenoFischer Feb 9, 2021 View on HN

It's not entirely embarrassingly parallel because photons from nearby light sources interact, so you need to do a parameter exchange. That said, it is true that it doesn't need the low latency fabric that supercomputers traditionally used (though these days the architecture isn't all that different from what you'd have in a traditional data center). We were actually approached by Microsoft to duplicate the demo on Azure, but they ended up being unable to find enough spare ca
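
A C/MPI sketch of the "not entirely embarrassingly parallel" pattern described above: each worker is mostly independent, but overlapping sources force a periodic parameter exchange. The exchange period, parameter count, and use of MPI_Allgather are assumptions for illustration, not the actual mechanism of the demo mentioned.

    #include <mpi.h>
    #include <stdlib.h>

    #define PARAMS 64          /* per-worker parameter vector, illustrative */
    #define EXCHANGE_EVERY 100 /* how often neighbours must reconcile */

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        double local[PARAMS] = {0};
        double *all = malloc((size_t)size * PARAMS * sizeof *all);

        for (int iter = 0; iter < 1000; iter++) {
            /* the bulk of the work is independent per worker */
            for (int i = 0; i < PARAMS; i++) local[i] += 1.0 / (iter + 1);

            /* ...but every so often the overlapping sources must agree,
               which is the part that is not embarrassingly parallel */
            if (iter % EXCHANGE_EVERY == 0)
                MPI_Allgather(local, PARAMS, MPI_DOUBLE,
                              all,   PARAMS, MPI_DOUBLE, MPI_COMM_WORLD);
        }

        free(all);
        MPI_Finalize();
        return 0;
    }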

eerpini Nov 18, 2010 View on HN

Yes, they seem to be using the Power 7 architecture by IBM, which is quite energy efficient compared to most other architectures, and they are also planning to use UPC (Unified Parallel C) or similar languages for applications running on the system, which is a good step towards better usability. But that does not matter much, as a supercomputer is (almost) always used by a small group of highly qualified scientists.

bastardoperator May 13, 2023 View on HN

Aren't most supercomputers clusters of racked machines?

jmakov Nov 6, 2020 View on HN

Those supercomputers must be good for something...

eveningcoffee Aug 13, 2019 View on HN

Well, your impression is just wrong. Simple as that. No shame in it. Today what is called a supercomputer is usually just a cluster (i.e. multiple connected normal-spec computers). It is normally connected with a high-speed interconnect though (100 Gbit/sec and more), which is its most defining capability. Why are they using this cluster? My speculation is that it is probably because it is available and does not have much use for real scientific computing (because it is old
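
The "100 Gbit/sec and more" capability referred to above is usually measured with a ping-pong microbenchmark along these lines; message size and repetition count are arbitrary assumptions, and it needs at least two ranks (e.g. mpirun -np 2).

    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define BYTES (1 << 20)   /* 1 MiB messages, arbitrary */
    #define REPS  1000

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        char *buf = malloc(BYTES);
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();

        /* rank 0 and rank 1 bounce the same buffer back and forth */
        for (int i = 0; i < REPS; i++) {
            if (rank == 0) {
                MPI_Send(buf, BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                MPI_Send(buf, BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }

        if (rank == 0) {
            double t = (MPI_Wtime() - t0) / (2.0 * REPS);  /* one-way time */
            printf("avg one-way time %.2f us, ~%.2f GB/s\n",
                   t * 1e6, BYTES / t / 1e9);
        }

        free(buf);
        MPI_Finalize();
        return 0;
    }

The same code reports microsecond-scale latencies and tens of GB/s on a supercomputer-class fabric, and much less on commodity Ethernet, which makes the distinction eveningcoffee draws easy to verify.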

stingraycharles Sep 17, 2013 View on HN

It's not a waste of resources, it's just a different approach to solving a problem. Hadoop / "big data" clusters make the problem harder to solve (and probably even restrict the types of problems that can be solved) in exchange for cheap hardware. Supercomputers give engineers the ability to solve problems in a traditional manner, while moving the costs over to the hardware.

teekert Jan 28, 2025 View on HN

Was wondering the same, but for HPC clusters :)