Parallel Programming Challenges
This cluster focuses on debates about how easy or difficult it is to implement parallelism in programming languages like Go, Rust, C, and others, including discussions of Amdahl's law, auto-parallelization, and overhead costs.
Sample Comments
I believe it's not the language preventing it but the nature of parallel computing. The overhead of splitting work up and then reuniting the results is high enough to make trivial cases not worth it. OTOH, we now have pretty good compiler autovectorization, which does a lot of parallel magic if you set things up right. But that isn't handled at the language level either.
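The split-and-reunite overhead that comment describes can be made concrete with a small Go sketch (function names and the chunk count are my own, not from the discussion): a parallel sum must spawn goroutines, hand each a slice, and synchronize before combining partial results, and for trivial inputs that bookkeeping costs more than the addition it saves.

```go
package main

import (
	"fmt"
	"sync"
)

// sumSequential adds the elements in one pass.
func sumSequential(xs []int) int {
	total := 0
	for _, x := range xs {
		total += x
	}
	return total
}

// sumParallel splits xs into chunks, sums each chunk in its own
// goroutine, then reunites the partial results. The goroutine
// spawning and WaitGroup synchronization are the overhead the
// comment refers to: for small slices it dominates the actual work.
func sumParallel(xs []int, chunks int) int {
	partial := make([]int, chunks)
	var wg sync.WaitGroup
	size := (len(xs) + chunks - 1) / chunks
	for i := 0; i < chunks; i++ {
		lo, hi := i*size, (i+1)*size
		if hi > len(xs) {
			hi = len(xs)
		}
		if lo >= hi {
			continue
		}
		wg.Add(1)
		go func(i, lo, hi int) {
			defer wg.Done()
			partial[i] = sumSequential(xs[lo:hi])
		}(i, lo, hi)
	}
	wg.Wait()
	return sumSequential(partial)
}

func main() {
	xs := make([]int, 1000)
	for i := range xs {
		xs[i] = i
	}
	// Both paths compute the same total; only the overhead differs.
	fmt.Println(sumSequential(xs), sumParallel(xs, 4))
}
```

Benchmarking both versions on small inputs typically shows the sequential loop winning, which is the comment's point about trivial cases.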
Is any thought being put into adding parallelism in the future?
Doesn't Go also make it relatively easy to write parallel code?
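For what the comment is gesturing at: a minimal fan-out/fan-in in Go really is short, since goroutines and channels are built into the language (the helper name below is my own illustration).

```go
package main

import "fmt"

// squareSum fans out one goroutine per input to square it,
// then fans the results back in over a buffered channel.
func squareSum(inputs []int) int {
	results := make(chan int, len(inputs))
	for _, x := range inputs {
		go func(x int) { results <- x * x }(x)
	}
	sum := 0
	for range inputs {
		sum += <-results
	}
	return sum
}

func main() {
	fmt.Println(squareSum([]int{1, 2, 3, 4})) // 1 + 4 + 9 + 16 = 30
}
```

The channel doubles as both the synchronization point and the way partial results are reunited, which is what keeps the code compact.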
You're describing a performance optimization. Yes, if you write unoptimized code, it often takes some refactoring to support the optimization. Also, Python and Ruby are proof that languages can be tremendously successful at solving a huge array of problems without shared memory parallelism in v1.0. Besides, Crystal is presumably much faster than either of these languages with a single thread, so the relative advantage of (CPU) parallelization is much less.
There are limits to how much parallelism will improve things; see Amdahl's law.
Doesn't most speed-critical software fit a parallel model?
Related:

"Is Parallel Programming Hard, and, If So, What Can You Do About It?" v2 Is Out - https://news.ycombinator.com/item?id=26537298 - March 2021 (75 comments)

Is parallel programming hard, and, if so, what can you do about it? - https://news.ycombinator.com/item?id=22030
What is the cost of parallelism in C, though?
If your code is limited by memory bandwidth: why don't you use a language that gives you more control over memory? Why do you even parallelize?
You mean if compiler theorists learned about parallel processing...