11
–––––––––––––––––––––––
A Framework for Semiautomatic Explicit Parallelization
Ritu Arora, Purushotham Bangalore, and Marjan Mernik
11.1 INTRODUCTION
With advancement in science and technology, computational problems are growing in size and complexity, thereby resulting in higher demand for high-performance computing (HPC) resources. To keep up with competitive pressure, the demand for reduced time to solution is also increasing, and simulations on high-performance computers are being preferred over physical prototype development and testing. Recent studies have shown that though HPC is gradually becoming indispensable for stakeholders' growth, the programming challenges associated with the development of HPC applications (e.g., lack of HPC experts, learning curve, and system manageability) are key deterrents to adoption of HPC on a massive scale [1, 2]. Therefore, a majority of organizations (in science, technology, and business domains) are stalled at the desktop-computing level. Some of the programming challenges associated with HPC application development are the following:
- There are multiple parallel programming platforms and hence multiple parallel programming paradigms, each best suited for a particular platform. For example, message-passing interface (MPI) [3] is best suited for developing parallel programs for distributed-memory architectures, whereas OpenMP [4] is widely used for developing applications for shared-memory architectures (a brief sketch contrasting the two paradigms follows this list).
- It is increasingly difficult ...
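To make the contrast between the two paradigms concrete, the following is a minimal hybrid sketch, not taken from the chapter: each MPI process sums one block of an index range and the partial sums are combined with explicit communication (MPI_Reduce), while within each process an OpenMP parallel loop shares the block among threads. The problem size N and the block decomposition are illustrative assumptions only.

/* Minimal MPI + OpenMP sketch: sum the integers 0..N-1 in parallel. */
#include <stdio.h>
#include <mpi.h>

#define N 1000000L

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Distributed memory: each MPI process owns a contiguous block of indices. */
    long begin = rank * N / size;
    long end   = (rank + 1) * N / size;

    /* Shared memory: OpenMP threads within a process split the block,
       accumulating into local_sum via a reduction clause. */
    double local_sum = 0.0;
    #pragma omp parallel for reduction(+:local_sum)
    for (long i = begin; i < end; i++)
        local_sum += (double)i;

    /* Explicit message passing combines the per-process partial sums at rank 0. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %.0f\n", global_sum);

    MPI_Finalize();
    return 0;
}

Such a program would typically be built with an MPI compiler wrapper with OpenMP enabled (e.g., mpicc -fopenmp) and launched with mpirun; the same computation could also be written with either paradigm alone, which is the choice the chapter's framework aims to help automate.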