
Concept of High Performance Computing

Author Affiliations

  • 1Department of Computer Science and Engineering, Bhilai Institute of Technology, Durg, C.G., India
  • 2Department of Computer Science and Engineering, Bhilai Institute of Technology, Durg, C.G., India
  • 3Department of Computer Science and Engineering, Bhilai Institute of Technology, Durg, C.G., India

Res. J. Computer & IT Sci., Volume 11, Issue (1), Pages 1-6, June (2023)

Abstract

For any computational problem there can be more than one solution, each with different computational resource demands and execution times. Execution time is one of the most prominent factors when comparing the performance of these solutions. High performance computing (HPC) techniques and models deal with the challenge of handling problems at massive scale using computing infrastructure, tools, techniques, and parallel algorithm design and programming skills. With the advent of new HPC paradigms and significant improvements in processor design, it is now feasible to apply HPC techniques to many new compute-intensive domains. The game changer has been the developments of the last decade, in which the introduction of GPUs and FPGAs revolutionized the HPC space. This paper presents a comprehensive review of the three major computational models used in HPC: multi-core, cluster, and GPGPU.
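
As an illustration of the shared-memory multi-core model named in the abstract, the following is a minimal sketch (not taken from the paper) using OpenMP, a common API for this model. The problem chosen (vector addition), the array size n, and the variable names are illustrative assumptions only. It is expected to compile with an OpenMP-enabled compiler, e.g. gcc -fopenmp.

    /* Minimal OpenMP sketch of the shared-memory multi-core model:
     * the loop iterations are split across the available CPU cores. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void) {
        const int n = 1000000;                    /* illustrative problem size */
        double *a = malloc(n * sizeof(double));
        double *b = malloc(n * sizeof(double));
        double *c = malloc(n * sizeof(double));

        /* Initialise the input vectors sequentially. */
        for (int i = 0; i < n; i++) {
            a[i] = i;
            b[i] = 2.0 * i;
        }

        /* Each thread receives a disjoint chunk of the iteration space
         * and works on the shared arrays without explicit message passing. */
        #pragma omp parallel for
        for (int i = 0; i < n; i++) {
            c[i] = a[i] + b[i];
        }

        printf("c[n-1] = %f (threads available: %d)\n",
               c[n - 1], omp_get_max_threads());

        free(a); free(b); free(c);
        return 0;
    }

In contrast, the cluster model would distribute the arrays across separate address spaces and exchange data with message passing (e.g. MPI), and the GPGPU model would offload the loop body as a kernel to the GPU.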
