
Volume 24 (4) 2018, 249–258

Using GPU Accelerators for Parallel Simulations in Material Physics

Mariusz Uchroński 1*, Paweł Potasz 2, Agnieszka Szymańska-Kwiecień 1, Mariusz Hruszowiec 3

1 Wroclaw Centre of Networking and Supercomputing (WCSS)
Wroclaw University of Science and Technology

2 Department of Theoretical Physics
Wroclaw University of Science and Technology

3 Department of Telecommunications and Teleinformatics
Wroclaw University of Science and Technology

*E-mail: mariusz.uchronski@pwr.edu.pl

Received: 13 April 2018; revised: 24 November 2018; accepted: 26 November 2018; published online: 24 December 2018

DOI: 10.12921/cmst.2018.0000025

Abstract:

This work focuses on the parallel simulation of electron-electron interactions in materials with non-trivial topological order (i.e., Chern insulators). The electron-electron interaction problem can be solved by diagonalizing a many-body Hamiltonian matrix in a basis of configurations of electrons distributed among the possible single-particle energy levels – the configuration interaction method. The number of possible configurations increases exponentially with the number of electrons and energy levels: 12 electrons occupying 24 energy levels corresponds to a Hilbert space of dimension about 10⁶. Solving such a problem requires effective computational methods and highly efficient optimization of the source code. The work concentrates on many-body effects related to strongly interacting electrons on flat bands with non-trivial topology. Such systems are expected to be useful for studying and understanding new topological phases of matter and may, further in the future, be used to design novel nanomaterials. A heterogeneous architecture based on GPU accelerators and MPI nodes will be used to improve performance and scalability when solving the electron-electron interaction problem in parallel.
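As a concrete illustration of the basis size quoted above, the short Python sketch below enumerates configuration-interaction basis states as occupation bitmasks and computes the Hilbert-space dimension for 12 electrons on 24 single-particle levels, C(24,12) = 2,704,156, i.e. about 10⁶. This is an editor's illustration, not code from the paper (whose stack, per the keywords, is OpenMP/MPI on GPU accelerators); the bitmask encoding and the names M and N are assumptions made for the example.

# Illustrative sketch (not from the paper): the configuration-interaction
# basis for N electrons on M single-particle levels. Each basis state is an
# occupation bitmask with N of M bits set; the Hilbert-space dimension is
# the binomial coefficient C(M, N).
from math import comb
from itertools import combinations

M, N = 24, 12  # levels and electrons, matching the example in the abstract

# Dimension of the many-body Hilbert space.
dim = comb(M, N)
print(f"dim = C({M},{N}) = {dim:,}")  # prints: dim = C(24,12) = 2,704,156

def configurations(m: int, n: int):
    """Yield each n-electron configuration on m levels as a bitmask."""
    for occupied in combinations(range(m), n):
        mask = 0
        for level in occupied:
            mask |= 1 << level
        yield mask

# Show the first few basis states as binary occupation strings.
for i, mask in enumerate(configurations(M, N)):
    if i == 3:
        break
    print(f"state {i}: {mask:0{M}b}")

At this dimension a dense Hamiltonian matrix would already be far too large to store explicitly, which is why efficient, parallel implementations of the kind discussed in the paper are needed.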

Key words:

GPU computing, material physics, MPI, OpenMP

