Big data, in terms of its volume, intensity, and complexity, is one of the main challenges facing statisticians. The availability of diverse data sources has led to the collection of colossal data volumes that require new ways of being managed and analyzed. Existing analytical tools cannot easily be applied to such volumes due to memory and computation constraints. Historically, statistical applications and traditional high-performance computing (HPC) have followed independent paths; however, important opportunities now arise from merging the two. As a prominent big data application, statistics is increasingly performance-bound across many fields. HPC is becoming increasingly significant both in scaling existing statistical methods to larger and more complex applications and in developing novel methods that are amenable to scaling within the constraints of modern HPC architectures. This minisymposium aims to showcase existing efforts to harness HPC capabilities across different branches of statistics in the service of large-scale statistical applications. It also aims to cover the current challenges and opportunities in exploiting modern HPC technologies to accelerate applications in applied statistics.