The computing cluster is managed by the Physics Department and is funded and operated jointly with other departments, including the Statistics Department. These pages reflect information taken from the Physics Wiki and should be current. If you notice any discrepancies, please defer to the Physics Wiki and contact the Statistics Department so that updates can be made.

The Computing Committee hosts departmental computing resources on GitHub.


For questions related to the cluster, please contact

You can watch the seminar “An Introduction to the UConn Statistics Cluster” at this link; the link works for anyone at the University of Connecticut.


Hardware Configuration

A cluster of 32 computers has been assembled to facilitate parallel computation in the field of statistics. Each computer (“node”), a Dell PowerEdge SC1435, features:

  • 2 x 4-core AMD Opteron 2350 processors (2 GHz)
  • 8 GB of Memory (667 MHz)
  • 250 GB hard drive (SATA, 7.2k RPM, 3 Gbps)

It is best to think of each core as a separate virtual machine or processing slot capable of running one process. The cores remain isolated because the computer cannot distribute a simple, stand-alone process across its cores unless the code itself contains special instructions for doing so. We will therefore refer interchangeably to cores, slots, or virtual machines (of which there are 8 × 32 = 256) rather than to computers or processors as the independent computing units, each having about 1 GB of RAM at its disposal when all cores are uniformly loaded.

The Statistics Cluster is integrated into the existing computing infrastructure of the Nuclear Physics lab in the Physics Department. The lab provides computing resources as well as networking, file-server, security, and other services. Below is a rough list of the computing resources available for use (as of 2010):

                      Statistics   Physics   Physics   Geophysics
Architecture          64 bit       64 bit    32 bit    64 bit
Cores                 248          192       72        34
Performance (Gflops)  322          260       72        34


The following is a selected list of the available software:

  • GCC compiler package (4.4.7)
  • PGI Fortran Compiler (7.2)
  • MPICH2 (1.2.1)
  • Open MPI (1.5.4)
  • LAM-MPI (7.1.14)
  • Binding-site Estimation Suite of Tools (BEST)
  • Usual Linux tools and scripting languages
  • Condor (7.8.8)
  • CERNLIB (2005)
  • IMSL** (C: 7.0, Fortran: 6.0)
  • Matlab (7.3 – R2006b)
  • ROOT (5.34)
  • R (3.0)
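Since Condor is the cluster's scheduler, work is typically described in a submit file and handed to the queue with condor_submit. A minimal sketch of such a file follows; the executable, file names, and slot count are hypothetical placeholders, not cluster-specific settings.

```
universe   = vanilla
executable = myjob.sh
arguments  = $(Process)
output     = myjob.$(Process).out
error      = myjob.$(Process).err
log        = myjob.log
queue 8
```

Saved as, say, myjob.sub, this would be submitted with `condor_submit myjob.sub`, queuing 8 independent instances of the job, one per slot.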

** Note that some software, like IMSL, has a license limited to Statistics Department equipment and is not available on the other cluster segments.

Current Status

While at the command line, you can analyze performance with the Condor scheduling program. In a web browser, you can view the currently available resources and their utilization using:

  • Ganglia – for comprehensive usage statistics
  • Cluster load summary – for a simple summary of cluster resource load
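At the command line, `condor_status` lists every slot and its state. As a minimal Python sketch of how such output can be summarized, the snippet below counts slot states; the sample text is illustrative only, not live cluster data, and on the cluster you would feed in real output (e.g. `condor_status | python summarize.py`).

```python
from collections import Counter

# Illustrative sample of condor_status-style output:
# name, opsys, arch, state, activity, load, memory
sample = """\
slot1@node01 LINUX X86_64 Claimed Busy 1.000 1024
slot2@node01 LINUX X86_64 Unclaimed Idle 0.000 1024
slot1@node02 LINUX X86_64 Claimed Busy 0.990 1024
"""

def summarize(text):
    # Count slots by their state column (4th field).
    counts = Counter(line.split()[3]
                     for line in text.splitlines() if line.strip())
    return dict(counts)

print(summarize(sample))  # → {'Claimed': 2, 'Unclaimed': 1}
```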


Purchase of the cluster and related software was partially supported by NSF Scientific Computing Research Environments for the Mathematical Sciences (SCREMS) Program grant 0723557 to M.H. Chen, Z. Chi (PI), D. Dey and O. Harel.