Facilities

The ACIS lab is currently equipped with state-of-the-art computing, storage, and networking facilities, providing a unique environment for experimental research and design in distributed systems, software-defined computing, and big data analysis.

In addition to the personal computers used by ACIS researchers, the ACIS lab owns and operates over 275 servers comprising approximately 2000 processor cores, 10 TB of memory, and 800 TB of storage.

  • 16 nodes; SuperMicro 6027TR-HTRF. 192 cores, Six-core Xeon (Sandy Bridge) 2 GHz. 2048 GB RAM, 128 GB per node. 3.75 TB storage raw SSD, 240 GB SSD per node. (iDigBio Compute)
  • 8 nodes; SuperMicro SC936E16-R1200B. 68 cores, Quad-core Xeon (Westmere) 2.4 GHz. 200 GB RAM; 24 GB per node. 256 TB storage raw, 32 TB per node. (iDigBio Storage)
  • 35 nodes; 32 IBM iDataPlex dx360 compute, 2 x3650 management, 1 dx360 storage node. 280 cores; 8 cores per node, Quad-core Xeon (Nehalem) 2.27 GHz. 840 GB RAM; 24 GB per node. 40 TB storage raw; 500 GB per compute, 24 TB storage. (FutureGrid)
  • 12 nodes; IBM x3850 M2 and X5 configured in 4 NUMA machines. 256 cores; 64 cores per NUMA machine; Quad-core Xeon (Tigerton) 2.93 GHz and 8-core Xeon (Nehalem) 2.0 GHz. 2 TB RAM; 512 GB per NUMA machine. 173 TB storage raw; 160 TB bulk, 9.6 TB performance, 3.5 TB local. (MRI NUMAcloud)
  • 42 nodes; IBM HS22 blades. 336 cores; 8 cores per node, Quad-core Xeon (Westmere) 2.53 GHz. 2016 GB RAM; 48 GB per node. 20 TB storage raw; 500 GB per node. (Cluster 19)
  • 12 nodes; 11 Dell M610 compute, 1 R510 storage. 140 cores; Dual Six-core Xeon (Nehalem) 2.4 GHz compute node, Dual Quad-core Xeon storage node. 540 GB RAM; 48 GB per compute node, 12 GB storage node. 5.7 TB storage raw; 160 GB per compute node, 4 TB storage node. (Archer)
  • 9 nodes; Dell R610 and R710. 72 cores; 8 cores per node, Quad-core Xeon (Nehalem) 2.4 GHz. 216 GB RAM; 24 GB per node. 13 TB storage raw; 160 GB – 5 TB per node. (DiRT)
  • 28 nodes; IBM HS22 blades. 224 cores; 8 cores per node, Dual Core Xeon (Westmere) 2.53 GHz. 672 GB RAM; 24 GB per node. 8.9 TB storage raw; 320 GB per node. (Autonomic Testbed)
  • 44 nodes; HS21 blades. 264 cores; Dual and Quad-core Xeon (Nehalem). 528 GB RAM total; 8-16 GB per node. 10 TB storage raw; 500 GB per node. (Cluster 17)
  • 14 nodes; HS21 blades. 28 cores; Dual-core Xeon (Nehalem). 60 GB RAM total; 4-8 GB RAM per node. 1 TB storage raw; 36-500 GB per node. (Cluster 9)
  • 14 nodes; IBM HS21 blades. 56 cores total; 4 cores per node, Dual Core Xeon (Woodcrest) 2.33 GHz. 112 GB RAM total; 8 GB per node. 1 TB storage raw; 73 GB per node. (Cluster 8)
  • 14 nodes; IBM HS21 blades. 56 cores total; 4 cores per node, Dual Core Xeon (Woodcrest) 2.33 GHz. 56 GB RAM total; 4 GB per node. 1 TB storage raw; 73 GB per node. (Cluster 7)

 

iDigBio: Integrated Digitized Biocollections

iDigBio is a national resource funded under the National Science Foundation's Advancing Digitization of Biological Collections (ADBC) program. Through iDigBio, millions of biological specimens will be made available in electronic format to the biological research community, government agencies, students, educators, and the general public. iDigBio web site.

Nodes: 10 nodes
Cores: 68 cores; 8 cores per node
Memory: 200 GB; 24 GB per node
Storage: 256 TB raw; 32 TB per node
Platform: OpenStack Swift object store and Riak key-value store
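
The platform combines an OpenStack Swift object store for bulk specimen media with a Riak key-value store for records and metadata. As a minimal sketch of the object-store side, the Python fragment below stores and retrieves a specimen image through the python-swiftclient library; the endpoint, credentials, container, and file names are hypothetical placeholders, not the production iDigBio configuration.

# Hedged sketch: store and fetch a digitized specimen image in OpenStack Swift.
# All endpoints, credentials, and names below are hypothetical placeholders.
import swiftclient

conn = swiftclient.Connection(
    authurl="http://swift.example.org:8080/auth/v1.0",  # hypothetical auth endpoint
    user="acis:researcher",                             # hypothetical account:user
    key="secret",
)

container = "specimen-images"
conn.put_container(container)  # idempotent; creates the container if it does not exist

# Upload one digitized specimen image as an object.
with open("specimen-0001.jpg", "rb") as image:
    conn.put_object(container, "specimen-0001.jpg", contents=image,
                    content_type="image/jpeg")

# Retrieve it again; get_object returns (headers, body bytes).
headers, body = conn.get_object(container, "specimen-0001.jpg")
print(headers.get("content-type"), len(body))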

FutureGrid Test-bed

This IBM iDataPlex rack is funded by NSF grant 0910812, “FutureGrid: An Experimental, High-Performance Grid Test-bed.” The system allows researchers to submit experiment plans that are executed via a workflow engine, which ensures reproducibility. It is connected to the National LambdaRail network at 10 Gbps. FutureGrid web site.

Nodes: 36 nodes; 32 IBM iDataPlex dx360 compute, 2 x3650 management, 1 dx360 storage, 1 Dell R310 networking
Cores: 284 cores; 8 cores per node, Quad-core Xeon (Nehalem) 2.27 GHz
Memory: 848 GB; 24 GB per node
Storage: 41 TB raw; 500 GB per compute, 24 TB storage
Platform: Eucalyptus, Nimbus, Torque/Moab, and others
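
Because Eucalyptus and Nimbus both expose EC2-compatible interfaces, experiments on this rack can provision virtual machines programmatically. The fragment below is a rough sketch using the boto library against a hypothetical Eucalyptus front end; the host, credentials, and image ID are placeholders rather than actual FutureGrid values.

# Hedged sketch: launch a VM through an EC2-compatible Eucalyptus endpoint with boto.
# Host, credentials, and image ID are hypothetical placeholders.
from boto.ec2.connection import EC2Connection
from boto.ec2.regioninfo import RegionInfo

region = RegionInfo(name="eucalyptus", endpoint="euca.example.org")  # hypothetical host
conn = EC2Connection(
    aws_access_key_id="EXAMPLE_ACCESS_KEY",       # placeholder credentials
    aws_secret_access_key="EXAMPLE_SECRET_KEY",
    is_secure=False,
    region=region,
    port=8773,                                    # default Eucalyptus web-services port
    path="/services/Eucalyptus",
)

# Start one small instance from a (hypothetical) registered machine image.
reservation = conn.run_instances("emi-12345678", instance_type="m1.small")
for instance in reservation.instances:
    print(instance.id, instance.state)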

NUMAcloud

Funded by the NSF grant 0821622 titled “MRI: Acquisition of Instrumentation for Coupled Experimental-Computational Neuroscience and Biology Research,” this system provides virtual machines to University of Florida and other researchers in the medical and biological sciences.

Nodes: 12 nodes; IBM x3850 M2 and X5 configured in 4 NUMA machines
Cores: 256 cores; 64 cores per NUMA machine; Quad-core Xeon (Tigerton) 2.93 GHz and 8-core Xeon (Nehalem) 2.0 GHz
Memory: 2 TB; 512 GB per NUMA machine
Storage: 173 TB raw; 160 TB bulk, 9.6 TB performance, 3.5 TB local
Platform: Citrix XenServer 5.6

Archer

This Dell blade center is funded by NSF grant number 0750884 “Archer: Seeding a Community-based Computing Infrastructure for Computer Architecture Research and Education.” It provides grid services for researchers and educators to run computer simulation tools. Archer project web site.

Nodes: 12 nodes; 11 Dell M610 compute, 1 R510 storage
Cores: 140 cores; Dual Six-core Xeon (Nehalem) 2.4 GHz compute, Dual Quad-core Xeon storage
Memory: 540 GB; 48 GB per compute node, 12 GB storage node
Storage: 5.7 TB raw; 160 GB per compute, 4 TB storage
Platform: Ubuntu Server with Condor
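
Archer's platform pairs Ubuntu Server with Condor for scheduling simulation jobs across the blades. As a rough illustration rather than an Archer-specific recipe, the Python fragment below writes a vanilla-universe Condor submit description for a hypothetical simulator wrapper and hands it to condor_submit.

# Hedged sketch: generate a Condor submit description and submit it.
# The executable and its arguments are hypothetical placeholders.
import subprocess

submit_description = """\
universe   = vanilla
executable = run_simulation.sh
arguments  = --config sweep01.cfg
output     = sim_$(Cluster).out
error      = sim_$(Cluster).err
log        = sim_$(Cluster).log
queue 1
"""

with open("simulation.submit", "w") as job_file:
    job_file.write(submit_description)

# condor_submit prints the assigned cluster ID on success.
subprocess.run(["condor_submit", "simulation.submit"], check=True)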

DiRT

This cluster is funded by the NSF grant titled “Distributed Research Testbed (DiRT).” It facilitates research into new grid technologies by giving researchers direct access to the underlying hardware.

Nodes: 9 nodes; varying configurations, Dell R610 and R710
Cores: 72 cores; 8 cores per node, Quad-core Xeon (Nehalem) 2.4 GHz
Memory: 216 GB; 24 GB per node
Storage: 13 TB raw; 160 GB – 5 TB per node
Platform: Varies

Autonomic Testbed

These IBM blade centers are funded by NSF grant 0855087, “An Instrumented Datacenter Infrastructure for Research on Cross-Layer Autonomics.” The system enables fundamental and far-reaching research focused on cross-layer autonomics for managing and optimizing large-scale datacenters. Cloud and Autonomic Computing Center web site.

Nodes: 28 nodes; IBM HS22 blades
Cores: 224 cores; 8 cores per node, Dual Core Xeon (Westmere) 2.4 GHz
Memory: 672 GB total; 24 GB per node
Storage: 8.9 TB raw; 320 GB per node
Platform: Open Source Xen on CentOS, Eucalyptus, CometCloud

Cluster 9

This cluster provides dedicated student machines and administrative services that support the ACIS lab. Students are given console access to personal blades to support their research. Lab services such as web servers and management interfaces run on virtual machines in a Citrix XenServer pool.

Nodes: 12 nodes; IBM HS20 and HS21 blades
Cores: 28 cores total; 2-4 cores per node, Xeon and Opteron
Memory: 60 GB total; 4-8 GB per node
Storage: 1 TB raw
Platform: Citrix XenServer, Windows, Linux

 

Cluster 8

This cluster runs VMware and is used by students and researchers to explore topics in distributed systems and clouds.

Nodes: 14 nodes; IBM HS21 blades
Cores: 56 cores total; 4 cores per node, Dual Core Xeon (Woodcrest) 2.33 GHz
Memory: 112 GB total; 8 GB per node
Storage: 1 TB raw; 73 GB per node
Platform: VMware vCenter 4.1 and VMware Server 1 on Fedora Core 6

Cluster 7

This IBM blade center is used to process rat neural signals in real time.

Nodes: 14 nodes; IBM HS21 blades
Cores: 56 cores total; 4 cores per node, Dual Core Xeon (Woodcrest) 2.33 GHz
Memory: 112 GB total; 8 GB per node
Storage: 1 TB raw; 73 GB per node
Platform: VMware Server 1 on Fedora Core 6

Cluster 6

This cluster runs virtual machines supporting the project titled “IPOP: A Self-Organizing Virtual Network for Wide-Area Environments” using VMware products.

Nodes: 8 nodes; IBM x306
Cores: 16 cores total; 2 cores per node, Pentium D dual-core (Presler) 3 GHz
Memory: 32 GB total; 4 GB per node
Storage: 1.25 TB raw; 160 GB per node
Platform: VMware Server 1 on Fedora Core 7

Cluster 5

This cluster runs virtual machines to support the Telecenter and AFRESH projects.

Nodes: 6 nodes; IBM x335
Cores: 24 cores total; 4 cores per node, Dual Opteron (Italy) 2 GHz
Memory: 48 GB total; 8 GB per node
Storage: 480 GB raw
Platform: VMware products

Cluster 4

This cluster runs VMware ESX to provide virtual machines for student research.

Nodes: 8 nodes; IBM x336
Cores: 16 cores total; 2 cores per node, Dual Xeon (Irwindale) 3.2 GHz
Memory: 32 GB total; 4 GB per node
Storage: 1 TB raw
Platform: VMware ESX 3, Virtual Infrastructure 5

Cluster 3

This cluster runs virtual machines used for software development and processing. Many large MATLAB jobs are tested here.

Nodes: 4 nodes; configured in one NUMA machine, IBM X3 architecture
Cores: 16 cores total; 4 cores per node, Xeon (Potomac) 3 GHz
Memory: 64 GB total; 16 GB per node
Storage: 580 GB raw
Platform: Xen 3.1 on Fedora Core 8

Cluster 2

This cluster runs the ACIS lab’s Nimbus installation providing Infrastructure-as-a-Service (IaaS) for use in cloud research.

Nodes: 34 nodes; 32 IBM x335 plus dual management nodes
Cores: 68 cores total; 2 cores per node, Xeon (Prestonia) 2.4 GHz
Memory: 102 GB total; 3 GB per node
Storage: 2.3 TB raw; 73 GB per node
Platform: Nimbus on Fedora Core 8

Cluster 1

This cluster is currently used for graduate-level virtualization coursework.

Nodes: 8 nodes; IBM 330 plus eServer management node
Cores: 18 cores total; 2 cores per node, Intel Pentium III (Tualatin) 1.4 GHz
Memory: 18 GB total; 2 GB per node
Storage: 2.8 TB raw
Platform: VMware ESX 3

The ACIS server room is connected to the UF campus network redundantly at 1 Gbit/s and to the UF Campus Research Network (CRN) at 40 Gbit/s. The UF campus network provides multiple 1 Gbit/s links to the public Internet. The CRN is connected to the state-wide Florida LambdaRail (FLR) redundantly at 100 Gbit/s. The FLR is connected to both the National LambdaRail and Internet2. ACIS servers therefore have reliable, high-speed access to virtually any host on the Internet.

UF was the first institution (April 2013) to meet all requirements to become an Internet2 Innovation Platform, which entails the use of software-defined networking (SDN), the implementation of a Science DMZ, and a 100 Gb/s connection to the Internet2 backbone. An NSF CC-NIE award in 2012 funded the 100 Gb/s switch, and an NSF MRI grant awarded in 2012 funded the upgrade of the CRN (Science DMZ) to 200 Gb/s. The upgrade has been operational since the winter of 2013.

Servers in the lab are interconnected with 1 Gbit/s Ethernet. Some clusters feature 10 Gbit/s Ethernet or InfiniBand interconnects for higher throughput.

In addition to the storage allocated to individual clusters, the ACIS lab maintains several storage devices that can be used by any cluster.

  • (3) Supermicro servers with a total of 96 TB of storage available via NFS and CIFS
  • (2) IBM Servers sharing a total of 2 TB of storage with NFS
  • (2) IBM DS300 iSCSI disk arrays with a total of 8.4 TB raw storage
  • (1) IBM DS400 Fibre Channel disk array with a total of 4.2 TB raw storage

Major software packages used by the ACIS lab include the following:

  • Virtualization: Open source Xen, Citrix XenServer, VMware Workstation, Server, ESX, and Player
  • Middleware: Nimbus, Eucalyptus, xCAT, Puppet
  • Operating systems: CentOS, Fedora, Ubuntu, Windows Server 2003, and Windows Server 2008