Theoretical Biophysics Group
TB Linux/x86 Cluster
NIH Resource for Macromolecular Modeling and Bioinformatics
UIUC

Current Configuration

titania, ariel, puck, new oberon, umbriel, portia

titania and ariel each consist of 16 nodes, each with an 1100 MHz Athlon CPU and 256 MB of memory. puck is another two nodes, identical to the above. oberon consists of 32 slightly faster (roughly 20%) systems. umbriel is identical to oberon, and portia is two nodes identical to oberon.
  • 1100 MHz AMD Athlon CPU (titania, ariel, puck), 1333 MHz (oberon, umbriel, portia)
  • Asus A7V Motherboard (titania, puck), Asus A7A266 Motherboard (oberon, umbriel)
  • 256 MB CAS2 SDRAM (mix of ECC and non-ECC)
  • 20GB 7200rpm Western Digital hard drive
  • 100Mbps Ethernet Cards - mix of:
    • Intel EtherExpress (two on root node)
    • 3Com 3C905B (eventually replaced with more Intel cards for homogeneity)
  • Floppy and CD-ROM drives
  • Scyld Linux
  • Built by Champaign Computer

Each cluster is controlled by a single root node, which is administered much like a standard Linux box. Computations are divided up and distributed to the rest of the cluster by this single front-end machine. The other machines have no keyboard, mouse, or monitor; they are reachable only over the network. All of the machines are connected through four 3Com SuperStack III 3300TM 24-port switches, each with a Gigabit uplink and a matrix port for attaching additional switches, and two SuperStack III 3300MM switches, which have three matrix ports each for connecting the other switches.
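
As a concrete illustration of this front-end model, the sketch below fans a single command out from the root node to the compute nodes. It is a minimal sketch, assuming Scyld's bpsh remote-execution utility is available on the root node; the node numbers and the command being run are illustrative, not taken from the cluster's actual configuration or administration scripts.

    # Minimal sketch: run one command on every compute node from the root
    # node of a Scyld cluster.  Assumes the bpsh remote-execution utility
    # is on the PATH; the node numbers below are illustrative only.
    import subprocess

    COMPUTE_NODES = range(16)   # hypothetical node numbers for one 16-node cluster

    def run_everywhere(command):
        """Run `command` on each compute node and collect its output."""
        results = {}
        for node in COMPUTE_NODES:
            # bpsh <node> <command ...> executes the command on that node
            proc = subprocess.run(["bpsh", str(node)] + list(command),
                                  capture_output=True, text=True)
            results[node] = proc.stdout.strip()
        return results

    if __name__ == "__main__":
        # Example: check the kernel version reported by every node.
        for node, output in run_everywhere(["uname", "-r"]).items():
            print(f"node {node}: {output}")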

A breakdown of costs (at the time):
Component                Cost Per   Count  Total Cost   Notes
PCs (avg)                $940.46    106    $99,688.50   6 spares
Add. Network Cards       $50        4      $200
Ethernet Switches (avg)  $1,403.62  6      $8,421.72    4x 3Com 3300TM, 2x 3Com 3300MM
Shelving + Cables        $1,200            $1,200
Total                                      $109,510.22
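
For reference, the total above can be reproduced with a few lines of arithmetic; the figures in the sketch below are copied straight from the table (the per-PC price is an average, so that line uses the listed total rather than price times count, which would differ by a few cents of rounding).

    # Recompute the cluster cost total from the table above.  All figures
    # are the table's own; the PC line item uses the listed total rather
    # than price * count because the per-unit price is an average.
    line_items = {
        "PCs (106, incl. 6 spares)":  99_688.50,
        "Additional network cards":      200.00,
        "Ethernet switches (6)":       8_421.72,
        "Shelving + cables":           1_200.00,
    }

    total = sum(line_items.values())
    print(f"Total: ${total:,.2f}")   # -> Total: $109,510.22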

old oberon (retired)

oberon originally consisted of 16 dual-400 MHz Pentium II systems with 256 MB of memory each.
  • Dual 400 MHz Intel Pentium II CPUs
  • 256 MB 100MHz SDRAM
  • 2.2 GB SCSI hard drive (9 GB on master node)
  • CD-ROM and floppy drives
  • Fast Ethernet interface (second interface on master node, unused)
  • RedHat Linux
  • Custom built by VA Research
Each of these machines was connected, via a switchbox, to a single shared monitor and keyboard. They were connected to the building network via a Bay Networks BayStack 350T 16-port Fast Ethernet switch, which connected to the building backbone.

As of February 2001, oberon has been retired. Its nodes were given modern graphics boards (specifically the GeForce2 GTS), and moved to researcher desktops. Each node is now a 3-D capable workstation.

Shared Resources

The clusters are stored in industrial shelving units (five 36"x18"x85" units and one 48"x18"x85" unit) and powered by a Best Power FERRUPS uninterruptible power system.

History

  • 1594: Oberon and Titania are king and queen of the fairies in Shakespeare's A Midsummer Night's Dream.
  • 1787: Titania and Oberon, the largest and second largest of Uranus's known satellites, are discovered.
  • 1851: Umbriel and Ariel, the third and fourth largest of Uranus's known satellites, are discovered.
  • 1986: Puck and Portia, two of Uranus's smaller moons, are discovered by Voyager 2.
  • July 1998: First eight nodes, switch, etc. installed for oberon.
  • August 1998: Additional eight nodes of oberon installed.
  • August 2000: oberon retirement begins, as its nodes are moved to desktops.
  • January 2001: First eighteen nodes, switch, etc. installed for titania.
  • February 2001: Additional fourteen nodes installed for titania, bringing total up to 32. puck is installed with 4 nodes. Old oberon cluster retirement completed.
  • April 2001: First sixteen nodes, switch, etc. installed for the new oberon cluster. An additional sixteen nodes have been ordered, and will arrive within a few weeks.
  • May 2001: Umbriel, a third cluster of 32 nodes, is ordered.
  • June 2001: Second sixteen nodes of oberon and all of umbriel are installed.
  • August 2001: Half of puck is retired as desktops; the resulting space is used to fit portia.

Performance

Using the molecular dynamics code NAMD, we have achieved the following speedups on a 92,000-atom system on our clusters:

1333 MHz Athlon
CPUs   Sec/Step   Speed-Up   Efficiency
   1      10.54       1.00       100%
   2       5.45       1.93       96.7%
   4       2.83       3.72       93.1%
   8       1.52       6.93       86.7%
  16       0.83      12.70       79.4%
  32       0.47      22.43       70.1%
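
The speed-up and efficiency columns follow directly from the measured times: speed-up on N CPUs is the one-CPU seconds per step divided by the N-CPU seconds per step, and efficiency is that speed-up divided by N. The short script below, using only the timings from the table, reproduces the other two columns.

    # Reproduce the Speed-Up and Efficiency columns from the measured
    # seconds-per-step values (1333 MHz Athlon, 92,000-atom NAMD benchmark).
    sec_per_step = {1: 10.54, 2: 5.45, 4: 2.83, 8: 1.52, 16: 0.83, 32: 0.47}

    t1 = sec_per_step[1]
    print(f"{'CPUs':>4}  {'Sec/Step':>8}  {'Speed-Up':>8}  {'Efficiency':>10}")
    for cpus, t in sec_per_step.items():
        speedup = t1 / t                # speed-up relative to a single CPU
        efficiency = speedup / cpus     # fraction of ideal linear scaling
        print(f"{cpus:>4}  {t:>8.2f}  {speedup:>8.2f}  {efficiency:>10.1%}")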

Tutorials

The Theoretical Biophysics Group gave a tutorial at NCSA's Linux Revolution Conference on 25 Jun 2001; the same tutorial was also given on 19 Jun 2001. Materials from this tutorial are here.
