Cosmos

History

The Cosmos Cluster has been in operation since 2007, when it was acquired with funding from a FAPESP project and CNPq (Edital MCT/CNPq 15/2007 – Universal). At that time the cluster consisted of nodes mounted in racks and interconnected by a simple switch: a 3Com Fast Ethernet switch responsible for the links between nodes. In 2012 the cluster underwent its first upgrade: motherboards, memory, and 2U chassis were replaced, and a rack was acquired to accommodate all nodes. In 2013 the cluster received a KVM switch, a new 48-port Gigabit Ethernet switch with VLAN support, and a TFT tray (mouse, keyboard, and video all in one) to assist in node maintenance. The cluster is currently used to run MPI and OpenMP code, to experiment with virtualization technology, and to conduct experiments in the field of simulation. The latest cluster updates were supported by further FAPESP projects from 2008 to 2013.

Access

Command-Line SSH Client

Linux

If you are connecting to the Cosmos cluster from Linux/Unix and are using a command line ssh client, just type:

  • ssh <username>@cosmos.lasdpc.icmc.usp.br -p 22200

where <username> is your Cluster User Id and "-p 22200" specifies the port used to access the cluster frontend. If you have not SSHed to the cluster from that machine before, you will get a message stating that the authenticity of the host cannot be established; it will show you an RSA key fingerprint and ask if you want to continue. Type 'yes' to continue, then enter your password when prompted.
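Since the frontend listens on a non-standard port, an entry in ~/.ssh/config saves retyping it each time. A minimal sketch (the alias name "cosmos" is just an example):

```
Host cosmos
    HostName cosmos.lasdpc.icmc.usp.br
    Port 22200
    User <username>
```

With this entry in place, `ssh cosmos` is equivalent to the full command above.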

Mac OS X

If you are using Mac OS X, start the Terminal.app application by going to the Finder -> Applications -> Utilities -> Terminal. Once the terminal window comes up, type:

  • ssh <username>@cosmos.lasdpc.icmc.usp.br -p 22200

where <username> is your Cluster User Id and "-p 22200" is the port to access. If you have not SSHed to the cluster from that machine before, you will get a message stating that the authenticity of the host cannot be established; it will show you an RSA key fingerprint and ask if you want to continue. Type 'yes' to continue, then enter your password when prompted.

Graphical SSH Client

Windows

There are several SSH clients for Windows, such as PuTTY (a Google search will lead to the download), SSH Secure Shell, and Cygwin, which is not only an SSH client but also a bash shell environment and X server; see www.cygwin.com for download instructions. If you are using SSH Secure Shell for Windows, start the application and click the 'Quick Connect' button. Then enter the cluster name:

  • cosmos.lasdpc.icmc.usp.br

and your username. Set the port to 22200, hit Enter, and when it prompts you for your password, enter your Cluster password.

Configuration

Our cluster is composed of nodes used for MPI programming and for virtualization tests; the nodes can also be used for workload generation. The physical structure is described below.

Hardware

Nodes (19 physical hosts / 72 virtual) – 18 slave nodes / 1 master node

Intel® Core™2 Quad Processor Q9400 (6M Cache, 2.66 GHz, 1333 MHz FSB)
8 GB Kingston DDR3 RAM
Gigabyte G41-MT-S2P motherboard
160 GB Seagate SATA II 7200 RPM hard disk
300 W (real) ATX power supply

Switches (2)

HP V1910-48G (JE009A) with 48 10/100/1000 Mbps ports + 4 mini-GBIC SFP ports (RJ-45 or fiber)
3Com 2920-SFP Plus 16-port Gigabit switch (3CRBSG209)

Software

Apache Axis2/Java
Java JDK 1.7
Ant
Apache Tomcat
Jenkins
SVN/Git Clients
OpenMPI
OpenMP
CloudSim – (coming soon)

Operating System

Linux Ubuntu Server 14.04.1 LTS – 64-bit (updated: 28/09/14)
ClearOS 6.0 – 64-bit

Nodes Description

The Cosmos cluster consists of 19 nodes, one of which is the master node. This is a Beowulf cluster in which the home directory is shared with the slave nodes, so you can run MPI (Message Passing Interface) code across them. Each node has three Gigabit Ethernet network interfaces, described below.

eth0 = MPI communication between the frontend and slave nodes

eth1 = Communication between nodes for virtual machines

eth2 = Communication for workload generation

Thus, MPI communication between the master and slave nodes should be performed via eth0. For this, the hosts file must contain the names or IPs from the "MPI Use" columns of the table below.

Name (Common Use)   IP (Common Use)   Name (MPI Use)   IP (MPI Use)
frontend            192.168.0.200     frontend         10.1.1.200
node1               192.168.0.10      iris             10.1.1.10
node2               192.168.0.20      flora            10.1.1.20
node3               192.168.0.30      tetis            10.1.1.30
node4               192.168.0.40      adria            10.1.1.40
node5               192.168.0.50      ceres            10.1.1.50
node6               192.168.0.60      doris            10.1.1.60
node7               192.168.0.70      hebe             10.1.1.70
node8               192.168.0.80      irene            10.1.1.80
node9               192.168.0.90      palas            10.1.1.90
node10              192.168.0.100     maia             10.1.1.100
node11              192.168.0.110     isis             10.1.1.110
node12              192.168.0.120     lidia            10.1.1.120
node13              192.168.0.130     nisa             10.1.1.130
node14              192.168.0.140     rosa             10.1.1.140
node15              192.168.0.150     alice            10.1.1.150
node16              192.168.0.160     olga             10.1.1.160
node17              192.168.0.170     carina           10.1.1.170
node18              192.168.0.180     sara             10.1.1.180
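To target eth0, an OpenMPI hostfile lists the MPI-side names (or 10.1.1.x IPs) from the table. A sketch for three nodes, assuming the default OpenMPI setup on the cluster ("slots=4" reflects the quad-core CPUs and is an assumption, not a site policy):

```
iris  slots=4
flora slots=4
tetis slots=4
```

A job is then launched from the frontend with, for example, `mpirun --hostfile hostfile -np 12 ./program` (the program name is hypothetical), which starts 12 ranks spread across the three listed nodes.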