Head Node iptables example to open TCP port 1099 and save the rules so they persist across reboots.
Add the inbound rule:
iptables -A INPUT -p tcp --dport 1099 -j ACCEPT
Add the outbound rule:
iptables -A OUTPUT -p tcp --dport 1099 -j ACCEPT
Save the rules so they survive a reboot:
/sbin/service iptables save
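After the save, the rules are written to /etc/sysconfig/iptables. A sketch of roughly how the two rules above appear in that file (iptables-save typically adds an explicit -m tcp match; counters and the surrounding default rules are omitted here):

```
*filter
-A INPUT -p tcp -m tcp --dport 1099 -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 1099 -j ACCEPT
COMMIT
```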
A place where I can record my technical stuff so that it is accessible from anywhere. It is mainly technical material I find interesting, and I suppose it can be looked on as a modern lab-book.
Wednesday, 8 September 2010
Run a command on all nodes
This runs the cat command on every node, gathers the results on the head node, and redirects the output to a file, giving a list of hostnames and MAC addresses in a text file.
[root@HOST~]#cluster-fork cat /etc/sysconfig/network-scripts/ifcfg-eth0:0 | egrep "compute|HWADDR" > HostHWaddr.txt
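The resulting text file alternates hostname and HWADDR lines. A minimal sketch of turning it into one hostname-MAC pair per line (the sample data below is hypothetical, standing in for real cluster-fork output):

```shell
# Hypothetical sample of the file produced by the cluster-fork pipeline above
cat > HostHWaddr.txt <<'EOF'
compute-0-0:
HWADDR=00:16:3E:01:02:03
compute-0-1:
HWADDR=00:16:3E:01:02:04
EOF

# Pair each hostname line with the HWADDR line that follows it
awk '/^compute/ { host=$1 } /^HWADDR/ { sub("HWADDR=",""); print host, $0 }' HostHWaddr.txt
# -> compute-0-0: 00:16:3E:01:02:03
# -> compute-0-1: 00:16:3E:01:02:04
```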
Monday, 30 August 2010
SSH Tunnel Example
Tunnel SSH from the local machine to a remote machine, and from the remote machine to another machine on the remote machine's network.
Local to remote machine with 5900 tunnel
ssh -L 5900:127.0.0.1:5900 -l username -p 22 theactualurl.net
Remote to machine on remote network with 5900 to 443 tunnel
sudo ssh -L 5900:127.0.0.1:443 -l username -p 22 remotemachine.local
This allowed the local machine to connect over HTTPS to a web server on the remote machine's network. On the local machine the address is https://localhost:5900 or https://127.0.0.1:5900; the connection enters the tunnel on local port 5900 while the actual server listens on port 443.
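The same forward can be made persistent in ~/.ssh/config, so a plain `ssh webtunnel` sets it up each time. A sketch, using a hypothetical host alias (the host and user names stand in for real ones):

```
Host webtunnel
    HostName theactualurl.net
    User username
    Port 22
    LocalForward 5900 127.0.0.1:443
```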
Wednesday, 25 August 2010
NIC IP Aliases
I needed to add aliases for some IP addresses to the Ethernet card on the Rocks Cluster to allow individual nodes to have their own public IP addresses.
I copied /etc/sysconfig/network-scripts/ifcfg-eth0 to ifcfg-eth0:0 and edited the new file with vi to contain:
DEVICE=eth0:0
HWADDR=xx:xx:xx:xx:xx:xx
IPADDR=xxx.xxx.xxx.xx
NETMASK=255.255.255.0
BOOTPROTO=static
ONBOOT=yes
Run /sbin/service network restart to apply the changes; /sbin/ifconfig then shows the new configuration.
The change was tested by connecting the node's switch to the public 211 subnet; the IP address could be pinged from the subnet, showing that the changes were working.
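Since the alias file differs from the base file only in its DEVICE and IPADDR lines, it can be derived with sed rather than edited by hand. A sketch using a hypothetical base config in the current directory (the real files live in /etc/sysconfig/network-scripts, and the addresses here are placeholders):

```shell
# Hypothetical stand-in for /etc/sysconfig/network-scripts/ifcfg-eth0
cat > ifcfg-eth0 <<'EOF'
DEVICE=eth0
HWADDR=00:16:3E:01:02:03
IPADDR=10.1.1.1
NETMASK=255.255.255.0
BOOTPROTO=static
ONBOOT=yes
EOF

# Derive the alias file: change DEVICE and IPADDR, keep everything else
sed -e 's/^DEVICE=.*/DEVICE=eth0:0/' \
    -e 's/^IPADDR=.*/IPADDR=192.168.211.10/' ifcfg-eth0 > ifcfg-eth0:0

cat ifcfg-eth0:0
```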
Info On Binding IP Addresses
Thursday, 29 July 2010
Rocks Cluster NAT
NAT was not enabled on the cluster, so nodes had no access to the public network. This was fixed by editing /etc/sysconfig/iptables with vim and adding the following lines:-
*nat
-A POSTROUTING -o eth1 -j MASQUERADE
COMMIT
to the beginning of the file.
Save the file using :wq
Then restart the service using:-
/sbin/service iptables restart
Then sync the config using :-
rocks sync config
NAT is now working, allowing internet access from the compute nodes on the private network.
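One thing worth checking if MASQUERADE appears not to work: the kernel must be forwarding packets between interfaces. Rocks may already arrange this on the frontend, but the setting itself is a one-line /etc/sysctl.conf fragment:

```
# /etc/sysctl.conf -- required for NAT; 1 enables forwarding
net.ipv4.ip_forward = 1
```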
Rocks Cluster iptables
I needed to modify iptables to allow connections on some extra TCP ports.
I edited the /etc/sysconfig/iptables file using vim. The extra lines added are shown below:-
# http and https is allowed for all nodes on the public subnet
-A INPUT -m state --state NEW -p tcp --dport 9618 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 9614 -j ACCEPT
The iptables service was restarted using the command /sbin/service iptables restart.
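Rule order matters in this file: the new ACCEPT lines must sit above any final catch-all REJECT or DROP rule, or they are never reached. A sketch of the relevant part of the filter section (the surrounding rules here are typical examples, not the cluster's actual file):

```
*filter
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 9618 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 9614 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
COMMIT
```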
Monday, 5 July 2010
NFS not working on some nodes
The nodes were not working, and issuing the rpcinfo -p command showed that there was an NFS mounting problem; the nfs, mountd and nlockmgr services were not registered:
jimp@compute-2:~$ rpcinfo -p
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 52121 status
100024 1 tcp 59769 status
The /etc/hosts file was changed to match the other nodes, and the rpcinfo command now gives the following:
jimp@compute-5:~$ rpcinfo -p hadoop
program vers proto port
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 56797 status
100024 1 tcp 54606 status
100021 1 udp 53628 nlockmgr
100021 3 udp 53628 nlockmgr
100021 4 udp 53628 nlockmgr
100021 1 tcp 53694 nlockmgr
100021 3 tcp 53694 nlockmgr
100021 4 tcp 53694 nlockmgr
100003 2 udp 2049 nfs
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100003 2 tcp 2049 nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100005 1 udp 40126 mountd
100005 1 tcp 37582 mountd
100005 2 udp 40126 mountd
100005 2 tcp 37582 mountd
100005 3 udp 40126 mountd
100005 3 tcp 37582 mountd
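The difference between the two listings is the missing nfs, mountd and nlockmgr registrations. A small sketch that flags exactly that, given saved rpcinfo output (the sample text below is hypothetical, standing in for `rpcinfo -p <server>` run against a broken node):

```shell
# Hypothetical rpcinfo output from a node with the NFS problem
cat > rpcinfo.txt <<'EOF'
100000 2 tcp 111 portmapper
100000 2 udp 111 portmapper
100024 1 udp 52121 status
100024 1 tcp 59769 status
EOF

# A healthy NFS server registers all three of these services
for svc in nfs mountd nlockmgr; do
    grep -qw "$svc" rpcinfo.txt || echo "missing: $svc"
done
# -> missing: nfs
# -> missing: mountd
# -> missing: nlockmgr
```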