Solaris 11.x network intro

In this post I’ll cover basic networking topics in Solaris 11.x. I will cover the following:

  • Identifying the network interfaces
  • Setting up a static IP
  • Defining IPMP, with link-based and transitive probes

If you are new to Solaris 11, do not be intimidated by the nomenclature of the network interfaces; what I mean is that by default all the interfaces are named net0…X.
This was done to simplify system administrators’ lives compared to previous versions of Solaris.

Moving on to the main topics.

To identify what network interfaces we have available, we use the dladm command.

bitsnix@solaris:~$ dladm show-phys
net0 Ethernet up 1000 full e1000g0
net1 Ethernet up 1000 full e1000g1
net2 Ethernet up 1000 full e1000g2

If we need the MAC addresses, we should use dladm show-phys -m. Good old ifconfig is still there, but dladm provides this information in a clearer way.

bitsnix@solaris:~$ dladm show-phys -m
net0 primary 8:0:27:8c:c9:28 yes net0
net1 primary 8:0:27:11:c:4f yes net1
net2 primary 8:0:27:ce:e7:77 no --

In this case, I have three network interfaces that use the same driver, e1000g, but from the output of dladm, I can see that only net0 and net1 are in use.
Using ipadm will in fact confirm this:

bitsnix@solaris:~$ ipadm
lo0 loopback ok -- --
lo0/v4 static ok --
lo0/v6 static ok -- ::1/128
net0 ip ok -- --
net0/v4 static ok --
net1 ip ok -- --
net1/v4 static ok --

So let’s define a new IP for net2. But first, I want to rename this interface to reflect the network it will be used for.

bitsnix@solaris:~$ pfexec dladm rename-link net2 backup0

Now let’s give an IP to this link:

bitsnix@solaris:~$ pfexec ipadm create-ip backup0
bitsnix@solaris:~$ ipadm
backup0 ip ok -- --
backup0/if0 static ok -
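The address itself is elided in the output above. As a minimal sketch, assigning a static address to the renamed link could look like this (the 192.168.50.10/24 address and the /v4 object name are made-up examples for illustration):

```shell
# Assign a hypothetical static IPv4 address to the renamed link.
pfexec ipadm create-addr -T static -a local=192.168.50.10/24 backup0/v4

# Verify the new address object.
ipadm show-addr backup0/v4
```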

3. Creating IPMP
Initial steps

bitsnix@solaris:~$ pfexec ipadm create-ip net1
bitsnix@solaris:~$ pfexec ipadm create-ip net2
bitsnix@solaris:~$ pfexec ipadm create-ipmp -i net1 -i net2 backup_ipmp0
bitsnix@solaris:~$ ipadm
backup_ipmp0 ipmp down -- --
net1 ip ok backup_ipmp0 --
net2 ip ok backup_ipmp0 --

The initial state is down. This happens because we don’t have an IP address for it yet.
So… let’s give it an IP address:

bitsnix@solaris:~$ pfexec ipadm create-addr -T static -a local= backup_ipmp0/n0
bitsnix@solaris:~$ ipadm
backup_ipmp0 ipmp ok -- --
backup_ipmp0/n0 static ok --
net1 ip ok backup_ipmp0 --
net2 ip ok backup_ipmp0 --

The default mode when creating an IPMP group is active-active. We can confirm this by using the ipmpstat command:

bitsnix@solaris:~$ ipmpstat -i
net2 yes backup_ipmp0 ------- up disabled ok
net1 yes backup_ipmp0 --mbM-- up disabled ok
bitsnix@solaris:~$ ipmpstat -g
backup_ipmp0 backup_ipmp0 ok -- net2 net1

If we want to enable active-passive:

bitsnix@solaris:~$ pfexec ipadm set-ifprop -p standby=on -m ip net2

This can now be confirmed using the ipmpstat command. Note that with the -g option, interfaces between “()” are standby, interfaces between “[]” have failed, and interfaces with neither are active.

bitsnix@solaris:~$ ipmpstat -g
backup_ipmp0 backup_ipmp0 ok -- net1 (net2)
bitsnix@solaris:~$ ipmpstat -i
net2 no backup_ipmp0 is----- up disabled ok
net1 yes backup_ipmp0 --mbM-- up disabled ok

To enable transitive probes:

bitsnix@solaris:~$ pfexec svccfg -s ipmp setprop config/transitive-probing=true
bitsnix@solaris:~$ pfexec svcadm refresh ipmp
bitsnix@solaris:~$ pfexec svcadm restart ipmp

You can now confirm it using ipmpstat -t:

bitsnix@solaris:~$ ipmpstat -t
net2 transitive <net2> <net1>
net1 multicast --

I really advise you to read the official docs and also the man pages for dladm, ipadm, and ipmpstat to get a better understanding of this intro.
A few other useful commands are the following:

- ipadm set-ifprop/show-ifprop
- dladm create-aggr/show-aggr
- dladm show-phys -L -p -o <fields>

ZFS Cheatsheet

This is a ZFS cheatsheet that I’ll keep for future reference. The cheatsheet is divided into two parts: zpool commands and zfs commands.

Zpool related

To create a mirror (min. two devices)
zpool create mpool mirror

To create a stripe (min. one device)
zpool create spool

To create a raidz (min. three devices)
zpool create raidzpool raidz

Obtain all properties from a zpool
zpool get all pool_name
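To make the pool commands above concrete, here is a sketch with made-up device names (c1t0d0 and friends are placeholders; substitute your own free disks, since a disk already in a pool cannot be reused):

```shell
# All device names below are hypothetical examples.
zpool create mpool mirror c1t0d0 c1t1d0             # mirror: min. two disks
zpool create spool c1t2d0                           # stripe: min. one disk
zpool create raidzpool raidz c1t3d0 c1t4d0 c1t5d0   # raidz: min. three disks

# Check the health and layout of what was created.
zpool status
```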

ZFS related

To create normal zfs filesystem
zfs create -o mountpoint=/path/to/mount testpool/example1

To create a snapshot
zfs snapshot testpool/example1@snap_name

To create a recursive snapshot
zfs snapshot -r testpool/example1@snap_name

To delete a snapshot
zfs destroy testpool/example1@snap_name

To delete snapshots recursively
zfs destroy -r testpool@snap_name

To send a snapshot to another host.
zfs send testpool/dataset@snap_name |ssh remote_host zfs recv testpool2/new_dataset@snapshot

To send a dataset and all of its descendant datasets, with options
zfs send -vR testpool@backup |zfs receive -vd -F test2pool
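For recurring backups, an incremental send is usually much cheaper than a full one. A sketch, assuming two snapshots (snap1, snap2) already exist on a hypothetical dataset:

```shell
# Full send of the first snapshot, then an incremental send of
# only the blocks that changed between snap1 and snap2.
zfs send testpool/dataset@snap1 | ssh remote_host zfs recv testpool2/new_dataset
zfs send -i @snap1 testpool/dataset@snap2 | ssh remote_host zfs recv testpool2/new_dataset
```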

Defining several options while creating a ZFS dataset
zfs create -o quota=10G -o recordsize=8k -o mountpoint=/mnt/test -o compression=on -o dedup=on rpool/test_ds

Obtain all the properties for a zfs dataset
zfs get all pool_name/dataset

NOTE: When creating a new dataset with multiple options, each property needs its own -o flag; comma-separated property lists are not accepted.

Graphite and collectd

So… recently, due to a need at the company I work for, I had to search for a tool that could gather metrics from several systems. The main focus is centralization.

Initially I thought of sar but, as most of us know, sar is single-system only, meaning I would have to run sar on multiple systems, grab the output files from each one, and massage the data to plot the graphs.
Since we are converging on a centralized setup, I noticed a tool already being used by the development team to gather metrics from their applications: Graphite. This tool is indeed pretty good: it gives us a central point to look at the data and also lets us merge graphs.

In our particular case, we ended up with Graphite plus an extra tool for metrics gathering: collectd.
What is collectd? It is our core, so to speak: collectd is responsible for collecting the metrics on the host where the agent is running. This agent has an enormous number of plugins that allow you to gather statistics on almost everything (check the project page for the list of plugins). You can also have relay agents in case you need them.

collectd has the ability to forward all the data to Graphite. At the present time, the plugin for this is write_graphite, so whatever you configure, all your metrics will be sent to Graphite and you can easily look at them in graphical form.
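As a sketch, a minimal collectd.conf fragment for write_graphite might look like this (the hostname and prefix are assumptions; point them at your own carbon line receiver):

```
LoadPlugin write_graphite

<Plugin write_graphite>
  <Node "example">
    # Address of the carbon line receiver (hypothetical host).
    Host "graphite.example.com"
    Port "2003"
    Protocol "tcp"
    # Prefix added to every metric path sent to Graphite.
    Prefix "collectd."
    StoreRates true
  </Node>
</Plugin>
```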

tmux and screen

So which one do you prefer?

Personally I love tmux, mainly for these two big reasons:

  1. the ability to split windows vertically or horizontally, useful when debugging a program or script.
  2. configuring it to my taste is a lot easier than with screen.

So below are example configs for both pieces of software:

Screen – for most of my scenarios this is more than enough, but if you want to play around with the settings and configs, you will have a hard time digging through the screen manual. After searching online until I was happy, I ended up with this config for my screen:

altscreen on
startup_message off
caption always '%{= dr} %H %{G}| %{w}%l %{G}|%=%?%{d}%-w%?%{r}(%{b}%n %t%? {%u} %?%{r})%{d}%?%+w%?%=%{G}| %{y}%M %d %c:%s '
defscrollback 5000

Tmux – tmux does what screen does, but its configuration is a lot easier to perform. As you can see from my config, it is human friendly, unlike screen’s. For this config to work as expected, you should have an alias for tmux as 'tmux -2'.

set -g default-terminal "screen-256color"
set -g history-limit 10000

# Status Bar
set-option -g status-utf8 on
set-option -g status-justify centre
set-option -g status-bg black
set-option -g status-fg cyan
set-option -g status-interval 3
set-option -g status-left-length 50
set-option -g status-left "#[fg=red]» #[fg=blue]#h #[fg=colour128]#(uptime | cut -d ',' -f 3,4,5)#[default]"
set-option -g status-right "#[fg=red] #[fg=cyan] »» #[fg=blue]#S #[fg=colour184]%R %d-%m-%y#[default]"

set-option -g visual-activity on
set-option -g set-titles on
set-option -g set-titles-string '#H'

set-window-option -g window-status-current-bg red
setw -g mode-keys vi

bind-key : command-prompt
bind-key r refresh-client
bind-key L clear-history
bind-key v split-window -h
bind-key h split-window -v

The new generation of Unix System Administrators …

I am currently a System Administrator and I’ve been working in the field for 12 years now. When I started working, all system administrators shared a common characteristic: we were all curious about system administration. There could already be a way of performing a task, but we would look for ways to improve it.

Now, here comes the “funny” part… nowadays, some Unix System Administrators are simply lazy, with a lack of commitment and no ambition.
Sometimes I wonder if they are in IT because of this way of thinking: “IT is well paid”. I wouldn’t be surprised if this were the case for a few people, as it has already happened in the not-so-distant past.

Some examples are as follows:
“Tell me where I can download “, “How can I add a username?”, “Tell me how to update this system!”, “Where can I find the log files?” and very similar things.

Honestly, most of the time I would just like to be able to write RTFM… no matter which area of IT you are in, you NEED to read the manual for the command or the operating system that you work with.

A long time ago, when I was still studying, one of my teachers said: “A system administrator is just like a medical doctor: they always have to be up to date with the current situation and new advancements!”

Vim for perl and python development

It’s been a long time since I’ve been here, but it happens…

Due to my employment changes and new areas of expertise, I found myself with the need to learn Perl and Python.

I am currently searching for plugins to work with Python for syntax checking since there is already a plugin for Perl that does exactly this.

So let’s start with Perl. As many of you know, Perl is very well accepted among sysadmins worldwide and has a very big library ecosystem to back it up. So, if you are developing in Perl and using Vim as your “IDE”, it is a good idea to install the plugin.

From this point on, whenever you create a new file ending in .pl you will be presented with a template. This template can be changed to match your needs. But so far, what I find most interesting in this plugin is the key sequence \rpc, which evaluates your code according to the book Perl Best Practices. It will point out your errors and also indicate where in the book you can find the explanation of why the code is incorrect. As an example:

  1|56 col 1| Always unpack @_ first.  See page 178 of PBP  (Severity: 4)$
  2|72 col 48| “die” used instead of “croak”.  See page 283 of PBP  (Severity: 3)$

Give it a try, it is worth it, especially if you are learning how to code Perl.

As for Python, I find its indentation system, where everything has to match 4 spaces, pretty annoying at first. But it is as simple as setting some options in Vim to make it work for you. Configure your .vimrc to look like this:

set expandtab
set tabstop=4
set softtabstop=4
set shiftwidth=4
set autoindent
set number
set list

The most important options are expandtab, which converts your tabs to spaces, and tabstop, since Python uses 4 spaces for indentation.
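If you prefer to keep these settings Python-specific rather than global, a sketch using a FileType autocommand in the same .vimrc:

```vim
" Apply the indentation settings only to Python buffers,
" leaving other file types untouched.
autocmd FileType python setlocal expandtab tabstop=4 softtabstop=4 shiftwidth=4 autoindent
```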

Enjoy your coding!! 🙂

Virtual test environment for Cluster 3.2

It’s been a while; as you can see, I’m not a full-time blogger. 🙂
Anyway, due to the nature of my work, I need to test some things before applying them live, and it’s a bit expensive to keep several servers just for that.
It’s also good if we want to test something really brand new: a new feature, a patch that gets more things done, and so on, or even to get ready for a certification exam.
So this post will mostly be about the initial setup, using the following software:

      1. Host: My laptop with Fedora 13 x86_64
      2. Virtualization software: Oracle VirtualBox 3.2.8
      3. Architecture: 2-node cluster
      4. Software: Oracle Cluster 3.2u3
      5. Operating System: Oracle Solaris 10/09 x86

Initial difficulties that I had:

      1. Where would I put the /globaldevices filesystem?
      2. How would I configure the interconnect interfaces?
      3. What would be the quorum device?

And the answers that I found were as follows:

      1. Searching a bit, I found in the documentation that we can use /globaldevices on ZFS.

    2. Another thing I had doubts about was the interconnect. VirtualBox has an internal network interface and a host-only interface. Since the interconnects have to be completely isolated from any other traffic, I had to create two internal networks. The commands used were (run once per node; the VM names are placeholders):

VBoxManage modifyvm <node1> --intnet1 interconnect1
VBoxManage modifyvm <node2> --intnet1 interconnect1
VBoxManage modifyvm <node1> --intnet2 interconnect2
VBoxManage modifyvm <node2> --intnet2 interconnect2

Afterwards, in the VirtualBox GUI, I just needed to set adapters 2 and 3 to match the interconnects.
3. As for the quorum device… here I had 3 choices:

        3.1 Quorum disk device
        3.2 NAS server – requires another VM
        3.3 Quorum server

I ended up going with the quorum disk device, thanks to VirtualBox being able to share the same disk across virtual machines.
The requisites to be able to do this are:

      1. The disk image file must be created with a fixed size; it cannot be dynamically allocated.
      2. Before attaching the disk to a virtual machine, we need to execute the following command:

VBoxManage modifyhd --type shareable

After applying the change, you can attach the disk normally to each virtual machine. In my case I added a SAS controller to each virtual machine and attached the shared devices under that SAS controller.
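Putting the two requisites together, a sketch of creating and sharing a quorum disk (quorum.vdi and the 1024 MB size are made-up examples):

```shell
# Create a fixed-size image; shareable disks cannot be
# dynamically allocated.
VBoxManage createhd --filename quorum.vdi --size 1024 --variant Fixed

# Mark it shareable so both cluster nodes can attach it.
VBoxManage modifyhd quorum.vdi --type shareable
```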

Now …
From this point forward it is quite simple. You just need to install each system accordingly.
So …

      1. Install Solaris on each node.
      2. Permit ssh login from one node to the other by exchanging ssh keys:

        2.1. Change PermitRootLogin from no to yes in /etc/ssh/sshd_config
        2.2. Generate the ssh keys for user root:

        solaris10-1# ssh-keygen -t rsa

        2.3. Copy the .pub file to the second node:

        solaris10-1# scp /.ssh/id_rsa.pub root@solaris10-2:/.ssh/authorized_keys

        2.4. Repeat steps 2.1 to 2.3 on the second node

3. Install the cluster software on each node and perform the initial configuration.
NOTE: During the configuration it will ask for the interconnect devices. Don’t forget to specify as interconnects the adapters that are on the VirtualBox internal networks.
4. After all that is done, it’s time to specify the quorum device.

        4.1 Properly identify the shared devices. In my case they were under c2*
        4.2 Label each disk so it is recognized by Solaris
        4.3 Create partitions as you wish
        4.4 Execute on both nodes: cldev populate; cldev show

From this point forth you are ready to toy a bit with your two-node cluster. 🙂
By the way, you will see several messages about the system being unable to write the keys to the quorum device; at this point I haven’t looked into it properly. I will update this post when I have time to take an in-depth look.