iSCSI target & multipathing with FreeBSD 10.3

It’s time to create an inexpensive storage system that shares ZFS block devices using the native iSCSI target with MPIO. I could use something like FreeNAS or NAS4Free… but where is the fun in that? 🙂

To reach my goal I need a server with some disk space that I will make available to other servers (iSCSI initiators) and a few network interfaces that will provide the paths used for multipathing. I have a Dell PowerEdge R720 with twelve SAS (10k RPM) disks, four SSDs, a quad-core CPU, 48GB of RAM, and two 10GbE NICs.

There is no point describing the system installation process; let’s assume there is an already installed and upgraded FreeBSD 10.3-RELEASE.

  1. iSCSI interfaces and routing configuration
    An iSCSI target with MPIO on FreeBSD is not a trivial case. The main problem is balancing outgoing iSCSI traffic across all configured interfaces. I tried to configure policy-based routing with FIBs and PF, which I like the most, but with no success. I could not find a way to force ctld(8) to send TCP packets through two interfaces configured in the same network. It’s obvious why: there is no way to make ctld(8) use two different FIBs at the same time. I also tried round-robin link aggregation, but it’s not a nice solution. Finally, the simplest solution proved to be the best: I added routes in the main routing table to specific IPs through specific gateways.

    target# ifconfig bge1 inet netmask mtu 9000
    target# ifconfig bge3 inet netmask mtu 9000
    target# route add -host -ifa
    target# route add -host -ifa

    The important thing is mtu 9000, which enables Jumbo Frames on the iSCSI NICs.
    The main routing table now has two different paths to the same host, which is the iSCSI initiator.

    target# netstat -nr
    Routing tables
    Destination        Gateway            Flags      Netif Expire
    default           UGS       lagg0
    [...]                                 UGHS       bge1
    [...]                                 UGHS       bge3
    [...]          link#5             UH          lo0
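    The ifconfig and route settings above are lost on reboot. A minimal sketch of persisting them in /etc/rc.conf, assuming hypothetical addresses (10.0.0.1/10.0.0.2 on the target, 10.0.0.11/10.0.0.12 on the initiator; substitute your own):

    ```shell
    # /etc/rc.conf fragment -- all addresses below are hypothetical examples
    ifconfig_bge1="inet 10.0.0.1 netmask 255.255.255.0 mtu 9000"
    ifconfig_bge3="inet 10.0.0.2 netmask 255.255.255.0 mtu 9000"
    static_routes="iscsi1 iscsi2"
    route_iscsi1="-host 10.0.0.11 -ifa 10.0.0.1"
    route_iscsi2="-host 10.0.0.12 -ifa 10.0.0.2"
    ```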
  2. ZFS block device
    Before creating a zfs(8) block device, create a ZFS pool with the zpool(8) command. In this particular server, however, there is a PERC H710 Mini controller, which does not support JBOD or non-RAID access to physical disks, so I had to create sixteen RAID0 virtual disks first and then create the ZFS pool. Fortunately, FreeBSD has a MegaCLI port available, which provides a CLI to configure the RAID controller.

    # zpool create storfs raidz mfid0 mfid1 mfid2 mfid3 mfid4 mfid5 \
      raidz mfid6 mfid7 mfid8 mfid9 mfid10 mfid11 \
      cache mfid12 mfid13 \
      log mirror mfid14 mfid15
    # zfs set mountpoint=none storfs
    # zfs create storfs/iscsitargets
    # zfs set dedup=on storfs/iscsitargets
    # zfs create -V 1T storfs/iscsitargets/it01l0
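    A quick sanity check of the layout created above can be done with standard ZFS commands; the exact output will differ per system:

    ```shell
    target# zpool status storfs        # expect two raidz vdevs, two cache disks, mirrored log
    target# zfs list -t volume         # the 1T zvol storfs/iscsitargets/it01l0 should be listed
    target# zfs get dedup,compressratio storfs/iscsitargets
    ```

    Keep in mind that dedup=on maintains a deduplication table that costs RAM; it is worth watching memory usage as the volume fills.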
  3. Native iSCSI target daemon
    ctld(8) is the CAM target layer daemon that handles incoming iSCSI connections. My configuration is simple and based on the sample configuration described in ctl.conf(5). The main thing is the listen variable, which determines which IP addresses ctld(8) has to bind to.

    target# cat /etc/ctl.conf
    portal-group pgroup0 {
    	discovery-auth-group no-authentication
    }
    target {
    	alias "target01"
    	auth-group no-authentication
    	portal-group pgroup0
    	lun 0 {
    		path /dev/zvol/storfs/iscsitargets/it01l0
    		blocksize 4096
    		size 1T
    	}
    }

    To start ctld(8), add ctld_enable="YES" to /etc/rc.conf and run /etc/rc.d/ctld start. After that, the sockstat(1) command should return something like this:

    target# sockstat | grep 3260
    root     ctld       987   6  tcp4    *:*
    root     ctld       987   7  tcp4    *:*
  4. iSCSI initiator
    Ubuntu 12.04.5 LTS x86_64 is installed on the server on the other side of the iSCSI SAN network. Before logging in to the target I have to configure two NICs in the same network, set up policy-based routing with iproute2, and install multipath-tools for multipathing purposes. I mention them here but I’m not going to describe them, because they are not the subject of this post. Let’s assume that there are two NICs configured for the iSCSI network with IPs:


    also assume that there is policy-based routing configured on the initiator

    initiator# ip route show
    default via dev bond0  metric 100 
    [...] dev eth1  proto kernel  scope link  src
    [...] dev eth3  proto kernel  scope link  src

    and finally that the multipath-tools are installed on the initiator system.

    I used open-iscsi to connect to the iSCSI target configured above. First, I have to set up the iSCSI interfaces…

    initiator# iscsiadm -m iface -I eth1 -o new
    initiator# iscsiadm -m iface -I eth1 -n iface.iscsi_ifacename=eth1 -o update
    initiator# iscsiadm -m iface -I eth3 -o new
    initiator# iscsiadm -m iface -I eth3 -n iface.iscsi_ifacename=eth3 -o update

    … second, discover the paths to the iSCSI target:

    initiator# iscsiadm -m discovery -t st -p -P1
    		Iface Name: eth3
    		Iface Name: eth1
    		Iface Name: eth3
    		Iface Name: eth1
    initiator# iscsiadm -m discovery -t st -p -I eth1 -I eth3,2,2,2,2

    … and third, log in to the iSCSI target.

    initiator# iscsiadm -m discovery -t st -p -l

    Note that there are only two physical paths, because both servers have two network interfaces configured for iSCSI traffic, but iscsiadm(8) reports four paths. This is because the initiator can establish four TCP sessions with the target (each initiator interface to each target portal).

    After that I can check the established iSCSI sessions:

    initiator# iscsiadm -m session -o show
    tcp: [1],2
    tcp: [2],2
    tcp: [3],2
    tcp: [4],2
    initiator# multipath -l
    size=20T features='1 queue_if_no_path' hwhandler='0' wp=rw
    `-+- policy='round-robin 0' prio=-1 status=active
      |- 9:0:0:0  sdd 8:48  active undef running
      |- 7:0:0:0  sdb 8:16  active undef running
      |- 8:0:0:0  sdc 8:32  active undef running
      `- 10:0:0:0 sdh 8:112 active undef running
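    The round-robin policy shown above can also be pinned down explicitly in /etc/multipath.conf on the initiator. A minimal sketch; the wwid and alias are hypothetical placeholders to be taken from the multipath -l output:

    ```shell
    # /etc/multipath.conf fragment -- illustrative values only
    defaults {
            user_friendly_names yes
    }
    multipaths {
            multipath {
                    wwid                 <wwid-from-multipath-l>
                    alias                it01l0
                    path_grouping_policy multibus
                    path_selector        "round-robin 0"
            }
    }
    ```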
  5. Performance test with bonnie++
    I will publish the test results soon. 🙂


  1. iSCSI Initiator and Target Configuration
  2. multiple routing tables roadmap
  3. Source Based Routing With FreeBSD Using Multiple Routing Tables

Nagios – Checking host alive with NMAP

In a complicated IT environment, with many networks, VLANs, different operating systems, and rigorous firewall and access-list rules, it can be difficult to check host state with check_ping, which is the default detection method in Nagios. Moreover, Windows machines often have the ICMP protocol disabled on the firewall. I want to implement some method of auto-discovery in Nagios to save time building a monitoring system for the common operating systems and services. I will write a separate post about auto-discovery. Now I want to focus only on a more reliable method of checking host state.

I wrote a simple Python script that uses the python-nmap module, and implemented it in a CentOS 7 environment. If you want to use the script below on another operating system, you have to install the python-nmap module, e.g. with the pip command.

The script takes the host name or IP address as its argument and performs an nmap scan like:

$ nmap -sn <hostname|ipaddress>

The script code is not pretty, and the WARNING and UNKNOWN states are missing, but for the purposes of this post it is sufficient. Maybe in the future I will rewrite this check.

#! /usr/bin/env python
# -*- coding: utf8 -*-
import sys
import argparse
import nmap

parser = argparse.ArgumentParser(description="Process some options.")
parser.add_argument('-H', help="Hostname or IP address to check.")
args = parser.parse_args()

def main():
  nm = nmap.PortScanner()
  nm.scan(hosts=args.H, arguments='-sn')
  result = [nm[host] for host in nm.all_hosts()]

  if result == []:
    print "NMAP HOST ALIVE: CRITICAL"
    return 2
  print "NMAP HOST ALIVE: OK"
  return 0

if __name__ == "__main__":
  sys.exit(main())

The check script should be placed in the $USER1$ path defined in the Nagios resource file, or in a different path, but then you have to define a new Nagios resource containing the full path to the script folder. Obviously, the script should have execute permissions. For example, on CentOS 7:

$ sudo cp check_nmap /usr/lib64/nagios/plugins/check_nmap
$ sudo chmod 755 /usr/lib64/nagios/plugins/check_nmap

Before going further, check the execution of the script:

/usr/lib64/nagios/plugins/check_nmap -H

This should return either “NMAP HOST ALIVE: OK” or “NMAP HOST ALIVE: CRITICAL”.
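Nagios derives the host state from the plugin’s exit code: 0 means OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN; the script above returns 0 and 2 accordingly. A quick shell sketch of the convention:

```shell
# simulate an OK plugin run and inspect its exit code, as Nagios would
sh -c 'echo "NMAP HOST ALIVE: OK"; exit 0'
echo "exit code: $?"   # prints "exit code: 0"
```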

Now, in the Nagios configuration, a command should be defined which uses the check script, and the check_command option in the host object should refer to this command. The command definition:

define command {
    command_name check-host-alive-nmap
    command_line $USER1$/check_nmap -H $HOSTADDRESS$
}

and the host object definition:

define host {
    host_name monitored-hostname
    alias     monitored-hostname
    check_command check-host-alive-nmap
}

So, when all of the above is done, the Nagios configuration should be checked with the command:

# nagios -v /etc/nagios/nagios.cfg

and when the configuration is OK, restart Nagios:

# systemctl restart nagios

All hosts that appeared in the “DOWN” state in the Nagios web interface should start changing state to “UP”.


Replace admin (or any user) password in dokuwiki… on Debian.

I recently asked Google how to restore the admin password in Dokuwiki. Obviously, I found the answer, but it was incomplete for me.

On Debian, the Dokuwiki engine is stored at /usr/share/dokuwiki, but the application’s data files are stored at /var/lib/dokuwiki. In this directory, in the subdirectory ./acl, you can find the file users.auth.php. This file is something like the passwd file on UNIX: it contains usernames and password hashes. Dokuwiki uses the MD5 algorithm to hash passwords. To replace the hash of the forgotten password with the hash of a new one, run:

# mkpasswd -m md5 secretpassword

and paste the output hash into the mentioned users.auth.php file in place of the old admin password hash. Save the file, reload the Dokuwiki page, and log in.
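For orientation, each line in users.auth.php follows the colon-separated format login:passwordhash:Real Name:email:groups, and mkpasswd -m md5 produces an MD5-crypt hash starting with $1$. A hypothetical entry (hash shortened, all values are placeholders):

```shell
# users.auth.php format: login:passwordhash:Real Name:email:groups
admin:$1$saltsalt$abcdefghijklmnop:Administrator:admin@example.org:admin,user
```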

Redirecting fetching data stream to unarchive program

In some cases one would like to redirect a fetched data stream straight into an unpacking program instead of storing it on disk, unpacking it to a specific folder, and then removing the fetched archives. This is useful, for example, when fetching the FreeBSD base system and lib32 distribution and unpacking them into a jail directory. This can be done with commands like:

# fetch -q -o - | tar -C /jails/vms/${jailname}/ -xzvf -
# fetch -q -o - | tar -C /jails/vms/${jailname}/ -xzvf -
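The same streaming idea can be tried locally without a network: anything that writes the archive to stdout can stand in for fetch(1). A small self-contained sketch using throwaway paths:

```shell
# pack a directory to stdout and unpack it on the fly,
# without ever storing the .tgz on disk
mkdir -p /tmp/demo_src /tmp/demo_dst
echo "hello" > /tmp/demo_src/file.txt
tar -C /tmp/demo_src -czf - . | tar -C /tmp/demo_dst -xzf -
cat /tmp/demo_dst/file.txt   # prints "hello"
```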


  1. fetch(1) man page
  2. tar(1) man page

Lid switch state change depending on whether the power cable is connected

In most desktop operating systems there is an option to set what the system should do (suspend, hibernate, or nothing) when the power cable is connected or disconnected and the laptop’s lid is opened or closed. However, on FreeBSD with the Gnome3 desktop environment this option is missing.

Probably it’s because the latest versions of FreeBSD use devd instead of hald to provide real-time device state discovery. I think the FreeBSD team will add this feature in the future, but in the meantime you can try my brief instructions to add this missing option. Treat this as an idea or example of how the problem can be solved.

I wrote a simple csh script that uses the sysctl command to change hw.acpi.lid_switch_state depending on the notify event delivered by devd.

#!/bin/csh -f

set oidname = 'hw.acpi.lid_switch_state'
set logger = 'logger -t lid_switch_state -p daemon.notice'
set actual_suspend_state = `sysctl -n ${oidname}`

if ( $#argv != 1 ) then
	echo "Usage: $0 [0x00|0x01]"
	exit 1
endif
set state = ${1}

switch (${state})
case 0x01:
	set suspend_state = 'NONE'
	${logger} 'power cable connected'
	breaksw
case 0x00:
	set suspend_state = 'S3'
	${logger} 'power cable disconnected'
	breaksw
default:
	echo "Usage: $0 [0x00|0x01]"
	exit 1
endsw

test -n "${suspend_state}"
if ( $status == 0 ) then
	sysctl ${oidname}=${suspend_state} >& /dev/null
	${logger} "${oidname}: ${actual_suspend_state} -> ${suspend_state}"
else
	${logger} 'suspend_state value is zero length'
endif

exit 0

Copy the above code to the file /usr/local/scripts/lid_switch_state, make it executable, and test it. In one terminal window run the script, and in a second one watch the message log. The command:

# /usr/local/scripts/lid_switch_state 0x00

should return in message log:

Apr 24 12:03:38 sashagrey lid_switch_state: power cable disconnected
Apr 24 12:03:38 sashagrey lid_switch_state: hw.acpi.lid_switch_state: NONE -> S3

and the command:

# /usr/local/scripts/lid_switch_state 0x01

should return in message log:

Apr 24 12:04:51 sashagrey lid_switch_state: power cable connected
Apr 24 12:04:51 sashagrey lid_switch_state: hw.acpi.lid_switch_state: S3 -> NONE

If all goes well, it’s time to configure devd. You need to add the following piece of configuration to the /etc/devd.conf file:

notify 11 {
	match "system"		"ACPI";
	match "subsystem"	"ACAD";
	action "/usr/local/scripts/lid_switch_state $notify";
};

below this section:

# Switch power profiles when the AC line state changes.
notify 10 {
        match "system"          "ACPI";
        match "subsystem"       "ACAD";
        action "/etc/rc.d/power_profile $notify";
};

… and You need to restart devd:

# /etc/rc.d/devd restart

Now, try to unplug the power cord from your laptop and check the message log and hw.acpi.lid_switch_state. When it changes to 'NONE', you can close the lid and the laptop will not suspend; when it changes to 'S3', the laptop will suspend when you close the lid.


  1. devd(8) man page
  2. thread on FreeBSD forum

CFEngine on FreeBSD 10.1 – ‘masterfiles’ hint and cf-serverd quick start

Recently I was looking for a system for automatic management of servers over the local network. One of those that I tried is CFEngine. Installation was easy, but I had a problem with self-bootstrapping of the policy server. The masterfiles that come with the package are not suitable for version 3.6.5, which installs with sysutils/cfengine, and bootstrapping the policy server returns a lot of syntax errors in the masterfiles.

The solution to the problem is to fetch the proper masterfiles from the CFEngine download page and untar them into the /var/cfengine/ directory… or to clone them from github and install them according to the instructions in the file.

Note that the masterfiles distributed by CFEngine aren’t really required to bootstrap the policy server. The minimum requirement is to create /var/cfengine/masterfiles with some example promise like ‘hello world’. Nonetheless, it’s better to use CFEngine’s masterfiles, because they do some things during bootstrapping; for instance, they copy the cf-* binaries from /usr/local/sbin/ to /var/cfengine/bin/ and link them back to the default installation folder.

Ok, this was the ‘masterfiles’ hint… and now let’s start cf-serverd.

  1. Install cfengine
    # pkg install cfengine
    # rehash
    # cf-promises -V
    CFEngine Core 3.6.5
  2. Post-installation tasks
    # cd /var/cfengine
    # cf-key
    # cp /usr/local/sbin/cf-* bin/
    # fetch
    # tar xzvf masterfiles-3.6.5.tar.gz
    # rm masterfiles-3.6.5.tar.gz
  3. Bootstrapping the policy server
    # /var/cfengine/bin/cf-agent -B
    2015-04-20T10:22:08+0000   notice: Bootstrap to '' completed successfully!

After that sockstat should return:

root     cf-serverd 76522 4  stream -> ??
root     cf-serverd 76522 5  stream -> ??
root     cf-serverd 76522 6  tcp4     *:*
root     cf-serverd 76522 7  dgram  -> /var/run/logpriv


  1. Community version installation guide
  2. CFEngine masterfiles on github
  3. thread about automation tools on FreeBSDs forum

Install commercial SSL certificate for Zimbra

I had to install a global SSL certificate on my Zimbra servers. Note that it’s not a separate certificate for the domain handled by Zimbra. The main purpose is to encrypt HTTPS connections to the webclient.

I already have an SSL certificate signed by a commercial CA. The following steps describe the certificate installation process:

  1. Copy the key file to /opt/zimbra/ssl/zimbra/commercial/commercial.key
  2. Copy the CA certificate and your certificate to root’s home on the Zimbra server. Let’s name them commercial_ca.crt and commercial.crt
  3. Verify and install the certificates by running the commands below as root (make sure you are in root’s home):
    zimbra# /opt/zimbra/bin/zmcertmgr verifycrt comm /opt/zimbra/ssl/zimbra/commercial/commercial.key commercial.crt commercial_ca.crt
    Valid Certificate: commercial.crt: OK
    zimbra# /opt/zimbra/bin/zmcertmgr deploycrt comm commercial.crt commercial_ca.crt
    ** Verifying commercial.crt against /opt/zimbra/ssl/zimbra/commercial/commercial.key
    Certificate (commercial.crt) and private key (/opt/zimbra/ssl/zimbra/commercial/commercial.key) match.
    Valid Certificate: commercial.crt: OK
    ** Copying commercial.crt to /opt/zimbra/ssl/zimbra/commercial/commercial.crt
    ** Appending ca chain commercial_ca.crt to /opt/zimbra/ssl/zimbra/commercial/commercial.crt
    ** Importing certificate /opt/zimbra/ssl/zimbra/commercial/commercial_ca.crt to CACERTS as zcs-user-commercial_ca...done.
    ** NOTE: mailboxd must be restarted in order to use the imported certificate.
    ** Saving server config key zimbraSSLCertificate...done.
    ** Saving server config key zimbraSSLPrivateKey...done.
    ** Installing mta certificate and key...done.
    ** Installing slapd certificate and key...done.
    ** Installing proxy certificate and key...done.
    ** Creating pkcs12 file /opt/zimbra/ssl/zimbra/
    ** Creating keystore file /opt/zimbra/conf/keystore...done.
    ** Installing CA to /opt/zimbra/conf/ca...done.

    If the Zimbra installation is a multi-server one, the above should be performed on all servers.

  4. After that, mailboxd should be restarted to use the imported certificate. I installed my cert on the Zimbra proxy server, so there is no mailboxd running there, and I restarted all installed Zimbra services.
    # sudo -u zimbra -i
    $ zmcontrol restart

Mencoding video files to xBox360 compatible formats

After hours of googling I figured out how to convert video files to xBox360-compatible formats. I created a mencoder configuration file that contains two profiles. One of them is a base profile used to convert any video file to an xBox360-compatible avi format, and the other profile is used to recode video files with subtitles in txt format.

This is my mencoder.conf file:

profile-desc="Produces xBox360 compatible avi format."

profile-desc="Produces xBox360 compatible avi format with subtitles."

Place the mencoder.conf file into ~/.mplayer directory, and run command:

$ mencoder -profile xBox360sub -sub subtitles_filename.txt \
-o filename_with_subtitles.avi video_filename

Modify ACL for shared folder with zmmailbox or zmprov

Let’s say that there is a shared folder named SharedFolder, shared from one account to another, and the receiving user has insufficient privileges on it. To check the permissions of the shared folder, log into the server as the zimbra user and run:

zimbra$ zmmailbox -z -m gfg -v "/SharedFolder"
{
     "id": "c75ba666-90c8-42fd-f981-c12dc123fa3a",
     "name": "",
     "permissions": "rwix",
     "type": "usr"
}

We are interested in the line with permissions. As you can see, the user can read, write, insert, and perform workflow actions. But the user also wants permission to delete. To modify the existing permissions, run the following command as the zimbra user:

zimbra$ zmmailbox -z -m mfg "/SharedFolder" \
account "" rwdix

Now, if we want to do the same modification with the zmprov command:

zimbra$ zmprov sm gfg -v "/SharedFolder"
zimbra$ zmprov sm mfg "/SharedFolder" \
account "" rwdix

Finally, if we want to do such a modification for a large number of accounts, it is better to put all the zmprov commands described above, without the leading “zmprov” word, into a single text file and run as the zimbra user:

zimbra$ zmprov -f /path/to/file/with/commands
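Such a batch file simply contains one zmprov command per line. A hypothetical example (all account names are placeholders):

```shell
# /tmp/folder-grants.zmprov -- fed to: zmprov -f /tmp/folder-grants.zmprov
sm owner1@example.org mfg "/SharedFolder" account user1@example.org rwdix
sm owner2@example.org mfg "/SharedFolder" account user1@example.org rwdix
```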


  1. zmmailbox – Zimbra wiki
  2. zmprov – Zimbra wiki

Connecting via OpenSSH to Avaya’s ERS 5xxx

Enabling the ssh server on Avaya’s ERS 5xxx is easy, but connecting to it with a modern version of OpenSSH can be more complicated. In my case the ssh client version is OpenSSH_6.6.1p1, OpenSSL 1.0.1j-freebsd 15 Oct 2014, and I had problems with cipher and kex-algorithm negotiation. The connection was closed by the remote host after the kex-algorithm negotiation started, and I could not log in to the device.

debug1: sending SSH2_MSG_KEXDH_INIT
debug1: expecting SSH2_MSG_KEXDH_REPLY
Connection closed by

Configuring the ssh client solves the problem… but let’s start from the beginning.

Here are two configuration steps that enable connecting with OpenSSH to an Avaya ERS 5xxx device.

  1. Configuring ssh server on ERS 5520
    For simplicity I assume that there is no need for strong passwords with a minimum length, minimum number of digits, capitals, and special characters. Therefore I disabled password security. It should be noted that password security is enabled by default in firmware marked with an “s” (secure).

    ERS5520#configure terminal
    ERS5520(conf)#no password security
    ERS5520(conf)#cli password read-write secretrwpassword
    ERS5520(conf)#cli password read-only secretropassword
  2. Configuring the ssh client to use proper ciphers and kex algorithms
    Unfortunately, after some time of searching the web, reading, and testing, I found that the firmware installed on my switch uses legacy ciphers (3des-cbc) and kex algorithms (diffie-hellman-group1-sha1).
    My ssh config for that device looks like this:

    $ cat .ssh/config
    # Avaya 5520
    Host ERS5520
         User RW
         Ciphers 3des-cbc
         KexAlgorithms diffie-hellman-group1-sha1

Now the negotiation of ciphers and kex algorithms goes well, the connection isn’t closed, and finally you can log in to the switch with the configured password.
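If you prefer not to touch ~/.ssh/config, the same options can be supplied ad hoc on the command line (the switch address is a placeholder):

```shell
$ ssh -c 3des-cbc -o KexAlgorithms=diffie-hellman-group1-sha1 RW@<switch-address>
```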