Testing different SSL certificates on one IP address

This isn’t a “full size” post; it’s rather a note to myself, for the times I don’t want to search the web. 🙂

I have a number of nginx configurations, with different SSL certificates, for different domains on one IP address. Instead of searching Google or the openssl(1) documentation every time I need to test a certificate, I decided to post the command here:

openssl s_client -showcerts -connect IP_ADDRESS:443 -servername DOMAIN.TLD

Replace DOMAIN.TLD with the domain you want to check, and IP_ADDRESS with the server’s address…
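For reference, the subject and validity dates of whatever certificate the server presents can be extracted by piping the s_client output to openssl x509. Below is a self-contained sketch of mine: it inspects a throwaway self-signed certificate instead of a live server, and the paths and the example.test domain are placeholders.

```shell
#!/bin/sh
# Generate a throwaway self-signed certificate standing in for
# the one a real server would present (domain is a placeholder).
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -days 1 -subj "/CN=example.test" 2>/dev/null

# Print the fields usually checked: subject, issuer, validity dates.
openssl x509 -in /tmp/demo.crt -noout -subject -issuer -dates

# Against a live server the same inspection would look like:
#   openssl s_client -connect IP_ADDRESS:443 -servername DOMAIN.TLD \
#     </dev/null 2>/dev/null | openssl x509 -noout -subject -dates
```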

Using an if statement in crontab jobs

Using cron jobs, in general, is very simple. In most cases, just place an executable shell script into a /etc/cron.{d,hourly,daily,weekly} directory, or put the command with its execution time into the crontab -e editor. But there are situations when someone wants to run a cron job only under certain conditions. The first thought is to add an if statement to a cron script placed in one of the folders mentioned above. However, it is also possible to add a condition to a user’s cron job configuration, using crontab -e.

Please note that I’ve used line breaks (backslashes) in the code blocks of this post, but they cannot be used in real cron job configuration. Cron job definitions have to be on one line.

Here is an example. Let’s say there is a command that should run only if there is no cronjob.lock file.

* * * * * [ ! -f /tmp/cronjob.lock ] && \
  echo "$(date): lock file does not exist" >> /tmp/cronjob.out

The example above looks like this because condition processing is based on the return code of the conditional expression. So if [ ! -f /tmp/cronjob.lock ] returns 0, the command will run, and if the test statement returns a non-zero exit status, the command will not run. I’ve checked that you can also use the classic if-statement syntax, and it works, but it’s not a popular approach.

* * * * * if [ ! -f /tmp/cronjob.lock ] ; then \
  echo "$(date): lock file does not exist" ; \
  fi >> /tmp/cronjob.out

The next level of complication is to add an else branch to the condition. It’s simple and intuitive with the classic if-statement syntax, but with the AND and OR list operators it’s a little more complicated. The crucial thing is to understand how pipelines and lists work; this is well documented in the bash(1) manual, in the sections Pipelines and Lists.

* * * * * [ ! -f /tmp/cronjob.lock ] && \
  echo "$(date): lock file does not exist" >> /tmp/cronjob.out || \
  echo "$(date): lock file exists" >> /tmp/cronjob.out

The cron job above will write the date plus the appropriate message to the output file, depending on the return code of the test statement. Note that a && b || c is not an exact if/else: if b itself fails, c runs as well, so this pattern is only safe when the middle command cannot fail.
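For real locking (rather than just testing whether a file exists), a common alternative on Linux is flock(1) from util-linux. This is my sketch, not something from the crontab examples above, and the /tmp paths are placeholders; flock holds a kernel lock on the file while the job runs.

```shell
#!/bin/sh
# flock(1) takes an exclusive lock on /tmp/cronjob.lock while the
# job runs; -n makes it fail immediately instead of waiting if
# another instance already holds the lock. Linux-specific (util-linux).
rm -f /tmp/cronjob.out
flock -n /tmp/cronjob.lock \
  sh -c 'echo "$(date): got the lock" >> /tmp/cronjob.out' \
  || echo "$(date): another instance is running" >> /tmp/cronjob.out
cat /tmp/cronjob.out
```

In a crontab this would again have to be a single line, e.g. `* * * * * flock -n /tmp/cronjob.lock /path/to/job`.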


XtraBackup/mariabackup and a large number of file descriptors

A few days ago, I had to establish replication between a number of MariaDB nodes… boring. I decided to use mariabackup, which is the MariaDB equivalent of xtrabackup. I had never used this tool before, so it was surprising that instead of a backup I got the error “InnoDB: Error number 24 means ‘Too many open files’”.

I spent a number of minutes googling this error and found the solution (eureka!). It often appears when the “innodb_file_per_table” option is enabled in the MariaDB server configuration. With this option, MariaDB stores each database table in a separate file, so it’s obvious the operating system has to open a large number of file descriptors.

In my case, the only thing I had to do was add the ‘--open-files-limit’ option to the mariabackup command, but in more “extreme” situations you may have to follow the steps described in the link at the end of this post.

mariabackup --backup \
  --open-files-limit=$(expr $(find /var/lib/mysql/ -name "*.ibd" | wc -l) \* 2) \
  --target-dir=/opt/mariadb-backup/$(date "+%Y%m%d%H%M%S")/
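To estimate a sensible value beforehand, the same arithmetic as in the expr call can be run by hand. Here is a sketch of mine against a demo directory; a real server would use the actual datadir (usually /var/lib/mysql).

```shell
#!/bin/sh
# Demo directory standing in for the MariaDB datadir
# (/var/lib/mysql on a real server).
datadir=/tmp/demo-datadir
mkdir -p "$datadir"
touch "$datadir/t1.ibd" "$datadir/t2.ibd" "$datadir/t3.ibd"

# One descriptor per .ibd file, doubled for headroom -- the same
# arithmetic as the expr call in the mariabackup command above.
ibd_count=$(find "$datadir" -name "*.ibd" | wc -l | tr -d ' ')
needed=$((ibd_count * 2))
echo "suggested --open-files-limit: $needed (current ulimit -n: $(ulimit -n))"
```

On a real server, compare the suggested value with `ulimit -n` before starting the backup.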


Extending a disk in Linux virtual machines

Every time I need to resize a partition of a Linux virtual machine hosted on a VMware hypervisor (which I do very rarely), I ask Google for the exact procedure. A lot of links describe how to add a new block device to the virtual machine and extend the logical volume by adding the new device to the volume group. This approach leads to a large number of disk files on the datastore, which is not recommended by VMware best practices. The second thing I found is removal and re-creation of the partition, which I consider risky in a production environment, and (if I’m not mistaken) it requires unmounting the partition.

I decided to write up a procedure without those disadvantages. The description below involves enlarging the disk file in the virtual machine settings, creating a new partition with (for example) fdisk, extending the volume group and logical volume with the LVM commands, and finally resizing the filesystem online.

To enlarge a Linux volume safely, easily and online, follow the steps below:

  1. Resize disk in virtual machine settings
    For ESX 3.5 or later:

    • Open VMware Infrastructure (VI) Client and connect to VirtualCenter or the ESX host.
    • Right-click the virtual machine.
    • Click Edit Settings.
    • Select Virtual Disk.
    • Increase the size of the disk.
      Note: If this option is greyed out, the disk may be running on snapshots or the disk
      may be at the maximum allowed size depending on the block size of the datastore.

    Follow the steps in Increasing the size of a disk partition (1004071) so the guest operating
    system is aware of the change in disk size.

  2. Rescan SCSI bus in Linux OS
    # for device in `ls -1 /sys/class/scsi_device` ; do \
    echo 1 > /sys/class/scsi_device/$device/device/rescan ; done
  3. Create new partition using free space
    # fdisk /dev/sdX

    where X is the letter of the SCSI device

  4. Rescan extended block device
    # partprobe /dev/sdX
  5. Use LVM tools to extend logical volumes
    • Initialize disk or partition for use by LV
      # pvcreate /dev/sdXY

      where Y is the partition ID created in step 3.

    • Extend volume group
      # vgextend VGname /dev/sdXY

      Replace VGname with the name of the volume group you want to extend.

    • Extend logical volume
      # lvextend -l +100%FREE /dev/mapper/VGname-LVname

      Replace LVname with the name of the logical volume you want to extend.

    • Resize the filesystem on the logical volume
      # resize2fs /dev/mapper/VGname-LVname
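The in-guest part of the procedure (steps 2–5) can be collected into a single script. The version below only echoes each command (a dry run), because running LVM commands against the wrong device is destructive; DISK, PART, VG and LV are placeholders to adjust before removing the echoes.

```shell
#!/bin/sh
# Dry-run sketch of steps 2-5: every command is printed instead of
# executed. DISK, PART, VG and LV are placeholders.
DISK=/dev/sdX
PART=/dev/sdXY
VG=VGname
LV=LVname

run() { echo "would run: $*"; }

# Step 2: rescan the SCSI bus (quoted so the redirect is not executed)
for device in /sys/class/scsi_device/*; do
  run "echo 1 > $device/device/rescan"
done

# Step 3 (fdisk) is interactive and stays manual.
run partprobe "$DISK"                             # step 4
run pvcreate "$PART"                              # step 5
run vgextend "$VG" "$PART"
run lvextend -l +100%FREE "/dev/mapper/$VG-$LV"
run resize2fs "/dev/mapper/$VG-$LV"
```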


  1. VMware Knowledge Base
  2. man 8 pvcreate
  3. man 8 vgextend
  4. man 8 lvextend
  5. Rescanning the SCSI bus in Linux
  6. man 8 partprobe

iSCSI target & multipathing with FreeBSD 10.3

It’s time to create an inexpensive storage system that will share ZFS block devices using the native iSCSI target with MPIO. I could use something like FreeNAS or NAS4Free… but where’s the fun in that? 🙂

To reach my goal I need a server with some disk space that I will make available to other servers (iSCSI initiators), and a few network interfaces that will provide the paths used for multipathing. I have a Dell PowerEdge R720 with twelve SAS (10k RPM) disks, four SSDs, a quad-core CPU, 48 GB of RAM and two 10GbE NICs.

There is no point in describing the system installation process; let’s assume FreeBSD 10.3-RELEASE is already installed and upgraded.

  1. iSCSI interfaces and routing configuration
    An iSCSI target with MPIO on FreeBSD is not a trivial case. The main problem is balancing outgoing iSCSI traffic across all configured interfaces. I tried to configure policy-based routing with FIBs and PF, which I like the most, but with no success: I could not find a way to force ctld(8) to send TCP packets out of two interfaces configured in the same network. It makes sense, because there is no way to make ctld(8) use two different FIBs at the same time. I also tried round-robin link aggregation, but it’s not a nice solution. Finally, the simplest solution proved to be the best: I added host routes to the main routing table, sending traffic to specific initiator IPs through specific interfaces.

    target# ifconfig bge1 inet netmask mtu 9000
    target# ifconfig bge3 inet netmask mtu 9000
    target# route add -host -ifa
    target# route add -host -ifa

    The important part is mtu 9000, which enables jumbo frame support on the iSCSI NICs.
    The main routing table now has two different paths to the same host, which is the iSCSI initiator.

    target# netstat -nr
    Routing tables
    Destination        Gateway            Flags      Netif Expire
    default           UGS       lagg0
    [...]
          UGHS       bge1
          UGHS       bge3
    [...]          link#5             UH          lo0
  2. ZFS block device
    Before creating a zfs(8) block device, create a ZFS pool with the zpool(8) command. This particular server has a PERC H710 Mini controller, which does not support JBOD or non-RAID access to the physical disks, so I had to create sixteen RAID0 virtual disks first and then create the ZFS pool on top of them. Fortunately, FreeBSD has a MegaCLI port available, which provides a CLI to configure the RAID controller.

    # zpool create storfs raidz mfid0 mfid1 mfid2 mfid3 mfid4 mfid5 \
      raidz mfid6 mfid7 mfid8 mfid9 mfid10 mfid11 \
      cache mfid12 mfid13 \
      log mirror mfid14 mfid15
    # zfs set mountpoint=none storfs
    # zfs create storfs/iscsitargets
    # zfs set dedup=on storfs/iscsitargets
    # zfs create -V 1T storfs/iscsitargets/it01l0
  3. Native iSCSI target daemon
    ctld(8) is a CAM target layer daemon that handles incoming iSCSI connections. My configuration is simple and based on the sample configuration described in ctl.conf(5). The main thing is the listen directive, which determines which IP addresses ctld(8) binds to.

    target# cat /etc/ctl.conf
    portal-group pgroup0 {
    	discovery-auth-group no-authentication
    }

    target iqn.2013-07.eu.lapsz:target01 {
    	alias "target01"
    	auth-group no-authentication
    	portal-group pgroup0
    	lun 0 {
    		path /dev/zvol/storfs/iscsitargets/it01l0
    		blocksize 4096
    		size 1T
    	}
    }

    To start ctld(8), add ctld_enable="YES" to /etc/rc.conf and run /etc/rc.d/ctld start. After that, the sockstat(1) command should return something like this:

    target# sockstat | grep 3260
    root     ctld       987   6  tcp4    *:*
    root     ctld       987   7  tcp4    *:*
  4. iSCSI initiator
    Ubuntu 12.04.5 LTS x86_64 is installed on the server on the other side of the iSCSI SAN network. Before logging in to the target, I have to configure two NICs in the same network, set up policy-based routing with iproute2, and install multipath-tools for multipathing. I mention these here but I’m not going to describe them, because they are not the subject of this post. Let’s assume there are two NICs configured for the iSCSI network with their IPs:


    Also assume that policy-based routing is configured on the initiator:

    initiator# ip route show
    default via dev bond0  metric 100 
    [...]
    dev eth1  proto kernel  scope link  src
    dev eth3  proto kernel  scope link  src

    and finally that multipath-tools is installed on the initiator system.

    I used open-iscsi to connect to the iSCSI target configured above. First, I have to set up the iSCSI interfaces…

    initiator# iscsiadm -m iface -I eth1 -o new
    initiator# iscsiadm -m iface -I eth1 -n iface.iscsi_ifacename=eth1 -o update
    initiator# iscsiadm -m iface -I eth3 -o new
    initiator# iscsiadm -m iface -I eth3 -n iface.iscsi_ifacename=eth3 -o update

    … second, discover the paths to the iSCSI target:

    initiator# iscsiadm -m discovery -t st -p -P1
    Target: iqn.2013-07.eu.lapsz:target01
    		Iface Name: eth3
    		Iface Name: eth1
    		Iface Name: eth3
    		Iface Name: eth1
    initiator# iscsiadm -m discovery -t st -p -I eth1 -I eth3
    ,2 iqn.2013-07.eu.lapsz:target01
    ,2 iqn.2013-07.eu.lapsz:target01
    ,2 iqn.2013-07.eu.lapsz:target01
    ,2 iqn.2013-07.eu.lapsz:target01

    … and third, log in to the iSCSI target:

    initiator# iscsiadm -m discovery -t st -p -l

    Note that there are only two physical paths, because both servers have two network interfaces configured for iSCSI traffic, but iscsiadm(8) reports four paths. This is because the initiator can establish four TCP sessions with the target (each initiator interface to each target portal).

    After that I can check the established iSCSI sessions:

    initiator# iscsiadm -m session -o show
    tcp: [1],2 iqn.1999-09.eu.lapsz:target01
    tcp: [2],2 iqn.1999-09.eu.lapsz:target01
    tcp: [3],2 iqn.1999-09.eu.lapsz:target01
    tcp: [4],2 iqn.1999-09.eu.lapsz:target01
    initiator# multipath -l
    size=20T features='1 queue_if_no_path' hwhandler='0' wp=rw
    `-+- policy='round-robin 0' prio=-1 status=active
      |- 9:0:0:0  sdd 8:48  active undef running
      |- 7:0:0:0  sdb 8:16  active undef running
      |- 8:0:0:0  sdc 8:32  active undef running
      `- 10:0:0:0 sdh 8:112 active undef running
  5. Performance test with bonnie++
    I will publish the test results soon. 🙂
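For completeness, the multipath -l output above comes from the default multipath-tools configuration. A minimal /etc/multipath.conf for this kind of setup might look like the sketch below; the wwid and alias are assumptions of mine, so take the real wwid from the `multipath -l` output or /etc/multipath/bindings on the initiator.

```
defaults {
        user_friendly_names yes
}

multipaths {
        multipath {
                # wwid of the iSCSI LUN -- replace with the value
                # reported by 'multipath -l' (this one is a fake placeholder)
                wwid                 3600c0ff000example00000000000000000
                alias                iscsi-target01
                path_grouping_policy multibus
                path_selector        "round-robin 0"
        }
}
```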


  1. iSCSI Initiator and Target Configuration
  2. multiple routing tables roadmap
  3. Source Based Routing With FreeBSD Using Multiple Routing Tables

Nagios – Checking host alive with NMAP

In a complicated IT environment with many networks, VLANs, different operating systems, and rigorous firewall and access-list rules, it can be difficult to check host state with check_ping, the default detection method in Nagios. Moreover, Windows machines often have ICMP disabled on the firewall. I want to implement some method of auto-discovery in Nagios to save time building a monitoring system for the common operating systems and services; I will write a separate post about auto-discovery. For now I want to focus only on a more reliable method of checking host state.

I wrote a simple Python script that uses the python-nmap module, and deployed it in a CentOS 7 environment. If you want to use the script below on another operating system, you have to install the python-nmap module with pip.

The script takes a host name or IP address as its argument and performs an nmap scan like:

$ nmap -sn <hostname|ipaddress>

The script code is not pretty, and the WARNING and UNKNOWN states are missing, but for the purposes of this post it is sufficient. Maybe in the future I will rewrite this check.

#! /usr/bin/env python
# -*- coding: utf8 -*-
import argparse
import nmap

parser = argparse.ArgumentParser(description="Check host state with nmap.")
parser.add_argument('-H', help="Hostname or IP address to check.")
args = parser.parse_args()

def main():
  nm = nmap.PortScanner()
  nm.scan(hosts=args.H, arguments='-sn')
  result = [nm[host] for host in nm.all_hosts()]

  # An empty result means no host answered the ping scan.
  if result == []:
    print "NMAP HOST ALIVE: CRITICAL"
    return 2
  print "NMAP HOST ALIVE: OK"
  return 0

if __name__ == "__main__":
  raise SystemExit(main())

The check script should be placed in the $USER1$ path defined in the Nagios resource file, or in a different path, but then you have to define a new Nagios resource containing the full path to the script folder. Obviously, the script should have execute permissions. For example, on CentOS 7:

$ sudo cp check_nmap /usr/lib64/nagios/plugins/check_nmap
$ sudo chmod 755 /usr/lib64/nagios/plugins/check_nmap

Before going further, check that the script runs:

/usr/lib64/nagios/plugins/check_nmap -H <hostname|ipaddress>

This should return either “NMAP HOST ALIVE: OK” or “NMAP HOST ALIVE: CRITICAL”.

Now, in the Nagios configuration, a command should be defined that uses the check script, and the check_command option in the host object should refer to it. The command definition:

define command {
    command_name check-host-alive-nmap
    command_line $USER1$/check_nmap -H $HOSTADDRESS$
}

and the host object definition:

define host {
    host_name monitored-hostname
    alias     monitored-hostname
    check_command check-host-alive-nmap
}

So, when all of the above is done, the Nagios configuration should be checked with the command:

# nagios -v /etc/nagios/nagios.cfg

and when the configuration is OK, restart Nagios:

# systemctl restart nagios

All hosts that appeared in the “DOWN” state in the Nagios web interface should start changing state to “UP”.


Replace the admin (or any user) password in DokuWiki… on Debian.

I recently asked Google about restoring the admin password in DokuWiki. Obviously, I found the answer, but it was incomplete for me.

On Debian, the DokuWiki engine is stored at /usr/share/dokuwiki, but the application’s data files are stored at /var/lib/dokuwiki. There, in the ./acl subdirectory, you can find the file users.auth.php. This file is something like the shadow passwd file on UNIX: it contains usernames and password hashes. DokuWiki uses the MD5 algorithm to hash passwords. To replace the hash of the forgotten password with the hash of a new one, run:

# mkpasswd -m md5 secretpassword

and paste the output hash into the aforementioned users.auth.php in place of the old admin password hash. Save the file, reload the DokuWiki page, and log in.
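To script the replacement instead of editing the file by hand, something like the following works. This is my own sketch against a demo copy of users.auth.php (the real Debian path is /var/lib/dokuwiki/acl/users.auth.php), and both hashes below are fake placeholders; lines in the file have the form login:hash:Real Name:email:groups.

```shell
#!/bin/sh
# Demo copy of users.auth.php with a fake old hash.
f=/tmp/users.auth.php
printf 'admin:$1$oldsalt$xxxxxxxxxxxxxxxxxxxxxx:Admin:admin@example.test:admin,user\n' > "$f"

# In real use this would be the output of: mkpasswd -m md5 secretpassword
newhash='$1$newsalt$yyyyyyyyyyyyyyyyyyyyyy'

# Swap the second colon-separated field of the 'admin' line.
awk -F: -v OFS=: -v h="$newhash" '$1 == "admin" { $2 = h } { print }' "$f" > "$f.new" \
  && mv "$f.new" "$f"

grep '^admin:' "$f"
```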

Redirecting a fetched data stream to an unarchiving program

In some cases, one would like to redirect a fetched data stream straight to an unpacking program, instead of storing it on disk, unpacking it into a specific folder and then removing the fetched archives. For example, one might want to fetch the FreeBSD base system and lib32 distribution and unpack them into a jail directory. This can be done with these commands:

# fetch -q -o - ftp://ftp.pl.freebsd.org/pub/FreeBSD/releases/amd64/amd64/10.2-RELEASE/base.txz | tar -C /jails/vms/${jailname}/ -xzvf -
# fetch -q -o - ftp://ftp.pl.freebsd.org/pub/FreeBSD/releases/amd64/amd64/10.2-RELEASE/lib32.txz | tar -C /jails/vms/${jailname}/ -xzvf -
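The stream-to-unpacker pattern can be verified locally without any network, with cat standing in for fetch. All paths below are throwaway examples of mine, and gzip is used just for the demo:

```shell
#!/bin/sh
# Build a small archive, then extract it from a stream exactly like
# the fetch | tar pipeline above -- cat stands in for fetch here.
mkdir -p /tmp/streamdemo/src /tmp/streamdemo/dst
echo "hello" > /tmp/streamdemo/src/file.txt
tar -C /tmp/streamdemo/src -czf /tmp/streamdemo/base.tgz file.txt

# Extract directly from the stream, no intermediate file in dst.
cat /tmp/streamdemo/base.tgz | tar -C /tmp/streamdemo/dst -xzf -
cat /tmp/streamdemo/dst/file.txt
```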


  1. fetch(1) man page
  2. tar(1) man page

Lid switch state change depending on whether the power cable is connected

In most desktop operating systems, there is an option to set what the system should do (suspend, hibernate or nothing) when the power cable is connected or disconnected and the laptop’s lid is opened or closed. However, on FreeBSD with the GNOME 3 desktop environment, this option is missing.

It’s probably because the latest versions of FreeBSD use devd instead of hald to provide real-time device state discovery. I think the FreeBSD team will add this feature in the future, but in the meantime you can try my brief instructions to add the missing option. Treat this as an idea, or an example of how the problem can be solved.

I wrote a simple C shell script that uses the sysctl command to change hw.acpi.lid_switch_state depending on the notification delivered by devd.

#!/bin/csh -f

set oidname = 'hw.acpi.lid_switch_state'
set logger = 'logger -t lid_switch_state -p daemon.notice'
set actual_suspend_state = `sysctl -n ${oidname}`

if ( $#argv != 1 ) then
	echo "Usage: $0 [0x00|0x01]"
	exit 1
endif
set state = ${1}

switch (${state})
case 0x01:
	set suspend_state = 'NONE'
	${logger} 'power cable connected'
	breaksw
case 0x00:
	set suspend_state = 'S3'
	${logger} 'power cable disconnected'
	breaksw
default:
	echo "Usage: $0 [0x00|0x01]"
	exit 1
endsw

if ( ${%suspend_state} > 0 ) then
	sysctl ${oidname}=${suspend_state} >& /dev/null
	${logger} "${oidname}: ${actual_suspend_state} -> ${suspend_state}"
else
	${logger} 'suspend_state value is zero length'
endif

exit 0

Copy the code above to /usr/local/scripts/lid_switch_state, make it executable, and test it. In one terminal window run the script, and in a second one watch the message log. The command:

# /usr/local/scripts/lid_switch_state 0x00

should return in message log:

Apr 24 12:03:38 sashagrey lid_switch_state: power cable disconnected
Apr 24 12:03:38 sashagrey lid_switch_state: hw.acpi.lid_switch_state: NONE -> S3

and the command:

# /usr/local/scripts/lid_switch_state 0x01

should return in message log:

Apr 24 12:04:51 sashagrey lid_switch_state: power cable connected
Apr 24 12:04:51 sashagrey lid_switch_state: hw.acpi.lid_switch_state: S3 -> NONE

If all goes well, it’s time to configure devd. You need to add the following piece of configuration to the /etc/devd.conf file:

notify 11 {
	match "system"		"ACPI";
	match "subsystem"	"ACAD";
	action "/usr/local/scripts/lid_switch_state $notify";
};

below this section:

# Switch power profiles when the AC line state changes.
notify 10 {
        match "system"          "ACPI";
        match "subsystem"       "ACAD";
        action "/etc/rc.d/power_profile $notify";
};

… and restart devd:

# /etc/rc.d/devd restart

Now try unplugging the power cord from your laptop and check the message log and hw.acpi.lid_switch_state. When it changes to 'NONE', you can close the lid and the laptop will not suspend; when it changes to 'S3', the laptop will suspend when you close the lid.


  1. devd(8) man page
  2. thread on FreeBSD forum

CFEngine on FreeBSD 10.1 – ‘masterfiles’ hint and cf-serverd quick start

Recently I was looking for a system to automatically manage servers over the local network. One of those I tried is CFEngine. Installation was easy, but I had a problem with self-bootstrapping the policy server. The masterfiles that come with the package are not suitable for version 3.6.5, which sysutils/cfengine installs, and bootstrapping the policy server returns a lot of syntax errors in the masterfiles.

The solution is to fetch the proper masterfiles from the CFEngine download page and untar them into the /var/cfengine/ directory… or to clone them from GitHub and install them according to the instructions in the README.md file.

Note that the masterfiles distributed by CFEngine aren’t really required to bootstrap the policy server. The minimum requirement is to create promises.cf in /var/cfengine/masterfiles with some example promise like ‘hello world’. Nonetheless, it’s better to use CFEngine’s masterfiles, because they do some useful things during bootstrapping; for instance, they copy the cf-* binaries from /usr/local/sbin/ to /var/cfengine/bin/ and link them back to the default installation folder.
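For illustration, a minimal promises.cf along those lines might look like the sketch below. It is only a ‘hello world’ promise of my own, not a replacement for the real masterfiles:

```
body common control
{
  bundlesequence => { "hello" };
}

bundle agent hello
{
  reports:
      "Hello, world!";
}
```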

OK, that was the ‘masterfiles’ hint… now let’s start cf-serverd.

  1. Install cfengine
    # pkg install cfengine
    # rehash
    # cf-promises -V
    CFEngine Core 3.6.5
  2. Post-installation tasks
    # cd /var/cfengine
    # cf-key
    # cp /usr/local/sbin/cf-* bin/
    # fetch http://cfengine.package-repos.s3.amazonaws.com/tarballs/masterfiles-3.6.5.tar.gz
    # tar xzvf masterfiles-3.6.5.tar.gz
    # rm masterfiles-3.6.5.tar.gz
  3. Bootstrapping the policy server
    # /var/cfengine/bin/cf-agent -B
    2015-04-20T10:22:08+0000   notice: Bootstrap to '' completed successfully!

After that sockstat should return:

root     cf-serverd 76522 4  stream -> ??
root     cf-serverd 76522 5  stream -> ??
root     cf-serverd 76522 6  tcp4     *:*
root     cf-serverd 76522 7  dgram  -> /var/run/logpriv


  1. Community version installation guide
  2. CFEngine masterfiles on github
  3. thread about automation tools on the FreeBSD forums