Monday, February 23, 2015

Dropbear SSH vulnerabilities in the stock Arduino Yún: how to update to OpenSSH remotely.

The Arduino Yún ships with Dropbear version 2011.54-2, which has multiple known vulnerabilities and should not be used. If your Arduino is in a remote location and you want to update to OpenSSH without losing remote access to the device, follow these steps.

  • Change the Dropbear port to an unused/free one on your box and restart Dropbear
    uci set dropbear.@dropbear[0].Port=2222
    uci commit dropbear
    /etc/init.d/dropbear restart
  • Reconnect to your Yun via SSH on the configured port above
  • Install the openssh-server
    opkg update
    opkg install openssh-server
  • Enable and start the OpenSSH server. OpenSSH will now listen on port 22
    /etc/init.d/sshd enable
    /etc/init.d/sshd start
  • Reconnect to your Yún via SSH on port 22
  • Now you can disable Dropbear
    /etc/init.d/dropbear disable
    /etc/init.d/dropbear stop
  • Install the openssh-sftp-server package to add support for the SFTP protocol, which SSHFS uses
    opkg update
    opkg install openssh-sftp-server
Log into the Yún from the web interface. Go to Configure > Advanced Configuration > System > Software and, under Installed Packages, remove Dropbear.


22/tcp
High (CVSS: 7.1)
NVT: Dropbear SSH Server Use-after-free Vulnerability (OID: 1.3.6.1.4.1.25623.1.0.105113)
Summary
This host is installed with Dropbear SSH Server and is prone to a use-after-free vulnerability.
Vulnerability Detection Result
Vulnerability was detected according to the Vulnerability Detection Method.
Impact
This flaw allows remote authenticated users to execute arbitrary code and bypass command restrictions via multiple crafted command requests, related to channels concurrency.
Solution
Updates are available.
Affected Software/OS
Versions of Dropbear SSH Server 0.52 through 2011.54 are vulnerable.
Vulnerability Insight
A use-after-free vulnerability exists in Dropbear SSH Server 0.52 through 2011.54 when command restriction and public key authentication are enabled.
Vulnerability Detection Method
Check the version.
Details: Dropbear SSH Server Use-after-free Vulnerability (OID: 1.3.6.1.4.1.25623.1.0.105113)
Version used: $Revision: 809 $
References
CVE: CVE-2012-0920
BID: 52159
Other: http://www.securityfocus.com/bid/52159
https://matt.ucc.asn.au/dropbear/dropbear.html
22/tcp
Medium (CVSS: 5.0)
NVT: Dropbear SSH Server Multiple Security Vulnerabilities (OID: 1.3.6.1.4.1.25623.1.0.105114)
Summary
This host is installed with Dropbear SSH Server and is prone to multiple vulnerabilities.
Vulnerability Detection Result
Vulnerability was detected according to the Vulnerability Detection Method.
Impact
The flaws allow remote attackers to cause a denial of service or to discover valid usernames.
Solution
Updates are available.
Affected Software/OS
Versions prior to Dropbear SSH Server 2013.59 are vulnerable.
Vulnerability Insight
Multiple flaws exist:
  • The buf_decompress function in packet.c in Dropbear SSH Server before 2013.59 allows remote attackers to cause a denial of service (memory consumption) via a compressed packet that has a large size when it is decompressed.
  • Dropbear SSH Server before 2013.59 generates error messages for a failed logon attempt with different time delays depending on whether the user account exists.
Vulnerability Detection Method
Check the version.
Details: Dropbear SSH Server Multiple Security Vulnerabilities (OID: 1.3.6.1.4.1.25623.1.0.105114)
Version used: $Revision: 809 $
References
CVE: CVE-2013-4421, CVE-2013-4434
BID: 62958, 62993
CERT: DFN-CERT-2013-1865, DFN-CERT-2013-1772
Other: http://www.securityfocus.com/bid/62958
http://www.securityfocus.com/bid/62993
https://matt.ucc.asn.au/dropbear/dropbear.html
Source

Saturday, February 14, 2015

Graph Elertus and Google Nest in Nagios/check_mk

I recently got turned onto OMD/check_mk, which is probably the most user-friendly way to run Nagios. It ships a nearly complete Nagios stack, and more:

  • Nagios
  • Monitoring Plugins
  • check_nrpe
  • mrpe (a check_mk clone of nrpe)
  • Icinga
  • Shinken
  • NagVis
  • pnp4nagios
  • rrdtool/rrdcached
  • Check_MK
  • MK Livestatus
  • Multisite
  • Dokuwiki
  • Thruk
  • Mod-Gearman
  • check_logfiles
  • check_oracle_health
  • check_mysql_health
  • jmx4perl
  • check_webinject
  • check_multi


It has a very simple install and works with Debian/Ubuntu/Red Hat/SUSE. I recently deleted my Icinga build and went all in with this.

As well as regular servers, VMware and other things, I'm able to graph the Google Nest's temperature, humidity and "leaf" status. You can set up alerts if a reading crosses a certain threshold, and keep graphs for as many years as you wish.

Check_MK with Google Nest

This is something that is sorely lacking on Google's current site, since the graphing data is only stored for 10 days and doesn't provide the granular level of detail I want.

Google Nest's graphs.

While I won't go into much detail on the install of OMD (it's pretty straightforward), and the Nest integration is already well covered (great writeup on how to do that here), I will show a way to get another product, the Elertus, monitored with check_mk as an example of what you can do with it.

The Elertus is a Wi-Fi temperature, sound, light and water detector that runs on batteries. I wrote up an article about it on this blog before. It's a little trickier to monitor than the Nest because there is no API and no way to scrape information off the device, since it only makes outbound connections.

Since my last post a year ago, nothing has changed with it, nor has my opinion. It's still sluggish: no graphing, no "all clear" alerts... and a painfully slow (and useless) app to complement it. The plus side is that it's been almost a year and the AA batteries still work. I have had no connection issues from it as far as I can tell.

Other than an alerts tab, there's not much more to the Elertus website


When I had my Icinga server, I stood up another box that sniffed the traffic in transit to their servers (which is still sent cleartext, by the way) and cut out the bits and pieces I needed to get a basic graph up. Since I rebuilt this server, I decided to try a new method.

Using pfSense, I created a static DHCP mapping for the device and an internal NAT rule to redirect the traffic from the Elertus to my own web server running Apache with mod_dumpio turned on.

While this basically kills any communication with the Elertus servers, they won't be missed. With my prior setup I was able to enjoy both my internally generated alerts and theirs, but I found my own to be a lot more useful.

The Elertus sends a POST to their servers as a check-in.

device_type=1&posix_time=1423925134&email_id=myemail@gmail.com&mac_address=000680000000&alert_flags=&light=14&temp=298&humidity=30&battery=70&motion=0&int_contact=1&ext_contact=1&ext_temp=-1&fw_ver=4.0.1_EL_v7&debug=rssi:46, ant:I, af:, pkt:l14_t298_h30_b70_m0_i1_e1_x-1_p1423925134, wdog:1, crtry:4, queue:3, ctime:w2285_d410_n130_s205_t3040, &
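That body is a plain URL-encoded query string, so it can be pulled apart with standard shell tools. A minimal sketch, using a shortened stand-in for the payload above:

```shell
# Split a (shortened, hypothetical) Elertus check-in body into key/value pairs.
payload='temp=298&humidity=30&battery=70'
echo "$payload" | tr '&' '\n' | awk -F= '{ print $1, $2 }'
# temp 298
# humidity 30
# battery 70
```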


Apache has a virtual host set up to receive those incoming POST requests, with dumpio enabled and the trace level set to 7.


<IfModule dumpio_module>
 DumpIOInput On
 DumpIOOutput On
 LogLevel dumpio:trace7
</IfModule>

I also made sure to set custom logs for just this virtual host, as they will fill up quickly.

ErrorLog ${APACHE_LOG_DIR}/dumpio_module_error.log
CustomLog ${APACHE_LOG_DIR}/dumpio_module_access.log combined

The requests come in two at a time, sometimes more if there is any movement/light. So, to remedy the lack of a consistent check-in interval, I set up a cron job that pulls the newest line in the log containing POST data. It runs every 5 minutes.

#!/bin/bash

tac /var/log/apache2/dumpio_module_error.log | grep -m1 "email" > /tmp/tempout.txt

~/scripts/temp.sh
~/scripts/humidity.sh
~/scripts/battery.sh

The scripts within it, in turn, pull information out of the latest POST and update the raw values into separate files. I've kept it modular so I can add more checks as I need them. I'm not too concerned about the water, movement and light alerts just yet.
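For reference, the 5-minute schedule is just a crontab entry; a sketch (the script name and path here are hypothetical, so adjust to wherever you keep the collector script):

```shell
# m h dom mon dow  command
*/5 * * * * /root/scripts/elertus_pull.sh
```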

The temperature is in Kelvin, so I converted it to Fahrenheit.

#!/bin/bash

tempk=$(cat /tmp/tempout.txt | awk -F "=" '/light/ {print $8}' | sed 's/&.*//')
tempb=$(awk "BEGIN {print "$tempk" - 273.15}")
temp=$(echo ""$tempb"*1.8+32" | bc)

echo "$temp" > ~/perfdata/temp
(I know, this can be cleaned up a lot)

The battery and humidity checks are pretty much the same thing with different names; each just requires a different {print $X} position. The light, movement and water sensors can be added just as easily, since they only report values of 0 or 1.
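The extract-and-convert steps above can also be collapsed into a single awk pass. A sketch (not what's running on my box) against a sample captured line, assuming the same field layout:

```shell
# Kelvin -> Fahrenheit in one awk invocation. With "=" as the separator,
# $8 is "298&humidity" in this sample line, so split() strips the "&..." tail.
echo 'device_type=1&posix_time=1423925134&email_id=a@b.com&mac_address=000680000000&alert_flags=&light=14&temp=298&humidity=30' |
awk -F= '/light/ { split($8, a, "&"); printf "%.2f\n", (a[1] - 273.15) * 1.8 + 32 }'
# 76.73
```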

Now, unlike Nagios, getting devices into check_mk is a breeze, and making custom checks with RRD graphing is just as easy. You add the custom checks to the host itself, not the monitoring server.

This web server is Ubuntu and has the check_mk agent installed. There is a folder where you can put custom scripts to create local checks, beyond what the agent already monitors.

/usr/lib/check_mk_agent/local/

The setup is pretty straightforward if you want monitoring with performance graphs.

#!/bin/bash

TEMP1=$(cat ~/perfdata/temp)

echo '<<<local>>>'
echo "P Temperature temp=$TEMP1;35:89;32:91;0;110"

Any script in this folder runs whenever the check_mk agent runs, alongside the server's other global readings.

The output is appended to the bottom of the generated agent output as a local check, like this:

<<<local>>>
P Temperature temp=73.13;35:89;32:91;0;110

It tells check_mk that a new service is available to be added, with graphing:

  • P

Tells check_mk to compute the service state (OK/WARN/CRIT) from the performance data and the thresholds that follow.

  • Temperature

The name of the service.

  • temp=

The performance data: the value from $TEMP1, then the WARN min:max range, the CRIT min:max range, and the lower;upper bounds used for graphing (not required). Notice that the colons and semicolons are there for a reason.
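Following the same pattern, another sensor can become its own local check with one more script in the same folder. A hypothetical humidity check (the thresholds here are invented for illustration):

```shell
#!/bin/bash
# Hypothetical humidity local check. Falls back to a sample value of 45
# if the perfdata file hasn't been written yet.
HUM=$(cat ~/perfdata/humidity 2>/dev/null || echo 45)

echo '<<<local>>>'
echo "P Humidity humidity=$HUM;20:60;10:70;0;100"
```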

When you scan the host in WATO on the main check_mk site, the host should get basic performance graphs and be automatically added to the notification ruleset the host is part of.



While it's not an ideal solution, it gets the job done. I wish Elertus would open their API up to developers; they would probably sell a lot more units if they did.

I recommend anyone who's given up on Nagios/Icinga give OMD a try. It's pretty good. The documentation is a little sparse (and mostly in German), but with Google Translate and some coffee you can get through it if you've ever set up any other monitoring before.

Friday, February 13, 2015

How to enable check_mk for pfSense 2.2

The official check_mk plugin (v0.1.1) for pfSense 2.2 does not work. Here are the steps to configure it manually.

Edit rc.conf.local (note: this location is different from stock FreeBSD)

vi /etc/rc.conf.local
 inetd_enable="YES"
 inetd_flags="-wW"

At the bottom of /etc/services, add the check_mk port definition.
vi /etc/services
 check_mk        6556/tcp   #check_mk agent


Add the tcp wrappers for check_mk
vi /etc/inetd.conf
 check_mk        stream  tcp     nowait          root    /usr/local/bin/check_mk_agent check_mk


Add the following and replace x.x.x.x with your check_mk server's IP
vi /etc/hosts.allow
 # Allow nagios server to access us
 check_mk_agent : x.x.x.x : allow
 check_mk_agent : ALL : deny


Add a custom inetd rc script to allow service control
vi /etc/rc.d/inetd
 #!/bin/sh
 #
 # $FreeBSD$
 #
 
 # PROVIDE: inetd
 # REQUIRE: DAEMON LOGIN cleanvar
 # KEYWORD: shutdown
 
 . /etc/rc.subr
 
 name="inetd"
 rcvar="inetd_enable"
 command="/usr/sbin/${name}"
 pidfile="/var/run/${name}.pid"
 required_files="/etc/${name}.conf"
 extra_commands="reload"
 
 load_rc_config $name
 run_rc_command "$1"

Make it executable
chmod +x /etc/rc.d/inetd


This is the only check_mk agent I could find that works with pfSense 2.2. Replace all the existing text, or create a new file (if creating a new file, make sure to chmod +x it)
vi /usr/local/bin/check_mk_agent
#!/bin/sh
# +------------------------------------------------------------------+
# |             ____ _               _        __  __ _  __           |
# |            / ___| |__   ___  ___| | __   |  \/  | |/ /           |
# |           | |   | '_ \ / _ \/ __| |/ /   | |\/| | ' /            |
# |           | |___| | | |  __/ (__|   <    | |  | | . \            |
# |            \____|_| |_|\___|\___|_|\_\___|_|  |_|_|\_\           |
# |                                                                  |
# | Copyright Mathias Kettner 2014             mk@mathias-kettner.de |
# +------------------------------------------------------------------+
#
# This file is part of Check_MK.
# The official homepage is at http://mathias-kettner.de/check_mk.
#
# check_mk is free software;  you can redistribute it and/or modify it
# under the  terms of the  GNU General Public License  as published by
# the Free Software Foundation in version 2.  check_mk is  distributed
# in the hope that it will be useful, but WITHOUT ANY WARRANTY;  with-
# out even the implied warranty of  MERCHANTABILITY  or  FITNESS FOR A
# PARTICULAR PURPOSE. See the  GNU General Public License for more de-
# tails.  You should have  received  a copy of the  GNU  General Public
# License along with GNU Make; see the file  COPYING.  If  not,  write
# to the Free Software Foundation, Inc., 51 Franklin St,  Fifth Floor,
# Boston, MA 02110-1301 USA.

# Author: Lars Michelsen <lm@mathias-kettner.de>
#         Florian Heigl <florian.heigl@gmail.com>
#           (Added sections: df mount mem netctr ipmitool)

# NOTE: This agent has been adapted from the Check_MK linux agent.
#       Most sections are commented out at the moment because
#       they have not been ported yet. We will try to adapt most
#       sections to print out the same output as the linux agent so
#       that the current checks can be used.

# This might be a good source as description of sysctl output:
# http://people.freebsd.org/~hmp/utilities/satbl/_sysctl.html

# Remove locale settings to eliminate localized outputs where possible
export LC_ALL=C
unset LANG

export MK_LIBDIR="/usr/lib/check_mk_agent"
export MK_CONFDIR="/etc/check_mk"
export MK_TMPDIR="/var/run/check_mk"


# Make sure, locally installed binaries are found
PATH=$PATH:/usr/local/bin

# All executables in PLUGINSDIR will simply be executed and their
# output appended to the output of the agent. Plugins define their own
# sections and must output headers with '<<<' and '>>>'
PLUGINSDIR=$MK_LIBDIR/plugins

# All executables in LOCALDIR will be executed and their
# output inserted into the section <<<local>>>. Please refer
# to online documentation for details.
LOCALDIR=$MK_LIBDIR/local


# close standard input (for security reasons) and stderr
#if [ "$1" = -d ]
#then
#    set -xv
#else
#    exec </dev/null 2>/dev/null
#fi

# Runs a command asynchronous by use of a cache file

echo '<<<check_mk>>>'
echo Version: 1.2.7i1
echo AgentOS: freebsd



osver="$(uname -r)"
is_jailed="$(sysctl -n security.jail.jailed)"


# Partitions (-P prevents line wrapping for long mount points)
# Note: NFS mounts are always hidden here in order to avoid
# hangs. They are better monitored on the server than on the
# client anyway.

echo '<<<df>>>'
# no special zfs handling so far, the ZFS.pools plugin has been tested to
# work on FreeBSD
if df -T > /dev/null ; then
    df -kTP -t ufs | egrep -v '(Filesystem|devfs|procfs|fdescfs|basejail)'
else
    df -kP -t ufs | egrep -v '(Filesystem|devfs|procfs|fdescfs|basejail)' | awk '{ print $1,"ufs",$2,$3,$4,$5,$6 }'
fi

# Check NFS mounts by accessing them with stat -f (System
# call statfs()). If this lasts more then 2 seconds we
# consider it as hanging. We need waitmax.
#if type waitmax >/dev/null
#then
#    STAT_VERSION=$(stat --version | head -1 | cut -d" " -f4)
#    STAT_BROKE="5.3.0"
#
#    echo '<<<nfsmounts>>>'
#    sed -n '/ nfs /s/[^ ]* \([^ ]*\) .*/\1/p' < /proc/mounts |
#        while read MP
#  do
#   if [ $STAT_VERSION != $STAT_BROKE ]; then
#      waitmax -s 9 2 stat -f -c "$MP ok %b %f %a %s" "$MP" || \
#    echo "$MP hanging 0 0 0 0"
#   else
#      waitmax -s 9 2 stat -f -c "$MP ok %b %f %a %s" "$MP" && \
#      printf '\n'|| echo "$MP hanging 0 0 0 0"
#   fi
#  done
#fi

# Check mount options.
# FreeBSD doesn't do remount-ro on errors, but the users might consider
# security related mount options more important.
echo '<<<mounts>>>'
mount -p -t ufs

# processes including username, without kernel processes
echo '<<<ps>>>'
COLUMNS=10000
if [ "$is_jailed" = 0 ]; then
    ps ax -o state,user,vsz,rss,pcpu,command | sed -e 1d  -e '/\([^ ]*J\) */d' -e 's/*\([^ ]*\) *\([^ ]*\) *\([^ ]*\) *\([^ ]*\) *\([^ ]*\) */(\2,\3,\4,\5) /'
else
    ps ax -o user,vsz,rss,pcpu,command | sed -e 1d -e 's/ *\([^ ]*\) *\([^ ]*\) *\([^ ]*\) *\([^ ]*\) */(\1,\2,\3,\4) /'
fi


# Produce compatible load/cpu output to linux agent. Not so easy here.
echo '<<<cpu>>>'
echo `sysctl -n vm.loadavg | tr -d '{}'` `top -b -n 1 | grep -E '^[0-9]+ processes' | awk '{print $3"/"$1}'` `sysctl -n kern.lastpid` `sysctl -n hw.ncpu`

# Calculate the uptime in seconds since epoch compatible to /proc/uptime in linux
echo '<<<uptime>>>'
  up_seconds=$(( `date +%s` - `sysctl -n kern.boottime  | cut -f1 -d\, | awk '{print $4}'`))
idle_seconds=$(ps axw | grep idle | grep -v grep | awk '{print $4}' | cut -f1 -d\: )

# second value can be grabbed from "idle" process cpu time / num_cores
echo "$idle_seconds $up_seconds"


# Disk and RAID status of LSI controllers, if present
#if which cfggen > /dev/null ; then
#   echo '<<<lsi>>>'
#   cfggen 0 DISPLAY | egrep '(Target ID|State|Volume ID|Status of volume)[[:space:]]*:' | sed -e 's/ *//g' -e 's/:/ /'
#fi


# Multipathing is supported in FreeBSD by now
# http://www.mywushublog.com/2010/06/freebsd-and-multipath/
if kldstat -v | grep g_multipath > /dev/null ; then
    echo '<<<freebsd_multipath>>>'
    gmultipath status | grep -v ^Name
fi


# Soft-RAID
echo '<<<freebsd_geom_mirrors>>>'
gmirror status | grep -v ^Name

# Kernel performance counters
echo "<<<kernel>>>"
date +%s
forks=`sysctl -n vm.stats.vm.v_forks`
vforks=`sysctl -n vm.stats.vm.v_vforks`
rforks=`sysctl -n vm.stats.vm.v_rforks`
kthreads=`sysctl -n vm.stats.vm.v_kthreads`
echo "cpu" `sysctl -n kern.cp_time | awk ' { print $1" "$2" "$3" "$5" "$4 } '`
echo "ctxt" `sysctl -n vm.stats.sys.v_swtch`
echo "processes" `expr $forks + $vforks + $rforks + $kthreads`

# Network device statistics (Packets, Collisions, etc)
# only the "Link/Num" interface has all counters.
echo '<<<lnx_if:sep(58)>>>'
date +%s
if [ "$(echo $osver | cut -f1 -d\. )" -gt "8" ]; then
    netstat -inb | egrep -v '(^Name|plip|enc|pfsync|pflog|ovpns)' | grep Link | awk '{print"\t"$1":\t"$8"\t"$5"\t"$6"\t"$7"\t0\t0\t0\t0\t"$11"\t"$9"\t"$10"\t0\t0\t0\t0\t0"}'
else
    # pad output for freebsd 7 and before
    netstat -inb | egrep -v '(^Name|lo|plip)' | grep Link | awk '{print $1" "$7" "$5" "$6" 0 0 0 0 0 "$10" "$8" "$9" 0 0 "$11" 0 0"}'
fi


# State of LSI MegaRAID controller via MegaCli.
# To install: pkg install megacli
if which MegaCli >/dev/null ; then
    echo '<<<megaraid_pdisks>>>'
    MegaCli -PDList -aALL -NoLog < /dev/null | egrep 'Enclosure|Raw Size|Slot Number|Device Id|Firmware state|Inquiry'
    echo '<<<megaraid_ldisks>>>'
    MegaCli -LDInfo -Lall -aALL -NoLog < /dev/null | egrep 'Size|State|Number|Adapter|Virtual'
    echo '<<<megaraid_bbu>>>'
    MegaCli -AdpBbuCmd -GetBbuStatus -aALL -NoLog < /dev/null | grep -v Exit
fi


# OpenVPN Clients. 
# Correct log location unknown, sed call might also be broken
if [ -e /var/log/openvpn/openvpn-status.log ] ; then
    echo '<<<openvpn_clients:sep(44)>>>'
    sed -n -e '/CLIENT LIST/,/ROUTING TABLE/p' < /var/log/openvpn/openvpn-status.log  | sed -e 1,3d -e '$d' 
fi


if which ntpq > /dev/null 2>&1 ; then
   echo '<<<ntp>>>'
   # remote heading, make first column space separated
   ntpq -np | sed -e 1,2d -e 's/^\(.\)/\1 /' -e 's/^ /%/'
fi


# Checks for cups monitoring
#if which lpstat > /dev/null 2>&1; then
#  echo '<<<cups_queues>>>'
#  lpstat -p
#  echo '---'
#  for i in $(lpstat -p | grep -E "^(printer|Drucker)" | awk '{print $2}' | grep -v "@"); do
#    lpstat -o "$i"
#  done
#fi

# Heartbeat monitoring
#if which cl_status > /dev/null 2>&1; then
#  # Different handling for heartbeat clusters with and without CRM
#  # for the resource state
#  if [ -S /var/run/heartbeat/crm/cib_ro ]; then
#    echo '<<<heartbeat_crm>>>'
#    crm_mon -1 -r | grep -v ^$ | sed 's/^\s/_/g'
#  else
#    echo '<<<heartbeat_rscstatus>>>'
#    cl_status rscstatus
#  fi
#
#  echo '<<<heartbeat_nodes>>>'
#  for NODE in $(cl_status listnodes); do
#    if [ $NODE != $HOSTNAME ]; then
#      STATUS=$(cl_status nodestatus $NODE)
#      echo -n "$NODE $STATUS"
#      for LINK in $(cl_status listhblinks $NODE 2>/dev/null); do
#        echo -n " $LINK $(cl_status hblinkstatus $NODE $LINK)"
#      done
#      echo
#    fi
#  done
#fi

# Number of TCP connections in the various states
echo '<<<tcp_conn_stats>>>'
netstat -na | awk ' /^tcp/ { c[$6]++; } END { for (x in c) { print x, c[x]; } }'


# Postfix mailqueue monitoring
#
# Only handle mailq when postfix user is present. The mailq command is also
# available when postfix is not installed. But it produces different outputs
# which are not handled by the check at the moment. So try to filter out the
# systems not using postfix by searching for the postfix user.
#
# Cannot take the whole output. This could produce several MB of agent output
# on blocking queues.
# Only handle the last 6 lines (includes the summary line at the bottom and
# the last message in the queue. The last message is not used at the moment
# but it could be used to get the timestamp of the last message.
#if which mailq >/dev/null 2>&1 && getent passwd postfix >/dev/null 2>&1; then
#  echo '<<<postfix_mailq>>>'
#  mailq | tail -n 6
#fi

#Check status of qmail mailqueue
#if type qmail-qstat >/dev/null
#then
#   echo "<<<qmail_stats>>>"
#   qmail-qstat
#fi

# check zpool status
#if [ -x /sbin/zpool ]; then
#   echo "<<<zpool_status>>>"
#   /sbin/zpool status -x | grep -v "errors: No known data errors"
#fi

# Memory Usage
# currently we'll need sysutils/muse for this.
if [ -x /usr/local/bin/muse ]
then
echo '<<<mem>>>'
# yes, i don't know sed well.
muse -k 2>/dev/null | sed 's/Total/MemTotal/' | sed 's/Free/MemFree/'
swapinfo -k 1K | tail -n 1 | awk '{ print "SwapTotal: "$2" kB\nSwapFree: "$4" kB" }'
fi



# Fileinfo-Check: put patterns for files into /etc/check_mk/fileinfo.cfg
if [ -r "$MK_CONFDIR/fileinfo.cfg" ] ; then
    echo '<<<fileinfo:sep(124)>>>'
    date +%s
    stat -f "%N|%z|%m" $(cat "$MK_CONFDIR/fileinfo.cfg")
fi

Restart inetd
service inetd restart

You should be able to telnet into it from the IP you specified in /etc/hosts.allow
telnet x.x.x.x 6556

If you cannot reach that port, allow the port in the firewall rules.

Go to Firewall > Rules

Click the LAN tab.

Create new rule.

Set the rule to pass traffic, on interface LAN, with the source IP of your check_mk server and destination port 6556.


Give it a description, then save.

You should be able to pick up checks from OMD/check_mk.

Thursday, November 27, 2014

How to set up pfSense software RAID in 2.1.5-RELEASE (amd64)

Here is an example of how to create a software RAID1 in pfSense 2.1.5.

I created a lab in VirtualBox with two 8GB thin-provisioned disks, "pfsense.vdi" and "pfsense2.vdi", and installed pfSense.





  1. During the install, choose "1" to boot with the default settings.
  2. The initialization screen defaults to the LiveCD installer. Skip that and press "I" to install directly.
  3. Accept the default settings for the Video and Keymap.
  4. Choose "Setup GEOM Mirror".
  5. Confirm the selection.
  6. Choose the Primary disk and press Enter.
  7. Choose the Mirror disk and press Enter.
  8. Verify no errors exist, then press Enter.
  9. Choose the Custom Install.
  10. Choose the mirror/pfSenseMirror we just created.
  11. Format the disk.
  12. Use the default disk geometry (just tab to "Use this Geometry").
  13. Format the mirror/pfSenseMirror.
  14. Choose Partition Disk.
  15. Accept and Create the default settings.
  16. Choose "Yes, partition mirror/pfSenseMirror" and press "OK".
  17. Uncheck "Install Bootblock" and make sure "Packet mode" is unchecked as well.
  18. Accept and install, then press "OK".
  19. Choose the default partition slice, confirm "OK", then "OK" again.
  20. Choose the defaults for the subpartitions (tab to "Accept and Create").
  21. Once the install writes to the mirror, choose "Symmetric multiprocessing kernel", unless you are building a headless RS232 serial-only install.
  22. Eject the virtual CD and reboot.
  23. Once the system reboots, configure pfSense as normal.
Once the system reboots, configure pfSense like normal.


We now have a RAID1 mirror of the disks. We can test booting by removing either of the virtual disks and booting pfSense. In the lab, I've removed the primary disk "pfsense.vdi" and it boots off the mirror "pfsense2.vdi".





pfSense lacks notification (by default) of a degraded RAID mirror. You can manually check disk health by going into the console and typing "gmirror status". You can also see the status of the mirror when I shut down the VM (at about the 58 second mark):
GEOM_MIRROR: Device pfSenseMirror destroyed.


To mimic replacing a failed disk, in my lab I created a new volume called "pfsense3.vdi": a blank 8GB, thin-provisioned disk matching what I was replacing.





To rebuild onto the replacement disk:
  1. First, check the status of the disks: "gmirror status".
  2. Remove the now-missing disk from the mirror's metadata: "gmirror forget pfSenseMirror".
  3. "gmirror status" now shows COMPLETE (with just one disk, ad0).
  4. Look at which disks are present with "atacontrol list"; ad1 shows up available and not part of the mirror. This is the new blank disk we want to become part of the mirror.
  5. Insert it into the mirror with "gmirror insert pfSenseMirror /dev/ad1".
  6. It will start rebuilding. We can check the sync status with "gmirror status" again.
  7. Once complete, you will get the message: GEOM_MIRROR: Device pfSenseMirror: rebuilding provider ad1 finished. (This will take some time in a normal install; this VM was installed on an SSD from a blank install, so expect the synchronization to take a while.)
  8. "gmirror status" should now show both ad0 and ad1 as ACTIVE.

Friday, October 31, 2014

Gap for Gimp (gif creator) Windows Installer mirror

Mirror for Gap for Gimp version 2 (Gimp-GAP-2.6.0-Setup2).

MD5 938d9da31c2e9c34de1612e80d5b9a0c
SHA1 4422fb72a27ff73261e3b7ec1ec5c199cd1913ac
SHA256 6c7287cef151dfed96cd8a86a5d097fa40f691c28dcb071b127ee384620ea3fe



Download link  (Current as of 10/31/2014)




Source: http://photocomix-resources.deviantart.com/art/GAP-2-6-for-Gimp-2-6-Windows-135464357

Saturday, October 11, 2014

Re-enable non Chrome Store extensions in Stable/Beta builds - Chrome version 38.0.2125.101 m

I wrote an extension a while back for Chrome to help me export bulk DNS requests in xml format so I could import them into my firewall easier.

Since Google blocked apps/extensions from being installed from outside the Chrome Store, I wasn't able to run the app I wrote without becoming an official developer... so I didn't bother.

I finally found a (sane) solution to the problem. This was posted on the Google Product Forums. Thought I'd share the steps with a bit more detail:

  1. Download the Chrome group policy templates: http://dl.google.com/dl/edgedl/chrome/policy/policy_templates.zip (mirror located here: https://drive.google.com/file/d/0B_Kat9gPjQAVdXV2Q3BEOVpja28, MD5 hash 7eac305720bb2f70e9e3940205b45796)
  2. Extract the files. Copy (zip)\policy_templates\windows\admx\chrome.admx to C:\Windows\PolicyDefinitions\
  3. Copy (zip)\policy_templates\windows\admx\en-US\chrome.adml (or your language/region) to C:\Windows\PolicyDefinitions\en-US
  4. Open Chrome and go to Options > Tools > Extensions (Or simply chrome://extensions/) and at the top, check Developer mode (if not already checked)
  5. Scroll to the extension you wish to re-enable. You should be able to double-click the ID to select it, then copy it.
  6. If you've already uninstalled the extension, you can drag the .crx file back into this extensions page to reinstall it. You will not be able to enable it yet, but this gives us the ID we need to whitelist it.
  7. Run gpedit.msc from the start menu or command line (Or, if running a Home Edition version of Windows, MMC and add the Group Policy Editor snap-in)
  8. Expand User Configuration > Administrative Templates > Google > Google Chrome (not the Google Chrome with "Default Settings" in the name) > Extensions
  9. Edit the Configure extension installation whitelist on the right pane.
  10. Change the options from Not Configured to Enabled.
  11. Under the Options, click the Show button and paste in the Extension ID(s) you want to re-enable. 
  12. Click OK and close out of the Group Policy Editor. Close out of Chrome completely (check the Task Manager to be sure).
  13. Once you launch Chrome, navigate back to your extensions. You will now have the ability to re-enable your app/extension.

Tuesday, September 30, 2014

How to get Ubuntu 14.04 security update notifications sent to your inbox using Gmail.

We can set up a cron job to email security updates using Gmail over SSMTP (yes, the same app we used before to send email notifications when someone pressed the doorbell in a previous project). SSMTP, not to be confused with SMTP, is easy enough to set up and use, but it's not the most secure, as your password sits in plain text. I recommend signing up for and using a throwaway/junk Gmail account for this.

Prerequisites:

  • Ubuntu 14.04 LTS Server
  • A spare gmail account that you don't care about



SU into root and install ssmtp:

sudo su - 
apt-get install ssmtp

Move the default config to a backup:

mv /etc/ssmtp/ssmtp.conf /etc/ssmtp/ssmtp.bkp

We are going to create a new conf file in its place:

vi /etc/ssmtp/ssmtp.conf

(for new vi users press "i" here to enter insert mode... or just use nano or whatever you prefer)

# The user that gets all the mails (UID < 1000, usually the admin)
root=youremail@gmail.com

# The mail server (where the mail is sent to); either port 465 or 587 should work
# See also http://mail.google.com/support/bin/answer.py?answer=78799
mailhub=smtp.gmail.com:587

# The address where the mail appears to come from for user authentication.
rewriteDomain=gmail.com

# The full hostname
hostname=yourhostname

# Use SSL/TLS before starting negotiation
UseTLS=Yes
UseSTARTTLS=Yes

# Username/Password
AuthUser=youremail@gmail.com
AuthPass=yourpassword

# Allow the From: header to override the default domain
FromLineOverride=yes

(again, for new vi users save the file by pressing Escape and typing :wq)

Tighten the permissions on your conf file, since your password is stored in plain text:
chmod 640 /etc/ssmtp/ssmtp.conf

cd back to root:
cd /root
(or just simply type cd to get home)

Verify your SSMTP is working and has proper permissions:

echo "Hello world" >  test.txt
cat test.txt | ssmtp myemail@gmail.com



Once you receive an email (make sure to check the sent folder on the sending side and the spam folder on the receiving end), we can check for security updates using my previous post:

/usr/lib/update-notifier/apt-check --human-readable

To break this down, look at what it's doing. /usr/lib/update-notifier/apt-check by itself returns a very unhelpful 0;0. The --human-readable flag adds some verbosity:

0 packages can be updated.
0 updates are security updates.

...but we don't want an email for every chrome/firefox/general bug update. We want to focus in on the security patches. To do this, we can use grep to give us just the security update totals. The script so far: check for updates, make the output easy to read, and pipe it to grep with -i (case insensitive) for the term "security".

/usr/lib/update-notifier/apt-check --human-readable | grep -i security

and the output

0 updates are security updates.

While the output is better, now showing only "0 updates are security updates.", we can distill it even further with awk.

/usr/lib/update-notifier/apt-check --human-readable | grep -i security  | awk '{ print $1 }'

So, that line is reduced to its first field: literally the number itself.

0

We are passing awk the argument $1, which means the first whitespace-separated field in that line. We can now put together a very simple and crude cron job (there are many ways to skin a cat; this is for demonstration purposes... also because I'm not the world's best scripter).
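If you want to see the pipeline work without waiting for real updates, you can feed it canned apt-check output (the same two lines shown earlier, with made-up counts):

```shell
# Simulated apt-check --human-readable output
printf '5 packages can be updated.\n2 updates are security updates.\n' \
  | grep -i security \
  | awk '{ print $1 }'
# prints: 2
```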




Let's start off by creating the script.

vi patch_notify.sh

Now, we can paste in the following:

#!/bin/bash
CURDATE=`date`
TTIME=`date +"%r"`

SECUPD=$( /usr/lib/update-notifier/apt-check --human-readable | grep -i security | awk '{ print $1 }' )


if [ "$SECUPD" -eq "0" ]
then
       echo "There are $SECUPD updates."

else

       if [ "$SECUPD" -eq "1" ]
       then

       ssmtp your-send-to-email@gmail.com <<EOF
From: username <youremail@gmail.com>
To: your-send-to-email@gmail.com
Subject: Weekly security updates - $CURDATE

$SECUPD security update is waiting for your installation as of $TTIME.
EOF

       else

               if [ "$SECUPD" -gt "1" ]
               then

               ssmtp your-send-to-email@gmail.com <<EOF
From: username <youremail@gmail.com>
To: your-send-to-email@gmail.com
Subject: Weekly security updates - $CURDATE

$SECUPD security updates are waiting for your installation as of $TTIME.
EOF
               fi
       fi
fi


To break this script down, it's ultimately in 3 parts. Is the patch total 0? No email. Is the patch total exactly 1? Yes, send an email (with proper grammar; singular vs. plural, because I'm anal like that). Is there more than one patch? Yes, send an email.

To get into the guts of it

We are telling it to use (the recently very infamous) bash shell.
#!/bin/bash
Now we are setting the variables for the date and time
CURDATE=`date`
TTIME=`date +"%r"`
You can run date on the command line and see what gets copied into the variable, now CURDATE. Likewise date +"%r" shows just the time, e.g. 06:58:46 PM, which is copied as TTIME.

We are also setting the SECUPD variable. This captures the output of the apt-check/grep/awk pipeline into SECUPD.

SECUPD=$( /usr/lib/update-notifier/apt-check --human-readable | grep -i security | awk '{ print $1 }' )

The only difference from before is the $( ) command substitution wrapped around the pipeline.

The -eq test operator means "equal to". So we are saying: if the value of $SECUPD is exactly 0,

if [ "$SECUPD" -eq "0" ]
echo "There are $SECUPD updates."

we display "There are 0 updates." and no email is generated.
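You can try that comparison on its own; quoting "$SECUPD" guards against the variable being empty:

```shell
SECUPD=0
if [ "$SECUPD" -eq "0" ]
then
    echo "There are $SECUPD updates."   # prints: There are 0 updates.
fi
```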


If there is 1 security update for us:

if [ "$SECUPD" -eq "1" ]

We send an email to our main email address. The subject will show "Weekly security updates" and the current date. The body will show

$SECUPD security update is waiting for your installation as of $TTIME.
or
1 security update is waiting for your installation as of 07:01:32 PM.

The EOF markers delimit the here-document, i.e. the text that gets piped to ssmtp.
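Here-documents are easy to test in isolation; piping to cat instead of ssmtp shows exactly what the mail body would contain. The values below are made up for the demo:

```shell
# Hypothetical values, just to watch the here-document expand
SECUPD=1
TTIME="07:01:32 PM"
cat <<EOF
$SECUPD security update is waiting for your installation as of $TTIME.
EOF
# prints: 1 security update is waiting for your installation as of 07:01:32 PM.
```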


If there is more than one update available:

if [ "$SECUPD" -gt "1" ]
If the update count is greater than one (not greater or equal, but literally 2+), then we send this:

$SECUPD security updates are waiting for your installation as of $TTIME.
or
12 security updates are waiting for your installation as of 07:03:13 PM.

We now need to make our script executable

chmod +x patch_notify.sh

You should just be able to run

./patch_notify.sh

If you have no critical patches, you should see:

There are 0 updates.



This script is butchered pretty bad, but it's a good beginner script. You can change the context, remove the dates, the layout, the wording. You can also simplify it to send you an email only if it's above a dozen updates. Or even send you an email, regardless of how many security patches, even 0, at a set interval with cron.
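As one example of simplifying, the three nested ifs can collapse into an if/elif that just builds the message, with echo standing in for ssmtp so the sketch is safe to run (in real use, pipe $MSG to ssmtp your-send-to-email@gmail.com instead):

```shell
#!/bin/bash
SECUPD=2            # hard-coded for the demo; use the apt-check pipeline in real use
TTIME="noon"        # likewise, normally $(date +"%r")

if [ "$SECUPD" -eq "1" ]
then
    MSG="$SECUPD security update is waiting for your installation as of $TTIME."
elif [ "$SECUPD" -gt "1" ]
then
    MSG="$SECUPD security updates are waiting for your installation as of $TTIME."
fi

# Only speak up when there is something to report
[ -n "$MSG" ] && echo "$MSG"
```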

So, now to automate the script. (This was written for Debian-based systems, specifically Ubuntu 14.04 LTS Server, but it should be pretty close to the same across most distros.)

We are going to do a weekly scan, on a Saturday at noon.

Edit the crontab:

crontab -e

to edit your cron jobs (crontab -l lists the jobs you already have), then add:

0 12 * * 6 /root/patch_notify.sh

This tells the cron daemon to run your script at its absolute path /root/patch_notify.sh every Saturday at noon.
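For reference, the five time fields break down like this (day-of-week 6 is Saturday; 0 is Sunday):

```
# minute  hour  day-of-month  month  day-of-week  command
    0      12        *          *         6       /root/patch_notify.sh
```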

Feel free to bug me and ask questions (or correct me... I know I probably screwed up somewhere... but it's working for me so far)