Akismet firewall problems (and how to fix them)

Recently I was confused when I spotted several spam comments on my blog. After logging into the WordPress administration page I saw the reason for this issue:

Akismet has detected a problem. A server or network problem is preventing Akismet from working correctly.

Oops – where did that come from?

It seems like my hosting provider All-Inkl updated its PHP configuration recently. The Akismet plugin needs access to particular external servers to filter spam comments. I discovered that the following PHP settings need to be set so that this access works:

magic_quotes_gpc = 1
magic_quotes_runtime = 0
allow_url_fopen = On

If you’re using a managed web server and don’t have full control over its configuration (like me), you can also set these parameters in the .htaccess file:

php_flag magic_quotes_gpc on
php_flag magic_quotes_runtime off
php_flag allow_url_fopen on
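
To verify which values are actually in effect you can run a quick check. This is only a sketch: php -i on the shell reflects the CLI configuration, which may differ from the settings your web server uses, so a temporary phpinfo() page in the web root (the path and URL below are just placeholders) is the more reliable test:

# php -i | egrep -i "allow_url_fopen|magic_quotes_(gpc|runtime)"
# echo '<?php phpinfo(); ?>' > /path/to/webroot/info.php

Open http://yourblog.example/info.php in a browser – the listed values should match the settings above. Don’t forget to remove the info.php file afterwards.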

After reloading the administration page everything was working like a charm again. :)

Short tip: YUM error: “xz compression not available”

While importing the recently released EPEL 7 YUM repository I stumbled upon the following error:

# /usr/bin/spacewalk-repo-sync --channel epel-el7-x86_64 \
> --url http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/ \
> --type yum -c epel-el7-x86_64
Repo URL: http://dl.fedoraproject.org/pub/epel/beta/7/x86_64/
ERROR: xz compression not available

The solution was pretty easy – the following Python library was missing:

# yum install pyliblzma
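
If you want to confirm that the library is actually usable, a quick import test with the system Python 2 interpreter (which Spacewalk uses) does the trick:

# python -c "import lzma" && echo "pyliblzma is available"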

The next import run worked like a charm. :)

Short tip: RHEL7 channels not available to Red Hat Satellite 5.6

It is possible that RHEL7 channels aren’t available to Red Hat Satellite even though you have a valid Red Hat Enterprise Linux subscription.

The appropriate software channel (rhel-x86_64-server-7) is not part of the list of available channels – it also cannot be synchronized manually:

# satellite-sync --list-channels|grep "server-*7"
# satellite-sync -c rhel-x86_64-server-7
16:17:48 Red Hat Satellite - live synchronization
...
16:23:23 ERROR: these channels either do not exist or are not available:
16:23:23        rhel-x86_64-server-7
16:23:23        (to see a list of channel labels: /usr/bin/satellite-sync --list-channels)

Screenshot: Downloading the Satellite certificate

The solution was pretty simple in my case – I fixed it by re-generating the Satellite certificate and re-activating the server at Red Hat Network. To do this you need to select your Satellite system in RHN under the following menu path:

Subscriptions > Subscription management > Satellite

Click the “Download Satellite Certificate” button in the following form. The downloaded XML file is transferred to the server and the system is re-activated using rhn-satellite-activate afterwards. In my case all needed software channels were available immediately:

# scp HOSTNAME.xml root@IP:/tmp
# rhn-satellite-activate --rhn-cert=/tmp/HOSTNAME.xml
# satellite-sync --list-channels|grep "server-*7"
...
16:28:01       . rhel-x86_64-server-7                     4459
16:28:03    rhel-x86_64-server-7:
16:28:03       . rhel-x86_64-server-7-debuginfo           1949
16:28:03       . rhel-x86_64-server-7-thirdparty-oracle-java   16
16:28:03       . rhel-x86_64-server-7-thirdparty-oracle-java-beta    0
16:28:03       . rhn-tools-rhel-x86_64-server-7             20
16:28:03       . rhn-tools-rhel-x86_64-server-7-debuginfo    0
...

I’d like to thank Markus Koch from Red Hat GmbH who was able to help me with this issue quickly. :)

Intel 82579LM Gigabit NIC under VMware ESXi

While bringing my self-made ESXi host into service I noticed that only one of the two network cards was detected. The reason for this issue was that VMware ESXi offers no driver for the second network card, an Intel 82579LM.

What a pity. Fortunately I stumbled upon a blog which offers a suitable software package that was created from the Intel source code: [click me!]

The software package just needs to be transferred to a datastore the ESXi host has access to before it can be installed. You also need to make sure that installing VIB packages that are not signed by VMware is allowed (community-supported):

# esxcli software acceptance set --level=CommunitySupported
# esxcli software vib install -v /vmfs/volumes/VOLUME/net-e1000e-2.3.2.x86_64.vib
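
Before rebooting you can double-check the acceptance level and that the VIB was registered – the exact VIB name may differ depending on the package version:

# esxcli software acceptance get
# esxcli software vib list | grep e1000e

After the reboot, esxcli network nic list should list both adapters.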

After a system reboot both network cards can be used:

Screenshot: Two detected NICs

:)

A new ESXi home server

About two years ago I replaced a lot of my hardware with more power-saving alternatives. My self-made NAS and hypervisor were replaced by two HP ProLiant MicroServer G7 servers (N36L and N40L) – for a long time I was very happy with them.

In the last months the number of VMs increased and now the CPU and memory resources are exhausted. A new, more powerful VMware server was needed.

Current state

Diagram: the current setup

Currently I have two dedicated servers for NAS and VMs that are connected to different networks via IPCop.

Internal as well as public VMs are served using a DMZ. A dedicated network interface connects it to the IPCop system. This is not an ideal solution because there is a theoretical risk: an attacker could access the internal network after breaking into a compromised DMZ VM and then into the hypervisor. To avoid this, dedicated hypervisors or additional products like VMware vShield are used in datacenters. Using these at home as well would have been expensive because additional hardware and software licenses are needed. I accepted the (very) abstract risk in this case. ;-)

HP ProLiant MicroServer Gen8

At first sight I was thinking about buying the freshly released MicroServer Gen8. This server offers an even smaller case and a replaceable CPU. There are a lot of reports on the internet showing how to replace the standard Intel Celeron or Pentium CPU with an Intel Xeon:

This really appealed to me as I had made good experiences with the predecessor. Unfortunately my excitement was dampened – the new server also only supports up to 16 GB RAM. The integrated Intel C204 chipset actually supports 32 GB RAM, so it seems that HP placed a limit in the BIOS. There are also reports on the internet showing that it is not possible to use 32 GB RAM – apart from that, 16 GB memory modules (the MicroServer has only two memory slots) are quite expensive. Because the main reason for my redesign was the memory limitation, the little HP server was ruled out. Another disadvantage of the MicroServer was the rather hefty price of about 500 Euros.

Intel NUC

Intel also offers very interesting hardware with the fourth generation of their embedded “Next Unit of Computing” systems.

These single-board computers come with a Celeron, i3 or i5 CPU with up to 1.7 GHz clock rate and 4 threads. Thanks to DDR3 SODIMM sockets it is possible to install up to 16 GB RAM. The devices can also be bought as a kit which includes a small case – the most recent case can even hold one 2.5″ drive. An internal SSD (e.g. for the ESXi hypervisor) can be connected using mSATA. An i3 NUC with 16 GB RAM and a 4 GB SSD costs about 350 Euros.

For a short time I was thinking about using such a device for my DMZ and test systems – but this seemed ill-advised to me for multiple reasons:

  • Layer 3 switch needed (because of VLAN tagging for the test and DMZ network) – expense: about 300 Euros (Cisco SG300 series)
  • another device that needs to run permanently, wasting power
  • no redundancy because only one SATA device can be connected (I haven’t heard about Mini PCI-Express RAID controllers so far :P )
  • using VMware vSphere ESXi is only possible after creating a customized ISO because network drivers are missing
  • only one network port, so no redundancy or connection to two different networks without a Layer 3 switch

The final costs of this design would have blown my planned budget. In my opinion this design would have been only a "half-baked" solution.

Self-made server and virtualized NAS

Diagram: the target setup

Building my own server seemed more efficient to me. Mainboards with the Intel sockets 1155, 1156 and 1150 are also available in the space-saving Mini-ITX and MicroATX form factors. If you go with the latter you can have up to 32 GB RAM – perfect! :)

Even professional mainboards with additional features like IPMI remote management and ECC error correction are available at fair prices. I was very lucky because my friend Dennis was selling such a board including CPU, RAM and RAID controller while I was looking for adequate products. That was an offer I couldn’t refuse! :D

My setup now consists of:

  • Supermicro X9SCM-F mainboard (dual Gigabit LAN, SATA II + III, IPMI)
  • Intel Xeon E3-1230 (v1, first generation) CPU with 8 threads, 3.2 GHz clock rate and 8 MB cache
  • 32 GB DDR3 ECC memory
  • HP P400 RAID controller with 512 MB cache and BBWC
  • LSI SAS3081E-R RAID controller without cache
  • 80 GB Intel SSD for VMware Flash Read Cache
  • Cisco SG300-20 Layer 3 switch (for encapsulating the Raspberry Pis in a DMZ VLAN)

Screenshot: CPU benchmark comparison

The E3-1230 CPU has already received two updates (1230v2 and 1230v3) and also a Haswell refresh (1231) but the extra charge wasn’t worth it for me. I found no online shop advertising an equivalent setup at a comparable price. :D

If I had had to buy new hardware I would have chosen the 1230v3 – I’m already using this CPU in my workstation and I’m very happy with it. Compared with the rather weak AMD Turion CPU of the HP N40L, the performance improvement is so big even with the first 1230 generation that it easily covers my requirements. The most recent generation wouldn’t have brought any additional benefit.

The server has two RAID controllers, which is intentional. VMware ESXi still doesn’t support software RAID, so the HP P400 controller is used. Two connected hard drives (1 TB, 7200 RPM) form a RAID that serves as datastore for virtual machines. The NAS hard drives are connected to the second controller. My previous NAS was converted into a VM using P2V and accesses the hard drives through this controller, which is passed into the VM using VMDirectPath I/O. To be honest, I had never seriously thought about virtualizing my NAS before. Another possibility to connect the LUNs would have been passing the individual hard drives to the VM using RDM (raw device mapping). Opinions about that are very controversial on the internet – many prefer RDM, many others prefer passing through the whole controller. I relied on the personal experience of Dennis, who was successful with the latter solution.

After using the virtualized NAS solution for one week I have to say that it works pretty well. Converting the physical system into a virtual machine was done quickly using the VMware vCenter Converter. In combination with the two network uplinks and the more powerful CPU I was able to increase the data throughput of Samba shares. While the old system only offered about 70 MB/s throughput in the internal network, the new system manages up to about 115 MB/s. Using some Samba TCP optimizations it might be possible to increase this value even further.
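
For reference, the tweaks usually discussed are Samba’s socket options and sendfile support. A minimal sketch for the [global] section of smb.conf could look like this – the values are just a starting point and worth benchmarking on your own network:

# vi /etc/samba/smb.conf
...
[global]
        # larger TCP buffers and no delayed ACKs on the Gigabit link
        socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=131072 SO_SNDBUF=131072
        # let Samba use sendfile() to avoid copying data through userspace
        use sendfile = yes

ESC ZZ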

The only thing that was missing was an adequate case. When I minimized my hardware setup two years ago I also ordered a smaller rack that fits better into my flat. So the case size was limited, which made it hard to find a suitable case. My requirements were:

  • MicroATX form-factor
  • decent design
  • room for 6x 3.5″ hard drives
  • hard drive cages with rubber mounting if possible

In the beginning I was in love with the Fractal Design Define Mini. The case looked very nice but didn’t fit into my rack. After additional research I finally bought the Lian-Li PC-V358B.

I really like the case concept – designed as a generous HTPC case it offers enough room for my hard drives and can be easily maintained thanks to an intelligent swing mechanism. Another thing I want to mention is that you don’t need any tools to remove the individual case parts (side walls, hard drive cages, etc.). The side walls have tiny but stable mounting pins (see gallery). The case looks very sophisticated and high-class, which might be the reason for the rather high price of about 150 Euros. Luckily I was able to buy a second-choice unit from the Alternate outlet for roughly 100 Euros. I couldn’t find any flaws like scratches, which made me very happy. Buying an alternative case would have made it necessary to buy a SATA backplane – so the final price would have been comparable to the Lian-Li case.

So if you’re looking for a beautiful, compact and high-quality case you really should have a look at the Lian-Li PC-V358B! :)

Photos

Some pictures of the new setup:

:)

Manage Solaris with Spacewalk and Red Hat Satellite

Besides Linux systems, Oracle Solaris hosts can also be managed using Spacewalk and Red Hat Satellite – a cool feature that is often forgotten.

Bigger companies that rely on proprietary Unices due to roadmaps or political reasons might be interested in this for migration purposes. Red Hat aims to facilitate such migrations with this interface. It seems like it was planned to also support other proprietary Unices like IBM AIX or HP-UX – I assume this because the Red Hat Satellite documentation always mentions a generic "Unix" and not a particular one. Maybe the support was dropped due to lack of interest – but that’s only my personal (and arbitrary) assumption.

Management functions

But – how can SUN / Oracle Solaris systems be managed at all using Spacewalk or Red Hat Satellite?

Basically a Solaris system acts like a Linux host under Spacewalk and Red Hat Satellite – it is also integrated into systems and system groups and has a software base channel and optionally also child channels available. Unlike Enterprise Linux, Fedora or SUSE channels, packages cannot be imported or synchronized directly – a “push” from a Solaris system is necessary. It is also necessary to convert downloaded Solaris packages (*.pkg) into MPM archives (*.mpm) before importing them. Among other things, these archives consist of the actual software package and additional information like a description.

On conventional Linux systems commands are submitted in almost real-time. For this, a server/client software called OSAD (Open Source Architecture Daemon) is used in combination with the XMPP protocol (Jabber). This option is not available on Solaris – it is necessary to use rhnsd (Red Hat Network daemon), which checks for scheduled tasks at periodic intervals. That’s not a beautiful solution – but it works.

Remote commands can also be executed as usual. For this, additional permissions need to be assigned, just like on Linux systems.

Additional limitations are:

  • Remote commands do not always work as expected (depending on architecture and release)
  • Hardware information cannot be obtained (error message: “Invalid function call attempted”)
  • Incorrect architectures are displayed while listing installed software packages (SPARC instead of i386)

Officially Solaris 8 to 10 (for SPARC and x86) are supported – but I was also able to manage Solaris 11 and OpenIndiana. It should also be possible to manage the free SunOS distribution OpenSolaris and all derivatives based on Illumos (OpenIndiana, napp-it, SmartOS, etc.) because they are in turn based on the Solaris 11 code base.

Preparation

A setting in the backend needs to be changed so that Spacewalk or Red Hat Satellite is able to deal with the proprietary Unix. There is a checkbox “Solaris Support” which needs to be selected – you will find it in the menu under Admin > Spacewalk/Satellite Configuration > General:

Screenshot: the Solaris Support checkbox

This change requires the application to be restarted – after the restart the server components required to manage Solaris systems are ready:

# rhn-satellite restart
# spacewalk-service restart

It is a good idea to create a dedicated activation key to make sure that systems can be registered comfortably. An even better idea is to link this key to a software channel, which also needs to be created first.

First the channel is created. This is done using the web interface – just navigate to “Channels > Manage Software Channels > create new channel” and enter the following information in the form:

  • Channel Name: e.g. Solaris 11
  • Channel Label: e.g. solaris-11
  • Parent Channel: None
  • Architecture: i386 Solaris or Sparc Solaris
  • Channel Summary: e.g. “Solaris 11 packages”

Afterwards the activation key is created under “Systems > Activation Keys > create new key” using the following settings:

  • Description: e.g. Solaris11-Key
  • Base Channels: e.g. Solaris 11

The generated activation key is used for registering systems afterwards.

Installation

For management you will need to install some Python tools on the client system – you can find those tools for Spacewalk on the official website: http://spacewalk.redhat.com/solaris. Users of the commercial Satellite Server can retrieve these packages directly from their own system: http://fqdn-satellite.domain.loc/pub/bootstrap.

The packages are divided depending on the Solaris release and architecture – there are packages for Solaris 8 to 10 for SPARC and x86. There is no official tarball for Solaris 11 but I was able to successfully install the Solaris 10 tarball.

Before you start you need to make sure that the Solaris OpenSSL and zlib libraries and the GCC runtime are installed:

# pkginfo|egrep -i "zlib|openssl|gccruntime"
system      SUNWgccruntime          GCC Runtime libraries
system      SUNWopensslr            OpenSSL Libraries (Root)
system      SUNWzlib                The Zip compression library

On OpenIndiana systems the GCC runtime package is named gcc-libstdc and can easily be installed using the pkg frontend:

# pkg install gcc-libstdc

These packages can normally be found on the official installation media or – for older releases – on OpenCSW.

The tarball is copied to the system using SCP or TFTP (if SSH is not available) and extracted before the contained packages are installed:

# gzip -d rhn-solaris-bootstrap*.tar.gz
# tar xf rhn-solaris-bootstrap*.tar
# cd rhn-solaris-bootstrap-*
# ls -1
README
RHATossl-0.9.7a-33.26.rhn.9.sol9.i386.pkg
RHATpossl-0.6-1.p24.6.i386.pkg
RHATpythn-2.4.1-4.rhn.6.sol10.pkg
RHATrcfg-5.1.0-3.pkg
RHATrcfga-5.1.0-3.pkg
RHATrcfgc-5.1.0-3.pkg
RHATrcfgm-5.1.0-3.pkg
RHATrhnc-5.3.0-21.pkg
RHATrhnl-1.8-7.p23.pkg
RHATrpush-5.3.1-5.pkg
RHATsmart-5.4.1-2.i386.pkg
SMClibgcc-3.4.1-sol9-intel.pkg
# for i in *.pkg ; do pkgadd -d $i all; done

Afterwards it is necessary to alter paths for LD shared libraries – this step differs on Solaris 10, 11 and OpenIndiana:

solaris11 # crle -l /lib -l /usr/lib -l /usr/local/lib -l /usr/srw/lib -l /opt/redhat/rhn/solaris/lib
solaris10 # crle -l /lib -l /usr/lib -l /usr/local/lib -l /opt/redhat/rhn/solaris/lib
oi # crle -l /lib -l /usr/lib -l /opt/redhat/rhn/solaris/lib

The already known paths are extended with a new path (/opt/redhat/rhn/solaris/lib) which contains the crypto, SSL and Python libraries.
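
Running crle without any arguments prints the active runtime linking configuration – an easy way to double-check that the new default library path was picked up:

# crle

The output should list /opt/redhat/rhn/solaris/lib in the “Default Library Path (ELF)” line.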

After that you need to alter your user profile (~/.profile) so that the recently added RHN commands are available:

vi ~/.profile
...
PATH=$PATH:/opt/redhat/rhn/solaris/bin
PATH=$PATH:/opt/redhat/rhn/solaris/usr/bin
PATH=$PATH:/opt/redhat/rhn/solaris/usr/sbin
MANPATH=$MANPATH:/opt/redhat/rhn/solaris/man
export PATH
export MANPATH

ESC ZZ

The next step is to customize the up2date configuration like on Linux systems. Primarily the Spacewalk / Satellite URL and the SSL certificate path need to be adjusted. You can download the SSL certificate from the pub folder of the management system to the client system using wget or TFTP (if wget and SSH are not available):

# wget --no-check-certificate https://fqdn-satellite.domain.loc/pub/RHN-ORG-TRUSTED-SSL-CERT -O /opt/redhat/rhn/solaris/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT
# cd /opt/redhat/rhn/solaris/etc/sysconfig/rhn/
# vi up2date
...
noSSLServerURL=http://fqdn-satellite.domain.loc/XMLRPC
...
serverURL=https://fqdn-satellite.domain.loc/XMLRPC
...
sslCACert=/opt/redhat/rhn/solaris/usr/share/rhn/RHN-ORG-TRUSTED-SSL-CERT

ESC ZZ

After that the system can be registered. Because the rhn_register command is not available on Solaris you will need to register the system using the previously created activation key:

# rhnreg_ks --activationkey=x-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
 Doing checkNeedUpdate
 Updating cache...               ######################################## [100%]
 Updating cache...               ######################################## [100%]
 Package list refresh successful

To make sure that executing remote commands is possible you need to grant additional rights using the following command:

# rhn-actions-control --enable-run

On older Solaris versions the command rhn-actions-control might not be available – in this case the following commands will do the same:

# mkdir -p /opt/redhat/rhn/solaris/etc/sysconfig/rhn/allowed-actions/script
# touch /opt/redhat/rhn/solaris/etc/sysconfig/rhn/allowed-actions/script/run

If you’re also planning to deploy configuration files to the system you’ll need to grant even more rights:

# rhn-actions-control --enable-deploy

On older Solaris versions you might need to run the following commands:

# mkdir -p /opt/redhat/rhn/solaris/etc/sysconfig/rhn/allowed-actions/configfiles
# touch /opt/redhat/rhn/solaris/etc/sysconfig/rhn/allowed-actions/configfiles/all

Afterwards the host shows up in the system list and waits for management tasks:

Screenshot: Solaris systems in the system overview

Controlling the service using SMF

To make sure that the host can be controlled using Spacewalk or Red Hat Satellite, rhnsd (Red Hat Network Daemon) needs to be running. You can start it with the following command:

# /opt/redhat/rhn/solaris/usr/sbin/rhnsd --foreground --interval=10 -v

The parameter --interval is very important in this case – it defines the interval (in minutes) at which the daemon checks for pending tasks (e.g. package installations or remote commands). For testing purposes I set this value to 10 (= 10 minutes). Depending on your system landscape you might want to change this value.

This solution is not ideal because you would have to restart the daemon manually after every reboot – an automatic start would be much more convenient.

Unlike other Unices, Solaris starting from version 10 uses a technology called SMF (Service Management Facility) instead of a conventional init system. The advantages are:

  • parallel starting of processes, faster boot
  • easier definition of dependencies on other services
  • automatic restart after errors

SMF services are defined using XML documents – the so-called SMF manifests. You can find a whitepaper about creating a manifest for the PostgreSQL database server on the Oracle website. If you’re too lazy to read the documentation you might fall in love with the Python tool manifold. Using this tiny helper you can easily create SMF manifests with a wizard.

I used this tool to create the SMF manifest. The tool needs some additional Python modules that can be installed easily using setuptools. So before installing manifold you should install setuptools (see also https://pypi.python.org/pypi/setuptools#unix-wget):

# wget https://bitbucket.org/pypa/setuptools/raw/bootstrap/ez_setup.py -O - | python
# easy_install Manifold
# # or:
# wget --no-check-certificate https://pypi.python.org/packages/source/M/Manifold/Manifold-0.2.0.tar.gz
# tar xfz Manifold-0.2.0.tar.gz ; cd Manifold-0.2.0
# python setup.py install

Now it is possible to create the manifest (the answers I entered follow the prompts):

# manifold rhnsd.xml

The service category (example: 'site' or '/application/database') [site]

The name of the service, which follows the service category
   (example: 'myapp') [] rhnsd

The version of the service manifest (example: '1') [1]

The human readable name of the service
   (example: 'My service.') [] Red Hat Network Daemon

Can this service run multiple instances (yes/no) [no] ?

Full path to a config file; leave blank if no config file
  required (example: '/etc/myservice.conf') [] /opt/redhat/rhn/solaris/etc/sysconfig/rhn/up2date

The full command to start the service; may contain
  '%{config_file}' to substitute the configuration file
   (example: '/usr/bin/myservice %{config_file}') [] /opt/redhat/rhn/solaris/usr/sbin/rhnsd --foreground --interval=10 -v

The full command to stop the service; may specify ':kill' to let
  SMF kill the service processes automatically
   (example: '/usr/bin/myservice_ctl stop' or ':kill' to let SMF kill
  the service processes automatically) [:kill]

Choose a process management model:
  'wait'      : long-running process that runs in the foreground (default)
  'contract'  : long-running process that daemonizes or forks itself
                (i.e. start command returns immediately)
  'transient' : short-lived process, performs an action and ends quickly
   [wait]

Does this service depend on the network being ready (yes/no) [yes] ?

Does this service depend on the local filesystems being ready (yes/no) [yes] ?

Should the service be enabled by default (yes/no) [no] ? yes

The user to change to when executing the
  start/stop/refresh methods (example: 'webservd') [] root

The group to change to when executing the
  start/stop/refresh methods (example: 'webservd') [] root

Manifest written to rhnsd.xml
You can validate the XML file with "svccfg validate rhnsd.xml"
And create the SMF service with "svccfg import rhnsd.xml"

You can also download my manifest from GitHub and validate and import it:

# wget https://raw.githubusercontent.com/stdevel/rhnsd-solman/master/rhnsd.xml
# svccfg validate rhnsd.xml
# svccfg import rhnsd.xml

Now the service can be activated – rhnsd is executed instantly (enable means activating and starting!):

# svcadm enable rhnsd
# ps -ef|grep -i rhn
    root  6306    11   0 17:19:32 ?           0:00 /opt/redhat/rhn/solaris/usr/sbin/rhnsd --foreground --interval=10 -v
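
You can also ask SMF directly whether the service came up cleanly – assuming the manifest was created with the default 'site' category as shown above:

# svcs rhnsd
STATE          STIME    FMRI
online         17:19:32 svc:/site/rhnsd:default

If the state is “maintenance” instead, svcs -xv rhnsd and the referenced log file usually tell you why.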

The service now checks for pending tasks every 10 minutes and executes them.

“Pushing” packages

As mentioned above, Solaris software packages need to be converted into MPM packages using solaris2mpm before they can be distributed using Spacewalk or Red Hat Satellite.

The following example demonstrates this task for the web-based management tool Webmin (I found no other simple software that still actively supports Solaris) in version 1.680. The package is downloaded, converted and uploaded to the management server:

# wget http://prdownloads.sourceforge.net/webadmin/webmin-1.680.pkg.gz
# gzip -d webmin-1.680.pkg.gz
# solaris2mpm --select-arch=i386 webmin-1.680.pkg
Opening archive, this may take a while
Writing WSwebmin-1.680-1_PSTAMP_Jamie_Cameron.i386-solaris.mpm
# rhnpush -v --server fqdn-spacewalk.domain.loc --username admin -c solaris-11 *.mpm
Connecting to http://fqdn-spacewalk.domain.loc/APP
Red Hat Network password:
Package WSwebmin-1.680-1_PSTAMP_Jamie_Cameron.i386-solaris.mpm Not Found on RHN Server -- Uploading
Uploading package WSwebmin-1.680-1_PSTAMP_Jamie_Cameron.i386-solaris.mpm
Using POST request

You might want to use the parameter --select-arch because otherwise SPARC packages (--select-arch=sparc) are always created – such a package cannot be installed on the Intel platform (--select-arch=i386).

Screenshot: Solaris packages in the software channel

Afterwards the package is ready and can be installed using the web interface. Because of the missing OSAD service you need to wait up to 10 minutes – or start the installation manually by running rhn_check:

# rhn_check -v
Installing packages [[['WSwebmin', '1.680', '1_PSTAMP_Jamie_Cameron', 'i386-solaris', 'solaris-11'], {}]]
Updating cache...

Computing transaction...
Fetching packages...                                                                                                                                                                                     
-> rhn://solaris-11/WSwebmin/1.680/1_PSTAMP_Jamie_Cameron/i386-solaris/WSwebmin-1.680-1_PSTAMP_Jamie_Cameron.i386-solaris.pkg                                                                 
WSwebmin-1.680-1_PSTAMP_Jamie_Cameron.i386-solaris.pkg

Committing transaction...
pkgadd -a /opt/redhat/rhn/solaris/var/lib/smart/adminfile -n -d /opt/redhat/rhn/solaris/var/lib/smart/packages/WSwebmin-1.680-1_PSTAMP_Jamie_Cameron.i386-solaris.pkg WSwebmin
Installing WSwebmin

Updating cache...

Package list refresh successful
Doing checkNeedUpdate
Updating cache...

Package list refresh successful

Webmin should be installed and listening on TCP port 10000:

# telnet localhost 10000
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

Opening it in a web browser should also show Webmin:

Screenshot: Webmin on Solaris

Conclusion

It is possible to comfortably manage several Solaris derivatives in the same way as Linux hosts using Spacewalk and Red Hat Satellite. Especially in mixed environments this can help you reduce the maintenance effort. Packages and configuration files can be managed centrally and deployed in a time-saving manner. Even though there are some minor technical limitations (see above), it is possible to manage larger numbers of Solaris hosts efficiently.

So if you have to manage plenty of Solaris and Linux systems and want to reduce the maintenance effort you might want to have a deeper look at the Solaris support of Spacewalk and Red Hat Satellite. :)

Monkey web server + PHP5 + SQLite on Raspbian

Using Raspbian it is possible to turn a Raspberry Pi into a full-featured web server with PHP support. Thanks to SQLite the embedded web server is also capable of serving database-driven web applications.

First of all the official Monkey APT repository needs to be included – this can be done very easily by adding a line to the configuration file /etc/apt/sources.list:

# echo "deb http://packages.monkey-project.com/primates_pi primates_pi main" >> /etc/apt/sources.list

The following commands update the APT cache and install Monkey including SSL and FastCGI support (needed for PHP) and some other plugins:

# apt-get update
# apt-get install monkey{,-auth,-dirlisting,-fastcgi,-liana,-logger,-mandril,-polarssl}

Afterwards PHP5 including FastCGI Process Manager and SQLite3 support is installed:

# apt-get install php5{,-fpm,-cgi,-sqlite} sqlite3

After that it is necessary to enable and configure the Monkey FastCGI module. Because Monkey offers no native PHP support, the FastCGI plugin establishes a connection to the PHP FastCGI Process Manager (PHP-FPM), which communicates with the PHP interpreter and returns the generated content to the web server. The module is activated by adding a line to the configuration file /etc/monkey/plugins.load:

# vi /etc/monkey/plugins.load
...
    # FASTCGI
    # =======
    # Adds FastCGI proxy support.
    #
    Load /usr/lib/monkey/monkey-fastcgi.so
...

ESC ZZ

The module itself is configured afterwards in a dedicated configuration file in /etc/monkey/plugins/fastcgi.

The first category (FASTCGI_SERVER) contains information about the PHP FastCGI Process Manager. In my setup the default values were already correct. If you are having issues you should check whether the socket (/var/run/php5-fpm.sock in this case) exists.

Access limitations can be defined in the second category (FASTCGI_LOCATION). Basically it is a good idea to allow executing PHP files only in particular subdirectories. Rules are defined as regular expressions – in this case executing all PHP files is allowed for testing purposes. To make debugging easier, unique names are defined in the respective categories (ServerName, LocationName).

In my setup I had the issue that PHP applications were no longer executed after some time. To fix this I altered the MaxConnections and KeepAlive settings.

# vi /etc/monkey/plugins/fastcgi/fastcgi.conf
...
[FASTCGI_SERVER]
        ServerName php5-fpm1
        ServerPath /var/run/php5-fpm.sock
        MaxConnections 5

[FASTCGI_LOCATION]
        LocationName php5_location
        ServerNames php5-fpm1
        KeepAlive On
        Match /*.php

ESC ZZ

After restarting the Monkey service PHP is available:

# service monkey restart

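A quick way to test the whole FastCGI chain is a small phpinfo() page in the document root. Note that the document root and the listening port depend on your Monkey configuration (check /etc/monkey/monkey.conf and /etc/monkey/sites/default) – the path and port below are only assumptions:

# echo '<?php phpinfo(); ?>' > /var/www/info.php
# wget -qO- http://localhost:2001/info.php | grep -i "PHP Version"
# rm /var/www/info.php
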
If you are planning to use this system as a public web server you really should harden it:

# apt-get install fail2ban iptables-persistent aide

Of course the firewall rules and the Fail2Ban and AIDE configurations need to be customized. :)
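
As a very rough starting point – and explicitly only a sketch that needs to be adapted to your own services (SSH on port 22 and HTTP on port 80 are assumed here) – a minimal IPv4 rule set could look like this:

# iptables -A INPUT -i lo -j ACCEPT
# iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# iptables -A INPUT -p tcp --dport 22 -j ACCEPT
# iptables -A INPUT -p tcp --dport 80 -j ACCEPT
# iptables -P INPUT DROP
# iptables-save > /etc/iptables/rules.v4

The accept rules are added before the INPUT policy is switched to DROP so that an existing SSH session doesn’t get cut off; the last command stores the rules where iptables-persistent loads them from at boot time (rules.v4 on current Raspbian/Debian versions).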

Spacewalk / Red Hat Satellite / SUSE Manager package action fails: “empty transaction [[6]]”

It is possible that the following error message is displayed on client systems when software packages are distributed using Spacewalk, Red Hat Satellite or SUSE Manager:

Error while executing packages action: empty transaction [[6]]

This issue is rooted in the database of the management suite. After a while (or after a database schema update) the cached package information of the particular client systems becomes invalid. Updates that are already installed are offered again – and of course installing them fails.

I have seen this effect multiple times in combination with Spacewalk and Red Hat Satellite. I haven’t had this issue with SUSE Manager yet, but I’m not using that software in production anyway. Basically this guide will also work for SUSE Manager.

A first approach to solve the issue is to update the client profile:

# rhn-profile-sync -v

If this fails (which was mostly the case for me) another approach is to delete the RHN caches on the management system. To do this it is necessary to stop the services and delete a directory:

satellite # rhn-satellite stop
susemgr_spacewalk # spacewalk-service stop
# rm -rf /var/cache/rhn/*

Beyond that it is necessary to temporarily change a variable in the RHN configuration file before starting the services again:

# vi /etc/rhn/rhn.conf
...
#fix incorrect repodata - DON'T forget to reset to 1!
user_db_repodata=0

ESC ZZ

satellite # rhn-satellite start
susemgr_spacewalk # spacewalk-service start

Updating the cache can take up to one hour. In the meantime it is possible that the package managers of registered client systems report errors like this:

# yum update
...
Cannot retrieve repository metadata (repomd.xml) for repository: xyz
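
Once the repodata has been regenerated the clients may still hold on to the broken metadata. In that case cleaning the local yum cache usually helps (a RHEL/CentOS example – SUSE clients would use zypper instead):

# yum clean metadata
# yum repolist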

Afterwards the RHN configuration file is altered again:

# vi /etc/rhn/rhn.conf
...
#fix incorrect repodata - DON'T forget to reset to 1!
user_db_repodata=1

ESC ZZ

It might be necessary to update the profile of the registered client systems so that the correct updates are reported:

# rhn-profile-sync -v

In my case the correct patches were displayed afterwards. :)

arsa – archive and remove old Spacewalk/Red Hat Satellite/SUSE Manager actions

If you’re maintaining your system landscape with Spacewalk, Red Hat Satellite or SUSE Manager you might see many old entries when having a look at the action list after a while:

Screenshot: numerous Spacewalk actions

Every task triggered using the web interface is registered as an action – after a while this list grows rapidly. I’m not very familiar with the database design of the software suites mentioned above, but I think it’s basically a good idea to clean this up from time to time.

I was unable to find a button for this in the web interface – but there is a very well documented API for many programming languages (e.g. Perl, Python and Ruby): [click me!]

So I read the API documentation and wrote a little Python script for this: arsa (archive spacewalk actions).

The script can be downloaded free of charge from GitHub: [click me!]

Notes and examples

By default the script prompts for login credentials to the management server (default: localhost). If you need to automate this you can use the following shell variables:

  • SATELLITE_LOGIN – username
  • SATELLITE_PASSWORD – appropriate password

Another possibility is to create a file containing the username (first line) and the password (second line) and hand it to the script (parameter -a / --authfile). The file needs to have permissions 0600.

Some examples follow below.

Listing all completed actions (login credentials are prompted):

$ ./arsa.py -l
Username: mylogin
Password:
things I'd like to clean (completed):
-------------------------------------
action #1494 ('Remote Command on mymachine.localdomain.loc.')

Removing all completed and archived actions (login credentials are provided by shell variables):

$ SATELLITE_LOGIN=mylogin SATELLITE_PASSWORD=mypass ./arsa.py -r
Archiving action #1494 ('Remote Command on mymachine.localdomain.loc.')...
Deleting action #1494 ('Remote Command on mymachine.localdomain.loc.')...
Deleting action #1493 ('Remote Command on myothermachine.localdomain.loc.')...
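
The same run using an authfile instead (permissions 0600, username in the first line, password in the second – file name and path are just an example):

$ printf "mylogin\nmypass\n" > ~/.arsa_auth
$ chmod 0600 ~/.arsa_auth
$ ./arsa.py -a ~/.arsa_auth -r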

The internal help lists additional parameters:

$ ./arsa.py -h
Usage: arsa.py [options]

arsa.py is used to archive completed actions and remove archived actions on
Spacewalk, Red Hat Satellite and SUSE Manager. Login credentials are assigned
using the following shell variables:
SATELLITE_LOGIN  username                 SATELLITE_PASSWORD  password
It is also possible to create an authfile (permissions 0600) for usage with
this script. The first line needs to contain the username, the second line
should consist of the appropriate password. If you're not defining variables
or an authfile you will be prompted to enter your login information.
Checkout the GitHub page for updates: https://github.com/stdevel/arsa

Options:
  --version             show program's version number and exit
  -h, --help            show this help message and exit
  -a FILE, --authfile=FILE
                        defines an auth file to use instead of shell variables
  -s SERVER, --server=SERVER
                        defines the server to use
  -q, --quiet           don't print status messages to stdout
  -d, --debug           enable debugging outputs
  -r, --remove          archives completed actions and removes all archived
                        actions
  -l, --list-only       only lists actions that would be archived

Feedback is welcome! :)
