mkelfs – create kickstart trees for Enterprise Linux comfortably

A couple of days ago I spent a lot of time with Kickstart under Spacewalk and Red Hat Satellite and was looking for a comfortable way to create kickstart trees for CentOS.

By chance I stumbled upon a blog article that provided a useful script, which inspired me to write a handier Python script for this task.

After a few hours a little application emerged that can be used to create kickstart trees for most Enterprise Linux-like distributions (e.g. CentOS, Fedora, Scientific Linux): mkelfs. :)

The tool lets you specify particular mirrors, release versions and architectures. The required folder structures are created automatically and the files are downloaded using wget.

The script can be downloaded from GitHub.

Examples

Some examples:

$ mkelfs.py --release 6.5 --arch x86_64

Downloads the Kickstart files for CentOS 6.5 x86_64 from the default mirror. The files are stored underneath /var/satellite/kickstart_tree.

$ mkelfs.py --release 4.1 --arch i386 --target /var/museum/ks --mirror http://vault.centos.org

Downloads the Kickstart files for the antiquated CentOS release 4.1 i386 from the CentOS Vault mirror. The files are stored underneath /var/museum/ks.

$ mkelfs.py -r 6.4 -a x86_64 -m http://www.nic.funet.fi/pub/Linux/INSTALL/scientific -o scientific -fq

Downloads the kickstart files for Scientific Linux 6.4 x86_64 from a Finnish mirror. Pre-existing files are overwritten and no additional output is generated.

$ mkelfs.py -f -r 20 -a i386 -m http://mirror.digitalnova.at/fedora/linux -o fedora

Downloads the 32-bit kickstart files for Fedora release 20 from an Austrian mirror.

By the way, this was my first Python tool – so feel free to give me some feedback on how to improve it. :)

Short tip: List imported RPM GPG keys

Sometimes you just need to know which RPM GPG keys have been imported. You can get this information with the following command:

# rpm -qa --qf '%{VERSION}-%{RELEASE} %{SUMMARY}\n' gpg-pubkey\*
c105b9de-4e0fd3a3 gpg(CentOS-6 Key (CentOS 6 Official Signing Key) <centos-6-key@centos.org>)
0608b895-4bd22942 gpg(EPEL (6) <epel@fedoraproject.org>)
863a853d-4f55f54d gpg(Spacewalk <spacewalk-devel@redhat.com>)
66fd4949-4803fe57 gpg(VMware, Inc. -- Linux Packaging Key -- <linux-packages@vmware.com>)

The command lists all packages starting with the string gpg-pubkey – for each package the version, release and summary are displayed. The summary often contains a useful hint about the corresponding YUM repository.
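If you need the full details of a single key (packager, build date and the actual public key block), you can also query the corresponding pseudo package directly – a small sketch using one of the key IDs from the listing above:

# rpm -qi gpg-pubkey-c105b9de-4e0fd3a3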

Distribute Oracle JRE using Spacewalk, Red Hat Satellite and SUSE Manager and perform a clean installation

There are several OpenJDK versions for running Java applications under Enterprise Linux. These are adequate for most applications, but in some cases you might need the proprietary version by Oracle (e.g. because of support matrices of commercial third-party software).

On the JRE website you can find tarball and RPM package downloads.

If you distribute the RPM package within a custom software channel using Spacewalk, Red Hat Satellite or SUSE Manager, you will notice that there is a duplicate entry for the jre package. If you then try to install the JRE, the packages provided by the RHEL, Scientific Linux or CentOS channels are preferred.

You can work around this by using the following command, which disables all YUM repositories except your own (mychannel in this example) and installs the package:

# yum --disablerepo="*" --enablerepo="mychannel" install jre

You can also make this exception persistent by altering the YUM configuration:

# vi /etc/yum/pluginconf.d/rhnplugin.conf
...
[rhel-x86_64-server-6]
exclude=java-1.?.0-openjdk java-1.?.0-gcj

ESC ZZ

You will have to replace the software channel name depending on your distribution – some examples:

channel                  description
rhel-x86_64-server-6     RHEL 6 x86_64 base channel
centos6-base-x86_64      CentOS 6 x86_64 base channel
centos6-updates-x86_64   CentOS 6 x86_64 update channel

If you don't know which channels contain packages providing jre, you might want to have a look at this command:

# yum whatprovides jre|grep -i Repo|sort -u
Repo        : centos6-base-x86_64
Repo        : centos6-updates-x86_64

After the installation you might notice that Java applications cannot be executed. A look at the output of the following command tells you that a library is missing:

# ldd $(which java)
        linux-vdso.so.1 =>  (0x00007fff677ff000)
        libpthread.so.0 => /lib64/libpthread.so.0 (0x0000003e68800000)
        libjli.so => not found
        libdl.so.2 => /lib64/libdl.so.2 (0x0000003e68c00000)
        libc.so.6 => /lib64/libc.so.6 (0x0000003e68400000)
        /lib64/ld-linux-x86-64.so.2 (0x0000003e68000000)

Thanks to find I quickly discovered that this library is part of the RPM package but is stored outside the default library paths (such as /usr/lib). To fix this you need to create a configuration file and refresh the library cache so that the file can be found.
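For the record, tracking down where libjli.so actually lives might look like this (a small sketch – the JRE path shown is the one from this installation and will differ for other versions):

# find /usr/java -name libjli.so
/usr/java/jre1.7.0_51/lib/amd64/jli/libjli.so
# rpm -ql jre | grep libjli.so
/usr/java/jre1.7.0_51/lib/amd64/jli/libjli.so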

# echo "/usr/java/jre1.7.0_51/lib/amd64/jli" > /etc/ld.so.conf.d/oracle-jre.conf
# ldconfig -p|head -n1
438 libs found in cache `/etc/ld.so.cache'
# ldconfig ; ldconfig -p|head -n1
439 libs found in cache `/etc/ld.so.cache'

The output of the last two commands is important – it shows that the library was detected and added to the cache.
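If you want to check the specific library rather than just the library count, grepping the cache works as well (again, the path depends on the installed JRE version):

# ldconfig -p | grep libjli
        libjli.so (libc6,x86-64) => /usr/java/jre1.7.0_51/lib/amd64/jli/libjli.so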

Running Java applications should now work like a charm:

# java -version
java version "1.7.0_51"
Java(TM) SE Runtime Environment (build 1.7.0_51-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.51-b03, mixed mode)

:)

Short tip: Fault Tolerance – Replay is unavailable for the current configuration

A couple of days ago I stumbled upon the following error message while activating Fault Tolerance on a particular virtual machine:

Replay is unavailable for the current configuration.

At first I wasted time checking the following things:

  • checking the VM configuration (vCPUs, OS support, disk provisioning, …)
  • testing dvSwitches instead of vSwitches
  • researching syslogs and the internet

The solution was quite simple: the virtual machine was powered on, and therefore Fault Tolerance could not be enabled. After shutting down the VM I was able to enable Fault Tolerance and boot the system again.

Short tip: create RPM GPG key for EL5 and 6

If you want to create and sign RPM packages for Enterprise Linux 5 and 6, you have to consider a few things while creating and using the GPG key so that EL5 systems can also verify the signed packages.

If you create a GPG key using the default settings and sign an RPM package under EL6, you will get the following error on EL5 systems:

# rpm -v --checksig mypackage.rpm
Header V4 RSA/SHA1 signature: BAD, key ID xxxxxxxx

In a blog article I found a very useful hint: RPM on EL5 is not able to deal with OpenPGP V4 signatures. The older V3 signature format has to be used here.

Basically it is recommended to sign EL5 packages with an RSA key of at most 2048 bits – you need to keep this in mind while creating the GPG key.

$ gpg --gen-key
...
Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 2048
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 0
Key does not expire at all
Is this correct (y/n)? y

GnuPG needs to construct a user ID to identify your key.

Real name: Max Mustermann RPM signing key
Email address: max@mmuster.de
Comment: RPM signing key
You selected this USER-ID:
    "Max Mustermann RPM signing key (RPM signing key) "

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
...
$ gpg --export -a 'Max Mustermann RPM signing key (RPM signing key) <max@mmuster.de>' > RPM-GPG-KEY-mmuster

By the way – it is not recommended to create the key in a su / sudo session because this will fail.

After creating the GPG key you need to alter the file ~/.rpmmacros if you sign your packages on an EL6 system – like in my case. On EL6 systems a V4 signature is used by default – this needs to be disabled:

$ vi .rpmmacros
%_signature gpg
%_gpg_name Max Mustermann RPM signing key (RPM signing key) <max@mmuster.de>
%__gpg_sign_cmd %{__gpg} \
    gpg --force-v3-sigs --digest-algo=sha1 --batch --no-verbose --no-armor \
    --passphrase-fd 3 --no-secmem-warning -u "%{_gpg_name}" \
    -sbo %{__signature_filename} %{__plaintext_filename}

If you don’t know your GPG key name you might want to have a look at the output of the following command:

$ gpg --list-keys
pub   xxxxD/xxxxxxxx 2014-02-06
uid                  Max Mustermann RPM signing key (RPM signing key) <max@mmuster.de>
sub   xxxxg/xxxxxxxx 2014-02-06

After signing the RPM package, the signature should also verify correctly on EL5 systems:

EL6 $ rpm --resign mypackage.rpm
EL6 $ scp mypackage.rpm ...
EL5 $ rpm -v --checksig mypackage.rpm
    Header V3 DSA signature: OK, key ID xxxxxxxx
    Header SHA1 digest: OK (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx)
    MD5 digest: OK (xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx)
    V3 DSA signature: OK, key ID xxxxxxxx

:)

First look at SUSE Linux Enterprise Server for VMware

A couple of days ago I stumbled upon something I nearly forgot: SUSE Linux Enterprise Server for VMware.

This is a slightly customized version of SUSE Linux Enterprise Server (SLES) created in 2010 as part of a cooperation between VMware and Novell. The most interesting thing about it is that this version is completely free of charge for owners of the VMware vSphere editions Standard or higher. Yes, you read that right – without any charge. Usually you have to buy subscriptions in order to get patches and updates for SLES. This does not apply to VMware customers because the patch and update entitlements are bound to the vSphere subscription. This means: as long as vSphere is maintained, SLES installations will also be maintained.

There are no installation limits – you can install as many SLES guests as you like. This can be interesting because you don't have the typical per-guest limitations. However, some installation roles are understandably missing in comparison with the conventional SLES:

  • Infiniband (OFED)
  • KVM Virtual Machine Host Server
  • Xen Virtual Machine Host Server

As with the conventional SLES, ext4 is not supported as an installation target (see gallery) – even after the installation, ext4 is only supported read-only. I don't know the reason for this, but I'd prefer ext4 support over btrfs.

The offering only covers update and patch subscriptions – telephone and mail support for SLES can optionally be bought from VMware Global Support Services. I found no prices on the internet – keep that in mind if you're comparing costs.

To use SUSE Linux Enterprise Server for VMware you only need to fill out an online form: [click me!]. You enter the serial numbers of the particular ESXi hosts, after which activation codes are generated. You can enter those codes during the installation.

This offer is interesting for medium-sized and big companies because you might save some money by dropping the conventional subscriptions. Such companies typically use at least vSphere Standard anyway if VMware products are in place, so the subscription is an additional "goodie". Small companies often buy the popular Essentials kits – here the extra expense isn't always worthwhile.

The only thing that might be considered a disadvantage is that support is obtained from VMware and not from SUSE – but I have no personal experience with that. If the VMware support for SLES is anything like the support for vSphere (which has really satisfied me so far!), this can't be a disadvantage. :)

Attached are some screenshots of a test installation:

VMware vSphere Mobile Watchlist

VMware published a very useful little helper named vSphere Mobile Watchlist. The smartphone application is designed for monitoring and controlling running virtual machines.

To avoid being overwhelmed by an endless list of virtual machines (which of course depends on the size of your virtual environment), the most important virtual machines can be combined in a "watchlist". This way you always have your favorites in a single view.

The individual VMs can be powered on/off and suspended remotely. The VM console cannot be accessed directly at the moment, but at least the application is able to show a screenshot of the current state, so you can see whether a VM crashed because of a blue screen or kernel panic. It remains to be seen whether VMware adds console access in the future – that's the only feature I'm missing so far. :)

The Android application is free (no ads!) and can be downloaded from the Google Play Store. ESXi or vCenter version 5.0 or higher is required, and users of the free ESXi are also getting their money's worth. It's highly recommended for everyone using VMware vSphere! :)

Attached are some screenshots of the app:

:)

Migrating vCenter Server Appliance data partitions to LVM volumes

Recently I discovered the following hint in the vCenter service state overview:

Ldap backup task monitor warning

Apparently there was an issue with the integrated LDAP service of the vCenter Server Appliance – unfortunately searching the internet for the corresponding error message was not very successful:

LDAP data backup subcomponent error: JoinTool operation status: FAILED

A first look at the system gave an interesting hint regarding the possible cause:

# df -h|grep log
/dev/sdb2        20G   20G  0  100% /storage/log

Because the appliance centrally collects the syslogs of all connected ESXi hosts, the partition had run out of capacity. The immediate fix was quite simple: cleaning up the partition solved the issue and the warning in vCenter disappeared.
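If you are facing the same situation, one way to identify the biggest consumers before deleting anything might be something like this (a sketch – adjust the path to the partition that ran full):

# du -s /storage/log/* | sort -rn | head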

Time to fix the root cause for good: adjusting the undersized partition.

The vCSA partitioning layout

The vCSA consists of two hard drives:

  • Hard drive 1 (SCSI 0:0): 25 GB – operating system (SLES 11 SP2)
  • Hard drive 2 (SCSI 0:1): 100 GB – database and logs/dumps

Unfortunately LVM was not used when designing the partitioning layout, which means that altering the partition sizes requires application downtime. Beyond that, I have had mixed experiences with manually altering conventional partition tables. LVM would offer several advantages here:

  • more flexible storage distribution (no manual altering of partition tables required, etc.)
  • Volumes can be expanded online (no stopping of the VMware services needed)
  • clearly easier and “nicer” solution

By default the layout looks like this:

# df -h|grep -i sd
/dev/sda3       9.8G  3.8G  5.5G  41% /
/dev/sda1       128M   21M  101M  18% /boot
/dev/sdb1        20G  3.4G   16G  18% /storage/core
/dev/sdb2        20G   16G  3.7G  81% /storage/log
/dev/sdb3        60G  1.8G   55G   4% /storage/db

Besides a swap partition, the first hard drive consists of a boot and a root partition. The second hard drive (100 GB) is divided into a big database partition and two smaller partitions for coredumps and logs. The latter is often undersized: it only offers 20 GB.

The missing LVM layout can be retrofitted quite quickly.

One important hint in advance: the following concept also requires application downtime for the initial implementation, but future expansions can be done "online". Beyond that, my approach is currently (March 2014) not supported by VMware. I created a feature request for the VMware engineering team – I'm still waiting for an answer.

First of all an additional hard drive is added to the VM – I chose 100 GB again. Basically this size is quite sufficient (see above), but the default layout isn't always very sensible. In my case I'm using the vCenter with the small database setup because I only have to manage a small environment – so I don't need the designated 60 GB for database files (/storage/db). If you're managing a bigger environment you might need more storage capacity here. In my opinion you should not shrink the partition for coredumps (/storage/core), or only with caution – if you run into serious issues with the vCenter Server Appliance you might otherwise no longer be able to create complete coredumps.

My idea for optimizing the layout looks like this (please keep in mind that you might need to alter it depending on your system landscape!):

  • partition the new hard drive (100 GB) as an LVM physical volume
  • create a new LVM volume group "vg_storage"
  • create new LVM logical volumes "lv_core" (20 GB), "lv_log" (40 GB) and "lv_db" (40 GB)
  • create ext3 file systems (as before) on the new logical volumes and mount them temporarily
  • switch into single-user mode and copy the data files
  • alter /etc/fstab

By the way: I'm using VMware vCenter Server Appliance version 5.5 – but this procedure can probably also be adapted to older versions. I highly recommend creating a clone (or at least a snapshot) of the virtual machine beforehand. It can also be a good idea to create a backup of the database and application configuration in accordance with the relevant VMware KB articles.

It is important to take a closer look at my layout and alter it if needed. Ideally, check the storage usage of your second hard drive first. I'm not responsible for any problems caused by insufficient volume sizes.

First of all you have to make sure that the system, which is based on SUSE Linux Enterprise Server 11 SP2, detects and activates LVM volumes at boot time – otherwise the next boot will fail in fsck, which will not be able to check the newly created volumes. For this you need to customize the following configuration file:

# cp /etc/sysconfig/lvm /etc/sysconfig/lvm.initial
# vi /etc/sysconfig/lvm
...
#LVM_ACTIVATED_ON_DISCOVERED="disable"
LVM_ACTIVATED_ON_DISCOVERED="enable"

This forces every detected LVM volume to be activated during system boot before the file systems mentioned in /etc/fstab are mounted. If you want, you can limit this behavior to the volume group created later by setting the following parameters instead:

#LVM_VGS_ACTIVATED_ON_BOOT=""
LVM_VGS_ACTIVATED_ON_BOOT="vg_storage"
...
LVM_ACTIVATED_ON_DISCOVERED="disable"

This makes sure that only the created LVM volume group (vg_storage) is activated during the boot process – other volume groups are ignored.

On the recently added 100 GB hard drive an LVM partition is created and initialized as a physical volume:


# fdisk /dev/sdc <<EOF
n
p
1

t
8e
p
w
EOF

After that the LVM volume group and logical volumes are created. I reduced the size of the database volume to 40 GB and doubled the capacity of the log volume:

# pvcreate /dev/sdc1
# vgcreate vg_storage /dev/sdc1
# lvcreate --size 20G --name lv_core vg_storage
# lvcreate --size 40G --name lv_log vg_storage
# lvcreate --extents 100%FREE --name lv_db vg_storage

Afterwards ext3 file systems are created on the new logical volumes:

# mkfs.ext3 /dev/mapper/vg_storage-lv_core
# mkfs.ext3 /dev/mapper/vg_storage-lv_log
# mkfs.ext3 /dev/mapper/vg_storage-lv_db

It is a good idea to update the initial ramdisk to make sure that LVM2 is available while booting the system:

# mkinitrd_setup
# file /lib/mkinitrd/scripts/setup-lvm2.sh
# mkinitrd -f lvm2
Scanning scripts ...
Resolve dependencies ...
Install symlinks in /lib/mkinitrd/setup ...
Install symlinks in /lib/mkinitrd/boot ...

Kernel image:   /boot/vmlinuz-3.0.101-0.5-default
Initrd image:   /boot/initrd-3.0.101-0.5-default
Root device:    /dev/sda3 (mounted on / as ext3)
Resume device:  /dev/sda2
Kernel Modules: hwmon thermal_sys thermal processor fan scsi_mod scsi_transport_spi mptbase mptscsih mptspi libata ata_piix ata_generic vmxnet scsi_dh scsi_dh_rdac scsi_dh_alua scsi_dh_hp_sw scsi_dh_emc mbcache jbd ext3 crc-t10dif sd_mod
Features:       acpi dm block lvm2
27535 blocks

After that, temporary mount points are created and the new file systems are mounted:

# mkdir -p /new/{core,log,db}
# mount /dev/mapper/vg_storage-lv_core /new/core
# mount /dev/mapper/vg_storage-lv_log /new/log
# mount /dev/mapper/vg_storage-lv_db /new/db

I adapted a VMware KB article (2056764 – "Increase the disk space in vCenter Server Appliance") for copying the data. First you have to switch into single-user mode to make sure that no VMware services are running and no file locks prevent the application data from being in a consistent state. Afterwards the files are copied using cp and /etc/fstab is altered (after creating a backup). Finally the temporary mount points are removed and the new volumes are mounted:

# init 1
# cp -a /storage/db/* /new/db
# cp -a /storage/log/* /new/log
# cp -a /storage/core/* /new/core
# umount /{new,storage}/{db,log,core}
# cp /etc/fstab /etc/fstab.copy
# sed -i -e 's#/dev/sdb1#/dev/mapper/vg_storage-lv_core#' /etc/fstab
# sed -i -e 's#/dev/sdb2#/dev/mapper/vg_storage-lv_log#' /etc/fstab
# sed -i -e 's#/dev/sdb3#/dev/mapper/vg_storage-lv_db#' /etc/fstab
# mount /storage/{db,log,core}
# rmdir -p /new/{db,core,log}

A look at the output of df shows that the change was applied:

# df -h|grep storage
/dev/mapper/vg_storage-lv_core   20G  3.4G   16G  18% /storage/core
/dev/mapper/vg_storage-lv_log    40G   15G   23G  40% /storage/log
/dev/mapper/vg_storage-lv_db     40G  1.7G   36G   5% /storage/db

Switching into the default runlevel 3 should be no problem:

# init 3

It is a good idea to reboot once to make sure that the new LVM volumes are also available after the next boot (a quick check is sketched below). Once you have made sure that VMware vCenter still works like a charm, you can remove the former data hard drive from the virtual machine configuration.
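A quick sanity check after the reboot might look like this (a sketch – it simply verifies that the volume group is active and the volumes are mounted):

# vgs vg_storage
# lvs vg_storage
# df -h | grep storage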

Spacewalk: Fatal error in Python code occurred [[6]]

During a recent system update that I triggered via Spacewalk I stumbled upon the following error message:

Fatal error in Python code occurred [[6]]

After some troubleshooting I realized that I had upgraded my Spacewalk server to the most recent release a couple of days ago. During this update I forgot to also update the Spacewalk client repository.
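To see which client-side packages a managed system is actually running, something like the following might help (a sketch – the package names assume an EL6 client registered against Spacewalk):

# rpm -qa | grep -E "rhn-client-tools|rhn-check|yum-rhn-plugin"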

The solution was quite simple: the client software channel has to be synchronized against the client repository of the new Spacewalk release, and the client packages on the managed systems need to be updated afterwards.

In another blog article I demonstrated how to automatically update software channel contents using a cron job – of course this script also needs to be altered accordingly:

# cat /etc/cron.daily/spacewalk_sync.cron
...
/usr/bin/spacewalk-repo-sync --channel spacewalk-client-x86_64 \
                             --url http://yum.spacewalkproject.org/2.1-client/RHEL/6/x86_64/ \
                             --type yum -c spacewalk-client-x86_64 >/dev/null

ESC ZZ

After an update of the appropriate Spacewalk client packages the next remote update worked like a charm.

First look at Spacewalk 2.1

A couple of days ago I stumbled upon a blog article by Duncan Mac-Vicar about a very interesting concept for modernizing the web interface of Spacewalk. I noted with great interest that a combination of de facto standards such as Twitter Bootstrap, jQuery and HTML5 was used to pretty up the frumpy interface. Later I found out that these changes had already landed in Spacewalk 2.1 – I totally missed that, what a pity! :(

Further changes and hints

Besides the new "responsive" web interface there are plenty of other enhancements, such as a new date picker and a password meter which displays the strength of a chosen password.

The integrated OpenSCAP system has been extended with some new functions, and about 200 bugs were fixed. Application and script developers will be glad to hear that the Spacewalk API now comes with 10 new calls. Spacewalk 2.1 will be the last version supported for installation on an EL5 system (RHEL, CentOS, Scientific Linux, OEL) – Fedora 18 is already unsupported as a host system.

The modern web interface will soon be ported to the source code base of SUSE Manager.

Of course I immediately started upgrading my local playground. There is an informative upgrade guide in the wiki linked on the Spacewalk homepage – you might want to have a look there: https://fedorahosted.org/spacewalk/wiki/HowToUpgrade
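Heavily abbreviated – and leaving out the preparation steps the wiki describes in detail (backups, pointing yum at the 2.1 repositories by updating the spacewalk-repo package, etc.) – the upgrade of an EL6 host boils down to something like the following sketch; the guide linked above remains the authoritative source:

# spacewalk-service stop
# yum upgrade
# spacewalk-schema-upgrade
# spacewalk-service start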

The new web interface

As mentioned above, the web interface was modernized and glammed up using HTML5, Bootstrap and jQuery – it is now "responsive", which makes it quite usable on mobile devices (smartphones, tablets). Here are some screenshots of the web interface on my Google Nexus 7:

I really appreciate this update! The new interface looks cleaner and better designed than the old one. I'm curious to see what the implementation in SUSE Manager will look like and whether Red Hat Satellite will also get a new design. :)
