Mac OS X – a report 60 days after switching


Attentive readers of my blog or Twitter feed may have noticed that I have been spending time on OS X-related topics for two months now. The reason is that I switched my notebook’s operating system from Microsoft Windows to Mac OS X.


To be honest, there was no compelling reason for switching. I had been using Microsoft Windows since my childhood (starting with version 3.1) – and after a short excursion to Linux desktops I returned to Windows again. After being frustrated by my latest tests of Linux on the desktop, I was interested in a radical change. Since I had already had positive first experiences with OS X on a Mac Mini, I decided to replace my ThinkPad with a MacBook. A hardware change would have been necessary anyway, as the T420s no longer met my needs.

Hardware selection



After taking a look at the MacBook Air, I finally decided on a 13.3-inch MacBook Pro with Retina display. The main argument for the MacBook Air would have been its attractive price, starting at 900 euros – unfortunately, the meager 4 GB of memory and the low-density display are quite unappealing. Luckily I found an online offer for a MacBook Pro with Retina display and a much better hardware configuration. The surcharge was quite small and worth it. My device is equipped with:

  • 13.3-inch Retina display with a resolution of 2560×1600 pixels
  • Intel Core i5 processor (i5-4258U, 2.4 GHz, 3 MB L3 cache)
  • 8 GB memory
  • 256 GB SSD
mStand + Thunderbolt Dock

For use at home I also bought an Elgato Thunderbolt docking station, which is a good companion to the Rain mStand. The dock connects the MacBook to my gigabit network, keyboard, mouse and a Dell U2713H display. This display replaced my two Samsung SyncMaster 2433LW screens. After working exclusively on the Retina display for two weeks, I was shocked when I turned the five-year-old Samsung screens on again – I really wanted something with an adequate resolution. Because the new display’s resolution matches the Retina display, I no longer need a second screen. As a result, I even have more space on my desk. If I temporarily need more screen space, I can add the MacBook’s display to extend the desktop.

Software alternatives

The hardest part of changing the operating system is selecting software. Especially when you have been using one particular platform for a couple of years, it can be hard to find adequate alternatives. Fortunately, part of my software (Firefox, Opera, Citrix, Skype, VLC, …) is platform-independent, so I didn’t need to search for alternatives. For the remaining software I found the following replacements:

  • Microsoft Office 2010 => Microsoft Office for Mac 2011 (*)
  • VMware Workstation => VMware Fusion (*)
  • WinRAR => UnrarX
  • Texmaker => TeXnicle
  • Camtasia => Camtasia 2 for Mac (*)
  • KeePass => KyPass Companion (*)
  • OpenVPN => Viscosity (*)
  • Photoshop => GIMP
  • Filezilla => Cyberduck

(*) = paid

For some applications it was necessary to buy additional licenses. Beyond that, I found the following applications and tools very useful:

  • Airmail (paid) – in my opinion the best mail client for OS X
  • Android File Transfer – OS X doesn’t support MTP; this tool makes it possible to exchange files between smartphone and computer
  • iStat – displays battery/network/CPU/memory statistics
  • Microsoft Remote Desktop – the official Microsoft RDP client

For some use-cases I no longer need dedicated software at all. I had really missed CalDAV/CardDAV support in Microsoft Office – there are many third-party applications to fill this gap, but I had no luck with them. As a result, I managed my contacts and calendars only on my smartphone and tablet. OS X offers great native integration for this use-case – I can continue to use my Baikal database.


Docked MacBook Pro

After about two months I have become familiar with OS X. Problems mostly occur when switching between the two operating systems – e.g. when I use my Windows-based business computer after the weekend.

I’m very happy with the MacBook Pro’s hardware – especially the Retina display. Contrary to my fears, the slightly glossy display isn’t annoying at all. At this point I’d like to mention that not every “glossy” is equal: when I hear “glossy screen” I think of the panels built into low-budget notebooks. The Retina panel offers a much better picture – it is slightly glossy, but outdoor usage is still possible. The battery life is much better than my ThinkPad’s – and I don’t think age is the only reason for that. Another thing I really like is the nearly silent cooling – my T420s often annoyed me. As a ThinkPad fan, the legendary keyboard was always my first choice – but I have to admit that the Apple keyboard is at least comparable. The key travel is comfortable and typing on it is quite pleasant. Illuminated keyboards are a matter of taste – for me, the backlight is an adequate replacement for the ThinkLight.

A ThinkPad feature I really miss is the docking port. The Thunderbolt dock is a docking station too, but I need to connect two cables: power and Thunderbolt. With the ThinkPad it was sufficient to click the notebook into place. You also need to compromise on expandability when switching from a conventional notebook. All my previous notebooks were equipped with an SSD (for the operating system and applications) and a conventional hard drive for “unimportant” files (music library, virtual machines, etc.). This option is not available for the MacBook, so you have to choose between an SSD and a hard drive. As a result, I bought a MacBook with an adequately sized SSD and moved rarely used data to my NAS.

For me, OS X is a good compromise between a “just working” desktop and a Unix-like operating system that offers a complete set of Unix software for power users. Linux would also have been interesting, but unfortunately my recent tests showed that it won’t be a serious option for me for the next few years.


VMworld 2014 Barcelona


Last week the European VMworld event took place at the Gran Via venue in Barcelona. The program of the four-day conference focused on virtualization and especially on VMware products. VMware hosts two of these events every year – one in the USA and one in Europe.

I had the honour of joining this event as a press member, which gave me the opportunity to gather many impressions. It was my first VMworld – but definitely not my last. :)

The program is very extensive – visitors may choose between:

  • more than 400 talks (partially hands-on) around VMware technologies
  • an exhibition with about 100 vendors
  • certification exams at reduced prices
  • the possibility to talk to VMware experts

The agenda was quite varied – the only problem was scheduling everything. As a little helper, VMware published a smartphone application for Android and iOS devices for scheduling talks. Besides that, the app can also browse the exhibitor list and share messages and photos via Twitter.

The numerous chill-out lounges were an invitation to make contacts.


The keynote was introduced by a cool dance/light act (starting at 1:02):

The agenda was characterised by new products around cloud technology and the Software-Defined Data Center – some of the topics were:

  • vCloud Air
  • vRealize Suite
  • Horizon FLEX
  • vSphere 6

The presentation was moderated by well-known faces from VMware’s management:

  • Maurizio Carli (EMEA Senior Vice President and General Manager)
  • Pat Gelsinger (CEO)
  • Bill Fathers (Vice President and General Manager Hybrid Cloud)
  • Sanjay Poonen (Vice President and General Manager End-User-Computing)
  • Kit Colbert (CTO End-User-Computing)

vCloud Air

vCloud Air is a cloud service that was previously called vCloud Hybrid Service. It is based on vSphere technology and integrates seamlessly into existing VMware customer infrastructures. The service’s prime benefit is providing dedicated and redundant cloud resources while complying with defined security and availability rules. Customers can use vCloud Air resources as disaster recovery infrastructure, which is usually cheaper than buying additional hardware.

VMware claims that vCloud Air offers twice the computing power of Microsoft Azure and triple the storage performance of Amazon Web Services. Compared with those competing products, the VMware alternative is said to be even cheaper. More details about the comparison can be found in a VMware blog post: [click me!]

Currently, more than 5000 applications and 90 operating systems are supported on vCloud Air.

As a highlight, a new vCloud Air data center for Central Europe was announced, located in Germany. There was huge demand for such a data center. According to VMware, Germany is a good choice because of its strong security and privacy laws. The data center is expected to be ready for use in the first quarter of 2015.

vRealize Suite

VMware also announced the vRealize Suite – a suite for managing hybrid clouds. It is designed to manage:

  • VMware vSphere, Virtual SAN, NSX
  • Microsoft Azure
  • Amazon Web Services
  • other hypervisors, e.g. Hyper-V, Xen and KVM

Automation and operations management are core functionalities of the software. An important product feature is the self-service portal. Using operations management you can manage:

  • performance
  • capacities
  • configurations
  • security governance
  • and logs

Depending on the use-case there are two editions: Advanced and Enterprise.

More product information can be found on the VMware website: [click me!]

Horizon FLEX

Horizon FLEX is an interesting solution for BYOD scenarios. The product’s use-case is providing centrally managed virtual machines to remote users.

VMs can be created and managed centrally. Administrators provide customized VM templates that end-users can access using the Horizon FLEX Client (available for Microsoft Windows and Mac OS X). End-users run these VMs using VMware Player Pro or VMware Fusion Pro.

VM policies are implemented centrally using the Horizon FLEX Policy Server. Management tasks include patch and backup maintenance. It is also possible to harden VMs – e.g. enforce remote locks and define expiration dates. VMs provided by Horizon FLEX can also be used without a network connection. With this product it is possible to ensure that even remote users follow company security rules for virtual machines.


AirWatch

AirWatch is an enterprise mobile device management suite that can manage devices, applications, workspaces and data content. Unlike alternative products, AirWatch manages smartphones and tablets as well as conventional laptops and desktops.

Currently the following operating systems are supported:

  • Microsoft Windows
  • Mac OS X
  • Android
  • Apple iOS
  • BlackBerry OS
  • Symbian

During the keynote an interesting use-case was presented – there is a recording of this demo on the official VMworld YouTube channel (starting at 1:34:00):


EVO:RAIL

In my opinion, the most interesting announcement was EVO:RAIL – an appliance offered by VMware and certified partners. EVO:RAIL is a hyper-converged infrastructure appliance (HCIA) concept that works great for medium-sized businesses and smaller private clouds.

But why? Because it offers 4 independent physical servers, Virtual SAN storage and remarkable CPU and memory resources in a small form factor (a 2U rack case). Each server consists of at least:

  • 2x Intel E5-2620 v2/v3 processors (6 cores/12 threads, 15 MB cache, 2.1 GHz)
  • 192 GB memory
  • 3x SAS hard drives with 1.2 TB capacity each (10K RPM)
  • 400 GB enterprise SSD (MLC) for read/write caching (VSAN)
  • 2x 10 Gbit network adapters (SFP+ or RJ45)
  • IPMI remote management

In summary, one appliance offers a total of about 100 GHz of CPU power, 768 GB of memory and 16 TB of VSAN storage capacity. Currently it is possible to easily combine up to 4 appliances in a “scale-out” setup to get even more resources.
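These totals can be checked against the per-node specs above – a quick sanity check (counting the caching SSDs towards the raw VSAN capacity):

```shell
# Sanity-check the appliance totals from the per-node specs (4 nodes)
ghz=$(awk 'BEGIN{print 4*2*6*2.1}')     # 4 nodes x 2 CPUs x 6 cores x 2.1 GHz
mem=$((4 * 192))                        # 4 nodes x 192 GB
tb=$(awk 'BEGIN{print 4*(3*1.2+0.4)}')  # per node: 3x 1.2 TB SAS + 0.4 TB SSD
echo "$ghz GHz CPU, $mem GB RAM, $tb TB raw VSAN"
```

So the advertised figures line up, with the CPU number slightly rounded down.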

Beside the hardware the following software products are also part of the appliance:

  • vSphere Enterprise Plus
  • vCenter Server
  • vCenter Log Insight
  • Virtual SAN

This collection is rounded out by a newly developed management interface called MARVIN. The core components are:

  • Zeroconf/mDNS
  • HTML5/CSS3
  • JavaScript

No Adobe Flash is needed anymore – all recent web browsers are supported. As a highlight, MARVIN configures the whole infrastructure automatically. To install an EVO:RAIL appliance it is sufficient to start a configuration wizard and enter the IP addresses and host names of the particular servers. The remaining configuration (ESXi/cluster configuration, VSAN initialization, …) is done in the background. After about 20 minutes the appliance is ready and can run its first virtual machine.

Customers don’t need to buy dedicated software licenses or hardware support – the appliance can be bought including all required subscriptions and support from one of the certified EVO:RAIL partners. This makes it much easier to keep track of maintenance contracts. Currently customers can choose between the following vendors:

  • DELL
  • Fujitsu
  • Hitachi Data Systems
  • Supermicro
  • Inspur
  • NetOne
  • Hewlett-Packard

Because the hardware design is largely identical, the customer’s vendor selection is almost irrelevant – the product is the same. By the way, EVO stands for “Evolutionary” and RAIL refers to the rail-mount form factor. At VMworld another concept named EVO:RACK was presented. In comparison with EVO:RAIL, this concept covers multiple racks and offers far more resources – which makes it suitable for implementing bigger data centers or clouds. On top, the following software products are also included:

  • vCloud Suite (vRealize)
  • NSX
  • EVO:RACK Manager

It was said that a complete EVO:RACK data center is ready after about two hours.

I really like the EVO concept. It is limited to the essential components of a virtual infrastructure, it’s very scalable and it offers a great level of automation. In my YouTube clip you can find some detailed shots of the appliance: [click me!]

vSphere 6

A product many VMware customers are waiting for is vSphere 6. The next major release of the popular hypervisor offers many interesting updates.

For adventurous administrators there are beta versions that can be downloaded after registration: [click me!]

The beta is protected by a non-disclosure agreement, which means that I’m not allowed to list the particular product news. If you’re interested in the details, you really should register and give the beta a try. After registration you’re allowed to access the appropriate forums discussing the product. :)

Because of the non-disclosure agreement there were no new details for me at VMworld. I also had the chance to talk to some VMware employees – but of course they weren’t allowed to share further details either.

So we need to keep calm and wait until vSphere 6 is released.


I really enjoyed this interesting event. The numerous talks and product demos included many new things I can put to good use. The location in Barcelona is just great – I’m really excited for the next event. It was an honour to join it as a press member.

I highly recommend this event to every administrator who is using (or thinking about using) VMware products! :)

Video and photos

I published a short review video about the event on YouTube: [click me!]

Some photos of the event:


Install EMC NetWorker agent in VMware vCenter Server Appliance 5.5


When running a virtualized vCenter server it is very important to have a working backup. If the vCenter server crashes, the virtual landscape can no longer be managed or monitored.

Configuring backups is very easy when running VMware vCenter Server on a conventional Microsoft Windows server because it is a fully-featured operating system. If you’re using the VMware vCenter Server Appliance, configuring backups may be more complex. By default the system doesn’t come with pre-installed backup agents because it is assumed that an “agentless” backup solution for virtual machines is used.

Not every company uses a backup solution that offers “agentless” backups. If this function is missing, the backup is missing as well – which is very unfavorable for production environments. For example, if you’re using an older version of EMC NetWorker, agentless backups aren’t possible.

The vCSA is based on SUSE Linux Enterprise Server 11 SP2, which means that you can extend its software using RPM packages – provided your backup software also comes in an adequate format. Unlike many other virtual appliances I have seen recently, it is still possible to gain root access to the system. So it is basically possible to install additionally needed software – of course, this is not covered by VMware support.

Installation of EMC NetWorker

EMC provides generic NetWorker RPMs for RPM-based Linux distributions. Amongst others, this software package is compatible with Red Hat Enterprise Linux and SUSE Linux Enterprise Server – which means that it is suitable for installation inside the vCSA.

It is necessary to resolve some dependencies for EMC NetWorker. The following software packages need to be installed (might differ depending on your NetWorker version):

  • libcap1*.x86_64.rpm
  • libstdc++33*.x86_64.rpm
  • ksh-93u*.x86_64.rpm

And where do you get these packages? Basically, you need a valid subscription to download SLES packages. SUSE also offers trial versions of their enterprise products – including SLES: [click me!]

Owners of vSphere Standard or a higher edition also had the possibility to get product patches and updates at no charge through the “SUSE Linux Enterprise Server for VMware” programme (this offer ended 07/25/2014, see here). To use it, it was necessary to register the ESXi serial number with SUSE. Afterwards it was possible to activate installations using activation codes: [click me!]

After a short registration it is possible to download a trial version including 60 days of free update support. Basically only the first DVD is needed – the required RPM packages can be found underneath the folder “suse/x86_64”. It is recommended to download the SLES SP2 release the vCSA is based on instead of SP3.

For installing the EMC NetWorker agent, the packages lgtoclnt and lgtoman are sufficient. The RPM packages extracted from the DVD are copied to the vCSA using SSH/SCP and installed together with the agent:

# zypper localinstall lib*.rpm ksh*.rpm lgtoclnt*.rpm lgtoman*.rpm

Service configuration

Before NetWorker is started for the first time, it is important to create the required folder structure and a list of valid backup servers. If NetWorker is started before this information is provided, the agent won’t work properly and might need to be reinstalled (because the erroneous information is cached).

# mkdir -p /nsr/res
# echo "backupserver.fqdn.loc" > /nsr/res/servers
# chmod -R 755 /nsr/*
# chkconfig rpcbind on
# chkconfig networker on
# service networker start
starting NetWorker daemons:

The services rpcbind and networker need to be active to make sure that backups can be created:

# service rpcbind status
Checking for service rpcbind                              running
# service networker status
+--o nsrexecd (PID)

Backup scripts

In many companies it is common to create offline backups weekly, e.g. at the weekend. Machines are often backed up “online” daily, which means that application services aren’t stopped. As a result, not all data can be copied consistently, because some files (e.g. database files) are in use. These files are covered by the offline backup.

To control offline backups with EMC NetWorker, shell scripts are created. These scripts are executed before and after running the backup job.

When creating the backup scripts, I was guided by the following VMware KB articles:

My scripts:

# vi /opt/
#!/bin/sh
service vmware-vpxd stop
service vmware-inventoryservice stop
/opt/vmware/vpostgres/1.0/bin/pg_dump INSTANCE -U USER -Fp -c > /tmp/VCDBackUp
/usr/lib/vmware-vpx/inventoryservice/scripts/ -file /tmp/InventoryServiceDB.DB


# vi /opt/
#!/bin/sh
service vmware-vpxd start
service vmware-inventoryservice start


# chmod +x /opt/st*

The placeholders INSTANCE and USER need to be customized – the appropriate values can be gathered from the configuration file /etc/vmware-vpx/embedded_db.cfg. It is also necessary to check whether the /tmp file system offers enough capacity to hold the backups.
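As a small sketch of how those values could be extracted: note that the key names EMB_DB_INSTANCE and EMB_DB_USER are an assumption on my part, so verify them against your own embedded_db.cfg. The sketch runs on a sample file:

```shell
# Demonstrated on a sample file; on the vCSA, point cfg at
# /etc/vmware-vpx/embedded_db.cfg instead. Key names are assumptions!
cfg=$(mktemp)
printf 'EMB_DB_INSTANCE = VCDB\nEMB_DB_USER = vc\n' > "$cfg"
instance=$(sed -n 's/^EMB_DB_INSTANCE *= *//p' "$cfg")
user=$(sed -n 's/^EMB_DB_USER *= *//p' "$cfg")
echo "pg_dump $instance -U $user"
```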

Finally, I’d like to mention that this procedure works pretty well, but it is not covered by VMware support. It is a good idea to revert those changes before opening a support case (or installing appliance updates!).


POODLE – and how to get rid of it


A couple of days ago, another security vulnerability called POODLE was announced. Less serious than Heartbleed, it especially affects web servers that still allow the SSL protocol generations 2 and 3. Because of a flawed security design it is possible to decrypt transferred data. These protocol versions are often still enabled in the default configuration shipped by many Linux distributions – so administrators should really harden their servers. In the meantime, CVE-2014-3566 was created to describe POODLE – the vulnerability was discovered by Google.

To fix the issue it is sufficient to disable the older protocol generations. For Apache this is done by altering the appropriate configuration file:

#SSLProtocol All
SSLProtocol All -SSLv2 -SSLv3

This directive enables all SSL/TLS protocol versions except the 2nd and 3rd SSL generation.

Poodle Protector

If you’re maintaining a large number of systems, manually configuring the affected systems means unnecessary work that can be automated very easily. Because I’m a lazy person, I developed a script that can analyse and automatically adjust the configuration of Apache servers vulnerable to the POODLE attack. The script (poodle_protector) can be found on GitHub: [click me!]

The script can also restart the appropriate service which makes it really comfortable to use in combination with a central configuration management (like Red Hat Satellite, Spacewalk or SUSE Manager).

The following command analyses the system and simulates which changes would be made (dry-run):

# ./ -l
I'd like to create a backup of '/etc/apache2/mods-available/ssl.conf as '/etc/apache2/mods-available/ssl.conf.20141016-1303' ...
I'd like to insert 'SSLProtocol All -SSLv2 -SSLv3' into /etc/apache2/mods-available/ssl.conf using the following command: sed -i '/SSLProtocol/ c\SSLProtocol All -SSLv2 -SSLv3' /etc/apache2/mods-available/ssl.conf ...
I'd also like to restart the service using: ['service httpd restart', 'service apache2 restart']

The next command alters the configuration (backups are created beforehand) and restarts the Apache service:

# ./ -r
httpd: unrecognized service
Restarting web server: apache2 ... waiting .

In this example a Debian system was used – that’s why the httpd service can’t be found.
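The core change poodle_protector applies can also be reproduced by hand. Here is a minimal sketch on a throwaway copy – the paths are illustrative; the real file on Debian would be /etc/apache2/mods-available/ssl.conf:

```shell
# Reproduce the script's fix on a throwaway config file
conf=$(mktemp)
echo 'SSLProtocol All' > "$conf"
cp "$conf" "$conf.$(date +%Y%m%d-%H%M)"   # timestamped backup, as the script does
sed -i '/SSLProtocol/ c\SSLProtocol All -SSLv2 -SSLv3' "$conf"
cat "$conf"
```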


Red Hat Enterprise Linux 6.6 released


Yesterday Red Hat released another minor update of Red Hat Enterprise Linux 6: version 6.6. As in the minor updates before, many optimizations and some “technical previews” were implemented.

The changes are well documented in the release notes and technical notes:

Beside common kernel and driver updates, some other interesting changes were made – I’d like to list some of them:

  • Installation as a Hyper-V second-generation VM – e.g. under Windows Server 2012 R2 (*); this also includes the new Hyper-V daemons Hyper-V KVP (Key-Value Pair) and Hyper-V VSS (Volume Shadow Copy Service)
  • Improved handling of additional SCSI signals for better detection of hardware changes using udev (e.g. size changes, thin-provisioning status, adding new LUNs, …)
  • An Open vSwitch module was implemented for additional Red Hat products; support is only offered in combination with other Red Hat products
  • A caching module (dm-cache) was introduced for the device mapper (*). This module can use faster drives (e.g. SSDs) as a cache for slower storage media – details can be found in the lvmcache man page
  • The software packages keepalived and haproxy are now fully covered by Red Hat support
  • The OpenJDK 8 Java Runtime Environment can be installed optionally (*)
  • Windows 8-certified touch screens are now supported via hid-multitouch
  • Red Hat Enterprise Linux 6.6 is now NSS FIPS-140 Level-1 certified
  • The System Security Services Daemon (SSSD) was optimized for better authentication against Microsoft Active Directory
  • gdisk – a new tool for GPT partitioning whose look and feel resembles fdisk
  • rsyslog7 – a new, overhauled rsyslog version with improved encryption and external database support (MySQL, PostgreSQL, …). It is recommended to migrate to this more recent version

(*) = technical preview
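To illustrate the new rsyslog’s external database support, a minimal configuration fragment could look like the following – the ommysql output module is real, but the host, database and credential values are placeholders:

```
# /etc/rsyslog.conf (fragment) - placeholder connection values
$ModLoad ommysql
*.* :ommysql:dbhost.example.com,Syslog,rsyslog,secret
```

This forwards all messages into a MySQL database using rsyslog’s legacy action syntax.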

Red Hat Enterprise Linux 6.6 is now available to all Red Hat customers with a valid subscription.


Short tip: uninstall pkg applications under OS X


The Apple App Store is not the only way to install additional applications under OS X – another way is to install .pkg files (created with the Apple installer framework).

Unfortunately, not all applications installed this way show up in the “Programs” category inside Finder. Uninstalling these applications by dragging them into the trash is impossible.

But there is a tiny tool called General Package Uninstaller – it can be downloaded from GitHub. The program lists installed applications – and uninstalls them:

General Package Uninstaller


Virtualized cluster storage using VMware ESXi


Many software cluster solutions require storage that is shared between all cluster nodes. A very prominent example of this is Oracle RAC (Real Application Clusters). In enterprise environments, shared cluster storage is often implemented using SAN storage that is connected to multiple systems.

For (private) test scenarios a SAN storage system might exceed the budget. Fortunately, there is a way to provide virtualized shared storage. If you’re using VMware ESXi, the keyword is “multi-writer”. ESXi uses its own file system called VMFS for local and iSCSI storage – this file system automatically creates locks to make sure that particular files cannot be accessed by multiple virtual machines (unless you’re using Fault Tolerance). By disabling this behavior, it is possible to access virtual hard drives (.vmdk files) from up to 8 virtual machines in parallel.

The advantages and disadvantages of this configuration are described in detail in a VMware knowledge base article – I’d like to thank my colleague Johannes, who recommended this solution. :)

In any case, using a cluster lock manager (e.g. dlm) between the involved virtual machines is mandatory – otherwise parallel access will cause data inconsistencies. Setting up the storage between two or more machines is quite easy:

  1. Add a “Thick provisioned – Eager zeroed” virtual hard drive to the first virtual machine. The hard drive needs to reside on iSCSI or – if all virtual machines run on the same host – local SCSI/SAS storage; NFS is not supported! You may want to use the same SCSI ID on all virtual machines.
  2. Turn off all virtual machines concerned.
  3. Modify the .vmx configuration file of the first virtual machine by adding the following parameter: scsiX:Y.sharing = “multi-writer”

    The SCSI ID needs to match the previously created cluster hard drive. For customizing the configuration file you might want to use the vSphere client or vi over SSH – I repeatedly failed to set this parameter using the Web Client.

  4. Add the pre-existing hard drive to the other virtual machines by specifying the path to the hard drive created previously.
  5. Adjust the .vmx configuration files of the other virtual machines analogously to step 3.
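Put together, the relevant .vmx lines on each node might look like the following sketch – the controller number, SCSI ID and datastore path are made-up examples that have to be adapted to your environment:

```
# hypothetical .vmx excerpt - adapt controller, ID and path to your setup
scsi1.present = "TRUE"
scsi1:0.present = "TRUE"
scsi1:0.fileName = "/vmfs/volumes/datastore1/cluster/shared.vmdk"
scsi1:0.sharing = "multi-writer"
```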

Afterwards, the virtual machines can be started. After this customization the following ESXi functions and features are no longer supported:

  • snapshots and online expansion of the shared hard drive
  • hibernating the virtual machine
  • cloning
  • Storage vMotion
  • Changed Block Tracking (important for “agentless” backup solutions)
  • vSphere Flash Read Cache

Afterwards, the cluster can be configured to use the new hard drive. If you don’t have a cluster yet, you can also temporarily create and mount a conventional file system on the particular nodes to verify the shared disk:

nodeA # mkfs.ext4 /dev/sdb
nodeA # mkdir /cluster ; mount /dev/sdb /cluster
nodeA # echo "mimimi" > /cluster/test
nodeA # umount /cluster
nodeB # mkdir /cluster ; mount /dev/sdb /cluster
nodeB # cat /cluster/test
nodeB # umount /cluster

I’d like to repeat that parallel access from multiple virtual machines to the same hard drive without a cluster lock manager can very easily cause data inconsistencies. With this in mind – happy vClustering! :)


Short tip: configure hard drive standby with systemd


Hard drives can be sent into standby using hdparm. On sysvinit-based Linux distributions it was quite easy to apply such a power management setting automatically by inserting the appropriate command (hdparm -B value device) into /etc/rc.local, which was executed after boot. On newer systemd-based systems this has changed – there it is a good idea to implement this behavior as a service.

First you need to create a system-wide service, then enable and start it:

# vi /usr/lib/systemd/system/sda-spindown.service
[Unit]
Description=Set HDD spindown

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/sbin/hdparm -B 241 /dev/sda

[Install]
WantedBy=multi-user.target

# systemctl daemon-reload
# systemctl enable sda-spindown.service
# systemctl start sda-spindown.service

You can verify that the value has been set successfully by checking the service status:

# systemctl status sda-spindown.service
sda-spindown.service - Set HDD spindown
   Loaded: loaded (/usr/lib/systemd/system/sda-spindown.service; enabled)
   Active: active (exited) since Sa 2014-10-11 14:27:16 CEST; 3min 32s ago
  Process: 4336 ExecStart=/sbin/hdparm -B 241 /dev/sda (code=exited, status=0/SUCCESS)

Many thanks to the blog that gave me this tip: [click me!]



Short tip: Gather HP SmartArray cache battery information under HP-UX


When a RAID controller’s cache battery fails, it helps if you can gather the spare part number without any downtime. If your server is equipped with an HP SmartArray controller, you can easily get this information using the sautil utility. This requires the HP-UX products RAIDSA and RAIDSA-PROVIDER to be installed. To display the information, the hardware path of the affected controller is gathered and passed to sautil:

# ioscan -funCext_bus
Class       I  H/W Path        Driver       S/W State   H/W Type     Description
ext_bus     1  0/6/0/0/0/0/4/0/0/0      ciss         CLAIMED     INTERFACE    PCIe SAS SmartArray P400 RAID Controller

# sautil /dev/ciss1|more
---- ARRAY ACCELERATOR (CACHE) INFORMATION -----------------------------------
  Array Accelerator Board Present?.... yes
  Cache Configuration Status.......... write cache temporarily disabled (code=1)
  Cache Ratio......................... 25% Read / 75% Write
  Total Cache Size (MB)............... 208
    Read Cache........................ 052
    Write Cache....................... 156
    Transfer Buffer................... 000
  Battery Pack Count.................. 1
  Battery Status (pack #1)............ FAILED

In this case the usable cache is 208 MB, which is consistent with the 256 MB P400 model variant. There is also a 512 MB model with a completely different cache battery. If you want to make sure you're ordering the right part, you can check the cache size reported by the firmware using the HP-UX Support Tool Manager. For this you might need to install the HP-UX product Sup-Tool-Mgr. This tool also requires you to specify the controller's hardware path:

# echo "selclass hwpath 0/6/0/0/0/0/4/0/0/0;info;wait;infolog"|/usr/sbin/cstm|more
Controller : HP SmartArray RAID Controller
Device File: /dev/ciss1
Hardware Revision: 'E'
Firmware Revision (Currently Running): 7.22
Firmware Revision Rom: 7.22
Boot Block Revision: 0.04
Total Cache Size (MB) : 256
Battery Pack:    Battery Count : 1
Battery Status:  Battery Number ( 1 ) : BATTERY FAILED
Battery Voltage: Battery Number ( 1 ) : below reference voltage

In this case it really is the 256 MB model variant, which requires the spare part number 383280-B21. The 512 MB BBWC model variant needs the spare part number 398648-001 (sources: [click!] and [click!]).


DELL OpenManage Integration Virtual Appliance update problems


DELL offers a Linux appliance based on CentOS 5 for customers using VMware products. This appliance, called "DELL OpenManage Integration for VMware vCenter Appliance", integrates seamlessly into VMware vCenter Server (or the vCenter Server Appliance). It can be used to monitor physical servers and also allows controlling particular servers remotely. Among other things, it is possible to update firmware versions and check warranty information.

Recently I had problems updating the appliance using the web interface. The update was started and the appliance was restarted, but the version number didn't change at all. In older documentation I found a hint about a log file named /usr/share/tomcat5/rpmupdate.log, in which the update process is recorded. Because even the administrator account has no shell access to the appliance, it would normally be necessary to use a live CD. Fortunately this effort isn't needed, because the web interface has a button for creating troubleshooting bundles. You can find the mentioned file rpmupdate.log in the ZIP file offered for download. (If you prefer the live CD solution, in appliance version 2.x the file can be found underneath /usr/share/tomcatSpectre/logs.)

rpmupdate.log in the troubleshooting bundle

In my case YUM wasn't able to download RPM packages, as the log stated very clearly:

$ cat rpmupdate.log
Stopping httpd: [  OK  ]
Loaded plugins: fastestmirror
Cleaning up Everything
Cleaning up list of fastest mirrors
Loaded plugins: fastestmirror
Determining fastest mirrors
Setting up Update Process
Resolving Dependencies
--> Running transaction check
---> Package Spectre-OMSA-Repo.i386 0:1.0.3-1 set to be updated
--> Processing Dependency: dell-omsa-esx40 >= 1.0.1 for package: Spectre-OMSA-Repo
Total download size: 610 M
Downloading Packages: [Errno 12] Timeout: 
Trying other mirror. [Errno 12] Timeout: 
Trying other mirror. [Errno 12] Timeout: 
Trying other mirror. [Errno 12] Timeout: 
Trying other mirror. [Errno 12] Timeout: 
Trying other mirror.

Error Downloading Packages:
  dell-dtk-4.0.1-1.i386: failure: spectrebinaries/dell-dtk-4.0.1-1.i386.rpm from Dell_VC_Plugin: [Errno 256] No more mirrors to try.
  dell-omsa-esxi51-1.0.1-1.i386: failure: spectrebinaries/dell-omsa-esxi51-1.0.1-1.i386.rpm from Dell_VC_Plugin: [Errno 256] No more mirrors to try.
  spectre-deps-2.2.0-255.1.i386: failure: spectre/spectre-deps-2.2.0-255.1.i386.rpm from Dell_VC_Plugin: [Errno 256] No more mirrors to try.
  kernel-2.6.18-371.8.1.el5.i686: failure: centos_updates/kernel-2.6.18-371.8.1.el5.i686.rpm from Dell_VC_Plugin: [Errno 256] No more mirrors to try.
  spectre-webclient-package-2.2.0-254.1.noarch: failure: spectre/spectre-webclient-package-2.2.0-254.1.noarch.rpm from Dell_VC_Plugin: [Errno 256] No more mirrors to try.
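Since the log can get long, a quick one-liner (just a convenience helper, not part of the appliance) shows at a glance which repository ids all the failures came from:

```shell
# List the repository ids that produced "No more mirrors" failures
grep 'No more mirrors to try' rpmupdate.log \
  | sed 's/.* from \([^:]*\):.*/\1/' \
  | sort -u
```

For the log above this boils down to a single repository, Dell_VC_Plugin, which points to a general connectivity problem rather than individual broken packages.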

My network requires using a proxy server, so the configuration was altered using the web interface to set one. This also appeared to be successful, as a test function acknowledged. Apparently, however, the proxy configuration was not also applied to YUM.

To fix this issue I altered the YUM repository configuration in the appliance. This customization requires a live CD because the appliance offers no shell access; even booting into single-user mode is impossible because GRUB is protected by a password. The VM needs to boot from a Linux CD image instead of the hard drive. Afterwards the detected hard drive is mounted and the YUM repository configuration is altered:

# mount /dev/sda1 /crash
# vi /crash/etc/yum.repos.d/spectre.repo
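YUM supports a per-repository proxy option, so a sketch of the change might look like the following (the repository id is taken from the log above; the proxy URL is a placeholder for your own):

```ini
# excerpt from /crash/etc/yum.repos.d/spectre.repo
[Dell_VC_Plugin]
# ...existing baseurl/gpgcheck lines left unchanged...
proxy=http://proxy.example.com:3128
```

Alternatively, a proxy= line in the [main] section of /etc/yum.conf would apply to all repositories at once.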


I strongly advise against updating the appliance directly with YUM from a chroot, because an update started from the web interface seems to perform additional steps. In my case the appliance was successfully updated and rebooted twice after the customization. During the first reboot some packages were removed, which was definitely not triggered by "yum update":

Scripted removal of some RPM packages

After both reboots the appliance version number had changed:

Successful update
