Short tip: Oppo Find 5 and tight system storage

After a recent update to CyanogenMod 12 (Android 5.x) I realized that the storage for apps is quite tight. The reason for this is that the recommended GAPPS (Google Apps) package needs a lot of space. Beyond some useful applications this package also includes a lot of unnecessary “crap”. You might run out of space once you install some big apps like Facebook.

The Find 5 offers 2 GB of storage for system applications and other apps, which is quite tight (at the release date this was sufficient). This limit also applies to the 32 GB version. The reason for this is that Oppo decided to split the internal flash into two partitions – other vendors chose different implementations.

One approach to fix this is to re-partition the flash storage – xda-developers offers a tutorial for this. I decided to go for an alternative, as re-partitioning is quite risky.

While the full-featured GAPPS package includes a lot of bloatware, there is also a spin-off called Banks GApps. This signed package only includes the necessary Google services and the Play Store. Other Google applications like YouTube or Gmail are missing and need to be installed separately if needed. Using this package you can save valuable storage. As it is hard to remove a previously installed GAPPS package, I decided to reinstall the ROM.
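
For the sake of completeness, a rough sketch of the flashing procedure – assuming a custom recovery with ADB sideload support; the ZIP file names are placeholders for whatever you downloaded:

$ adb reboot recovery
(select “Apply update from ADB” in the recovery)
$ adb sideload cm-12-xxxxxxxx-NIGHTLY-find5.zip
$ adb sideload banks-gapps.zip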

After that upgrading my apps worked like a charm. :)

SUSE Linux Expert Day 2015 Frankfurt

This week, one of the SUSE Linux Expert Day events that have been taking place worldwide since September 2014 was hosted at the 25hours Hotel Frankfurt by Levi’s – a very stylish and fancy location!

Agenda

The event was planned for 5 hours and consisted of many interesting talks:

  • Keynote – Michael Jores, Regional Director Central Europe SUSE
  • SUSE Roadmap – Olaf Kirch, Director SUSE Linux Enterprise
  • SUSE Linux Enterprise Server 12 – Lars Pinne, Senior Systems Engineer SUSE
  • SUSE Cloud Overview & Outlook – Lars Pinne, Senior Systems Engineer SUSE
  • Customer reference: FIS-ASP GmbH – Matthias Braun, FIS-ASP GmbH
  • Wrap-up, Q&A – Martin Wolf, Account Executive Team Lead SUSE

SUSE roadmap

Right after the keynote, Olaf Kirch presented the plans for the further development of current SUSE products. Of course this planning is not binding, but it already shows development trends.

After SUSE Linux Enterprise 12 was released in October 2014, the SP4 update for the previous major release 11 is scheduled for June/July this year. Besides the usual kernel driver updates (as long as they don’t require API/ABI changes), support for the IBM z13, POWER8 BE and Intel Haswell EX platforms (basic support only) is planned. According to the current planning, this service pack will be the last – there are no plans for an SP5.

SUSE Linux Enterprise Server 12 is currently being certified for ERP software components by SAP. An official approval is planned for Q1/2015 so that customers can utilize the full product portfolio on the most recent SLES version while profiting from the support of both SAP and SUSE.

SUSE’s own cloud suite SUSE Cloud will be released in version 5 in the first half of 2015. Besides highly available virtual guests, Docker containers are to be supported as a technical preview. These containers are also planned to be manageable in combination with later SUSE Manager versions. I really hope that this feature will also become part of the Spacewalk upstream project. For the third quarter, another SUSE Cloud update based on OpenStack “Kilo” is planned. This version should make it possible to run the Control Nodes needed by OpenStack on SLES 12. SUSE Cloud 7 is planned for 2016.

With their product SUSE Storage Server, the Nurembergers aim to become successful in the software-defined storage market. It is focused on customers looking to implement Private or Hybrid Cloud technology, or searching for low- or mid-performance storage to replace conventional SAN systems. The core component of this product is Ceph, which is popular for its scaling, replication and self-healing capabilities. Faulty hard drives are detected and – depending on your setup – replaced automatically by spare media. Three important components of the Ceph concept are:

  • Object storage – distributed, object-based storage, access over C, C++, Java, Python, PHP or RESTful interfaces, implementing striping and snapshot functionalities
  • Block storage – object storage can also be served as thin-provisioned block storage (e.g. for VMs)
  • File system – POSIX-compatible file system with direct access to object storage, official Linux kernel integration since 2010 (2.6.34), optionally there is also a FUSE client

While selected customers already had access to a beta version last year, a first official version based on Ceph “Firefly” will be published in the first quarter of this year. Version 2.0, based on Ceph “Hammer”, is planned for Q3/2015. While the servers need to run SLES 12, it is also supported to use SLE 11 on the connected clients.

I was happy to hear that SUSE has no plans to replace Spacewalk as the upstream project for its management suite SUSE Manager. For Satellite 6, Red Hat made a hard cut and switched the code base – in this context it is important to know that Satellite 6 is not an update for Satellite 5.x, but a completely redesigned product. For 2016, SUSE Manager 3 is planned, including SLES 12 SP1 support as well as high availability and monitoring functions. I’m very excited to see how SUSE will revise Spacewalk’s basic monitoring functions – I have never used them so far because their feature range was not sufficient for me. These plans are a good indicator that Spacewalk will benefit from further development, even though it is no longer trailblazing for Red Hat. The availability of SUSE Manager 4 is estimated for 2017.

SUSE Linux Enterprise 12

SysVinit vs. Systemd

After 5 years a new major release of SUSE Linux Enterprise was released in October 2014.

In comparison with the previous version some big changes were made – Systemd might be the biggest and most controversially discussed one. During the last year this component, which replaced SysVinit in most distros, caused quite some unrest in the Linux scene. This was also visible at this event – the typical “Systemd vs. SysVinit” discussion was not missing.

I have to admit that I’m tired of this topic by now. For some time, heated discussions on the internet that often end in “shit storms” and personal attacks (e.g. there were even appeals to “stop” Lennart Poettering from his work) have been the order of the day. “Projects” like Devuan Linux keep adding fuel to the fire. In my opinion, one speaker summarized this very well, roughly like this:

It is the same every 10 to 15 years. When RC scripts were replaced by SysVinit, everybody found it horrible even though there were no significant disadvantages. By now nobody wants to miss it – and now this repeats with Systemd.

I agree with this. Systemd is a radical but modern change that brings a lot of advantages. Because technical innovation also means re-thinking and learning new concepts, this first hurdle can also be seen as a disadvantage. Enterprise distros like SUSE Linux Enterprise or Red Hat Enterprise Linux also offer their customers optional ways to keep using well-known tools (e.g. service, chkconfig, old configuration files) to make the transition easier. Independent of this, I would have expected more openness and tolerance from the Linux scene.

In SUSE Linux Enterprise 12 two radical changes were made: Systemd and dropping the 32-bit Intel architecture (i686). Pure 32-bit systems are not in great demand anymore – there are rarely arguments justifying a preference for 32-bit systems over 64-bit alternatives. For the first time, SLES 12 also no longer supports Intel Itanium-based systems (ia64). With this, SUSE makes a cut that other distributors like Red Hat already made in the past. The uncompetitive processor architecture is no longer in great demand – it has not been a relevant architecture for Linux customers for a long time.

With Xen, KVM (Kernel-based Virtual Machine) and LXC (Linux Containers) there are three virtualization possibilities. For container applications Docker is included.

Btrfs (“ButterFS”) is the new default file system in SLE 12. It is fully covered by SUSE support (as long as you only use the default file system features configurable via YaST) and offers additional features like event-driven snapshots. Using this, Zypper is able to create snapshots before system updates – if the system does not boot anymore after an update, it is possible to start the last snapshot from GRUB. A proven practice is to use Btrfs for the operating system and XFS for data partitions (e.g. MySQL database files). ext4 is now also supported in write mode – SLE 11 only supported read-only access. The reason for this was that the file system could not be tested well enough to fully support it in production environments. It was also mentioned that ext4 has become secondary for SUSE because it offers worse performance for asynchronous IO calls in comparison with XFS.
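
A minimal sketch of this workflow, assuming the default Snapper configuration that SLE 12 creates for the root file system (the snapshot numbers are placeholders):

# snapper list
# snapper status 41..42
# snapper rollback 42
# reboot

snapper rollback creates a new writable snapshot based on the given one and sets it as the default for the next boot.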

The central configuration utility YaST was redesigned. It now uses Ruby instead of the custom language YCP, which is expected to make software maintenance much easier. A new network backend called Wicked was integrated into YaST. The event-driven service targets client systems as well as server systems and virtual machines in Hybrid Clouds (the slide title was “Network configuration as a service” :D ). It is recommended to implement network configurations on new systems with Wicked – the well-known configuration mechanisms are still supported as well.
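
A small sketch of what this looks like in practice – assuming a first Ethernet interface named eth0; the ifcfg files known from previous releases are simply evaluated by Wicked now:

# vi /etc/sysconfig/network/ifcfg-eth0
STARTMODE='auto'
BOOTPROTO='dhcp'

ESC ZZ

# wicked ifup eth0
# wicked ifstatus eth0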

Some software packages have been moved out into modules by SUSE. These modules are supported for a couple of years rather than the 10 to 13 years of the base product. Currently these modules include:

  • Web and scripting – PHP, Python, Ruby on Rails (3 years support)
  • Legacy – Sendmail, old Java versions, etc. (3 years support)
  • Public Cloud – Public Cloud software packages (continuous integration)
  • Toolchain – GNU Compiler Collection (1 year support per yearly release)
  • Advanced System Management – utilities and frameworks for system management (continuous integration)

With Machinery, SUSE offers a technical preview of a solution for migrating services of existing systems. The software analyzes system configurations, consolidates them and migrates the offered services. The product focusses on migrations (e.g. from SLE 11 to 12) and Hybrid Clouds. Disaster Recovery scenarios can also benefit from Machinery – currently, however, this is not covered by the support.

Kernel live patching

The event’s slogan was “Towards Zero Downtime” – it is also used by the Nurembergers to advertise the kernel live patching functionality of SLES 12. Like at Red Hat, this feature is sold as an additional product requiring an additional support contract. Kernel patches are distributed as RPM packages that install the modules and update the initial ramdisks. Using ftrace and the in-house developed component kGraft, kernel function calls are redirected to their new implementations. Running applications don’t need to be restarted. Currently only the x86_64 platform is supported, but further architectures should follow if customers require it. With kGraft (SUSE) and kpatch (Red Hat) there are currently two open-source competitors to the proprietary Ksplice (Oracle), all using similar kernel-module approaches to implement live patching. In November last year, a first discussion about combining those products was started by Red Hat – the decision of the upstream community will be crucial.
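
A rough sketch of how applying such a patch could look – assuming a subscribed SLE Live Patching channel and a placeholder package name following the kgraft-patch-<kernel version>-default naming scheme:

# zypper refresh
# zypper install kgraft-patch-3_12_32-25-default
# kgr status
ready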

SUSE Cloud

SUSE Cloud combines three products representing an interesting toolkit for Private and Hybrid Clouds:

  • OpenStack – core architecture for Cloud services
  • Crowbar – provisioning framework that uses, amongst others, PXE for fully automatic installations
  • Ceph – object-driven storage backend

In comparison with alternative products, the Nurembergers also certify tested hardware and software setups as well as hypervisors not supported by upstream OpenStack. In particular, VMware vSphere installations are supported by SUSE. On the SUSE website you can download a 60-day trial which implements a whole Private Cloud setup in 30 minutes. In comparison to a manual setup, SUSE Cloud presets a lot of configuration options, which is interesting for customers that don’t have any OpenStack experience yet.

Like Red Hat, SUSE also contributes a lot to the OpenStack project as a platinum member – especially the financial support is important to the project.

Conclusion

For me it was the first SUSE event of this kind at all. SUSE deliberately planned plenty of regional events rather than a few national ones to keep the number of participants small and promote discussions. I have to admit that I liked this about the event: the number of participants was manageable, so it was possible to get in touch with other customers and participate in discussions. The talks were very interesting, and for detail questions it was possible to talk to experienced SUSE staff. The location was very stylish and fancy.

Red Hat Satellite 5.7 released

This week Red Hat Satellite 5.7 was released. With the ninth update of the 5.x tree many improvements from the Spacewalk development were applied.

New web interface

Overview

The most eye-catching change is the web interface. While it hadn’t changed a lot in the last 10 years, the most recent developments from Spacewalk 2.1 to 2.3 have now been applied. In March last year, Spacewalk already included a first approach to a new web interface combining modern technologies like HTML5, Bootstrap and jQuery. The current interface from the Spacewalk 2.3 nightlies looks even tidier and more intuitive – it has been adapted for Red Hat Satellite 5.7.

The interface is “responsive” and can also be used comfortably on devices with lower display resolutions. Especially on smartphones and tablets this is quite nice. The contemporary design is really beautiful. I was really excited to hear that SUSE is also working on implementing the new web interface in SUSE Manager.

Action chaining

Action chaining

Action chaining is used for grouping interdependent maintenance tasks logically. With this a feature from Spacewalk 2.2 is adopted. Let me explain the additional benefit with a practical example:

Let’s say you want to install updates on a system which mounts the /usr partition in read-only mode. Installing the patches fails as data cannot be written to the partition. Paranoid, eh, security-aware administrators implement this protection to complicate installing software once the system has been compromised by attackers.

Before installing packages a remote command can be scheduled by Spacewalk – for example to disable the write protection. But what happens after installing the packages? Nothing. Using the web interface the administrator can choose between two approaches:

  1. Executing a remote command to revoke the write protection, updating the packages and executing another remote command enabling the write protection again. But these are three separate steps that cost a lot of time.
  2. Executing a remote command including all three steps (mount -o remount,rw /usr ; yum update -y ; mount -o remount,ro /usr). Unfortunately, the comfortable possibility to select packages using the web interface is lost this way.

Action chaining fits well for use cases like this – you can combine interdependent tasks and run them bundled. If one of the subtasks fails, the complete chain fails. The example mentioned above could be implemented like this:

  1. Execute a remote command: mount -o remount,rw /usr
  2. Install/update packages
  3. Execute a remote command: mount -o remount,ro /usr

All maintenance tasks that can be executed using the interface can be assigned to an action chain. In the “Schedule” menu you can arrange the particular steps comfortably using drag & drop. What I’m missing is the possibility to save recurring tasks as a template – this would spare you from re-creating the same action chain for every maintenance window.

Read-only users and new API calls

The Spacewalk API offers many possibilities to automate processes. For communicating with the Spacewalk server, a user with adequate permissions is needed. It is now possible to create users that cannot use the web interface and may only execute read-only API calls. This is great for third-party scripts that don’t need write access to the Spacewalk server (e.g. scripts generating statistics).

During the implementation of the new features, appropriate API calls were added. Some of the new namespaces and calls:

  • channel.software.syncRepo (immediate repository synchronisation, previously additional manual tasks were required to do this)
  • actionchain.*
  • kickstart.profile.software.*
  • system.provisioning.snapshot.*
  • user.external.*
  • user.setReadOnly

The full API documentation can be found on the Red Hat portal: [click me!]
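
As a hedged example – assuming a spacecmd build that already ships the generic api subcommand – an existing user (placeholder name) could be switched to read-only right from the shell:

$ spacecmd api user.setReadOnly -A '["reporting", true]'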

spacecmd

With Red Hat Satellite 5.7, the tool spacecmd is part of the software set for the first time. Previously this tool needed to be installed from the Spacewalk or EPEL repository – using the utility was not covered by Red Hat support. Using this program, all maintenance tasks can be controlled and executed from the command line. Experienced administrators can save a lot of clicking and automate processes – some examples:

Installing an erratum on all affected systems:

$ spacecmd -y errata_apply CESA-2015:0016
Scheduled 5 system(s)

Listing all repositories:

$ spacecmd repo_list
centos6-base-x86_64
centos6-extras-x86_64
centos6-updates-x86_64

Listing all systems with pending patches (interactive session):

$ spacecmd
Welcome to spacecmd, a command-line interface to Spacewalk.
INFO: Spacewalk Username: admin
Spacewalk Password:

spacecmd {SSM:0}> report_outofdatesystems
System                       Packages
---------------------------  --------
dc.localdomain.loc                  8
devel.localdomain.loc              12

All available functions can be listed using the integrated help:

$ spacecmd help
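
For scripts and cron jobs, credentials and the command can also be passed non-interactively – a small sketch with placeholder credentials:

$ spacecmd -q -u admin -p MyPassword -- report_outofdatesystems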

Additional improvements

Some of the further improvements:

  • Using IPMI, physical hosts can be turned on or off. In combination with Cobbler, provisioning can be implemented more easily.
  • Red Hat Satellite Proxy is now able to pre-download content (Proxy Precaching)
  • Connecting to an identity system (e.g. FreeIPA) using spacewalk-setup-ipa-authentication
  • FIPS 140-2 certification – SHA-256 instead of MD5 hashes are now possible. Certificates and passwords need to be re-generated on pre-existing installations to follow the standard.
  • External databases can now be connected using SSL
  • Appropriate client applications are now available for IBM Power Little Endian
  • Numerous bugs like the faulty jabberd init script have been fixed

The full changelog is available on the Red Hat portal.

Screenshots

Some screenshots from the recent product version:

Conclusion

With version 5.7, Red Hat implemented a lot of features I had already fallen in love with on Spacewalk. Especially the new web interface and action chaining are two valuable additions. I had not expected Red Hat to adopt the interface that is currently being developed in Spacewalk 2.3. In comparison with the web interface of Spacewalk 2.1, the new implementation is much more usable on smartphones and tablets.

Monitor washing machines with Nagios / Icinga: check_gpio_pir

Winter time is crafting time! I spent the last couple of days with the Raspberry Pi and PIR (Passive Infrared) sensors. Using these sensors it is possible to detect motion – in combination with GPIO APIs you can create useful applications.

The first idea that came to my mind was to monitor my washing machine in the basement. Because I’m a very busy person I often forget about the machine, so I thought about scanning the blinking LEDs on the front and automatically sending mails on status changes. Because I’m using Nagios or Icinga for monitoring my network anyway, it was a good idea to develop an adequate plugin. :)

Setup and functionality

My setup currently consists of the following parts:

  • Raspberry Pi B+ with CRUX-ARM Linux installed
  • TP-Link AV500 powerline adapter starter kit (TL-PA 4010P) because Wireless LAN is unavailable in my basement
  • 5V PIR sensor (ordered on eBay)
  • Standard LED for displaying recognized motions
  • Breadboard and flexible plug-in jumpers
Circuit diagram

I secured the network connection between my flat and the basement with Port Security to lock out unwanted network participants.

The wiring looks like this:

  • GPIO #2 => VCC PIR (5 volt)
  • GPIO #7 => OUT PIR (GPIO4)
  • GPIO #6 => GND PIR + LED (shared)
  • GPIO #11 => VCC LED (GPIO17)

The PIR sensor is fixed to the machine’s front using a provisional mount to scan the timer. Unfortunately my washing machine has no LED that indicates a completed cleaning process, so I have to check whether the timer is still blinking – if not, the clothes are waiting to be hung up. This also means that I currently have to enable the monitoring explicitly because the timer does not blink when the washing machine is turned off. I still need to find a more creative solution for this. :)

Sensor calibration

PIR sensor

Most PIR sensors offered on eBay have two potentiometers that control the sensor’s behavior.

The first potentiometer controls the sensitivity, while the second one controls the time frame the sensor stays triggered after detecting motion.

By default both potentiometers should be set to 50% (middle position). It is recommended to tune the sensitivity to match your own setup. My plugin offers a debugging function (parameter -d / --debug) which displays detected motions in the console. Alternatively, these motions can also be visualized using a connected LED (parameter -l / --enable-led) if you don’t have a connected console.
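
A hedged example of such a calibration run, combining the debug switch with the check duration parameter documented below:

# ./check_gpio_pir.py -d -t 30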

Requirements and plugin

Python 2.x must be available on the Raspberry Pi. Beyond that, my plugin requires the Python module RPi.GPIO, which can be downloaded for free. The plugin itself can be downloaded from GitHub:

# wget https://pypi.python.org/packages/source/R/RPi.GPIO/RPi.GPIO-0.5.8.tar.gz
# tar xfz RPi.GPIO-0.5.8.tar.gz
# cd RPi.GPIO-0.5.8
# python setup.py install
# cd
# wget https://github.com/stdevel/check_gpio_pir/archive/master.zip
# unzip master.zip
# cd check_gpio_pir-master

The plugin’s behavior can be controlled by various configuration parameters – some of them:

Parameter                   Explanation
-t / --seconds              Time period the sensor is checked
-c / --motion-threshold     Threshold of recognized motions forcing a warning
-v / --invert-match         Inverting match, missing motions will create warnings

Example: Monitoring with pre-defined values (3 motions forcing a warning, 15 seconds check):

# ./check_gpio_pir.py
OK: motion counter (0) beyond threshold (3)

The full documentation including various examples can be read on GitHub.

Nagios / Icinga configuration

To integrate the plugin into Nagios or Icinga, you need to copy it into the appropriate plugin directory. Afterwards you need to define a command:

icinga# vi commands.cfg
...
define command{
        command_name check_local_pir
        command_line $USER2$/check_gpio_pir.py
}

ESC ZZ

If you plan to monitor remote hosts, you need to define an NRPE command on the Icinga system and on the remote host:

icinga# vi commands.cfg
...
define command{
        command_name check_nrpe_pir
        command_line $USER1$/check_nrpe -H $HOSTADDRESS$ -t 60 -c check_gpio_pir
}

ESC ZZ

remote# vi nrpe.cfg
...
command[check_gpio_pir]=/usr/lib/nagios/plugins/check_gpio_pir.py -v

ESC ZZ

Afterwards the sensor can be monitored comfortably! :)

Photos

Some photos of the setup:

Cisco SG 300: configure Port Security with MAC filtering

If you own a Cisco SG-200/300 switch, you are able to configure Port Security and MAC filtering. The advantage of this is that you can define which MAC addresses may establish connections on particular ports. Other devices will not be able to access the network – which is a good idea, especially for publicly accessible network sockets.

Configuring this mechanism is quite easy – if you know the particular steps. In this example, one device is configured to access a particular network port – but it is also possible to allow more than one device per port. It is a good idea to have all devices that need to establish connections already connected to the switch during configuration.

Static MAC assignment

First of all, static MAC assignments need to be configured for all affected devices/network ports. This can be done using a dialog which you will find in the menu underneath “MAC Address Tables > Static Addresses”.

In this example the dialog is filled like this:

  • MAC Address: (device MAC address)
  • Interface: Port GEx (appropriate network port)
  • Status: Secure (make sure to use this!)
Protected Port setting

After all required MAC addresses have been configured on the switch (it is also possible to configure more than one), “Protection” needs to be enabled for the appropriate switch port (IMHO a better label would have been great, Cisco!). This setting can be found in the menu underneath “Port Management > Port Settings”. In the port configuration, the checkbox “Protected Port” needs to be set to “Enable”. After that, the port table column “Protected Port” should contain the value “Protected”:

Protected Port

Afterwards, Port Security needs to be configured for the affected switch ports. This is done by editing the appropriate switch port underneath the menu “Security > Port Security”. In this example the dialog is filled like this:

Port Security settings

  • Interface Status: Lock (otherwise Port Security isn’t enabled – sounds strange, I know)
  • Learning Mode: Secure Permanent
  • Action on Violation: Discard (if you want to drop packets of other hosts) or Shutdown (if you also want the network interface to be shut down)

If you select the setting “Shutdown”, the network interface is shut down as soon as the first “unknown” network device connects. This means manual steps are necessary to re-enable the network interface, but it offers much greater security against brute-force attacks (various tools can fake MAC addresses).

Afterwards, Port Security and MAC filtering are in place – I highly recommend verifying the functionality. Often a checkbox is missed and the lock does not work, which might result in a major security vulnerability.

Minimal CRUX-ARM installation

CRUX-ARM is a Linux distro for ARM devices like the Raspberry Pi that follows the KISS philosophy. If you’re looking for a minimalistic system you might want to have a look at the construction kit.

I’m using CRUX-ARM on one of my Raspberry Pis. The installation fits on a 1 GB memory card. The project’s wiki offers a manual for installing the system: [click me!]

The basic installation contains some unneeded packages that can be removed:

# prt-get listinst|less
# prt-get remove reiserfsprogs xfsprogs jfsutils pciutils btrfs-progs hdparm sudo ppp exim mlocate
# groupdel mlocate

To make SSH work you need to customize the file /etc/hosts.allow and the daemon autostart configuration:

# echo "sshd: ALL" >> /etc/hosts.allow
 # vi /etc/rc.conf
 ...
 SERVICES=(net crond sshd)

ESC ZZ

It is also a good idea to alter the paging behaviour and the daily rdate cron job:

# echo "vm.swappiness=0" >> /etc/sysctl.conf
# vi /etc/cron/daily/rdate
...
/usr/bin/rdate -nav pool.ntp.org

ESC ZZ

Because this installation doesn’t need any graphical software you can disable the Xorg repositories:

# mv /etc/ports/xorg-arm.rsync /etc/ports/xorg-arm.rsync.inactive
# mv /etc/ports/xorg.rsync /etc/ports/xorg.rsync.inactive
# vi /etc/prt-get.conf
...
#prtdir /usr/ports/xorg-arm
#prtdir /usr/ports/xorg

ESC ZZ

# rm -Rf /usr/ports/xorg*

In my case the firmware only detected 128 MB of memory – the reason for this is an outdated firmware version. Using a tool named rpi-update, the firmware can be updated easily. During the update a new kernel is installed as well – after rebooting the system, the old kernel modules can be removed. The update also installs some tools for maintaining the single-board computer – to use these tools you need to update the library cache:

# wget --no-check-certificate http://goo.gl/1BOfJ -O /opt/bin/rpi-update 
# chmod +x /opt/bin/rpi-update
# rpi-update
Raspberry Pi firmware updater by Hexxeh, enhanced by AndrewS
Performing self-update
ARM/GPU split is now defined in /boot/config.txt using the gpu_mem option!
We're running for the first time
Setting up firmware (this will take a few minutes)
Using HardFP libraries
If no errors appeared, your firmware was successfully setup
A reboot is needed to activate the new firmware
# reboot
# rm -Rf /lib/modules/3.6.11
# echo "/opt/vc/lib" > /etc/ld.so.conf.d/vc.conf
# ldconfig

It is a good idea to install all available updates afterwards:

# ports -u ; prt-get diff
# prt-get sysup

Because CRUX-ARM is a source-based Linux distro, all updates are compiled, which can take a lot of time. “Installing” all updates took a whole day for me.

For impatient users I uploaded all update packages that were available when I wrote this article: [click me!]

# wget --mirror -nH --cut-dirs=100 http://crux.stankowic-development.net/rpi/packages --accept="*.pkg.tar.gz"
# for i in *.pkg.tar.gz ; do pkgadd -u $i; done

There is also an image of my CRUX installation available: [click me!]

Short tip: Install Microsoft fonts under Enterprise Linux

Some applications still require Microsoft fonts under Linux. Some Linux distros dropped those fonts because of license difficulties: Microsoft offers the fonts for free, but redistributing them is prohibited by the license. Debian-based distros offer a package ttf-mscorefonts-installer which downloads and extracts the fonts during the installation.

This requires an internet connection, which isn’t always available – especially in data centers. A possible solution is to manually create an RPM package that includes all fonts. On SourceForge there is a spec file for this – using it, you can build the software package on a system with internet access:

# yum install rpm-build rpmdevtools
# wget http://corefonts.sourceforge.net/msttcorefonts-2.5-1.spec
# rpmbuild -bb msttcorefonts-2.5-1.spec

Afterwards you will find an RPM package under rpmbuild/RPMS/noarch that can be installed on other systems.
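
The package can then be copied to the offline systems and installed there – for example (the version number may differ):

# yum localinstall rpmbuild/RPMS/noarch/msttcorefonts-2.5-1.noarch.rpm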

Sending and receiving mails under Linux using msmtp and mutt over Microsoft Exchange EWS

Especially in Microsoft-dominated environments it might be necessary to send and receive mails over Exchange. Some graphical mail clients like Evolution offer support for this – of course this is not an option for servers without graphical user interfaces.

DavMail is a Java-based platform-independent software that is able to act as Exchange gateway for the following protocols:

  • POP
  • IMAP
  • SMTP
  • CalDAV
  • CardDAV

The software listens on appropriate network ports and forwards requests over EWS (Exchange Web Services) to the Exchange server.

There are several versions of DavMail:

  • JEE web application (.war)
  • Bundle including graphical user-interface (for Debian-based distros)
  • Java standalone version

For servers you might choose the Java standalone version because it can be used without a graphical user interface. The archive is available for 32- and 64-bit systems. The first step is to download and extract the archive.

32-bit:

# wget http://sourceforge.net/projects/davmail/files/davmail/4.5.1/davmail-linux-x86-4.5.1-2303.tgz/download -O davmail-linux-x86-4.5.1-2303.tgz
# tar xfz davmail-linux-x86-4.5.1-2303.tgz
# cd davmail-linux-x86-4.5.1-2303

64-bit:

# wget http://sourceforge.net/projects/davmail/files/davmail/4.5.1/davmail-linux-x86_64-4.5.1-2303.tgz/download -O davmail-linux-x86_64-4.5.1-2303.tgz
# tar xfz davmail-linux-x86_64-4.5.1-2303.tgz
# cd davmail-linux-x86_64-4.5.1-2303

Hint: Before downloading the archive you might want to check if there is a newer version available: [click me!]

The software needs a configuration file including the OWA (Outlook Web Access) or EWS URL and other settings (SSL configuration, logging, etc.). This file is named davmail.properties – on the project website you can find an example: [click me!]

The most important line defines the EWS/OWA URL:

davmail.url=https://owa.domain.com

The other settings are documented in detail in comments.

DavMail is started using the following command:

# ./davmail.sh davmail.properties
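
Whether the gateway is listening can be verified quickly – a hedged example, assuming the default ports from the sample configurations used below (1025 for SMTP, 1143 for IMAP):

# netstat -tlpn | egrep '1025|1143'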

However, it is nicer to start the software automatically at boot time. For all common init/service control systems (SysV init, upstart, systemd) I published appropriate configurations on GitHub: [click me!]

These templates can be imported and used easily:

sysvinit # wget https://raw.githubusercontent.com/stdevel/davmail-initscript/master/davmail-sysvinit -O /etc/init.d/davmail
sysvinit # chmod +x /etc/init.d/davmail
sysvinit # chkconfig --add davmail
sysvinit # chkconfig davmail on
sysvinit # service davmail start

upstart # wget https://raw.githubusercontent.com/stdevel/davmail-initscript/master/davmail-upstart -O /etc/init/davmail.conf
upstart # initctl start davmail

systemd # wget https://raw.githubusercontent.com/stdevel/davmail-initscript/master/davmail-systemd -O /usr/lib/systemd/system/davmail.service
systemd # systemctl enable davmail.service
systemd # systemctl start davmail.service

You might need to change paths in the configuration files if you installed DavMail in a different directory than /opt/davmail.

For sending mails, msmtp is used in this case. This is a simple tool which relays mails over SMTP to a configured server.

A configuration file is created for msmtp – e.g. for the current user:

$ vi ~/.msmtprc
defaults
logfile ~/.msmtp.log
account default
host localhost
port 1025
protocol smtp
from max.mustermann@domain.com
auth login
user domain\max.mustermann
password MyPassword

The lines from, user and password need to be altered to match the Exchange configuration. The configuration can also be made system-wide – to do this, write the content above to the file /etc/msmtprc. The log file path (logfile) needs to be changed then as well.

Sending mails can be tested like this:

$ echo -e "Subject: Test\r\n\r\nThis is a test mail" | msmtp bernd.beispiel@domain.com

If problems occur msmtp can be executed with the parameter -v to enable debugging.

For receiving mails using mutt the appropriate configuration (~/.muttrc) is altered:

set spoolfile="imap://max.mustermann:MyPassword@127.0.0.1:1143/INBOX"
set folder="imap://max.mustermann:MyPassword@127.0.0.1:1143"
set from="max.mustermann@domain.com"
set realname="max.mustermann"
set imap_user="max.mustermann@domain.com"

set imap_pass="MyPassword"
set header_cache=~/.mutt/cache/headers
set message_cachedir=~/.mutt/cache/bodies

set sendmail="/usr/bin/msmtp"
my_hdr From: "max.mustermann"

bind index G imap-fetch-mail

Besides the IMAP/SMTP configuration, a global hotkey G for fetching mails is defined. Don’t forget to change the user names and passwords.

Afterwards mutt is able to retrieve mails over EWS.

Short tip: Update hard drive sizes online under Linux

When hard drive sizes are altered, the Linux kernel isn’t informed about these changes automatically. Rebooting the system is a possible solution – but often not an option.

Beneath the directory /sys/class/scsi_disk you will find, per SCSI ID, additional files controlling some of the device’s functions. By writing to the file device/rescan it is possible to trigger re-reading the device information – the kernel will then be informed about the new hard drive size. In combination with LVM it is quite easy to provide additional storage:

# echo '1' > /sys/class/scsi_disk/`lsscsi|grep sdb|cut -d" " -f 1|sed -e 's/\[//g;s/\]//g'`/device/rescan
# dmesg
...
ata2: EH complete
sd 3:0:0:0: [sdb] 167772160 512-byte logical blocks: (85.8 GB/80.0 GiB)
sd 3:0:0:0: [sdb] Cache data unavailable
sd 3:0:0:0: [sdb] Assuming drive cache: write through
sdb: detected capacity change from 64424509440 to 85899345920
# pvresize /dev/sdb
  Physical volume "/dev/sdb" changed
  1 physical volume(s) resized / 0 physical volume(s) not resized
...
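
Afterwards the new space can be handed over to a logical volume and its file system – a hedged sketch with placeholder volume group and volume names:

# lvextend -l +100%FREE -r /dev/vg_data/lv_data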

Short tip: VMware ESXi doesn’t recognize SSD as SSD

During a recent server installation, a locally attached SSD wasn’t recognized as an SSD and therefore couldn’t be used as vSphere Flash Read Cache. The reason for this issue is not always a controller misconfiguration – sometimes ESXi just doesn’t recognize the SSD as a flash drive. In this case it is possible to explicitly flag particular storage devices as SSDs.

For this, access to the ESXi console is required. First of all, the device name of the affected storage device is needed:

#  esxcli storage core device list|grep "naa"
naa.xxx
   Display Name: Local DELL Disk (naa.xxx)
   Devfs Path: /vmfs/devices/disks/naa.xxx

To make sure you picked the right device it’s a good idea to have a look at the vendor and size:

#  esxcli storage core device list -d naa.xxx|egrep -i "vendor|model|size"
   Size: 94848
   Vendor: DELL
   Model: PERC H710P
   Queue Full Sample Size: 0

In this example it is a circa 100 GB SSD (94848/1024). Using esxcli you can find out whether the device is detected as a locally attached SSD:

#  esxcli storage core device list -d naa.xxx|egrep -i "local|ssd"
   Display Name: Local DELL Disk (naa.xxx)
   Is Local: true
   Is SSD: false
   Is Local SAS Device: false

Using the following command you can flag the SSD:

# esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device naa.xxx --option "enable_local enable_ssd"

If the SSD is not locally attached, you need to remove the keyword “enable_local”. Afterwards the claim rules are reloaded:

# esxcli storage core claimrule load
# esxcli storage core claimrule run
# esxcli storage core claiming reclaim -d naa.xxx

The SSD should now be recognized as flash storage:

#  esxcli storage core device list -d naa.xxx|egrep -i "local|ssd"
   Display Name: Local DELL Disk (naa.xxx)
   Is Local: true
   Is SSD: true
   Is Local SAS Device: false

If the SSD isn’t recognized correctly it might be necessary to reboot the ESXi host.
