In my case the firmware only detected 128 MB of memory – the cause was an outdated firmware version. The firmware can easily be updated using a tool named rpi-update. The update also installs a new kernel – after rebooting the system, the old kernel modules can be removed. It also ships some tools for maintaining the single-board computer – to use them you need to update the library cache:
# wget --no-check-certificate http://goo.gl/1BOfJ -O /opt/bin/rpi-update
# chmod +x /opt/bin/rpi-update
Raspberry Pi firmware updater by Hexxeh, enhanced by AndrewS
ARM/GPU split is now defined in /boot/config.txt using the gpu_mem option!
We're running for the first time
Setting up firmware (this will take a few minutes)
Using HardFP libraries
If no errors appeared, your firmware was successfully setup
A reboot is needed to activate the new firmware
# rm -Rf /lib/modules/3.6.11
# echo "/opt/vc/lib" > /etc/ld.so.conf.d/vc.conf
# ldconfig
It is a good idea to install all available updates afterwards:
# ports -u ; prt-get diff
# prt-get sysup
Because CRUX-ARM is a source-based Linux distro, all updates are compiled from source, which can take a lot of time. Installing all updates took a complete day for me.
For impatient users I uploaded all update packages that were available when I wrote this article: [click me!]
# wget --mirror -nH --cut-dirs=100 http://crux.stankowic-development.net/rpi/packages --accept="*.pkg.tar.gz"
# for i in *.pkg.tar.gz ; do pkgadd -u $i; done
There is also an image of my CRUX installation available: [click me!]
Some applications still require Microsoft fonts under Linux. Some Linux distros dropped these fonts because of licensing issues: Microsoft offers the fonts free of charge, but the license prohibits redistributing them. Debian-based distros offer a package named msttcorefonts-installer which downloads and extracts the fonts during installation.
This requires an internet connection, which isn’t always available – especially in data centers. A possible solution is to manually create an RPM package that includes all the fonts. On SourceForge there is a spec file for this – using it, the package can be built on a system with internet access:
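A minimal sketch of the build; the spec file name and version are assumptions – check the actual SourceForge download:

```shell
# build the binary package on a machine with internet access
# (the spec downloads the font cabinet files during the build)
rpmbuild -bb msttcorefonts-2.5-1.spec

# the resulting package can then be copied to offline systems:
# rpm -ivh ~/rpmbuild/RPMS/noarch/msttcorefonts-*.noarch.rpm
```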
Especially in Microsoft-dominated environments it might be necessary to send and receive mails over Exchange. Some graphical mail clients like Evolution offer support for this – but of course that is no solution for servers without a graphical user interface.
DavMail is a Java-based, platform-independent software that can act as an Exchange gateway for the following protocols: POP, IMAP, SMTP, CalDAV, CardDAV and LDAP.
The software listens on appropriate network ports and forwards requests over EWS (Exchange Web Services) to the Exchange server.
There are several versions of DavMail:
JEE web application (.war)
Bundle including graphical user-interface (for Debian-based distros)
Java standalone version
For servers you might choose the Java standalone version because it can be used without a graphical user interface. The archive is available for 32- and 64-bit systems. The first step is to download and extract the archive.
32-bit:
# wget http://sourceforge.net/projects/davmail/files/davmail/4.5.1/davmail-linux-x86-4.5.1-2303.tgz/download -O davmail-linux-x86-4.5.1-2303.tgz
# tar xfz davmail-linux-x86-4.5.1-2303.tgz
# cd davmail-linux-x86-4.5.1-2303
64-bit:
# wget http://sourceforge.net/projects/davmail/files/davmail/4.5.1/davmail-linux-x86_64-4.5.1-2303.tgz/download -O davmail-linux-x86_64-4.5.1-2303.tgz
# tar xfz davmail-linux-x86_64-4.5.1-2303.tgz
# cd davmail-linux-x86_64-4.5.1-2303
Hint: Before downloading the archive you might want to check if there is a newer version available: [click me!]
The software needs a configuration file including the OWA (Outlook Web Access) or EWS URL and other settings (SSL configuration, logging, etc.). This file is named davmail.properties – on the project website you can find an example: [click me!]
The most important line defines the EWS/OWA URL:
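A minimal sketch of the relevant settings – the host name is a placeholder, the ports are DavMail’s default gateway ports:

```
# OWA/EWS URL of the Exchange server (host name is a placeholder)
davmail.url=https://owa.example.org/EWS/Exchange.asmx
davmail.mode=EWS
# default local gateway ports
davmail.imapPort=1143
davmail.smtpPort=1025
```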
The other settings are documented in detail in the comments.
DavMail is started using the following command:
# ./davmail.sh davmail.properties
However, it is nicer to start the software automatically at boot time. For all common init/service control systems (SysV init, upstart, systemd) I published appropriate configurations on GitHub: [click me!]
You might need to change paths in the configuration files if you installed DavMail in a different directory than /opt/davmail.
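As an illustration, a minimal systemd unit could look like this – paths are assumptions based on an installation in /opt/davmail; the configurations published on GitHub are more complete:

```
[Unit]
Description=DavMail Exchange gateway
After=network.target

[Service]
Type=simple
ExecStart=/opt/davmail/davmail.sh /opt/davmail/davmail.properties
Restart=on-failure

[Install]
WantedBy=multi-user.target
```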
For sending mails, msmtp is used in this case. This simple tool relays mails over SMTP to a configured server.
A configuration file is created for msmtp – e.g. for the current user (~/.msmtprc):
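A sketch of the configuration – credentials are placeholders, and port 1025 assumes DavMail’s default SMTP gateway port on the same host:

```
defaults
tls off
logfile ~/.msmtp.log

account exchange
host localhost
port 1025
auth on
from firstname.lastname@example.org
user DOMAIN\firstname.lastname
password topsecret

account default : exchange
```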
The lines from, user and password need to be altered to match the Exchange configuration. The configuration can also be applied system-wide – to do this, write the content above to the file /etc/msmtprc. The log file path (logfile) then also needs to be changed.
Sending mails can be tested like this:
$ echo -e "Subject: Test\r\n\r\nThis is a test mail" | msmtp firstname.lastname@example.org
If problems occur msmtp can be executed with the parameter -v to enable debugging.
For receiving mails with mutt, the appropriate configuration (~/.muttrc) is altered:
my_hdr From: "max.mustermann"
bind index G imap-fetch-mail
Besides the IMAP/SMTP configuration, a global hotkey G for fetching mails is defined. Don’t forget to change user names and passwords.
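A sketch of the IMAP/sending part of ~/.muttrc – it assumes DavMail’s default IMAP gateway port 1143 on the same host, placeholder credentials, and msmtp as the sending program as configured above:

```
set folder="imap://localhost:1143/"
set spoolfile="+INBOX"
set imap_user="DOMAIN\\firstname.lastname"
set imap_pass="topsecret"
set sendmail="/usr/bin/msmtp"
```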
Afterwards mutt is able to retrieve mails over EWS.
When a hard drive’s size changes, the Linux kernel isn’t informed automatically. Rebooting the system is a possible solution – but often not an option.
Beneath the directory /sys/class/scsi_disk you will find additional files controlling some of the device’s functions, depending on the SCSI ID. Using the file device/rescan it is possible to trigger re-reading the device information – the kernel is then informed about the new hard drive size. In combination with LVM it is quite easy to provide the additional storage:
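A sketch of the procedure – the SCSI ID 2:0:0:0, the physical volume /dev/sdb, the volume group vg_data and the logical volume lv_data are placeholders for your setup:

```shell
# ask the kernel to re-read the device information (including the size)
echo 1 > /sys/class/scsi_disk/2:0:0:0/device/rescan

# grow the LVM physical volume to the new device size
pvresize /dev/sdb

# extend the logical volume by the newly available space
lvextend -l +100%FREE /dev/vg_data/lv_data

# grow the file system online (ext3/ext4)
resize2fs /dev/vg_data/lv_data
```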
Recently, during a server installation, a locally attached SSD wasn’t recognized as an SSD and therefore couldn’t be used as vSphere Flash Read Cache. The reason for this issue isn’t always a controller misconfiguration – sometimes ESXi just doesn’t recognize the SSD as a flash drive. In this case it is possible to explicitly flag particular storage devices as SSDs.
For this, access to the ESXi console is required. First of all, the device name of the affected storage is needed:
# esxcli storage core device list|grep "naa"
Display Name: Local DELL Disk (naa.xxx)
Devfs Path: /vmfs/devices/disks/naa.xxx
To make sure you picked the right device it’s a good idea to have a look at the vendor and size:
# esxcli storage core device list -d naa.xxx|egrep -i "vendor|model|size"
Model: PERC H710P
Queue Full Sample Size: 0
In this example the device is a roughly 93 GB SSD (94848 MB / 1024). Using esxcli you can find out whether the device is detected as a locally attached SSD:
# esxcli storage core device list -d naa.xxx|egrep -i "local|ssd"
Display Name: Local DELL Disk (naa.xxx)
Is Local: true
Is SSD: false
Is Local SAS Device: false
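The device can be flagged as an SSD by adding a SATP claim rule – this follows the documented esxcli mechanism for local devices; naa.xxx is the placeholder device name from above:

```shell
# add a claim rule that marks the local device as SSD
esxcli storage nmp satp rule add --satp VMW_SATP_LOCAL --device naa.xxx --option "enable_ssd"

# re-run the claim rules for the device so the new rule takes effect
esxcli storage core claiming reclaim -d naa.xxx

# verify: "Is SSD" should now read "true"
esxcli storage core device list -d naa.xxx | grep -i "ssd"
```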
GitHub is a very comfortable portal for collaborating on source code. The service manages versions using Git – for documentation purposes, bug reports and wiki contents can be maintained as well. Especially the open-source scene uses the service a lot, but for internal, non-public developments it is only partially suitable.
Premium users are able to create private repositories with access rules – but the files are still stored on the provider’s servers. Another option is GitHub Enterprise – this commercial appliance offers all services known from GitHub in the local intranet.
For private purposes you might prefer a free solution – e.g. GitBucket. The Java software looks like GitHub and also offers the same core functionalities. Some of them are:
public and private repositories
repository browser and file editor
wiki and bug tracker
fork / pull requests
Compared to GitHub some features are currently (release 2.4.1, 10/06/2014) missing:
watch/star function (bookmark and follow)
comments for changesets
GitBucket requires Tomcat 7.x and is downloaded as a WAR archive. To deploy the application it is sufficient to copy the file into the appropriate webapps directory:
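A sketch of the deployment – the webapps path and service name are assumptions, they differ between distros:

```shell
# copy the WAR archive into Tomcat's application directory;
# Tomcat deploys it automatically
cp gitbucket.war /var/lib/tomcat7/webapps/

# restart Tomcat so the application is picked up immediately
service tomcat7 restart
```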
I found out that it is currently (11/01/2014) impossible to run osad (the Open Source Architecture daemon) with SELinux enabled on EL 6.6 and EL 7 systems. The following error message can be seen while starting the service:
# service osad restart
Shutting down osad: [ OK ]
Starting osad: 2014-11-01 12:23:57 osad._setup_config: Updating configuration
2014-11-01 12:23:57 osad._setup_config: Time drift 0
2014-11-01 12:23:57 osad._setup_config: Client name ...
2014-11-01 12:23:57 osad._setup_config: Shared key ...
2014-11-01 12:23:57 jabber_lib.setup_connection: Connecting to spacewalk.localdomain.loc
2014-11-01 12:23:57 jabber_lib._get_jabber_client:
2014-11-01 12:23:57 jabber_lib._get_jabber_client: Connecting to spacewalk.localdomain.loc
2014-11-01 12:23:57 jabber_lib.__init__:
2014-11-01 12:23:57 jabber_lib.__init__:
2014-11-01 12:23:57 jabber_lib.connect:
Error connecting to jabber server: Unable to connect to the host and port specified
2014-11-01 12:23:57 jabber_lib.main: Unable to connect to jabber servers, sleeping 60 seconds
2014-11-01 12:23:57 jabber_lib.push_to_background: Pushing process into background
After I had spent much time analyzing the Spacewalk and Jabber server, I remembered that my colleague Johannes had run into the same issue the other day. Red Hat Support named the following workaround:
# semanage permissive -a osad_t
# service osad restart
2014-11-01 12:59:49 jabber_lib.setup_connection: Connected to jabber server spacewalk.localdomain.loc
2014-11-01 12:59:49 jabber_lib.push_to_background: Pushing process into background
It seems there is currently an error in the SELinux policy of osad – this bug prevents communication with the Jabber service of Spacewalk, Red Hat Satellite or SUSE Manager. The workaround sets the SELinux domain osad_t into permissive mode – this means that rule violations are logged but not enforced. Red Hat is working on a fix.
Attentive readers of my blog or Twitter feed might have noticed that I have also been spending my time on OS X-related topics for two months. The reason is that I switched my notebook’s operating system from Microsoft Windows to Mac OS X.
To be honest, there was no crucial reason for switching. I had been using Microsoft Windows since my childhood (starting with version 3.1) – and after a short trip into Linux desktops I returned to Microsoft Windows again. After being frustrated by my last tests with Linux on the desktop, I was interested in a radical change. Because I had made positive first experiences with OS X on a Mac Mini, I decided to replace my ThinkPad with a MacBook. Changing the hardware would have been necessary anyway, as the T420s wasn’t sufficient for my needs anymore.
After having a look at the MacBook Air, I finally decided to buy a 13.3-inch MacBook Pro Retina. The main argument for the MacBook Air would have been the attractive price starting at 900 euros – unfortunately, the low memory of 4 GB and the low-density display are quite unalluring. Luckily I found an online offer for a MacBook Pro with Retina display and a much better hardware configuration. The extra charge was quite small and worth it. My device is equipped with:
13.3-inch Retina display with 2560×1600 pixel resolution
For use at home I also bought an Elgato Thunderbolt docking station, which is a good addition to the Rain mStand. The dock connects the MacBook to my gigabit network, keyboard, mouse and a DELL U2713H screen. This screen replaced my two Samsung SyncMaster 2433LW screens. After working for two weeks exclusively on the Retina display, I was shocked when I turned on the five-year-old Samsung screens again – I really wanted something with an adequate resolution. Because the new screen’s resolution matches the Retina display, I don’t need a second screen anymore. As a result I even have more space on my desk. If I temporarily need more screen space, I can add the MacBook’s display to expand the desktop.
The hardest part of changing the operating system is selecting the software. Especially when you have been using one particular platform for a couple of years, it can be hard to find adequate alternatives. Fortunately, part of my software (Firefox, Opera, Citrix, Skype, VLC,…) is platform-independent, so I didn’t need to search for alternatives there. For the remaining software I found these:
Microsoft Office 2010 => Microsoft Office for Mac 2011 (*)
VMware Workstation => VMware Fusion (*)
WinRAR => UnrarX
Texmaker => TeXnicle
Camtasia => Camtasia 2 for Mac (*)
KeePass => KyPass Companion (*)
OpenVPN => Viscosity (*)
Photoshop => GIMP
Filezilla => Cyberduck
(*) = with costs
For some applications it was necessary to buy additional licenses. Beyond that I found the following applications and tools very useful:
Airmail (with costs) – in my opinion the best mail client for OS X
Android File Transfer – OS X doesn’t support MTP; using this tool it is possible to exchange files between smartphone and computer
I don’t need dedicated software for some use-cases anymore. I really missed CalDAV/CardDAV support in Microsoft Office – there were many third-party applications to fill this gap, but I wasn’t successful with any of them. As a result I managed my contacts and calendars only on my smartphone and tablet. OS X offers great native integration for this use-case – I can continue to use my Baikal database.
Docked MacBook Pro
After about two months I have become familiar with OS X. Problems mostly occur when switching between operating systems – e.g. when I’m using my business computer running Microsoft Windows after the weekend.
I’m very happy with the MacBook Pro’s hardware – especially the Retina display. Contrary to my apprehension, the slightly glossy display isn’t annoying at all. At this point I’d like to mention that “glossy” is not always the same “glossy”. When I hear “glossy screens” I think of the panels built into low-budget notebooks. The Retina display offers a much better picture – the panel is slightly glossy, but outdoor usage is still possible. The battery life is much better in comparison with my ThinkPad – and I don’t think age is the only reason for this. Another thing I really like is the nearly silent cooling – my T420s often annoyed me.

As a ThinkPad fan, the legendary keyboard was always my first choice – but I have to admit that the Apple keyboard is at least comparable. The key travel is comfortable and typing on it is quite pleasant. Illuminated keyboards are open to dispute – for me the backlight is an adequate replacement for the ThinkLight. A ThinkPad feature I really miss is the docking port. The Thunderbolt dock is also a docking station, but I need to connect two cables: power and Thunderbolt. With the ThinkPad it was sufficient to click the notebook into place.

You also need to be willing to compromise on expandability when switching from a conventional notebook. All my previous notebooks were equipped with an SSD (for the operating system and applications) and a conventional hard drive for “unimportant” files (music library, virtual machines, etc.). This option is not available for the MacBook, which means you need to decide between an SSD and a hard drive. As a result I bought a MacBook with an adequately sized SSD and moved rarely used data to my NAS.
For me, OS X is a good compromise between a “simply working” desktop and a Unix-like operating system that offers a complete set of Unix software for power users. Linux would also have been interesting, but unfortunately my recent tests showed that Linux won’t be a serious option for me for the next few years.
Last week the European VMworld event took place at the Gran Via in Barcelona. The program of the four-day conference was dominated by virtualization and especially VMware products. VMware hosts two of these events every year – one in the USA and one in Europe.
I had the honour to join this event as a press member, which gave me the opportunity to gather many impressions. It was my first VMworld – but definitely not my last.
The program is very extensive – visitors may choose between:
more than 400 talks (partially hands-on) around VMware technologies
an exhibition with about 100 vendors
certification exams at reduced prices
the possibility to talk to VMware experts
The agenda was diversified – the only problem was scheduling everything. As a little helper, VMware published a smartphone application for Android and iOS devices for scheduling talks. Besides this, the app can also browse the exhibition vendor list and share messages and photos via Twitter.
The numerous chill-out lounges invited visitors to make new contacts.
The keynote was introduced by a cool dance/light act (starting at 1:02):
The agenda was characterised by new products around cloud technology and Software Defined Data Center – some topics were:
The presentation was moderated by well-known faces from the VMware management:
Maurizio Carli (EMEA Senior Vice President and General Manager)
Pat Gelsinger (CEO)
Bill Fathers (Vice President and General Manager Hybrid Cloud)
Sanjay Poonen (Vice President and General Manager End-User-Computing)
Kit Colbert (CTO End-User-Computing)
vCloud Air is a cloud service that was previously called vCloud Hybrid Service. It uses vSphere technology and integrates seamlessly into pre-existing VMware customer infrastructure. The service’s prime benefit is implementing dedicated and redundant cloud resources while complying with defined security and availability rules. Customers can use vCloud Air resources as disaster recovery infrastructure, which is usually cheaper than additional hardware.
VMware claims that vCloud Air offers twice the computing power of Microsoft Azure and three times the storage performance of Amazon Web Services. Compared with those competitor products, the VMware alternative is predicted to be even cheaper. More details about the comparison can be found in a VMware blog post: [click me!]
Currently more than 5000 applications and 90 operating systems are supported for vCloud Air.
As a highlight, a new vCloud Air data center for Central Europe was announced, located in Germany. There was a huge demand for such a data center. According to VMware, a data center in Germany is a good choice because of the country’s strong security and privacy laws. The data center is expected to be ready for use in the first quarter of 2015.
VMware also announced the vRealize Suite – a suite for managing hybrid clouds. It is designed to manage:
VMware vSphere, Virtual SAN, NSX
Amazon Web Services
other hypervisors, e.g. Hyper-V, Xen and KVM
Automation and operations management are core functions of the software. An important product feature is a self-service portal. Using operations management you’re able to manage:
Depending on the use-case there are two editions: advanced and enterprise.
More product information can be found on the VMware website: [click me!]
Horizon FLEX is an interesting solution for BYOD scenarios. The product’s use-case is providing centrally managed virtual machines to remote users.
VMs can be created and managed centrally. Administrators provide customized VM templates that end-users can access using the Horizon FLEX Client (available for Microsoft Windows and Mac OS X). End-users run these VMs using VMware Player Pro or VMware Fusion Pro.
VM policies are implemented centrally using the Horizon FLEX Policy Server. Management tasks include patch and backup maintenance. It is possible to harden VMs – e.g. to enforce remote locks and define expiration dates. VMs provided by Horizon FLEX can also be used without a network connection. Using this product it is possible to ensure that even remote users follow the company’s security rules for virtual machines.
AirWatch is an enterprise mobile device management suite that manages devices, applications, workspaces and data contents. Unlike alternative products, AirWatch manages smartphones and tablets as well as conventional laptops and desktops.
Currently the following operating systems are supported:
In my opinion the most interesting announcement was EVO:RAIL – an appliance offered by VMware and certified partners. EVO:RAIL is a hyper-converged infrastructure appliance (HCIA) concept that works well for medium-sized businesses and smaller private clouds.
But why? Because it offers 4 independent physical servers, Virtual SAN storage and remarkable CPU and memory resources in a small form factor (2U rack case). Each server consists of at least:
3x SAS hard drives with 1.2 TB storage capacity (10K RPM)
400 GB enterprise SSD (MLC) for read/write caching (VSAN)
2x 10 Gbit network adapter (SFP+ or RJ45)
IPMI remote management
In summary, one appliance offers a total of 100 GHz CPU power, 768 GB memory and 16 TB VSAN storage capacity. Currently it is possible to combine up to 4 appliances in a “scale-out setup” to gain even more resources.
Beside the hardware the following software products are also part of the appliance:
vSphere Enterprise Plus
vCenter Log Insight
This collection is rounded off by a newly developed management interface called MARVIN. The core components are:
No Adobe Flash technology is needed anymore – all recent web browsers are supported. As a highlight, MARVIN configures the whole infrastructure automatically. To install an EVO:RAIL appliance it is sufficient to start a configuration wizard and enter the IP addresses and host names of the particular servers. The remaining configuration (ESXi/cluster configuration, VSAN initialization,…) is done in the background. After about 20 minutes the appliance is ready and can run the first virtual machine.
Customers don’t need to buy dedicated software licenses or hardware support – the appliance can be bought including all required subscriptions and support from one of the certified EVO:RAIL partners. This makes it much easier to keep track of maintenance contracts. Currently customers can choose between the following vendors based on their preferences:
Hitachi Data Systems
Because the hardware design is largely the same, the customer’s vendor selection is irrelevant – the product is the same. By the way, EVO stands for Evolutionary and RAIL refers to the rack-mount form factor. At VMworld another concept named EVO:RACK was presented. In comparison with EVO:RAIL, this concept covers multiple racks and offers considerably more resources – which makes it suitable for implementing bigger data centers or clouds. On top, the following software products are also included:
vCloud Suite (vRealize)
It was said that a complete EVO:RACK data center is ready after about two hours.
I really like the EVO concept. It is limited to the essential components of a virtual infrastructure, it’s very scalable, and it offers a great level of automation. In my YouTube clip you can find some detail shots of the appliance: [click me!]
A product many VMware customers are waiting for is vSphere 6. The next major release of the popular hypervisor offers many interesting updates.
For adventurous administrators there are beta versions that can be downloaded after registration: [click me!]
The beta is protected by a non-disclosure agreement, which means that I’m not allowed to list the particular product news. If you’re interested in the details, you really should register and give the beta a try. After registration you’re allowed to access the appropriate forums discussing the product.
Because of the non-disclosure agreement there were no new details for me at VMworld. I also had the chance to talk to some VMware employees – but of course they weren’t allowed to name further details either.
So we need to keep calm and wait until vSphere 6 is released.
I really enjoyed this interesting event. The numerous talks and product demos included many new things I can use profitably. The location in Barcelona is just great – I’m really looking forward to the next event. It was an honour for me to join the event as a press member.
I highly recommend this event to every administrator who is using (or thinking about using) VMware products!
Video and photos
I published a short review video about the event on YouTube: [click me!]
Some photos of the event:
Gran Via, north entrance
Gran Via, north entrance
Gran Via, north entrance
Gran Via, hall 8
Pat Gelsinger, CEO
Bill Fathers (Vice President, General Manager Hybrid Cloud)
Sanjay Poonen (Vice President, General Manager End-User Computing)
When running a virtualized vCenter server it is very important to have a working backup. If the vCenter server crashes the virtual landscape cannot be managed or monitored.
Configuring backups is very easy when running VMware vCenter Server on a conventional Microsoft Windows server because it is a fully-featured operating system. If you’re using the VMware vCenter Server Appliance (vCSA), configuring backups can be more complex. By default the system doesn’t come with pre-installed backup agents, because it is assumed that an “agentless” backup solution for virtual machines is used.
Not every company uses a backup solution that offers “agentless” backups. If this function is missing, the backup is missing as well – which is very unfavorable for production environments. For example, if you’re using an older version of EMC NetWorker, agentless backups aren’t possible.
The vCSA is based on SUSE Linux Enterprise Server 11 SP2, which means you can extend its software using RPM packages – if your backup software also comes in an adequate format. Unlike many other virtual appliances I have seen recently, it is still possible to gain root access to the system. So it is basically possible to install additionally needed software – of course this is not covered by VMware support.
Installation of EMC NetWorker
EMC provides generic NetWorker RPMs for RPM-based Linux distros. Amongst others, this software package is compatible with Red Hat Enterprise Linux and SUSE Linux Enterprise Server – which means it is suitable for installation inside the vCSA.
It is necessary to resolve some dependencies for EMC NetWorker. The following software packages need to be installed (might differ depending on your NetWorker version):
And where do you get these packages from? Basically you need a valid subscription for downloading SLES packages. SUSE also offers trial versions of their enterprise products – including SLES: [click me!]
Owners of vSphere Standard or a higher edition also had the possibility to get product patches and updates at no charge in the “SUSE Linux Enterprise Server for VMware” programme (ended 07/25/2014, see here). To use this it was necessary to register the ESXi serial number at SUSE. Afterwards it was possible to activate installations using activation codes: [click me!]
After a short registration it is possible to download a trial version including 60 days of free update support. Basically only the first DVD is needed – the required RPM packages can be found underneath the folder “suse/x86_64”. It is recommended to download SLES SP2, the release the vCSA is based on, instead of SP3.
For installing the EMC NetWorker agent, the packages lgtoman and lgtoclient are sufficient. The RPM packages extracted from the DVD are copied to the vCSA using SSH/SCP and installed together with the agent:
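A sketch of the installation – the host name is a placeholder, and it assumes the dependency RPMs from the SLES DVD and the NetWorker packages are in the current directory:

```shell
# copy the packages to the appliance (host name is a placeholder)
scp *.rpm root@vcsa.localdomain.loc:/tmp

# on the vCSA: install the dependencies together with the agent
ssh root@vcsa.localdomain.loc
cd /tmp
rpm -ivh *.rpm
```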
Before NetWorker is started for the first time, it is important to create the required folder structure and a list of valid backup servers. If NetWorker is started before this information is provided, the agent won’t work properly and might need to be reinstalled (because the erroneous information is cached).
# mkdir -p /nsr/res
# echo "backupserver.fqdn.loc" > /nsr/res/servers
# chmod -R 755 /nsr/*
# chkconfig rpcbind on
# chkconfig networker on
# service networker start
starting NetWorker daemons:
The services rpcbind and networker need to be active to make sure that backups can be created:
# service rpcbind status
Checking for service rpcbind running
# service networker status
+--o nsrexecd (PID)
In many companies it is very common to create offline backups weekly, e.g. at the weekend. Machines are often backed up “online” daily, which means that application services aren’t stopped. As a result, not all data can be copied consistently because some files (e.g. database files) are in use. These files are backed up during the offline backup.
For controlling offline backups with EMC NetWorker, shell scripts are created. These scripts are executed before and after running the backup job.
For creating the backup scripts I followed these VMware KB articles:
# vi /opt/start_offline_backup.sh
service vmware-vpxd stop
service vmware-inventoryservice stop
/opt/vmware/vpostgres/1.0/bin/pg_dump INSTANCE -U USER -Fp -c > /tmp/VCDBackUp
/usr/lib/vmware-vpx/inventoryservice/scripts/backup.sh -file /tmp/InventoryServiceDB.DB
# vi /opt/stop_offline_backup.sh
service vmware-vpxd start
service vmware-inventoryservice start
# chmod +x /opt/st*_offline_backup.sh
The variables INSTANCE and USER need to be customized – the appropriate values can be gathered from the configuration file /etc/vmware-vpx/embedded_db.cfg. It is also necessary to check whether the file system /tmp offers enough storage capacity for holding the backups.
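The values can be looked up like this – the variable names EMB_DB_INSTANCE and EMB_DB_USER are an assumption based on typical vCSA 5.x configurations, so check the file’s actual content:

```shell
# look up the database instance and user used by vCenter
grep -E "EMB_DB_(INSTANCE|USER)" /etc/vmware-vpx/embedded_db.cfg

# make sure /tmp has enough free space for the dumps
df -h /tmp
```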
Finally I’d like to mention that this procedure works pretty well, but it is not covered by VMware support. It is a good idea to revert these changes before opening a support case (or installing appliance updates!).