Language Transfer for learning a new language

Posted: 2022-04-13 12:21:46 by Alasdair Keyes



I've recently tried to get back into learning German and I have started using the Language Transfer website.

Language Transfer is run by one chap, Mihalis, who teaches a range of languages to English speakers: French, Italian, German, Spanish, Greek, Turkish, Arabic and even Swahili. His concept is to understand which parts of English are similar to (or transfer across into) the language you are learning and use that as a base to get a fast grounding.

I've been using Duolingo on and off for a while, but often become frustrated by its lack of explanation as to why certain aspects of the language are the way they are. Language Transfer really adds to it by teaching the common rules for constructing sentences in the given language: how verbs conjugate, how nouns pluralise, etc.

If you're learning a language, I highly recommend checking the site to see if he teaches it; I think you'll find it a great help.

https://www.languagetransfer.org/



Mounting an encrypted ZFS dataset at boot on Debian 11 Bullseye

Posted: 2021-11-09 16:43:36 by Alasdair Keyes



I've recently got myself another HP Microserver, which has space for four disks, so I decided to install Debian 11 on one disk and use the other three to create a ZFS zpool for data storage.

The last time I'd experimented with ZFS on Linux (ZoL) on a virtual machine, encryption wasn't available, but it is now, so I enabled it for my dataset. This is fine when the dataset is first created, as it auto-mounts, but it doesn't re-mount on reboot because it's encrypted.

It turns out ZFS handles the process of obtaining the encryption key and mounting the volume as two distinct processes. This means that when the ZFS mount service starts, it will skip mounting the encrypted volume because there is no key available to it.
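Done by hand, those two distinct steps look like this (a quick illustration; -a operates on every dataset with a key to load or a mount to make):

zfs load-key -a
zfs mount -a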

The standard Linux dm-crypt/LUKS encryption requires you to update /etc/crypttab with each encrypted volume on the system, and it will prompt for a password at boot time. ZFS does have the ability to use a file as the encryption key, but as I already have to enter a password for the OS drive, I was looking to do the same for the ZFS dataset.
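For reference, this is roughly how a dataset is created so that ZFS prompts for a passphrase (a sketch; tank/data is a placeholder, not my actual layout):

zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt tank/data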

After some investigation I found the solution on the Arch Linux Wiki (https://wiki.archlinux.org/title/ZFS#Native_encryption). They provide a snippet for a systemd service file that can be set to run before the ZFS mount service to ask for the encryption keys.

It did require tweaking as the path to the ZFS binary is different on Debian. In short, create the file /etc/systemd/system/zfs-load-key.service with the following content...

[Unit]
Description=Load ZFS encryption keys
DefaultDependencies=no
After=zfs-import.target
Before=zfs-mount.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/zfs load-key -a
StandardInput=tty-force

[Install]
WantedBy=zfs-mount.service

Once that is done, run the following commands to refresh systemd with the new service and set it to run on boot.

systemctl daemon-reload
systemctl enable zfs-load-key.service
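After the next reboot, you can confirm the key was loaded and the dataset mounted (the pool name tank is a placeholder):

systemctl status zfs-load-key.service
zfs get -r keystatus tank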



LXC Debian containers and unknown GPG signing keys

Posted: 2021-06-04 10:14:27 by Alasdair Keyes



I needed to create a Debian Buster LXC container on my laptop, but when running the lxc-create command below I received the following error

# lxc-create -t debian -n testcontainer -- -r buster
debootstrap is /usr/sbin/debootstrap
Checking cache download in /var/cache/lxc/debian/rootfs-buster-amd64 ...
gpg: key 7638D0442B90D010: 4 signatures not checked due to missing keys
gpg: key 7638D0442B90D010: "Debian Archive Automatic Signing Key (8/jessie) <ftpmaster@debian.org>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
Downloading debian minimal ...
I: Retrieving InRelease
I: Checking Release signature
E: Release signed by unknown key (key id DCC9EFBF77E11517)
   The specified keyring /var/cache/lxc/debian/archive-key.gpg may be incorrect or out of date.
   You can find the latest Debian release key at https://ftp-master.debian.org/keys.html
Failed to download the rootfs, aborting.
Failed to download 'debian base'
failed to install debian
lxc-create: testcontainer: lxccontainer.c: create_run_template: 1626 Failed to create container from template
lxc-create: testcontainer: tools/lxc_create.c: main: 319 Failed to create container testcontainer

This is telling me that the key used to sign the Debian release is unknown to LXC. It also shows that LXC is using the file /var/cache/lxc/debian/archive-key.gpg as the GPG keyring.

We can check the keys listed in that keyring with the following command. To break it down: this runs the regular gpg utility, but the --no-default-keyring and --keyring arguments tell gpg to operate on just the keyring file that LXC is using.

# gpg --no-default-keyring --keyring /var/cache/lxc/debian/archive-key.gpg --list-key
/var/cache/lxc/debian/archive-key.gpg
-------------------------------------
pub   rsa4096 2014-11-21 [SC] [expires: 2022-11-19]
      126C0D24BD8A2942CC7DF8AC7638D0442B90D010
uid           [ unknown] Debian Archive Automatic Signing Key (8/jessie) <ftpmaster@debian.org>

This shows it only has the key for Debian 8 (Jessie)...

Before importing the new key, we need to check that the key listed in the error is a valid Debian key; otherwise we could be opening ourselves up to downloading malicious files.

Visiting https://ftp-master.debian.org/keys.html shows that the GPG key with fingerprint DCC9EFBF77E11517 listed in the error is the valid Debian 10 Buster release key.

Now that we're satisfied that nothing shady is going on, we can import the key to the keyring.

Download the key from the Debian site...

# wget "https://ftp-master.debian.org/keys/release-10.asc"
...
2021-06-04 10:51:53 (35.6 MB/s) - ‘release-10.asc’ saved [1200/1200]
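Before importing it, you can also inspect the downloaded key's fingerprint locally (with a recent GnuPG; older versions offer --with-fingerprint instead) and compare it against the Debian keys page...

# gpg --show-keys release-10.asc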

Then import into the keyring...

# gpg --no-default-keyring --keyring /var/cache/lxc/debian/archive-key.gpg --import release-10.asc 
gpg: key DCC9EFBF77E11517: public key "Debian Stable Release Key (10/buster) <debian-release@lists.debian.org>" imported
gpg: Total number processed: 1
gpg:               imported: 1

Running the --list-key command from before shows the new key in the LXC keyring

# gpg --no-default-keyring --keyring /var/cache/lxc/debian/archive-key.gpg --list-key
/var/cache/lxc/debian/archive-key.gpg
-------------------------------------
pub   rsa4096 2014-11-21 [SC] [expires: 2022-11-19]
      126C0D24BD8A2942CC7DF8AC7638D0442B90D010
uid           [ unknown] Debian Archive Automatic Signing Key (8/jessie) <ftpmaster@debian.org>

pub   rsa4096 2019-02-05 [SC] [expires: 2027-02-03]
      6D33866EDD8FFA41C0143AEDDCC9EFBF77E11517
uid           [ unknown] Debian Stable Release Key (10/buster) <debian-release@lists.debian.org>

We can now run the create container command...

# lxc-create -t debian -n akeyescouk -- -r buster
debootstrap is /usr/sbin/debootstrap
Checking cache download in /var/cache/lxc/debian/rootfs-buster-amd64 ... 
gpg: key 7638D0442B90D010: 4 signatures not checked due to missing keys
gpg: key 7638D0442B90D010: "Debian Archive Automatic Signing Key (8/jessie) <ftpmaster@debian.org>" not changed
gpg: Total number processed: 1
gpg:              unchanged: 1
Downloading debian minimal ...
I: Retrieving InRelease 
I: Checking Release signature
I: Valid Release signature (key id 6D33866EDD8FFA41C0143AEDDCC9EFBF77E11517)
I: Retrieving Packages 
I: Validating Packages 
...



PHPUnit 9.x coverage reporting

Posted: 2020-11-24 13:25:46 by Alasdair Keyes



I started a new Laravel project today and used the latest Laravel 8.x release. After installation I went through and updated a few things, such as adding phpmd, phpcs and laravel-debugbar, and also set up PHPUnit code coverage reports that I can hook into GitLab's code coverage reporting tools.

After making the changes to my phpunit.xml file I was greeted with the following error

PHPUnit 9.4.3 by Sebastian Bergmann and contributors.

  Warning - The configuration file did not pass validation!
  The following problems have been detected:

  Line 29:
  - Element 'log': This element is not expected.

  Test results may not be as expected.


..                                                                  2 / 2 (100%)

Time: 00:00.386, Memory: 30.00 MB

OK (2 tests, 2 assertions)

Line 29 is part of the <logging> block I added in for coverage reporting.

<phpunit ....>
    <logging>
        <log type="coverage-text" target="php://stdout" showUncoveredFiles="true"/>
        <log type="coverage-html" target="build/logs/html/" showUncoveredFiles="true"/>
    </logging>
</phpunit>

After reading through the documentation for PHPUnit 9 (which is what Composer pulls in for Laravel 8), it turns out the <logging> block has been replaced by a <report> element nested inside the new <coverage> tag, with a changed syntax.

<phpunit ....>
    <coverage processUncoveredFiles="true">
        ...
        <report>
            <text outputFile="php://stdout" showUncoveredFiles="true"/>
            <html outputDirectory="build/logs/html/"/>
        </report>
    </coverage>
</phpunit>
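Note that generating coverage also requires a coverage driver such as Xdebug or pcov; with Xdebug 3 you additionally have to opt in via an environment variable, e.g.

XDEBUG_MODE=coverage ./vendor/bin/phpunit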

I'm probably not going to be the only one caught out by this, so I thought it warranted a post.



PHP Docker Image and opcache

Posted: 2020-07-25 12:19:25 by Alasdair Keyes



I've recently started working on a new project using NGINX, PHP 7.4, Redis, PostgreSQL and Laravel 7.

As it's a new project I thought I would Dockerise it from the start. After configuring my docker-compose.yml file, I built the environment and installed Laravel 7 with the Laravel Debugbar (https://packagist.org/packages/barryvdh/laravel-debugbar).

I noticed that the bootstrapping of the basic Laravel app was taking over 100ms. I ran the config cache command (php artisan config:cache) and it barely made any difference.

This didn't seem right to me, but I had a number of variables and was unsure of where to start looking... or if it was a problem at all.

Thankfully the first part of my investigation turned up a solution. I created a phpinfo() page on an existing Laravel 5 setup and on the Docker container. It turns out that opcache isn't enabled in the Docker image by default.

Adding the following RUN statement to my Dockerfile sorted the issue

RUN docker-php-ext-install opcache

After restarting the container, the app bootstraps in 25-30ms. I'm unsure why opcache isn't enabled by default; I can't think of any problems it would cause, and I would imagine that in over 99% of situations users would want it on... no one wants slow PHP.
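For anyone wanting to tune it as well, here is a minimal sketch of the relevant Dockerfile lines (the image tag and ini values are illustrative assumptions, not settings from this project):

# Base image with PHP-FPM (tag is illustrative)
FROM php:7.4-fpm

# Compile and enable the opcache extension
RUN docker-php-ext-install opcache

# Optional tuning; values are examples, not benchmarked recommendations
RUN { \
        echo 'opcache.memory_consumption=128'; \
        echo 'opcache.max_accelerated_files=10000'; \
    } > /usr/local/etc/php/conf.d/opcache-custom.ini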



The Freelance Developer Podcast

Posted: 2020-03-24 23:35:20 by Alasdair Keyes



I chanced across a post on LinkedIn from an old colleague of mine who has started a new podcast about freelance development. If you've got some time and you're either a contractor or looking to contract in the future, it's worth a listen.

https://www.thefreelancedeveloper.co.uk/



Windows 7 EOL

Posted: 2020-01-19 10:17:03 by Alasdair Keyes



NOTE: Running End of Life software is risky; don't do it unless you accept the risks

So Windows 7 is now EOL for all but the few customers who are paying through the nose for long-term support. I run Linux on most of my machines, but I still have a solitary Windows machine for Steam and a few other Windows-only apps that won't run under WINE.

Unfortunately, I dislike Windows 8 and 10; there are a number of reasons, but on a purely practical level I find the interface horrendous, unintuitive and difficult to use. I would like to continue running Windows 7 for as long as I can. I will have to accept the increased security risk of running an OS with no further security updates, but thankfully my use of Windows is very limited and doesn't involve browsing/email or other common attack vectors for viruses and trojans. With a good AV installed too, this should reduce the risk to acceptable levels.

With the EOL status, the Windows Update service for Windows 7 will no doubt end in time. This means that although my current machine is up to date, if I need to re-install due to hardware failure I may not have access to all the updates.

With this in mind, I found the WSUSOffline tool http://www.wsusoffline.net/, which allows you to download all updates for a specific Windows/Office version and store them offline. The main use case appears to be sysadmins with network access restrictions downloading and installing updates on air-gapped machines; however, in this instance it also looks well suited to archiving. There are other options open to me, such as installing and maintaining a Windows WSUS server, but that is a lot of extra work.

If you wish to get your own backup of the updates, these are the steps I took:

It took about 30 minutes to download all the updates; once done, it copies a folder structure containing them into your shared drive, which you can then back up from your host to wherever you want. The folder also includes the executable to kick off the updates on another machine.

Running the archived updates on another machine is not a run-and-forget process: Windows updates require reboots, which means you will have to click a few buttons now and again, but that is no different from the official update process.

It looks like the WSUSOffline tool works by distributing a list of updates to use. As Windows 7 only went EOL in January, I would imagine I will have to wait for the next WSUSOffline release to get the last few Windows updates archived, but it looks like I should be able to continue using Windows 7 for some time yet, even if I have to rebuild.



Wiki Migration

Posted: 2019-07-30 11:15:33 by Alasdair Keyes



For about 10 years I've used a wiki to document everything that I learn and need to keep track of. This contains everything from walkthroughs of installing/configuring software, to lists of interview questions to ask potential hires.

When I first started working in hosting, I began collecting text files with information given to me by other colleagues. Over time this got unwieldy, so I created a MediaWiki wiki https://www.mediawiki.org/wiki/MediaWiki. I mainly picked this because it was the wiki I was using at my workplace and because its interface is widely familiar, being the software that Wikipedia uses.

Over time I've kept MediaWiki updated, but I gradually had more and more problems with updates breaking and needing fixes, so I started looking around for other wiki tools.

New Wiki

I eventually found Dokuwiki https://www.dokuwiki.org/. It's more lightweight and simple, but seems to be up to the tasks I need it for. It uses flat files as a back-end, so I don't need to back up both files and a database, and after importing all my data it's only a quarter of the size on disk.

$ du -hs public_html.mediawiki/
203M        public_html.mediawiki/
$ du -hs public_html.dokuwiki
48M         public_html.dokuwiki/

I did have to install the tag and pagelist Dokuwiki plugins to allow me to use tags, which are Dokuwiki's equivalent of MediaWiki's categories.

Migration

It would be nice to have been able to copy my articles directly across to the new wiki, but the Mediawiki syntax (https://www.mediawiki.org/wiki/Help:Formatting) and Dokuwiki syntax (https://www.dokuwiki.org/wiki:syntax) are different. The key differences were

I knocked up a quick Perl script to connect to the Mediawiki DB and parse the articles into a format suitable for Dokuwiki. This was mostly done with regex replace statements to insert spaces and change tags etc.
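As an illustration of the kind of transforms involved, here is a hypothetical sketch (not the actual script; it reads wiki text on STDIN rather than connecting to the DB):

#!/usr/bin/perl
# Hypothetical MediaWiki -> Dokuwiki converter sketch: reads wiki
# markup on STDIN, writes Dokuwiki markup to STDOUT.
use strict;
use warnings;

while (my $line = <STDIN>) {
    # '''bold''' and ''italic'' become **bold** and //italic//
    # (bold first, so the italic rule doesn't eat the triple quotes)
    $line =~ s{'''(.+?)'''}{**$1**}g;
    $line =~ s{''(.+?)''}{//$1//}g;
    # Dokuwiki bullet lists need a leading two-space indent
    $line =~ s{^\*}{  *};
    # MediaWiki categories map onto the tag plugin's syntax
    $line =~ s/\[\[Category:(.+?)\]\]/{{tag>$1}}/g;
    print $line;
}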

While I was at it, I took this time to delete or update any old articles. So now I have a new wiki with refreshed info and am very pleased with Dokuwiki.



Debian Buster first install

Posted: 2019-07-17 08:47:31 by Alasdair Keyes



I upgraded my home server to Debian 10 (Buster) this week. It's running on quite an old HP Proliant Microserver so I bought a new SSD to use for the OS partitions to give it a little extra life. As such, it was a fresh install rather than an in-place upgrade.

As you would imagine, 10 is much the same as 9 in most respects, but there were a few points of note...

  1. Puppet install was producing a DH key error

The Buster Puppet install was using version 5.5.10, whereas my Puppet Master (on Debian Stretch) was using 4.8.2. When connecting to the master, the new install would error with

Warning: SSL_connect returned=1 errno=0 state=error: dh key too small

The answer to this was found at another chap's blog https://blog.steve.fi/upgraded_my_first_host_to_buster.html and is to do with system-wide SSL settings, although I fixed it slightly differently.

In /etc/ssl/openssl.cnf I updated the line

CipherString = DEFAULT@SECLEVEL=2

to

CipherString = DEFAULT@SECLEVEL=1

It turns out this is a non-standard, custom security setting made by Debian https://wiki.debian.org/ContinuousIntegration/TriagingTips/openssl-1.1.1

It doesn't appear that you can define a custom set of Diffie-Hellman params for a Puppet Master as you can for other software like NGINX and Apache. As soon as I have my Puppet Master on the later version, I'll be changing this setting back, assuming it doesn't interfere with anything else.

  2. check_disk_io Nagios plugin was failing

It turns out the output of the iostat command had changed slightly, and the plugin required a tweak to continue working. Commit: https://gitlab.com/alasdairkeyes/nagios-plugin-check_disk_io/commit/0708ba7b9cb0017f6f36554d54ee3e37a9b58d63

  3. The debsecan package is enabled by default

I wasn't aware this package existed until it started emailing me with all the system vulnerabilities. I can see a use for it, but as my systems are updated regularly, it's now purged by Puppet.

  4. The sensors utility and SMBus PIIX4 adapter device

The sensors utility used by the check_sensors Nagios plugin was reporting a critical alarm.

It turns out that there is no max/critical temperature information for the thermometer on this device, so the reported temperature is always higher than the threshold of 0°C.

# sensors
...
jc42-i2c-0-18
Adapter: SMBus PIIX4 adapter port 0 at 0b00
temp1:        +31.0°C  (low  =  +0.0°C)                  ALARM (HIGH, CRIT)
                       (high =  +0.0°C, hyst =  +0.0°C)
                       (crit =  +0.0°C, hyst =  +0.0°C)
... 

As I have other temperature sensors available, I disabled this one by creating the following file /etc/sensors.d/jc42-i2c-0-18

chip "jc42-i2c-0-18"
    bus "i2c-0" "SMBus PIIX4 adapter port 0 at 0b00"
    ignore temp1

Other than that it was all pretty seamless.



Tor Project Signing Key Poisoning and Ubuntu's torbrowser-launcher package

Posted: 2019-07-09 12:45:23 by Alasdair Keyes



I started up the Tor Browser yesterday and noticed that it didn't start in its usual time frame; 10 minutes later the browser had still not opened.

Checking top, I saw that a GPG process was using 100% CPU.

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
19330 username  20   0   78364  47020   4448 R  99.7  0.6   0:16.43 gpg
 3145 username  20   0 3458164 139712  63512 R  12.6  1.7  18:43.51 cinnamon

I'd read recently about an attack on GPG where keys were being poisoned with a large number of signatures to exploit a GPG bug and corrupt GPG installs https://threatpost.com/pgp-ecosystem-targeted-in-poisoning-attacks/146240/, and I wondered if this was what was occurring.

I checked what the GPG process was running.

$ ps aux | grep 19330
username 19330 64.6  0.6  82192 50980 ?        RL   10:51   0:31 /usr/bin/gpg --status-fd 2 --homedir /home/username/.local/share/torbrowser/gnupg_homedir --keyserver hkps://hkps.pool.sks-keyservers.net --keyserver-options ca-cert-file /usr/share/torbrowser-launcher/sks-keyservers.netCA.pem include-revoked no-honor-keyserver-url no-honor-pka-record --refresh-keys

It seemed to be running --refresh-keys, which requests updated copies of keys from the key servers. I ran the following to see which keys were being refreshed.

$ /usr/bin/gpg --homedir /home/username/.local/share/torbrowser/gnupg_homedir --list-keys
/home/username/.local/share/torbrowser/gnupg_homedir/pubring.kbx
----------------------------------------------------------------
pub   rsa4096 2014-12-15 [C] [expires: 2020-08-24]
      EF6E286DDA85EA2A4BA7DE684E2C6E8793298290
uid           [ unknown] Tor Browser Developers (signing key) <torbrowser@torproject.org>
sub   rsa4096 2018-05-26 [S] [expires: 2020-09-12]

I checked the key server's entry for the key EF6E286DDA85EA2A4BA7DE684E2C6E8793298290 at http://pgp.mit.edu/pks/lookup?op=vindex&search=0x4E2C6E8793298290 and saw the key had received a large number of signatures on 2019-06-30; it does indeed look like it has been poisoned with excessive signatures.

I downloaded the latest Tor Browser for Linux directly from https://www.torproject.org/ and didn't hit this issue during startup, which is good news.

However, my Tor install is through the torbrowser-launcher package provided by the Linux Mint repos (originally provided by Ubuntu).

torbrowser-launcher doesn't contain the Tor Browser itself (as the name suggests, it's just a launcher); it is a Python environment that downloads the latest Tor Browser directly from the Tor Project. To verify that the downloaded files are legitimate it uses the Tor Project's public GPG key, and during this process it refreshes the key from the key servers and hits the poisoning issue.

It seems that if you are affected by this, you're best off downloading Tor directly from the Tor Project itself. Unfortunately, verifying the file you download from the website also requires GPG; you can at least check that the key that created the signature is the correct one...

$ gpg --verify tor-browser-linux64-8.5.3_en-US.tar.xz.asc Downloads/tor-browser-linux64-8.5.3_en-US.tar.xz
gpg: Signature made Fri 21 Jun 2019 02:30:51 PM CEST
gpg:                using RSA key EB774491D9FF06E2
gpg: Can't check signature: No public key

That key EB774491D9FF06E2 matches the key listed at https://2019.www.torproject.org/docs/verifying-signatures.html.en and is a subkey for the Tor Project Signing key, but without the key in your keyring, this check isn't as secure as it should be.
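If you do want the key in your keyring, one approach that avoids the poisoned SKS pool entirely is fetching it over WKD directly from torproject.org (requires a reasonably recent GnuPG), then verifying as above:

$ gpg --auto-key-locate nodefault,wkd --locate-keys torbrowser@torproject.org
$ gpg --verify tor-browser-linux64-8.5.3_en-US.tar.xz.asc Downloads/tor-browser-linux64-8.5.3_en-US.tar.xz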


