Disabling syslog debugging on open-fprintd

Posted: 2023-04-02 19:23:47 by Alasdair Keyes



I recently installed python-validity (https://github.com/uunicorn/python-validity) and open-fprintd (https://github.com/uunicorn/open-fprintd) to get the fingerprint reader working on a Thinkpad T480s. (The standard fprintd package doesn't support the fingerprint reader device on the T480s).

After setting it up I noticed a lot of debug information in syslog (example truncated)...

Apr  2 09:11:00 hostname01 open-fprintd: >tls> 17: 4b00000b0053746757696e64736f7200
Apr  2 09:11:00 hostname01 open-fprintd: >cmd> 1703030050c00a...a82185cc9399d30625ee3c1451f
Apr  2 09:11:00 hostname01 open-fprintd: <cmd< 1703030050b7a4a...cdb0d7f97fa67b6337329
Apr  2 09:11:00 hostname01 open-fprintd: <tls< 17: 0000030002000b00000...6757696e64736f7200

To disable debug output without editing the installed systemd files, run the following to create a copy of open-fprintd.service in /etc/systemd/system/ with the --debug flag removed; unit files there take precedence over those in /lib/systemd/system/, so the copy overrides the original.

sed 's/\(ExecStart.*\)\-\-debug\(.*\)/\1 \2/' /lib/systemd/system/open-fprintd.service > /etc/systemd/system/open-fprintd.service

After a reboot it'll stop spewing that data out.
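
If you'd rather not wait for a reboot, reloading systemd's unit files and restarting the service should have the same effect:

systemctl daemon-reload
systemctl restart open-fprintd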


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Group code reviews

Posted: 2023-01-29 16:03:38 by Alasdair Keyes



Whilst working with a customer of mine, I was introduced to an approach to code review that I hadn't seen before. I'm not sure if it's particularly unique to them, but in my experience it's fairly rare... group code review.

In all previous companies I've worked at/with, a pull request (PR) was created and some key devs were added as reviewers. For a speedy review, a link to the PR was often thrown into the dev Slack channel with a little prodding or begging to get the task done.

Now, most people in dev teams are busy; or at least, if they're not too busy, they find almost anything more interesting than reviewing a PR. It can be tedious, and if it touches code you're not familiar with, you can end up spending a long time working out both what the code was doing and what the change is supposed to do. In the end it was often team leaders who picked up the slack and performed the reviews to get code out the door, putting an excessive burden on them when it could have been shared more equally.

The process

This particular customer had group code reviews once in the morning after stand-up and then an optional code review mid afternoon if required. My first thought was that this would cause undue disruption to work flow but over time I saw immense benefit in this approach and very little drawback.

The developer that wrote the code would create a PR as normal. At the review, they would share their screen on a famous git-hosting provider. Usually they would give a sentence about the ticket and what they were trying to achieve; due to stand-ups and backlog planning, everyone usually had a fairly good idea of this already. The dev would then go through the changes on their branch, giving a run-down of the code as they went. Questions or points could be raised by anyone and a discussion had amongst the group. If new tests were written, they would also be run through briefly so that the other devs could see what was being tested, whether it was suitable/adequate, and whether any edge cases might fall through the cracks and need to be added. (It also required that the tests passed!)

Any changes that were required and agreed upon by the team were added to the PR and the dev would go back and implement the changes.

If the changes needed were minor, they were often approved by a senior dev and scheduled for release. Larger changes would have to have a new PR created and go back into the code review process. The coder would only have to go through the changes from the last code review, not the whole shebang.

The benefits...

Things you think are bad... but they're not as bad as you think...

So that's my take on group code review. If you're looking for possible improvements to your process, I strongly advise you to give it a shot. Do it for a month or so and see how you get on. You can always tweak the process to your needs and fall back to your previous ways if you don't like it.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Adding custom Firefox config with Ansible

Posted: 2023-01-26 21:05:42 by Alasdair Keyes



I've been writing Ansible playbooks to build my laptop and I came across a problem with applying my custom Firefox user.js config file to a new build.

Firefox won't create its config/profile directory until it's started for the first time. During this first start a randomly named profile directory is created, in which the user.js file needs to be placed, e.g. ~/.mozilla/firefox/wxyz1234.default-release/.

As such you can't just install user.js with a basic file copy.
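
For reference, user.js is just a list of user_pref() overrides; a minimal example (these particular preferences are illustrative rather than taken from my actual config):

// Open new windows with a blank page
user_pref("browser.startup.homepage", "about:blank");
// Don't offer to remember passwords
user_pref("signon.rememberSignons", false);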

To counter this I wrote the following play: if there is no Firefox profile folder, Ansible starts Firefox, allows it to create the profile folder and then kills it. It then searches for the newly created profile folder and installs the file.

# Install Firefox
- name: Install Firefox
  become: true
  package:
    name:
      - firefox
    state: present

# Check if a profile folder exists
- name: Check Firefox config folder
  find:
    paths: "{{ ansible_user_dir }}/.mozilla/firefox"
    patterns: '^.*\.default-release'
    use_regex: true
    file_type: directory
  register: firefox_config_folder_search

# If profile folder doesn't exist, start Firefox
- name: Starting Firefox
  shell:
    cmd: "firefox &"
  changed_when: false
  when: firefox_config_folder_search.matched == 0

- name: Waiting for Firefox to start
  command: sleep 10s
  changed_when: false
  when: firefox_config_folder_search.matched == 0

# Kill Firefox
- name: Killing Firefox
  shell:
    cmd: kill $(pidof firefox)
  changed_when: false
  when: firefox_config_folder_search.matched == 0

# Search for the newly created profile directory
- name: Check Firefox config folder again
  find:
    paths: "{{ ansible_user_dir }}/.mozilla/firefox"
    patterns: '^.*\.default-release'
    use_regex: true
    file_type: directory
  register: firefox_config_folder_search

# Set a fact with the profile directory path ('false' if no profile was
# found, so the copy task below is skipped rather than erroring)
- name: Set firefox folder name as fact
  set_fact:
    firefox_config_folder_path: "{{ firefox_config_folder_search.files[0].path if firefox_config_folder_search.matched > 0 else 'false' }}"

# Add in the custom config
- name: Add in Firefox config
  copy:
    src: files/firefox/user.js
    dest: "{{ firefox_config_folder_path }}/user.js"
    owner: "{{ ansible_user_id }}"
    group: "{{ ansible_user_id }}"
    mode: 0644
  when: firefox_config_folder_path != 'false'

The play is idempotent so you can run it as many times as you want and it will continue to give you a nice green response.

I only ever use a single Firefox profile so I don't need to handle multiple configs, but the play could be extended to take this into account if you needed to.

I later found that some other software I use has this same config issue, so I extracted the process start/kill tasks into a separate file, start-kill-process.yml.

- name: Starting process {{ item }}
  shell:
    cmd: "{{ item }} &"
  changed_when: false

- name: Waiting for process {{ item }} to start
  command: sleep 10s
  changed_when: false

- name: Killing process {{ item }}
  shell:
    cmd: kill $(pidof {{ item }})
  changed_when: false

This can then be called with the following in your playbook:

- name: Start/Kill myprogram to generate config
  include_tasks: start-kill-process.yml
  loop:
    - myprogram


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

The RAID

Posted: 2023-01-08 21:15:00 by Alasdair Keyes



After my failed drive on New Year's Day, I ordered a new disk and rebuilt the array. Thankfully, due to the OS's monthly array checks, all the data on the three remaining drives was readable and the array is complete again.

I put the failed disk into another machine and ran the following badblocks command on it.

# -w: destructive write-mode test, -b 4096: use 4096-byte blocks,
# -o: log bad blocks to the given file, -s: show progress
badblocks -o sdb_badblocks.txt -b 4096 -w -s /dev/sdb

I used the destructive test as the data was not needed now that the array was back to full strength. Incidentally, using a block size of 4096 over the default 1024 seemed to provide about a 2x-3x speed increase.

Even with that, the 2TB disk took just over 33 hours for a full write pass and a confirmation read pass.
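
For the curious, that works out at roughly 34 MB/s on average: about 4TB of combined I/O (2TB written, then 2TB read back) over roughly 119,000 seconds.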

At the end of it, a full write pass and read pass completed with no errors reported. This is frustrating, as mdadm had clearly detected a read error when it rejected the disk - it was logged in syslog.

I thought that maybe the bad sectors had been remapped by the firmware during the badblocks test, but checking the SMART stats again I saw that no errors were reported and no re-allocation had been logged (ID# 5 below).

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b   100   100   016    Pre-fail  Always       -       0
  2 Throughput_Performance  0x0005   134   134   054    Pre-fail  Offline      -       103
  3 Spin_Up_Time            0x0007   168   168   024    Pre-fail  Always       -       342 (Average 311)
  4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       75
  5 Reallocated_Sector_Ct   0x0033   100   100   005    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000b   100   100   067    Pre-fail  Always       -       0
  8 Seek_Time_Performance   0x0005   146   146   020    Pre-fail  Offline      -       29
  9 Power_On_Hours          0x0012   086   086   000    Old_age   Always       -       99078
 10 Spin_Retry_Count        0x0013   100   100   060    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       75
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       989
193 Load_Cycle_Count        0x0012   100   100   000    Old_age   Always       -       989
194 Temperature_Celsius     0x0002   200   200   000    Old_age   Always       -       30 (Min/Max 16/44)
196 Reallocated_Event_Count 0x0032   100   100   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0022   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0008   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x000a   200   200   000    Old_age   Always       -       0

So I'm not sure why the error occurred. Maybe the controller is bad, the cable is dodgy, or my server got hit by some stray cosmic rays that caused some kind of CRC error (yes, it can and does happen). The server has ECC memory, so bitflips in the RAM should have been detected had they occurred.

Interestingly, this is the first failed disk I've had within a Linux mdadm array in over 20 years of running servers (I've had plenty of failed disks in Dell PERC controllers and whatever controllers Supermicro jam into their servers!). All previous arrays were torn down before a disk failed.

As such this was also the first time I've had to rebuild an array. This particular RAID was running for over 11 years before this disk failed. For those interested, I followed this post by Redhat about the steps to take https://www.redhat.com/sysadmin/raid-drive-mdadm.

Should something similar happen again, I think I would run badblocks in non-destructive mode on the disk in situ, then if it passed push it back into the array for it to be rebuilt before I looked at buying a new disk.
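
As a sketch, assuming the same device names as above (they would need adjusting to your setup), that would look something like:

# Non-destructive read-write test; preserves the existing data
badblocks -o sdb_badblocks.txt -b 4096 -n -s /dev/sdb

# If it passes, add the member back into the array and let it resync
mdadm --manage /dev/md/0 --re-add /dev/sdb1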


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Best Albums of 2022

Posted: 2023-01-02 14:03:49 by Alasdair Keyes



My most enjoyed albums of 2022...

  1. Tears for Fears - The Tipping Point
  2. Scorpions - Rock Believer
  3. Red Hot Chili Peppers - Return of the Dream Canteen
  4. KMFDM - Hyëna
  5. Crystal Method - The Trip Out

A special mention to Röyksopp's trilogy of albums, Profound Mysteries I/II/III, and a special note of disappointment for the new albums from Megadeth and Rammstein; both had a couple of good tracks, but were nowhere near as good as I'd expected.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Happy New Year.... It would be a shame if that drive failed.

Posted: 2023-01-01 10:49:52 by Alasdair Keyes



A delightful New Year's Day gift.

A Fail event had been detected on md device /dev/md/0.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Javascript Asynchronous coding

Posted: 2022-08-10 18:49:54 by Alasdair Keyes



Being a (primarily) back-end developer, my use of Javascript has been fairly limited: mostly jQuery, and some basic raw Javascript in the olden days for frontend validation, basic animation, alerts etc.

I've been writing a project in Quasar https://quasar.dev/ and delving further into the mysteries of async programming.

I was struggling to understand some documentation I found online. After some digging, it turns out that through various versions of ECMAScript, the concept of asynchronous programming has evolved and changed, giving you multiple ways of doing things with callbacks, promises and async/await.
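
As a rough sketch of that evolution (getUser and getUserPromise here are hypothetical helpers, not from any real library), the same operation can be written three ways:

// 1. Callback style - pass a function to be invoked when the result is ready
getUser(42, function (err, user) {
    if (err) return console.error(err);
    console.log(user.name);
});

// 2. Promise style - chain handlers onto the returned Promise
getUserPromise(42)
    .then((user) => console.log(user.name))
    .catch((err) => console.error(err));

// 3. async/await - syntactic sugar over promises that reads like synchronous code
async function showUser() {
    try {
        const user = await getUserPromise(42);
        console.log(user.name);
    } catch (err) {
        console.error(err);
    }
}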

I found this video on Youtube https://www.youtube.com/watch?v=PoRJizFvM7s which although only 24 minutes long gives a fantastic introduction to how each iteration of callbacks/promises/async/await has been developed, how they work and how to code against them.

It gives an intro with examples, and that was all I needed to get it straight in my head and make some real progress. It's a must-watch for anyone starting to get involved with Javascript beyond simple libraries like jQuery.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Setting remote IP on Laravel controller tests

Posted: 2022-08-07 12:45:15 by Alasdair Keyes



When building a Laravel website you might want to create allow/block lists based on a user's IP or on GeoIP information. This is easy enough using geoip2/geoip2 (https://packagist.org/packages/geoip2/geoip2), but how do you test that your code works correctly with specific IP addresses when writing your functional/integration tests?

At the beginning of a test that requires a custom IP you can add $this->serverVariables = ['REMOTE_ADDR' => '1.2.3.4']; and this will be what your controller sees in the Illuminate\Http\Request object.

public function testGeoIpFunctionality(): void
{
    $this->serverVariables = ['REMOTE_ADDR' => '1.2.3.4'];
    $response = $this->get('/website/endpoint');

    $response->assertStatus(200);
    ...
    // other assertions
    ...
}
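
For reference, inside the controller the spoofed address is what Laravel's $request->ip() helper returns; a minimal hypothetical sketch:

use Illuminate\Http\Request;

public function endpoint(Request $request)
{
    // With the test above, this returns '1.2.3.4'
    $ip = $request->ip();

    // ... pass $ip to your GeoIP allow/block logic
}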


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

"They" are watching... even quicker than before

Posted: 2022-05-06 20:45:08 by Alasdair Keyes



I wrote a blog post in 2019 about the website of a newly registered domain getting visited by a bot within 5 hours of the website coming online. You can read the article here - Security first, "they" are watching.

In short, I had surmised that the Certificate Transparency logs were being monitored to discover new sites so they could be scanned for vulnerabilities before an admin had a chance to harden the website.

I read an article today (https://portswigger.net/daily-swig/wordpress-sites-getting-hacked-within-seconds-of-tls-certificates-being-issued) which suggests this premonition has come to pass. Wordpress websites are apparently getting hacked 'within seconds' of their TLS certificates being issued.

It looks like the logs are being tailed and new sites are being visited much quicker than before... from 5 hours 3 years ago to less than a minute today.

I've steered clear of Wordpress for years now and often advise my clients to do the same. Although the usability and extensibility of Wordpress are fantastic, the scope for vulnerabilities in both plugins and the core code is too great to rely on. If you do run it, assess whether it really needs to be a public-facing site and, if it doesn't, add IP or Basic Authentication restrictions to your webserver config to restrict access to only those who need it.
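
If you're using nginx, a minimal sketch of that kind of restriction looks something like this (the IP address and file paths are placeholders):

# Allow a trusted IP, or require a username/password from everyone else
location / {
    satisfy any;
    allow 203.0.113.10;
    deny all;
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
}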


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Reducing dependencies and expanding Laravel Blade

Posted: 2022-05-05 08:05:06 by Alasdair Keyes



I've recently spent a couple of days moving my site to the latest version of Laravel. There were some problems upgrading through such a large number of major versions, which I'll likely cover in a later blog post.

Whilst I was re-adding my Composer dependencies I was looking at what I really needed. My blog posts are written in markdown and stored in the database; the Laravel template engine, Blade, converts them from markdown to HTML. For this task I was using the parsedown/laravel plugin (https://packagist.org/packages/parsedown/laravel), which uses erusev/parsedown (https://packagist.org/packages/erusev/parsedown) underneath to do the actual markdown processing.

I try to minimise dependencies used for two reasons,

  1. Reduced complexity in the codebase
  2. Reduced attack vectors either from attacks directly against my site or supply chain attacks through PHP's Composer system.

Whilst browsing through the Laravel docs I noticed that they have an inbuilt Str::markdown (https://laravel.com/docs/9.x/helpers#method-str-markdown) helper which might allow me to do the same thing. Under the hood it uses CommonMark from the PHP League (https://commonmark.thephpleague.com/).

I used a couple of custom options on Parsedown, which I needed to be sure worked with the Laravel version.

$parseDown = Parsedown::instance();
$parseDown->setUrlsLinked(false);
$parseDown->setMarkupEscaped(false);

setUrlsLinked: false means that URLs aren't automatically converted into <a href> links, and setMarkupEscaped: false means that I can include HTML markup in my blog posts if I desire.

After reading through the CommonMark docs, I found the equivalent options were allow_unsafe_links: true and html_input: 'allow' and I'd be set. Although these are the defaults for CommonMark, I want to declare them explicitly in case the defaults change in future.
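
As a quick sketch of the helper with those options (the markdown string is just an example):

use Illuminate\Support\Str;

$html = Str::markdown('# Markdown Title', [
    'allow_unsafe_links' => true,
    'html_input' => 'allow',
]);

// $html is now "<h1>Markdown Title</h1>\n"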

I only used markdown in my templates, and parsedown/laravel automatically adds a Blade directive, @parsedown("# Markdown Title"), which I made use of. So my first task was to create an equivalent Blade directive to process markdown in my templates; I decided on the name processMarkdown().

I created app/Providers/CustomBladeFunctionProvider.php.

<?php
  
declare(strict_types=1);

namespace App\Providers;

use Illuminate\Support\Facades\Blade;
use Illuminate\Support\ServiceProvider;
use Illuminate\Support\Str;

class CustomBladeFunctionProvider extends ServiceProvider
{
    /**
     * Bootstrap the application services.
     *
     * @return void
     */
    public function boot(): void
    {
        // Provide @processMarkdown
        Blade::directive('processMarkdown', function ($parameter) {
            return "<?= rtrim(Str::markdown($parameter, [ 'allow_unsafe_links' => true, 'html_input' => 'allow' ])); ?>";
        });
    }
}

Then I added this to my providers in config/app.php.

...
    'providers' => [
        ...
        App\Providers\CustomBladeFunctionProvider::class,
        ...
    ],
...

Then all I needed to do was update my templates from

@parsedown($blogPost->body)

to

@processMarkdown($blogPost->body)

And then remove the parsedown/laravel dependency to slim down my codebase.
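
Removing it is just the standard Composer command:

composer remove parsedown/laravel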


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz
