LXC

Posted: 2015-08-10 16:30:26 by Alasdair Keyes

Direct Link | RSS feed


Containerisation seems to be taking over the Open Source world at the moment; the increased uptake of systems like Docker is enabling sysadmins and developers to perform rapid app deployment with increased portability.

Parallels proved the power of containerisation with their OpenVZ and Virtuozzo platforms, showing that with the lower overheads you could run two to three times as many containers as VMs on a given piece of hardware. Containers aren't quite as isolated as VMs, but for most use cases they do the job.

I had cause today to require a number of machines on which I could test some client-server code at scale. Creating that many VMs from scratch would be a daunting (not to mention resource-intensive) task, so I decided to give LXC a whirl. I'd been aware of LXC and what it can do for some time, but I'd never tried it, and I found quite a few of the articles out there were lacking on setup, so I thought I'd document my findings so others don't have to find out the hard way.

This was installed on my Linux Mint desktop. The one package that a lot of guides miss out is the templates; templates are build scripts that build up containers for you.

sudo apt-get install lxc lxc-templates

If you look at ifconfig on the host, you'll see a new bridge interface created for your containers to connect to. DHCP on the 10.0.3.0/24 range is provided by dnsmasq, giving your containers access to the same network as your host. It will also allow access to the internet through the host machine.

$ ifconfig lxcbr0
lxcbr0    Link encap:Ethernet  HWaddr fe:c6:fc:75:66:ae  
          inet addr:10.0.3.1  Bcast:10.0.3.255  Mask:255.255.255.0
          inet6 addr: fe80::6482:4aff:fea8:407f/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3863 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5951 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:362523 (362.5 KB)  TX bytes:6926072 (6.9 MB)
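The bridge and DHCP range are configurable. On Ubuntu/Mint-style systems the settings typically live in /etc/default/lxc-net — treat this as a sketch of the usual defaults, as variable names can differ between LXC versions:

```shell
# /etc/default/lxc-net -- typical defaults (sketch; names vary by LXC version)
USE_LXC_BRIDGE="true"          # create lxcbr0 when the lxc-net service starts
LXC_BRIDGE="lxcbr0"            # bridge device name
LXC_ADDR="10.0.3.1"            # host-side address on the bridge
LXC_NETMASK="255.255.255.0"
LXC_NETWORK="10.0.3.0/24"      # the range dnsmasq hands out from
LXC_DHCP_RANGE="10.0.3.2,10.0.3.254"
LXC_DHCP_MAX="253"
```

Restart the lxc-net service after changing these so dnsmasq picks up the new range.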

If you're looking to run Debian-based containers you'll need to install debootstrap

sudo apt-get install debootstrap

and if you're wanting CentOS, you'll need yum

sudo apt-get install yum

Once this is done you have everything you need.

First see what templates are available

# ls /usr/share/lxc/templates/
lxc-alpine     lxc-busybox  lxc-debian    lxc-gentoo        lxc-oracle  lxc-ubuntu
lxc-altlinux   lxc-centos   lxc-download  lxc-openmandriva  lxc-plamo   lxc-ubuntu-cloud
lxc-archlinux  lxc-cirros   lxc-fedora    lxc-opensuse      lxc-sshd

The template names are the files shown with the lxc- prefix removed. First I'll build a CentOS box

# lxc-create -t centos -n lxc-centos-2
Host CPE ID from /etc/os-release: 
This is not a CentOS or Redhat host and release is missing, defaulting to 6 use -R|--release to specify release
Checking cache download in /var/cache/lxc/centos/x86_64/6/rootfs ... 
Cache found. Updating...
Loaded plugins: fastestmirror
Setting up Update Process
base                                                                                    | 3.7 kB     00:00     
base/primary_db                                                                         | 4.6 MB     00:02     
extras                                                                                  | 3.4 kB     00:00     
extras/primary_db                                                                       |  26 kB     00:00     
updates                                                                                 | 3.4 kB     00:00     
updates/primary_db                                                                      | 749 kB     00:00     
No Packages marked for Update
Loaded plugins: fastestmirror
Cleaning repos: base extras updates
0 package files removed
Update finished
Copy /var/cache/lxc/centos/x86_64/6/rootfs to /var/lib/lxc/lxc-centos-2/rootfs ... 
Copying rootfs to /var/lib/lxc/lxc-centos-2/rootfs ...
sed: can't read /etc/init/tty.conf: No such file or directory
Storing root password in '/var/lib/lxc/lxc-centos-2/tmp_root_pass'
Expiring password for user root.
passwd: Success

Container rootfs and config have been created.
Edit the config file to check/enable networking setup.

The temporary root password is stored in:

        '/var/lib/lxc/lxc-centos-2/tmp_root_pass'


The root password is set up as expired and will require it to be changed
at first login, which you should do as soon as possible.  If you lose the
root password or wish to change it without starting the container, you
can change it from the host by running the following command (which will
also reset the expired flag):

        chroot /var/lib/lxc/lxc-centos-2/rootfs passwd

The example above shows quite short output; when you run it for the first time you will get much more, as LXC grabs all the files it needs from the CentOS repository.

Now just run it

 lxc-start -n lxc-centos-2
CentOS release 6.7 (Final)
Kernel 3.19.0-25-generic on an x86_64

lxc-centos-2 login: init: rcS main process (7) killed by TERM signal
Entering non-interactive startup
iptables: No config file.                                  [WARNING]
Bringing up loopback interface:                            [  OK  ]
Bringing up interface eth0:  
Determining IP information for eth0... done.
                                                           [  OK  ]
Starting system logger:                                    [  OK  ]
awk: cmd. line:1: fatal: cannot open file `/etc/mtab' for reading (No such file or directory)
Mounting filesystems:                                      [  OK  ]
Generating SSH2 RSA host key:                              [  OK  ]
Generating SSH1 RSA host key:                              [  OK  ]
Generating SSH2 DSA host key:                              [  OK  ]
Starting sshd:                                             [  OK  ]


CentOS release 6.7 (Final)
Kernel 3.19.0-25-generic on an x86_64

lxc-centos-2 login: 
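Starting in the foreground ties up your terminal with the container's console. In practice you'll usually daemonise it and attach a console or run commands as needed — a sketch using the standard LXC tools (flags as in LXC 1.x):

```shell
# start detached, then attach a console (Ctrl+a q should detach again)
lxc-start -d -n lxc-centos-2
lxc-console -n lxc-centos-2

# or run a one-off command inside the container from the host
lxc-attach -n lxc-centos-2 -- ip addr show eth0
```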

There you are. You can see all your containers with

# lxc-ls
lxc-centos-1  lxc-centos-2  
# lxc-info -n lxc-centos-2
Name:           lxc-centos-2
State:          RUNNING
PID:            21055
IP:             10.0.3.201
CPU use:        1.06 seconds
BlkIO use:      56.00 KiB
Memory use:     2.85 MiB
KMem use:       0 bytes
Link:           veth3GI7HY
 TX bytes:      1.42 KiB
 RX bytes:      5.25 KiB
 Total bytes:   6.67 KiB
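Both lxc-ls and lxc-info are script-friendly, so — assuming the -i (IPs only) and -H (unformatted output) flags of LXC 1.x — you can pull every container's address in one loop:

```shell
# print "name ip" for every container (LXC 1.x -i/-H flags assumed)
for c in $(lxc-ls); do
    echo "$c $(lxc-info -n "$c" -iH)"
done
```

Handy when you want to feed all the container IPs into a test harness or an ansible inventory.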

You can see the network interface for the container on your host

# ifconfig veth3GI7HY
veth3GI7HY Link encap:Ethernet  HWaddr fe:c6:2d:53:f1:d8  
          inet6 addr: fe80::fcc6:2dff:fe53:f1d8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:13 errors:0 dropped:0 overruns:0 frame:0
          TX packets:36 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1458 (1.4 KB)  TX bytes:5607 (5.6 KB)

You can see how lightweight these containers are in disk usage

# du -hs /var/lib/lxc/lxc-centos-2/
385M    /var/lib/lxc/lxc-centos-2/

To test its speed, let's create and start 11 containers

# time for NUM in `seq 10 20`; do lxc-create -t centos -n lxc-centos-$NUM; lxc-start -d -n lxc-centos-$NUM; done
...
...

real    0m52.756s
user    0m36.676s
sys     0m16.356s

52 seconds to create and start 11 containers

# lxc-ls
lxc-centos-10  lxc-centos-11  lxc-centos-12  lxc-centos-13  lxc-centos-14  lxc-centos-15  lxc-centos-16  lxc-centos-17  lxc-centos-18  lxc-centos-19  lxc-centos-20  

Don't need them anymore? Let's just get rid of them.

# time for NUM in `seq 10 20`; do lxc-destroy -f -n lxc-centos-$NUM; done

real    0m3.815s
user    0m0.176s
sys     0m2.744s

Each container runs SSH, so you can treat it as just another server when it comes to management. For development and for use as lightweight systems, LXC really is the way forward.


If you found this useful, please feel free to donate via bitcoin to 1NT2ErDzLDBPB8CDLk6j1qUdT6FmxkMmNz

Critical Firefox Vulnerability

Posted: 2015-08-07 12:14:43 by Alasdair Keyes



If you're not aware, a critical Firefox vulnerability has been found being exploited in the wild.

https://blog.mozilla.org/security/2015/08/06/firefox-exploit-found-in-the-wild/

It can affect Windows, Linux and Mac OS X (although Mac hasn't been actively exploited) and allows a remote attacker to read any files you have permission to access on your local machine.

It's pretty shocking, and if you use Firefox you should update straight away. I mean, don't even finish this article. Upgrade it and then come back. There are no details on how long this exploit has been in the wild, so potentially a lot of personal data has been hoovered up.



Finding subdomain IPs

Posted: 2015-07-31 16:58:57 by Alasdair Keyes



While analysing my web logs, I noticed my site getting crawled by a server/hostname that I'd previously received spam from; they gathered site data and then sent spam based on what they'd found.

I thought it was worth stopping this, but from what I could see the site was being scanned from more than one subdomain. I could block just the hostnames that had accessed my site, but I thought it was worth taking a more proactive stance.

https://gitlab.com/snippets/1731307/raw

I wrote the attached script to try and find all subdomains so I could block the IPs.

Obviously, if AXFR zone transfer is enabled for the domain, that's the way to get the information, but most nameservers have that disabled.

The script uses the dig tool, provided by the bind-utils package on Red Hat-based distros or dnsutils on Debian-based ones.

A quick breakdown of its use, with google.com as a test...

$ ./find_subdomain.sh google.com
admin.google.com.   96  IN  A      74.125.195.113
admin.google.com.   43  IN  AAAA   2a00:1450:400c:c01::64
api.google.com.     43  IN  CNAME  api.l.google.com.
api.l.google.com.   96  IN  A      74.125.195.105
...
www.google.com.     96  IN  A      74.125.195.147
www.google.com.     43  IN  AAAA   2a00:1450:400c:c01::63
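Each line is in standard dig answer format — name, TTL, class, type, data — so extracting just the addresses (roughly what I assume the script's -i switch does internally) is a one-liner with awk:

```shell
# keep only the data field of A/AAAA records from dig-style answer lines
awk '$3 == "IN" && ($4 == "A" || $4 == "AAAA") { print $5 }' <<'EOF'
admin.google.com. 96 IN A 74.125.195.113
admin.google.com. 43 IN AAAA 2a00:1450:400c:c01::64
api.google.com. 43 IN CNAME api.l.google.com.
EOF
# prints the two addresses; the CNAME line is skipped
```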

Show just IPs, not CNAME entries

$ ./find_subdomain.sh google.com -a
admin.google.com.   96  IN  A      74.125.195.113
admin.google.com.   43  IN  AAAA   2a00:1450:400c:c01::64
api.l.google.com.   96  IN  A      74.125.195.105
...
www.google.com.     96  IN  A      74.125.195.147
www.google.com.     43  IN  AAAA   2a00:1450:400c:c01::63

Get just the IPs

$ ./find_subdomain.sh google.com -a -i
74.125.195.113
2a00:1450:400c:c01::64
74.125.195.105
...
74.125.195.147
2a00:1450:400c:c01::63

Get just IPv4 or IPv6 with the -4 and -6 switches. It will output duplicates if subdomains share an IP, so filtering through sort -u is useful, as is piping into xargs to build up iptables rules or similar.

$ ./find_subdomain.sh google.com -i -a -4 | sort -u | xargs -i echo iptables -A INPUT -s '{}' -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -s 108.170.217.164 -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -s 108.170.217.165 -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -s 108.170.217.166 -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
...
iptables -A INPUT -s 64.9.224.68 -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -s 64.9.224.69 -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
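The sort -u | xargs stage is easy to test on its own with placeholder addresses — echoing the iptables command rather than executing it keeps it a dry run, and duplicates collapse to one rule each:

```shell
# dedupe sample IPs and emit one rule per unique address (echo = dry run)
printf '%s\n' 10.0.0.1 10.0.0.2 10.0.0.1 \
  | sort -u \
  | xargs -I '{}' echo iptables -A INPUT -s '{}' -j DROP
# prints two rules: one for 10.0.0.1, one for 10.0.0.2
```

(-I is the modern spelling of the deprecated GNU xargs -i used above; both substitute {} into the command.)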



CentOS 6.x High CPU usage in power_saving threads

Posted: 2015-07-29 09:02:08 by Alasdair Keyes



I was awoken at 3am this morning by a backup server complaining about high CPU... because these things never happen at a nice friendly time like 3pm Monday to Friday.

It looked like the kernel's power-saving threads were using a lot of CPU on this CentOS 6.6 box.

# top

top - 08:40:21 up 84 days, 22:46,  1 user,  load average: 7.49, 7.51, 7.60
Tasks: 269 total,  14 running, 255 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us, 72.8%sy,  0.0%ni, 27.2%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:  16249656k total,  2427188k used, 13822468k free,   185756k buffers
Swap:  4194300k total,        0k used,  4194300k free,  1523072k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND                                                                                                                                              
 2089 root      -2   0     0    0    0 R 100.0  0.0 359:08.11 power_saving/4                                                                                                                                      
 2086 root      -2   0     0    0    0 R 97.4  0.0 358:49.54 power_saving/1                                                                                                                                       
 2088 root      -2   0     0    0    0 R 97.4  0.0 358:55.72 power_saving/3                                                                                                                                       
 2085 root      -2   0     0    0    0 R 95.7  0.0 358:42.58 power_saving/0                                                                                                                                       
 2087 root      -2   0     0    0    0 R 95.7  0.0 359:00.27 power_saving/2                                                                                                                                       
 2090 root      -2   0     0    0    0 R 95.7  0.0 359:10.27 power_saving/5          

Stopping the acpid service didn't seem to help; the only thing that resolved the issue was unloading the acpi_pad module from the kernel, at which point the power_saving threads were removed and the load dropped again.

# service acpid stop
Stopping acpi daemon:                                      [  OK  ]
# lsmod | grep acpi_pad
acpi_pad               87985  0 
# rmmod acpi_pad
# lsmod | grep acpi_pad
#
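rmmod only lasts until the next boot, so to stop acpi_pad loading again you can blacklist it via modprobe.d. A sketch — CONF_DIR defaults to the real path, and can be overridden to somewhere harmless for a dry run:

```shell
# persist the fix: prevent acpi_pad loading at boot
CONF_DIR="${CONF_DIR:-/etc/modprobe.d}"
echo "blacklist acpi_pad" > "$CONF_DIR/blacklist-acpi_pad.conf"
```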

There is a Red Hat advisory for a similar issue which indicates it only affects CentOS 6.2, but it appears the problem has continued into later revisions of the Red Hat kernel.



akeyes.co.uk - A+ on SSLLabs

Posted: 2015-07-06 15:55:23 by Alasdair Keyes



SSL Labs SSL Test now scores https://akeyes.co.uk an A+ - See here



Jenkins repository errors with SSH and git

Posted: 2015-07-06 11:04:23 by Alasdair Keyes



I've been experimenting with Jenkins continuous integration (CI) suite.

CI seems to be getting widely adopted in businesses so I thought it would be good to familiarise myself with it.

I created a git repo with SSH/key pair access and created a new project in Jenkins. In the project configuration, I was able to connect to the repository with the following URL

ssh://git@X.X.X.X:/home/git/repo/test1.git/

But all my builds were failing with the following message...

Started by user anonymous
Building in workspace /var/lib/jenkins/jobs/test1/workspace
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url ssh:///git@X.X.X.X:/home/git/repo/test1.git/ # timeout=10
Fetching upstream changes from ssh:///git@X.X.X.X:/home/git/repo/test1.git/
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress ssh:///git@X.X.X.X:/home/git/repo/test1.git/ +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from ssh:///git@X.X.X.X:/home/git/repo/test1.git/
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:735)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:983)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1016)
at hudson.scm.SCM.checkout(SCM.java:485)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1282)
at hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:610)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:532)
at hudson.model.Run.execute(Run.java:1741)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:98)
at hudson.model.Executor.run(Executor.java:381)
Caused by: hudson.plugins.git.GitException: Command "git -c core.askpass=true fetch --tags --progress ssh:///git@X.X.X.X:/home/git/repo/test1.git/ +refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: ssh: Could not resolve hostname : Name or service not known
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1591)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandWithCredentials(CliGitAPIImpl.java:1379)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl.access$300(CliGitAPIImpl.java:86)
at org.jenkinsci.plugins.gitclient.CliGitAPIImpl$1.execute(CliGitAPIImpl.java:324)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:733)
... 11 more
ERROR: Error fetching remote repo 'origin'
Finished: FAILURE

Eventually, I noticed that the URL being connected to during the build had an extra / in the ssh protocol definition.

It turns out that the git plugin does some escaping and ends up adding the extra /. Although the Git URL is perfectly valid, it is a sloppy URL, as the SSH port is not explicitly defined. Change it to

ssh://git@X.X.X.X:22/home/git/repo/test1.git/

And it worked with no issues, proceeding to run the build/test scripts.
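If you have several jobs configured with the port-less form, the rewrite is mechanical — insert an explicit :22 after the host. A sed sketch (adjust the regex if your URLs already carry a port):

```shell
# add an explicit :22 to ssh:// URLs with a bare colon before the path
echo 'ssh://git@X.X.X.X:/home/git/repo/test1.git/' \
  | sed -E 's#^(ssh://[^/:]+):/#\1:22/#'
# -> ssh://git@X.X.X.X:22/home/git/repo/test1.git/
```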



imapsync

Posted: 2014-07-25 10:46:27 by Alasdair Keyes



I'm in the middle of migrating systems at the moment, part of which involves migrating mailboxes between separate systems. The source system uses Maildir-format mailboxes and the destination uses Cyrus, so I'm unable to just copy the files across the filesystem.

Both have IMAP access, so I can use the fantastic imapsync utility (http://imapsync.lamiral.info/) to help. I hadn't come across it before, but if you want a basic sync of mailboxes, the following will do what you want. Obviously, use IMAPS, as there's no reason not to.

imapsync \
    --host1 srchost.com --port1 993 --user1 srcuser --password1 pass1 \
    --host2 dsthost.com --port2 993 --user2 dstuser --password2 pass2 \
    --pidfile '/tmp/email@domain.com-imapsync.pid' \
    --nofoldersizes \
    --noreleasecheck \
    --subscribe_all \
    --delete2 \
    --ssl1 \
    --ssl2 \
    --addheader
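For a bulk migration you'd typically drive it from a list of mailboxes. A sketch — mailboxes.csv and its column layout are my invention, and the echo keeps it a dry run (drop it to sync for real):

```shell
# mailboxes.csv: one "srcuser,srcpass,dstuser,dstpass" line per mailbox (hypothetical file)
while IFS=, read -r user1 pass1 user2 pass2; do
    echo imapsync --host1 srchost.com --port1 993 --user1 "$user1" --password1 "$pass1" \
                  --host2 dsthost.com --port2 993 --user2 "$user2" --password2 "$pass2" \
                  --ssl1 --ssl2 --addheader
done < mailboxes.csv
```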

If you ever need to perform any kind of migration, this will make your life a lot easier.



Sysadmin Appreciation Day

Posted: 2014-07-17 10:49:38 by Alasdair Keyes



The day will be here soon, appreciate your sysadmins for the hard work they do to make sure you can keep doing your work.

http://sysadminday.com/



JSON Resume

Posted: 2014-07-07 17:20:41 by Alasdair Keyes



Viewing Hacker News today, I saw a new project called JSON Resume.

JSON Resume is a community-driven open-source initiative to create a JSON-based standard for résumés. This is something I'd never thought about before, but it is in drastic need of standardisation.

Before the days of online recruitment and digital CVs, a non-standard CV was a good thing. It let you stand out from the crowd in a big pile of paper, drawing the eye of the recruiter.

Nowadays the opposite is almost true: you want your CV online and searchable by as many people as possible, and a complex or fancy CV might do you more harm than good. Many agencies will auto-convert Word documents or PDF files into text that they can send out to prospective employers, but this would work even better if there were an overall standard format that could be used by everyone and easily searched by recruiters and employers.

It has the benefit that the style can be separated from the content. Similar to the idea of CSS and HTML.

The project is in its infancy and the specification is still a work in progress that will most likely change in the near future, but I think it's worth supporting. To that end, please see my CV in JSON format.



.uk Released

Posted: 2014-06-11 13:24:29 by Alasdair Keyes



Over the past few months many more TLDs have been released (such as .wtf, .ninja, etc.) and today the new UK TLD .uk was released.

This seems to have taken off in a big way, with much more interest from customers than I'd expected. So make sure you buy your .uk domain!

If you own the .co.uk, the .uk version has been reserved for you for five years. If you don't know where to register your domains, try Daily.co.uk


