Stuart McCulloch Anderson


Blog

Back to Linux

I got into computing seriously back in 1996 when I got my first PC and stopped using the family machine. Like most computers it came pre-installed with Windows 95, and at the time I was happy. The machine worked and I could play some games and do some homework, but then a year or so later I was walking around a PC World in Glasgow and saw this bright blue box with a big red hat on the cover. I'd no idea what I'd found, or what it was about to do to my 12-year-old brain. At the time I just knew I had to have it because it looked cool. Somehow I'd found a retail box for Red Hat Linux 4.

Git Recovery

This is just something that is worth keeping to hand. I’ve been using git for all my code and scripts for years. Since I’m usually the only developer on any of my projects people tend to ask why.

Back to Jekyll

So it's another year and another change to my site. This time I'm moving from WordPress to Jekyll, again.

This isn't an ideological change. I still use WordPress on some other sites and recommend it to many clients. Instead I'm making the change for purely practical reasons. I no longer blog on-the-go, so I don't need the power and convenience of a live database-driven CMS, and most of my most popular content has been my in-depth breakdowns and how-to guides. Since most of these are written over days or weeks and aren't time sensitive, a slower publishing workflow doesn't cause me any harm.

A big advantage to the switch is hosting cost. WordPress is completely free. If you want to fire up a new blog you can do it in minutes for nothing, excluding hosting costs, and even those can be mitigated by using a service like WordPress.com. But there can be hidden costs. I ran a lot of plugins on my site - Jetpack and Wordfence being the biggest, but there were others too that supported ad rotation and my photo galleries. Using Jekyll I'm getting away from all of that, and even my hosting costs can come down since I won't need to run as powerful a server. Because of this I'm switching my blog to a completely ad-free site.

I will still use my Amazon affiliate links when I review or recommend a product - they don't change your price, but they do kick a couple of quid back to the site, which will help fund future reviews and guides - and I have my Patreon account.

If you want to help keep this site independent and ad-free you can support my work by using any of the options listed below each post. I'm setting up a Patreon page, like everyone else does now. If it takes off I'll start working on some member perks, but for now the only perk I can offer is the warm fuzzy feeling that comes from knowing you're helping me keep creating great content for this site.

The flip side of this redesign is content migration. I've been here before, rebuilding the site from the ground up, and spent hours pulling content from the old site, usually rewriting it in the new format, but this time I'm not going to. I've gone through the site and pulled across the articles which are still relevant or just popular, but anything else is being left behind. This is a new start and a new format, so I'm going to make full use of it. That said, I'll keep an eye on the 404 logs and if anything keeps cropping up I'll consider pulling that over too.

This pruning has really slashed the content available on the site, so I'm working on a content schedule: a plan setting out what comes next. Once it's started I'll publish it as a living document and keep updating it as things are completed or new ideas come to me.

Remove Git Submodule

To remove a submodule you need to:

  • Delete the relevant section from the .gitmodules file.
  • Stage the .gitmodules changes: git add .gitmodules
  • Delete the relevant section from .git/config.
  • Run git rm --cached path_to_submodule (no trailing slash).
  • Run rm -rf .git/modules/path_to_submodule (no trailing slash).
  • Commit: git commit -m "Removed submodule <name>"
  • Delete the now-untracked submodule files: rm -rf path_to_submodule
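The whole sequence can be run in one pass. Below is a sketch for a hypothetical submodule recorded at vendor/libfoo (the path is illustrative), using git config --remove-section to automate the two "delete the relevant section" steps; the demo first fakes the state a submodule leaves behind in a throwaway repo, in a real repo you'd run only the removal commands:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name you
# Fake the state 'git submodule add' leaves behind: a .gitmodules entry,
# a .git/config entry, and a gitlink (mode 160000) in the index.
printf '[submodule "vendor/libfoo"]\n\tpath = vendor/libfoo\n\turl = ../libfoo\n' > .gitmodules
git config submodule.vendor/libfoo.url ../libfoo
git update-index --add --cacheinfo 160000 aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa vendor/libfoo
mkdir -p vendor/libfoo .git/modules/vendor/libfoo
git add .gitmodules
git commit -qm "repo with submodule"

# --- the removal steps from the list above ---
git config -f .gitmodules --remove-section submodule.vendor/libfoo
git add .gitmodules
git config --remove-section submodule.vendor/libfoo
git rm -q --cached vendor/libfoo
rm -rf .git/modules/vendor/libfoo
git commit -qm "Removed submodule vendor/libfoo"
rm -rf vendor/libfoo
```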

Raspberry Pi Yearly Running Cost

I don't know about you, but I run a number of Raspberry Pis in my house, all doing different jobs. I've often heard it said how inexpensive a Pi is to run, but never how inexpensive, and I wanted some real-world figures.

After a little time with good old Google I came across a forum post by audigex from 2012, so I've used his calculations, just with updated figures.

In the same vein as audigex's original post I've taken both a worst-case and a more average look. A Raspberry Pi uses 5W maximum in theory, 5V x 1A = 5W, but it should never draw more than 700mA, which is only 3.5W.

I really had to search around, but the most expensive unit price I could find at present, January 2015, was £0.24 per kWh. I won't name and shame the company here, but believe me, if you're paying that much you will be hard pressed not to beat it!

Worst Case
Raspberry Pi Power (Watts)   5 W
Hours to use 1 kWh           200 h        = 1000 / 5
Hours in a year              8765.81 h
kWh used per year            43.83 kWh    = 8765.81 / 200
Cost per kWh                 £0.24
Yearly Running Cost          £10.52       = 43.83 × 0.24

For a more realistic look I downgraded the total watt usage to 3.5W, as discussed above, and took the average unit cost straight from the Department of Energy & Climate Change's Quarterly Energy Prices, published on the UK Government website on 18th December 2014. According to official Government statistics the average cost for a kWh unit is £0.15; personally I'm paying slightly less than the average, but the figure is still a valid one for this analysis.

Realistic Values
Raspberry Pi Power (Watts)   3.5 W
Hours to use 1 kWh           286 h        ≈ 1000 / 3.5
Hours in a year              8765.81 h
kWh used per year            30.68 kWh    = 8765.81 × 3.5 / 1000
Cost per kWh                 £0.15
Yearly Running Cost          £4.60        = 30.68 × 0.15

So, based on what I freely admit is back-of-the-napkin maths, a Raspberry Pi costs between £4.60 and £10.52 per year to run. Obviously this may be slightly higher if you are also running a USB hub or any external storage.
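For anyone who wants to plug in their own numbers, the whole calculation fits in a few lines of shell; the figures below are the "realistic" ones from the table above, so swap in your own wattage and tariff:

```shell
#!/bin/sh
# Back-of-the-napkin yearly running cost for an always-on device.
watts=3.5             # average power draw
pounds_per_kwh=0.15   # your unit price in GBP
hours_per_year=8765.81
awk -v w="$watts" -v p="$pounds_per_kwh" -v h="$hours_per_year" 'BEGIN {
    kwh = w * h / 1000   # kWh consumed per year
    printf "%.2f kWh/year at £%.2f/kWh = £%.2f/year\n", kwh, p, kwh * p
}'
# prints: 30.68 kWh/year at £0.15/kWh = £4.60/year
```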

I hope this is of use to someone else. If you have noticed any flaws in my calculations please let me know in the comments below.

A cure for Cron's chronic email problem

Anyone who has set up a backup system on their Linux machine - and I hope you all have - will be well aware of the problems when running commands from crontab. You will be inundated with emails every time cron runs, and with so many emails it's easy to reach the point where you just stop reading them, so you never notice that Friday night when the backups stopped due to some error and never ran correctly again.

One solution most of us will be familiar with is simply to direct all command output to /dev/null: 15 01 * * * backup_my_pc >/dev/null 2>&1. But this means we won't get any feedback - whether the backup ran correctly or not!

After a little time spent with Google I found a program called Cronic. It acts as a wrapper script within the cron shell. So now instead of having 15 01 * * * backup_my_pc as your crontab command you use 15 01 * * * cronic backup_my_pc. Cronic then runs your command and handles all its output. If the command fails, the full output is printed to the shell, so cron sends it as an email; if no error occurs, all output is hidden and no email is sent. A perfect solution.
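To show the idea, here's a minimal sketch of a Cronic-style wrapper. This is not the real Cronic script, just the core trick: swallow output on success, print it all on failure, so cron only sends email when something actually broke.

```shell
#!/bin/sh
# cronic_sketch: run a command, only surfacing its output on failure.
cronic_sketch() {
    tmp=$(mktemp)
    "$@" >"$tmp" 2>&1
    status=$?
    if [ "$status" -ne 0 ]; then
        echo "command failed (exit $status): $*"
        cat "$tmp"
    fi
    rm -f "$tmp"
    return "$status"
}

cronic_sketch true                                       # success: silent
cronic_sketch sh -c 'echo disk full; exit 1' || true     # failure: output printed
```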

Installation

The best way to install Cronic is simply to download the shell script from the project website. Copy the download somewhere in your PATH; /usr/bin will usually be fine. Then just start updating your crontab rules.

agedu: Clean up wasted space in Linux

After using a computer for long enough we will all eventually run out of hard drive space, it really is just a fact of modern computing. The big question is what do you do next?

In some situations a quick trip to buy a bigger drive is required, but in many cases space can easily be reclaimed by removing the gunk that's accumulated. But how do you determine what's junk? Linux has the du command, which will recursively search a directory and list all files and their sizes, but it still comes down to you to determine what should be kept and what should be removed.

In comes agedu (pronounced "age dee you"). Like du, this tool searches all directories and lists file sizes, but it can also differentiate between files that are still in use and ones that haven't been accessed in a long time.

From the man pages:

agedu scans a directory tree and produces reports about how much disk space is used in each directory and sub-directory, and also how that usage of disk space corresponds to files with last-access times a long time ago.

In other words, agedu is a tool you might use to help you free up disk space. It lets you see which directories are taking up the most space, as du does; but unlike du, it also distinguishes between large collections of data which are still in use and ones which have not been accessed in months or years - for instance, large archives downloaded, unpacked, used once, and never cleaned up. Where du helps you find what’s using your disk space, agedu helps you find what’s wasting your disk space.

agedu has several operating modes. In one mode, it scans your disk and builds an index file containing a data structure which allows it to efficiently retrieve any information it might need. Typically, you would use it in this mode first, and then run it in one of a number of `query’ modes to display a report of the disk space usage of a particular directory and its sub-directories. Those reports can be produced as plain text (much like du) or as HTML. agedu can even run as a miniature web server, presenting each directory’s HTML report with hyperlinks to let you navigate around the file system to similar reports for other directories.

So, the install

Fedora 18, 19, 20 & 21

sudo yum install agedu
Loaded plugins: langpacks
Resolving Dependencies
--> Running transaction check
---> Package agedu.x86_64 0:0-8.r9153.fc21 will be installed
--> Finished Dependency Resolution

Dependencies Resolved

========================================================================================
 Package        Arch            Version                      Repository            Size
========================================================================================
Installing:
 agedu          x86_64          0-8.r9153.fc21               fedora                55 k

Transaction Summary
========================================================================================
Install  1 Package

Total download size: 55 k
Installed size: 88 k
Is this ok [y/d/N]: y
Downloading packages:
agedu-0-8.r9153.fc21.x86_64.rpm                                       |  55 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction (shutdown inhibited)
  Installing : agedu-0-8.r9153.fc21.x86_64                                           1/1
  Verifying  : agedu-0-8.r9153.fc21.x86_64                                           1/1

Installed:
  agedu.x86_64 0:0-8.r9153.fc21

Complete!

Ubuntu/Debian

sudo apt-get install agedu

Basic Usage

The first step is to let agedu scan a directory; below I've just scanned my Downloads folder:

agedu -s ./Downloads/
Built pathname index, 1032 entries, 96364 bytes of index
Faking directory atimes
Building index
Final index file size = 190304 bytes

To access the report you need to run agedu's built-in web server:

agedu -w
Using Linux /proc/net magic authentication
URL: http://127.0.0.1:43051/

Now just fire up your browser and go to the URL stated.

Conclusion

There are other options available, such as the --exclude and --include arguments, which let you control which files are indexed. For example, if you wanted to see which ISOs were taking up the most space you'd use: agedu -s ./ --exclude '*' --include '*.iso'

This post was designed to give you a quick overview of agedu, so I have only touched on the options available. Check out the man pages or read through the developer's website for more details.

Revert to a previous Git commit

I make heavy use of git for all my software development. When asked what the point is for a one-man development team to use something as powerful as git, I always reply "universal undo".

With a recent update to the site I finally got the chance to use it the way I'd always expected to, and it worked perfectly, but the correct process was harder to find than I expected. So here is how I was able to revert my master branch after committing some bad code:

Reverting Working Copy to Most Recent Commit

To revert all uncommitted changes back to the previous commit, run git reset --hard HEAD, where HEAD is the last commit in your current branch.

Reverting Working Copy to an Older Commit

This is a somewhat controversial step, but it was what I needed and the only thing I could find that would work. The better option is to avoid a hard reset if other people have copies of the old commits, because a hard reset like this will force them to resynchronize their work with the newly reset branch. This isn't a problem for me, but it is worth mentioning in case it would be for you.

To revert back to an already committed change:

    # Reset the index to the former commit; replace 'ad957a69' with your commit hash
    git reset ad957a69

    # Move the branch pointer back to the previous HEAD
    git reset --soft HEAD@{1}
    git commit -m "Revert to ad957a69"

    # Update the working copy to reflect the new commit
    git reset --hard
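If other people do have copies of your commits, a history-preserving alternative (not what I used, but worth knowing) is git revert, which undoes the bad commits with a single new commit instead of rewriting history. A throwaway demo, with a "good" and a "bad" commit standing in for the real history:

```shell
#!/bin/sh
set -e
cd "$(mktemp -d)"
git init -q
git config user.email you@example.com
git config user.name you
echo good > file.txt; git add file.txt; git commit -qm "good code"
echo bad > file.txt; git commit -qam "bad code"

# Revert every commit after the good one with one new commit,
# leaving the existing history untouched (safe for shared branches).
good=$(git rev-list --max-parents=0 HEAD)   # stands in for a hash like 'ad957a69'
git revert --no-commit "$good"..HEAD
git commit -qm "Revert to $good"
cat file.txt   # back to 'good'
```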
