Up-cycling an old server?

For many years I've had an old RM server sitting under my bench, powered off, gathering dust inside and out. But no more – it's time for it to either do something or be recycled. So, as a classic hoarder, I'm going to upcycle it rather than let a really good case go to waste.

RM Server Case

So the case is an “RM systembase TX” by Research Machines, which is actually a rebadged “Antec SX1040BII Performance Series II SOHO File Server”. Originally the spec would have been something like single or dual sub-1GHz processors, and I can tell you that the RAM was PC6700 – all 256MB of it (RM used to include that on the sticker on the back, which remains today). That would also have been top spec at the time. I vaguely remember there also being a PCI SCSI card for the 18GB hard drives and the tape drive. The case actually supports EATX motherboards and had two 3-bay removable hard-drive caddies (one bay used by the mandatory floppy disk drive – LOL), as well as four 5¼in drive bays, and it has three (remaining) case fans too.

But, in the past I had already “upgraded” the system. So out with that rubbish… I removed:

  • MicroATX mobo with,
  • P4 2GHz,
  • 2*1GB RAM,
  • IDE hard drive,
  • IDE CD Drive (left in place but not connected),
  • 3 random PCI ‘fast’ ethernet cards

and I had replaced the PSU with something nice too (a 500W OCTIGEN jobby).

old mobo

removable hard drive cage

and in with the new…. well, second-hand, but new to me:

  • HP xw4600 ATX motherboard,
  • 2GB of PC2 6400 RAM,
  • Intel Core2Duo processor E6550,
  • brand new Arctic Alpine 11 CPU cooler (with some light filing mod),
  • 3 old SATA hard drives:
    • 500GB Seagate,
    • 1TB Toshiba,
    • 1TB Western Digital Black,
  • EVGA GEFORCE GTX 750Ti (kindly donated to me for folding purposes).

Awesome – so I start some CPU folding using my .deb package from a few posts ago, and then I also install lm-sensors:

sudo apt-get install lm-sensors 
sudo sensors-detect

Which gives me readings of 29°C for the GPU and 55/56°C for the two CPU cores.
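
If you just want the numbers, the sensors output can be filtered with awk. The sample text below stands in for real `sensors` output, whose exact layout varies by chip driver:

```shell
#!/bin/sh
# Filter the core temperatures out of lm-sensors output.
# $sample stands in for `sensors` output; the real layout varies by driver.
sample='coretemp-isa-0000
Core 0:       +55.0°C  (high = +78.0°C)
Core 1:       +56.0°C  (high = +78.0°C)'

echo "$sample" | awk '/^Core/ {
  gsub(/[^0-9.]/, "", $3)   # strip the +, degree sign and C, leaving the number
  print "core " $2 " " $3
}'
```

On the real machine, swap the sample for the command itself: `sensors | awk '/^Core/ …'`.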

loaded CPU temps

So now to install the correct graphics drivers and try again with GPU folding (the 367.57 driver, which now seems to just work from the off – or maybe I just didn’t configure it correctly last time??? Probably). After a restart the sensors command also returns more than just the CPU temps, but now the graphics is missing…. typical. So here’s the sensors data with the NVIDIA X Server Settings showing the temps.

the graphics and sensors data

Right, best leave this for a while to make sure that the CPU heatsink is installed and dissipating correctly, then I’ll start the good work on the machine. Time for dinner!

Installing a GTX 750Ti for Folding at Home on Ubuntu

So first things first, update the OS to 16.04 LTS.

Second, install the graphics driver using the Synaptic package manager.

Third, install FAH, and then connect to it…..

Folding @ Home error

So back to 14.04.4…. YAY, a working folding machine.

Folding @ home back on 14.04.4

So what about power consumption?

So my M600 blades at full pelt with their 8 cores can do ~23,000PPD but use around ~360W (without the chassis).
My Core 2 Quad above, without the graphics card, at full pelt can do ~6,500PPD and uses ~110W.
The same machine, but now with the GTX 750Ti at full pelt on the graphics and 3 cores on the CPU: ~38,000PPD using ~150W.

So to compare efficiencies (based on a 24hr period):
the M600 server blade: 23,000 / 8.64kWh ≈ 2,660 points/kWh,
the Core 2 Quad: 6,500 / 2.64kWh ≈ 2,460 points/kWh,
the Core 2 Quad with 750Ti: 38,000 / 3.6kWh ≈ 10,560 points/kWh.

So the Core 2 Quad with the 750Ti clearly wins and would generate around 13,870,000 points/year, costing about £130 a year to run (at roughly 10p/kWh).
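
The sums above can be reproduced with a quick shell sketch – the PPD and wattage figures are the measured values from this post, so only the rounding differs slightly:

```shell
#!/bin/sh
# Reproduce the efficiency sums: watts -> kWh over 24h -> points per kWh.
# PPD and wattage are the measured figures from the post.
ppd_per_kwh() {
  # $1 = points per day, $2 = watts at full pelt
  awk -v ppd="$1" -v w="$2" 'BEGIN {
    kwh = w * 24 / 1000                  # energy used over a 24hr period
    printf "%.0f points/kWh\n", ppd / kwh
  }'
}

ppd_per_kwh 23000 360   # M600 blade
ppd_per_kwh 6500  110   # Core 2 Quad, CPU only
ppd_per_kwh 38000 150   # Core 2 Quad + GTX 750Ti
```

The annual figures fall out the same way: 38,000 PPD × 365 ≈ 13,870,000 points, and 3.6kWh × 365 ≈ 1,314kWh, which is about £130 at ~10p/kWh.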


Better analytics for Folding@Home…

… were needed IMO, so I decided to build my own; they’re now at http://fah.crazy-logic.co.uk. It’s not a nice interface at the moment, but the data is being collected in the background 🙂

The home page shows the top 10 teams by score and by work units, the ten trending teams (I haven’t worked on this algorithm yet) and the ten newest teams with non-zero scores/WUs. Clicking on a team’s name displays the team’s info and some historic values.

Here’s how it works: four times a day, data is pulled from FAH and added to the database, and new rankings for score and work units are then calculated. Once a day a history entry is generated, and then in the background, in batches of 500 records, the score differences and so on are calculated. Basically there are lots of scripts running in the background, triggered by cron, plus a table recording the time, the script, how long it took, and whether the script ran to the end or timed out.
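
As a minimal sketch of that setup – the script names, paths and schedule here are illustrative, not the actual code behind the site:

```shell
#!/bin/sh
# Sketch of the cron-driven pipeline. Script names and the schedule are
# illustrative – the post doesn't show the real scripts.
#
# crontab:
#   0 */6 * * *   pull_and_rank.sh   # 4x a day: pull FAH data, recalc rankings
#   30 1 * * *    make_history.sh    # once a day: generate a history entry
#   */15 * * * *  diff_batch.sh      # background: diff 500 records at a time

# Wrapper mirroring the logging table: time, script, time taken, and
# whether the script ran to the end or not.
run_logged() {
  script="$1"; shift
  log="${FAH_RUN_LOG:-run_log.csv}"
  start=$(date +%s)
  if "$script" "$@"; then status="completed"; else status="failed-or-timeout"; fi
  end=$(date +%s)
  printf '%s,%s,%ss,%s\n' "$(date -u +%FT%TZ)" "$script" "$((end - start))" "$status" >> "$log"
}
```

Each cron job would be invoked through the wrapper, e.g. `run_logged ./diff_batch.sh`, so every run leaves a row behind.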

Things I need to do:

  • A nice interface
  • fix the efficiency of the differencing script…. it’s a massive fail atm (needs indexing),
  • add some basic site stats: days, records, size… queries per day and database use?
  • I will be adding some graphs in the future,
  • Signature thingy.

Oh and then I found: http://folding.extremeoverclocking.com/team_list.php

Edit {hack} a .deb package to remove user options – silent install.

So, with my new Dell M1000e and M600 blades now sitting in their new home in my workshop, and having tested that all the blades are working (-1), I now need to load test them. I could just use a program designed for this, but I’m thinking that as I’ll be burning through a chunk of electricity I’ll put the compute cycles to good use (the heat will be keeping me warm). So I’m going to do some Folding at Home. If you don’t know about this, then you should check out this great project.

My team id is 232280. Feel free to join me.

Anyway, I’ll be running Ubuntu on the blades for a while (when I can get the latest version installing from USB), so I played around with installing inside a VM. The instructions for this can be found at http://folding.stanford.edu/home/guide/linux-install-guide/, but take them with a pinch of salt as a few links are broken/need fixing (I’m letting them know on the forum). The installer asks for user details as part of the package and you can’t do an ‘unattended install’, which is what I was after (for a very specific reason). NVM, I’ll just figure out how to disassemble the .deb package and change it.

I hadn’t done this before, so I started out not knowing it was even possible, but it’s actually quite simple. I found a post explaining how to open and then recreate the .deb package.

mkdir tmp 
dpkg-deb -R original.deb tmp 
# edit DEBIAN/postinst 
dpkg-deb -b tmp fixed.deb 

…and then had a bit of a read-up on what I found inside – read this, sec 7.6. After a bit of head-scratching and a few edits that didn’t work, I finally worked out what I needed to do to remove the user interaction (some of this helped): rename the templates file and comment out the lines referencing db_get, then add in the variables for user, team, passkey, power and autostart. (One thing I would like to do is add some sort of reference to each machine’s MAC address in the username, but I’ll look into that later.) After doing this I’ve hacked the .deb into something that I can script silently – which means remote deployment :D.
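
Here’s roughly what that edit looks like, demonstrated on a mocked-up postinst. The real FAH postinst and its debconf key names differ, and the baked-in values (apart from my team id) are placeholders:

```shell
#!/bin/sh
# Demonstration of the postinst hack on a mocked-up package tree.
# The real FAH postinst and its debconf keys differ; the values baked in
# below (apart from the team id, 232280) are placeholders.
mkdir -p tmp/DEBIAN
cat > tmp/DEBIAN/postinst <<'EOF'
#!/bin/sh
db_get fahclient/user
user="$RET"
db_get fahclient/team
team="$RET"
EOF

# rename the templates file so debconf has nothing to ask
# (in the real package: mv tmp/DEBIAN/templates tmp/DEBIAN/templates.bak)

# comment out every line referencing db_get...
sed -i 's/^\(.*db_get.*\)$/#\1/' tmp/DEBIAN/postinst

# ...and bake the answers in instead
cat >> tmp/DEBIAN/postinst <<'EOF'
user="myname"
team="232280"
passkey=""
power="full"
autostart="true"
EOF

# finally repack: dpkg-deb -b tmp fixed.deb
```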

I also came across https://www.linux.com/learn/writing-simple-bash-script while wandering the web – it’s really well written for beginners like me.

You can find my hacked .deb here. My team id is baked in. Install using:

wget http://www.crazy-logic.co.uk/wp-content/uploads/2017/01/FAHinstaller.deb
sudo dpkg -i --force-depends FAHinstaller.deb

Ver 2 now includes the MAC address in the username so I can see how much each blade has done.
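
Something like this is all that’s needed to build the per-blade username – the prefix and interface name here are made up, only the idea matches ver 2:

```shell
#!/bin/sh
# Sketch of building a per-blade FAH username from the MAC address.
# The "myname_" prefix is a placeholder, not the format I actually used.
mac_user() {
  # strip the colons so the MAC can sit inside a username
  printf 'myname_%s\n' "$(printf '%s' "$1" | tr -d ':')"
}

# on a blade you'd feed it the real address, e.g.:
#   mac_user "$(cat /sys/class/net/eth0/address)"
mac_user "aa:bb:cc:dd:ee:ff"
```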

The DHT11 temperature-humidity sensors pt 2

Here’s the graphs from my recent experiment, and here’s the data.

Temperature from 11 DHT11 sensors over 22900 data readings for each sensor

Humidity from 11 DHT11 sensors over 22900 data readings for each sensor

min, avg, max and variance of temperature from 11 DHT11 sensors over 22900 data readings for each sensor

min, avg, max and variance of humidity from 11 DHT11 sensors over 22900 data readings for each sensor

and a brief video…

The DHT11 temperature-humidity sensors

So I have loads of these from an old project that – you guessed it – I never got round to. But how accurate are these cheapest of cheap things?

Set up an Arduino/RasPi with all the sensors and get it to read them all back – if they’re all in one location then they should all read the same, right? Let’s see how accurate they are! I’ll run the test for a week (probably longer) under various conditions and then report back.

oh and some code….

// original code from ladyada, public domain – tidied up and completed
#include <SPI.h>
#include <SD.h>
#include "DHT.h"

// sensor pins – the last two (A4, A5) are the older sensors
const uint8_t pins[12] = {2, 3, 5, 6, 7, 8, 9, A0, A1, A2, A4, A5};
DHT* dht[12];

float temps[12];
float hmids[12];

void setup() {
  Serial.begin(9600);
  Serial.println("DHT test!");

  for (int i = 0; i < 12; i++) {
    dht[i] = new DHT(pins[i], DHT11);
    dht[i]->begin();
  }

  if (!SD.begin(4)) {  // CS pin 4 on the common SD shields
    Serial.println("SD card init failed!");
  }
}

void loop() {
  delay(2000);  // the DHT11 needs ~2s between measurements

  for (int i = 0; i < 12; i++) {
    hmids[i] = dht[i]->readHumidity();
    temps[i] = dht[i]->readTemperature();
  }

  Serial.print("Temps ");
  for (int i = 0; i < 12; i++) {
    Serial.print(temps[i]);
    Serial.print(", ");
  }
  Serial.println();

  Serial.print("Humids ");
  for (int i = 0; i < 12; i++) {
    Serial.print(hmids[i]);
    Serial.print(", ");
  }
  Serial.println();

  // write data to SD card
  File dataFile = SD.open("dhtdata.txt", FILE_WRITE);
  if (dataFile) {
    dataFile.print("Temps, ");
    for (int i = 0; i < 12; i++) {
      dataFile.print(temps[i]);
      dataFile.print(", ");
    }
    dataFile.println();

    dataFile.print("Humids, ");
    for (int i = 0; i < 12; i++) {
      dataFile.print(hmids[i]);
      dataFile.print(", ");
    }
    dataFile.println();
    dataFile.close();
  }
}



How much of a table is in use?

I’ve been working on a web-based monitoring tool for a while now and it’s getting close to completion. A thought crossed my mind, and be prepared, it’s a wild one.

WHAT IF: my site gets so popular that I hit the upper limits of the index or field size of the primary keys of the tables?

Well, clearly it’d break, and then I’d get an email or something telling me to fix it, and I would go and change the field types to allow more… problem solved…. but it’s not very nice! And it means downtime! (Ironically, not something I want for a tool designed to measure downtime!)

Anyway – I’m building in monthly, weekly and daily email routines to let me know certain bits of information regularly without me having to log in, and I thought: why not add something that tells me how much I’ve used? So I did.

Here’s my solution (but not my code):

  • get a list of tables,
  • for each table:
    • get the table name,
    • get the primary key,
    • get the field type and convert it to human-friendly numbers,
    • get the number of rows in use,
    • work out the usage in %.

Half an hour with Google and a bit of common sense later, I now have a page that shows me the stats of primary-key usage, and now I can add them to the email routines.
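
The “work out the usage in %” step boils down to dividing the highest key value in use by the maximum the field type can hold. Here’s a sketch using signed MySQL integer maxima – the real page pulls the field type and row counts from the database rather than taking them as arguments:

```shell
#!/bin/sh
# Sketch of the "usage in %" step: compare the highest key value in use
# against the maximum a signed MySQL integer column can hold.
pk_usage() {
  # $1 = primary key field type, $2 = highest key value in use
  case "$1" in
    tinyint)   max=127 ;;
    smallint)  max=32767 ;;
    mediumint) max=8388607 ;;
    int)       max=2147483647 ;;
    bigint)    max=9223372036854775807 ;;
    *)         echo "unknown type: $1"; return 1 ;;
  esac
  awk -v used="$2" -v max="$max" 'BEGIN { printf "%.4f%% used\n", used / max * 100 }'
}

pk_usage int 150000   # e.g. a table whose signed INT key has reached 150k
```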