I ran out of RAM… so a year ago I bought one of these…
It's a PCIe expander based around an ASMedia ASM1184a chip – put simply, a PCIe switch with one upstream x1 link fanned out to four downstream x1 links.
My original plan was to attach it to an old Fujitsu Siemens Esprimo E5925 that I had running some VMs (Core 2 Quad, 8GB RAM and a 128GB SSD). But it didn't like it at all and gave me a POST error. At this point I should have done some research – but I didn't, and six months later I tried it in another workstation.
The workstation was bodged together from an old RM server tower case, a PSU from who knows where, an HP xw4600 motherboard, a Core 2 Duo processor and some RAM (who cares). It runs quite nicely, heating an outbuilding and doing some mining on some old GTX 750/1050 graphics cards. (The electricity would be used to heat the space either way, so I may as well mine some coins.) I tried the card in this machine – same thing, a POST error – so I guess this board doesn't like it either. Oh well.
Move on another six months. The board was still in my workshop taking up space, so I thought I'd actually read up on these things. Apparently it's a PCIe version thing: the Fujitsu and the HP are either on older versions of PCIe, or their BIOSes don't support all of the v2 features (I think that's when PCIe switches became a thing). Just because a standard says something is supported doesn't mean it gets implemented. Anywho. I have many servers in the form of blades that should support it, but unfortunately none with a PCIe slot to plug into (anyone fancy getting me a Dell M610x blade?). But I did recently buy a new (second-hand) computer for a business purpose. I quickly popped the lid off, plugged a few things in, and it seems to just work fine.
Great – so I think it's time to finally put this to use. I know it works in a Dell OptiPlex 7020, so I guess it's time to buy one of those, retire the Fujitsu and the hobbled-together machine, and converge them and all the VMs running on them into one box. (Might even save some $$$ on the electricity.)
So I have a UniFi VM, a VPN VM, a PBX VM and some VMs for managing other things (dev, mainly). Currently they all run on a lone Lenovo X230 I picked up cheap off eBay, which now has 8GB of RAM and the 128GB SSD I salvaged from the Fujitsu when it died its miserable death. Running these VMs currently draws 16W, which is about £16 a year in electricity (cheaper and faster than running them in AWS!).
A Dell 7020 MT (mini tower, mATX) looks like a good option for me right now: a 4-core CPU, 8GB of RAM (maybe pushing for 16GB), and then I can whack in an SSD and a large spinning-rust drive as well. But it seems a bit of a waste of my old case…
Well, actually I'm buying a Dell 7010 instead. Fingers crossed it supports the PCIe switch – not a huge problem if it doesn't, though. (Its chipset is the Q77, which supports the same PCIe versions as the Q87 in the 7020 I tested.)
It should seem pretty obvious that running a wallet CLI instance against a remote node will be slower than running it against a local one (even on the same machine). But how big is the difference?
So I set about finding out by creating two wallets, one on the Loki blockchain and the other on Graft, and timing the rescan_bc command against each. This command rescans the whole blockchain for the wallet's transactions. I set up a script to print the time, do the scan, then print the time again. The first runs were against Hashvault's public nodes, and the second runs were against local nodes I created myself, each running on 2× Xeon 5650s, 16GB RAM and 80GB SATA drives. Nothing special going on optimisation-wise: the programs were compiled directly from their GitHub repos without modification.
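The timing script was roughly this shape – a minimal sketch, assuming the Loki CLI wallet binary name, wallet name and daemon port (adjust for Graft accordingly):

```shell
#!/bin/sh
# Rough timing harness: note the time, run the rescan, note the time.
# Binary name, wallet file and daemon address are assumptions - adjust to taste.
start=$(date +%s)
printf 'rescan_bc\nexit\n' | ./loki-wallet-cli --wallet-file testwallet \
    --daemon-address 127.0.0.1:22023
end=$(date +%s)
echo "rescan_bc took $((end - start))s"
```

Swapping the daemon address between the public node and 127.0.0.1 gives the remote and local numbers below.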
Loki: 33s remote – 12s local
Graft: 74s remote – 10s local
So clearly local wins – but it’s still slow.
The blockchain sizes: Loki 11GB, Graft 19GB.
For comparison: BTC 230GB, LTC 21GB, XMR 64GB, ETN 37GB.
I should probably create backups or snapshots so I never have to resync again, as it takes an age.
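Something like this would do for the snapshot – a sketch that assumes the default Loki data directory under $HOME (the Graft one is analogous); the daemon should be stopped first so the database files are consistent:

```shell
#!/bin/sh
# Archive the synced chain so a wipe doesn't mean a full resync.
# The data directory path is an assumption - check where your daemon keeps its DB.
pkill lokid && sleep 5            # stop the daemon before copying the database
tar -czf "loki-chain-$(date +%F).tar.gz" -C "$HOME" .loki/lmdb
```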
sudo apt-get install dirmngr
sudo apt-key list
Ubuntu's key servers – you should verify these keys yourself:
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys E0B11894F66AEC98
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 7638D0442B90D010
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 8B48AD6246925553
Debian's key servers:
apt-key adv --keyserver keyring.debian.org --recv-keys 0x1827364554637281
sudo nano /etc/apt/sources.list
deb http://ftp.debian.org/debian stretch-backports main (append "contrib non-free" to the line if you need those components)
sudo apt-get update
Now install – apt needs to be told what to install and from which release:
sudo apt-get install -t stretch-backports cacti
So today a coin is worth around 15p and electricity is 14p/kWh.
So first of all, let's deal with the money side. I'm generating 900H/s using 365J/s (365W) of energy. Scaling that up to a day's hashing, that's about 78MH/day, which earns around 26 ETN/day. It uses 8.76kWh/day, which is about £1.23/day. So the ETN is worth £3.90 and costs £1.23 to generate: a profit of roughly £2.67 per day. Winner winner, chicken dinner.
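The arithmetic, reproduced with awk so the rounding is explicit (all figures are the ones quoted above):

```shell
awk 'BEGIN {
  power   = 365                 # W drawn by the rigs
  etn_day = 26                  # ETN mined per day (observed)
  price   = 0.15                # GBP per ETN today
  tariff  = 0.14                # GBP per kWh
  kwh     = power * 24 / 1000   # 8.76 kWh/day
  revenue = etn_day * price     # GBP/day
  cost    = kwh * tariff        # GBP/day
  printf "revenue £%.2f, cost £%.2f, profit £%.2f/day\n", revenue, cost, revenue - cost
}'
```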
So the hardware I'm using: two old desktops and two old laptops.
PC 1 – Intel Core 2 Quad Q9300, maybe? (It has VMs running on it which are more important than the mining.) Has a 750 Ti installed. No AES-NI support on the processor.
PC 2 – Intel Core 2 Duo (doesn't do anything other than host a GPU, with the CPU mining as well). Has a 750 Ti installed. No AES-NI support on the processor.
Laptop 1 – Intel quad core? Has AES-NI support.
Laptop 2 – Intel quad core? Has AES-NI support.
Given that this is all old hardware, I'm not seeing any negative impact from running it, and the extra heat is useful at this time of year, so I think it's a thumbs up for now.
So I've been working for a few weeks on this idea, and now I kinda have a working prototype. The idea is a web-based copy of something like the GNOME System Monitor utility found in many Linux distros. I need it web-based so I can install it and monitor web servers where I don't have root access (shared hosting platforms) while some rather intense scripts run. Here's a quick clipping of my very early prototype: black is the actual value and red is a 3-sample moving average. I admit it looks nothing like GNOME System Monitor, but it's a step towards what I want to achieve.
Things that I need to do next:
- Change the PHP backend into a JSON responder (think AJAX).
- Improve the p5.js front end to look and feel more like GNOME System Monitor.
My prototype code is available here: https://github.com/crazy-logic/webserver-system-monitor
Design note: I want the backend to speak JSON so that a user can modify the front end to monitor more than one system – and it just seems sensible.
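For the JSON responder, the backend only has to serialise one sample per request. Here's the idea sketched in shell (the PHP version would read the same /proc files); the field names are my own invention, and this is Linux-only:

```shell
#!/bin/sh
# Emit one JSON sample of the kind the front end would poll for.
load=$(cut -d' ' -f1 /proc/loadavg)                    # 1-minute load average
mem=$(awk '/^MemAvailable/ {print $2}' /proc/meminfo)  # available RAM in kB
printf '{"load1": %s, "mem_available_kb": %s}\n' "$load" "$mem"
```

The p5.js front end then just fetches this on a timer and appends each sample to its graph.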
Here are the commands (see the video for more detailed instructions):
sudo nano /lib/systemd/system/x11vnc.service
After=display-manager.service network.target syslog.target
ExecStart=/usr/bin/x11vnc (your switches)
sudo systemctl enable x11vnc
sudo systemctl start x11vnc
and if you want to check it's working, try
systemctl status x11vnc
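For reference, a complete unit file might look something like this – a sketch; the x11vnc switches shown (display, auth, password file) are examples you will want to change for your setup:

```ini
[Unit]
Description=x11vnc remote desktop server
After=display-manager.service network.target syslog.target

[Service]
ExecStart=/usr/bin/x11vnc -display :0 -auth guess -forever -loop -repeat -rfbauth /etc/x11vnc.pass
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After editing the file, run sudo systemctl daemon-reload before the enable/start commands above.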
So recently I've noticed that the site is struggling quite badly to process the data – the database has grown to just shy of 2.5GB, and it's not optimised in any way at all.
So there are 95,173 teams in my last data collection; of those, 2,158 have a score of 0 and 164 have no WUs – i.e. they are unused teams. In total over the last 5 months, that's generated an extra ~300k history records that aren't needed at all. Then there are the inactive teams, which also have many records I don't really need to keep creating.
To put things into context, this is a pruning exercise: the DB is too big, so the useless data needs to go. Given that in those 5 months my team has risen into the top 3,000 teams with fewer than two machines running, I don't think this will affect the data validity or long-term usefulness of the site.
So to keep the ship from sinking, I think it's time to remove the teams with no WUs or score from the history (after creating a backup). When I say no WUs or score, I actually mean all teams with fewer than 10 WUs and less than 200 score.
So SELECT * FROM `teamhistory` WHERE score < 200 yields 974,625 records and SELECT * FROM `teamhistory` WHERE wu < 10 yields 4,459,439 records. Given there are around 14,000,000 records in total, removing ~30% of them should see a massive speed-up in future queries.
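The pruning itself is only a couple of statements – backup first, then delete. A sketch in MySQL syntax, using the "fewer than 10 WUs and less than 200 score" rule stated above and the table/column names from the queries; the backup table name is my own:

```sql
-- Keep a copy of what we're about to throw away, just in case.
CREATE TABLE teamhistory_pruned_backup AS
  SELECT * FROM teamhistory WHERE wu < 10 AND score < 200;

-- Remove the history rows for effectively unused teams.
DELETE FROM teamhistory WHERE wu < 10 AND score < 200;

-- Reclaim the space from the deleted rows.
OPTIMIZE TABLE teamhistory;
```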
So after deleting the records the table has shrunk, but I think I'll have to review how I keep team history in the future, and the size of the fields in the table, as it has grown far too fast from January till now.
This is an awesome overview of the breadth and variety of mathematics by Dominic Walliman.
So this week I've been working on two LED projects: one is just a thing I want for some pub gigs, and the other might develop into a sellable product.
So number 1:
Recycling some old LED tape into panels for lighting bands at pub gigs, for making videos in my workshop, and maybe for bigger gigs etc. It's a somewhat limited design: small panels with upcycled LED tape stuck to them, controlled by some LED drivers that have been lying about doing nothing for some time. After an hour of fiddling I'd proved the concept for my needs, and I'll now order some connectors to make six panels for further experimenting.
and number 2:
So it's a 5×5 PixelPanel using some recycled WS2801 + 5050 LEDs, and my idea is to make these panels controllable over Art-Net and powered by PoE. Still very much a prototype at the moment! Sorry for the crap photo.