So it should seem pretty obvious that running a wallet CLI instance against a remote node will be slower than running it against a local one (even on the same machine). But how big is the difference?
So I set about finding out by creating two wallets, one on the Loki blockchain and the other on Graft, then timing the rescan_bc command against each. This rescans the whole blockchain for transactions. I set up a script to print the time, run the scan, then print the time again. The first runs were against HashVault’s public nodes, and the second runs were against local nodes I created myself, each running on 2× Xeon 5650s, 16GB RAM and 80GB SATA I drives. Nothing special going on optimisation-wise; the programs were compiled directly from their GitHub repos without modification.
Loki: 33s remote – 12s local
Graft: 74s remote – 10s local
So today a coin is worth around 15p and electricity costs 14p/kWh.
So first of all, let’s deal with the money side. I’m generating 900H/s using 365J/s (i.e. 365W) of energy. Scaling this up to a day’s hashing, that’s ~77MH/day, which earns around 26 ETN/day. It uses 8.76kWh/day, which is £1.22/day. So the ETN is worth £3.90 and it costs £1.22 to generate, a profit of £2.68 per day. Winner winner, chicken dinner.
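The arithmetic above as a quick Python sanity check (all input figures are the ones from this post; small rounding differences aside, it lands on the same answer):

```python
# Inputs, straight from the post.
hashrate = 900          # H/s
power = 365             # W drawn while hashing
etn_per_day = 26        # ETN earned per day
etn_price_gbp = 0.15    # ~15p per coin
elec_price_gbp = 0.14   # 14p per kWh

hashes_per_day = hashrate * 86_400      # seconds in a day -> ~77.76 MH/day
kwh_per_day = power * 24 / 1000         # 8.76 kWh/day
cost = kwh_per_day * elec_price_gbp     # ~£1.23/day in electricity
revenue = etn_per_day * etn_price_gbp   # £3.90/day in coins
profit = revenue - cost                 # ~£2.67/day

print(f"{hashes_per_day / 1e6:.2f} MH/day")
print(f"cost £{cost:.2f}, revenue £{revenue:.2f}, profit £{profit:.2f}")
```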
So the hardware I’m using… two old desktops and two old laptops.
PC 1 – Intel Core 2 Quad Q9300, maybe? (Has VMs running on it which are more important than mining.) Has a 750 Ti installed. No AES-NI support on the processor.
PC 2 – Intel Core 2 Duo; doesn’t do anything other than host a GPU, with the CPU mining as well. Has a 750 Ti installed. No AES-NI support on the processor.
Laptop 1 – Intel quad core? Has AES-NI support.
Laptop 2 – Intel quad core? Has AES-NI support.
Given that this is all old hardware, I’m not seeing any negative impacts of running this, and the extra heat is useful this time of year, so I think it’s a thumbs up for now.
So I’ve been working for a few weeks on this idea, and now I have a rough working prototype. The idea is a web-based copy of something like the Gnome System Monitor utility found in many Linux distros. I need it web based so I can install it on and monitor web servers I don’t have root access to (shared hosting platforms) while some rather intense scripts run. Here’s a quick clipping of my very early prototype. Black is the actual value and red is a 3-sample moving average. I admit it looks nothing like Gnome System Monitor, but it’s a step towards what I want to achieve.
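For reference, the smoothing behind the red line is just a trailing window average. The real front end does this in p5.js, but the logic is the same in any language; a minimal Python sketch:

```python
def moving_average(samples, window=3):
    """Trailing moving average: each point is the mean of the last `window`
    samples seen so far (fewer at the start, before the window fills)."""
    out = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window + 1): i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

print(moving_average([10, 20, 30, 40]))  # [10.0, 15.0, 20.0, 30.0]
```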
Things that I need to do next:
Change the PHP backend to a JSON responder (think AJAX).
Improve the p5.js front end to actually look and feel more like Gnome System Monitor.
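To sketch what that JSON responder amounts to (in Python here rather than the PHP I’m actually using, and with load average standing in for whatever metrics the real backend will gather), the idea is just: sample the system, serialise it, serve it:

```python
import json
import os
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

def sample():
    """One system sample. os.getloadavg is a Unix-only stand-in metric;
    the real backend would collect CPU/memory/etc. however it likes."""
    load1, load5, load15 = os.getloadavg()
    return {"ts": time.time(), "load1": load1, "load5": load5, "load15": load15}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Every GET returns the current sample as JSON, ready for the
        # p5.js front end to fetch and append to its graph.
        body = json.dumps(sample()).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# HTTPServer(("", 8000), Handler).serve_forever()  # uncomment to serve
```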
So recently I’ve noticed that the site is struggling quite badly to process the data – the database has grown to just shy of 2.5GB, and it’s not optimised in any way at all.
So there are 95,173 teams from my last data collection; of those, 2,158 have a score of 0 and 164 have no WU, i.e. these are unused teams. So in total over the last 5 months, that’s generated an extra ~300k history records that are not needed at all. Then there are the teams that are not active, which also have many records that I don’t really need to keep creating.
To put things into context, this is a pruning exercise. The DB is too big, so the useless data needs to go. Given that in those 5 months my team has risen into the top 3,000 teams with fewer than 2 machines running for that time, I don’t think this will affect the data validity or usefulness of the site long term.
So to keep the ship from sinking, I think it’s time to remove the teams with no WU or score from the history (after creating a backup). When I say no WU or score, I actually mean all teams with less than 10 WU and less than 200 score.
So “SELECT * FROM teamhistory WHERE score < 200” yields 974,625 records and “SELECT * FROM teamhistory WHERE wu < 10” yields 4,459,439 records. Given there are around 14,000,000 records, removing roughly 30% of them should see a massive speed-up in future queries.
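The pruning itself boils down to backing up the matching rows, then deleting them. Here’s a toy demonstration of that backup-then-delete pattern using Python’s sqlite3 on a made-up five-row table (the real database isn’t SQLite, and I’ve assumed a `score < 200 OR wu < 10` criterion for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE teamhistory (team INTEGER, score INTEGER, wu INTEGER)")
rows = [(1, 0, 0), (2, 150, 5), (3, 5000, 50), (4, 199, 12), (5, 300, 9)]
con.executemany("INSERT INTO teamhistory VALUES (?, ?, ?)", rows)

# 1. Copy the doomed rows into a backup table first.
con.execute(
    "CREATE TABLE teamhistory_backup AS "
    "SELECT * FROM teamhistory WHERE score < 200 OR wu < 10"
)

# 2. Then prune them from the live table.
cur = con.execute("DELETE FROM teamhistory WHERE score < 200 OR wu < 10")
print("pruned", cur.rowcount, "rows")  # pruned 4 rows; only team 3 survives
```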
So after deleting those records the table has shrunk, but I think I will have to review how I keep team history in future, and the size of the fields in the table, as it has grown far too fast from January till now.
So this week I’ve been working on two LED projects: one of them is just a thing I want for some pub gigs and the like, and the other might develop into a sellable product.
So number 1:
Recycling some old LED tape into panels for lighting bands and such at pub gigs, in my workshop for making videos, and maybe on bigger gigs etc. It’s a somewhat limited design: small panels with upcycled LED tape stuck to them, controlled by some LED drivers that I’ve had lying about doing nothing for some time now. After an hour of fiddling I proved the concept for my needs, and I will now order some connectors to make 6 panels for further experimenting.
prototype LED panel
and number 2:
So it’s a 5×5 PixelPanel, using some recycled WS2801 + 5050 LEDs, and my idea is to make these panels controllable over Art-Net and powered by PoE. Still very much a prototype at the moment! Sorry for the poor photo.
So I turn up and hop in a lift up to the 2nd floor; the lift has a CAN bus error, though it’s still working :s. Then I proceed to get my badge: the printer has a jam, and the next kiosk just refuses my barcode… off to see a real person it is, then. So far the machines are letting down the team.
So I have a little wander round. Most cubicles were bland and blank, and a few were empty but had signs up for exhibitors who, I guess, just show up on the day. I was looking for something new, different or simply clever, but I don’t really think I saw anything. I was really surprised not to see Arduino there, or the Raspberry Pi Foundation, or AWS… but Ubuntu was, which is OK I guess. Anyway, it really did feel fragmented and nothing really flowed.
Here’s the things I found interesting enough to snap.
A robotic art installation, controlled by audio rather than the internet.