Heroku’s uptime for June under their rules is 99.28%, but really it’s 96.25%.
I like that Heroku added an info icon linking to a page explaining their modified uptime numbers, and that they now factor in the number of running applications affected by each outage. There are a few problems with this, though.
Firstly, idle applications don’t count, but during many of these outages idle applications couldn’t come online either. I don’t believe I’ve ever heard a service claim its availability was 100% for you just because you weren’t logged in right before the event happened. But for Bob, who was logged in at the time, uptime was 99.28%. I’ve also never seen an uptime report take the number of affected accounts into consideration.
Secondly, how do they count the number of applications affected, and how can they be sure? During last month’s outage, several of my clients had sites that were available but degraded and slow. Some sites were up one minute, gone the next, and then back again. How do they count those?
At another company I recently worked with, we had several Heroku applications working in tandem in an SOA configuration. An event like this might affect one application in some way that then affects the entire system. There’s no way to calculate that either.
So Heroku lists their uptime for June as 99.28% with all these new considerations in place which is still… pretty bad.
I’ll give Heroku a pass on Varnish being down for an hour, since Cedar is the default stack, but I won’t give them a pass on classifying the API being offline for 4m as “Development” only, because many DevOps operations require the API. If you can’t scale up under load, then you have a production-level problem.
The uptime for production-only outages in June is 96.25%, not 99.28%. This is based on the number of minutes in the month versus the number of minutes production was down. Simple.
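The math really is that simple. A quick sketch of it (the 1,620- and 311-minute downtime figures are my back-calculations from the two quoted percentages, not numbers Heroku published):

```ruby
# Simple uptime math: minutes in the month vs. minutes down.
MINUTES_IN_JUNE = 30 * 24 * 60 # 43,200

def uptime_pct(down_minutes, total_minutes = MINUTES_IN_JUNE)
  (100.0 * (1 - down_minutes.to_f / total_minutes)).round(2)
end

# Roughly 1,620 minutes (27 hours) of production downtime gives the
# 96.25% figure; Heroku's 99.28% implies only ~311 minutes.
uptime_pct(1620) # => 96.25
uptime_pct(311)  # => 99.28
```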
My recommendation to Heroku: if the numbers are hard to understand, get rid of them. AWS doesn’t put an uptime percentage on their status page. Trust us, we know June was a bad month for you.
Too bad Heroku doesn’t have a good view of what June looked like. Oh wait, I do.
With prices of 16GB USB 3.0 drives hovering around $40, I was looking for a cheap $10 option: basic throwaway drives that I wouldn’t mind losing, didn’t copy to too often, but were really small and easy to pack.
I picked up two: a SanDisk Cruzer Blade and a Kingston, both 16GB USB 2.0 flash drives. Both were formatted FAT, since most USB drives should work cross-platform as a simple sneakernet.
The SanDisk wins with a cumulative score of 654.6 compared to Kingston’s score of 393.43. Benchmark results available in a gist.
The SanDisk Cruzer Blade won in every single test except Sequential Uncached Read using 4K blocks and Random Uncached Writes using 256K blocks. The SanDisk achieved 4.19 MB/sec and 0.51 MB/sec, respectively. The Kingston pulled in 4.46 MB/sec and 2.70 MB/sec, respectively.
The numbers for the 4K blocks are very close, but the gap in the 256K test is fairly substantial. However, given that FAT32 allocates in 4K clusters, the clear winner is still the SanDisk.
Have you ever stubbed out the call to url in CarrierWave to trick out your Uploaders during specs?
Well as of 0.6.0, that’ll give you a nice error:
stack level too deep
So you’ll want to change the url method to call super (see gist).
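Since the gist isn’t reproduced here, a minimal sketch of the idea, using a plain Ruby class named `Base` to stand in for `CarrierWave::Uploader::Base` (the uploader name and URL are made up):

```ruby
# Stand-in for CarrierWave::Uploader::Base; the real class defines #url.
class Base
  def url
    "/uploads/real_file.png"
  end
end

class AvatarUploader < Base
  # Overriding #url and delegating to super is safe. Redefining #url in
  # terms of itself (as some pre-0.6.0 spec stubs effectively did) now
  # recurses until "stack level too deep".
  def url
    super
  end
end

AvatarUploader.new.url # => "/uploads/real_file.png"
```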