(February 2016) Monthly WIP screenshot thread

Well, right now I’m using neither… but I’ve had RAID0 in my main system for 5 years (and another 5 before that in the previous system). My interim backups are all RAID1 dedicated NAS.

“2x greater chance of data loss” sounds really bad except that a) it’s 2x a very small chance and b) it’s not quite 2x. A drive may fail. If I had one drive I’d be screwed. If it’s two drives in RAID0, I’m still screwed. One drive is not more likely to fail by being in proximity to the other. So really, my probability of data loss is still in the ballpark of one drive failing. Yes, a bit more because I’ve spread the odds across two drives, but ‘double’ slightly overstates it, since the naive sum counts the case of both drives failing twice. (Actually, I guess the math says it gets closer to 2x the lower the failure rate, but still.)

Now… the odds that I’d be able to recover data from a partially bad drive is much greater without RAID0 and virtually nil with it. It’s a risk for speed and encourages one to backup often or keep important things in source control.
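The “a bit more than one drive, but not quite double” intuition is easy to check numerically. A minimal Python sketch, assuming independent drive failures with a per-drive probability p (the function name is mine, just for illustration):

```python
# Probability that a RAID0 array (striped, no redundancy) loses data:
# the array fails if ANY member drive fails, assuming independent failures.

def raid0_failure_probability(p: float, drives: int = 2) -> float:
    """p = per-drive failure probability over some fixed period."""
    return 1 - (1 - p) ** drives

# For a small per-drive risk the array risk is just under double:
print(raid0_failure_probability(0.01))  # ~0.0199, just under 2 * 0.01
# For a large per-drive risk it is noticeably less than double:
print(raid0_failure_probability(0.5))   # 0.75, not 1.0
```

For small p the result is just under 2p, which is why “double” is nearly right at realistic failure rates, and increasingly wrong as the per-drive risk grows.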

It is

the chance of drive 1 failing + the chance of drive 2 failing, so the probabilities are added.
(And multiplied for both drives failing, if you like statistics.)

Anyway, an SSD helps quite a lot with everyday stuff, with a normal HDD/NAS for backup.

I run 2 SSDs and 2 HDDs in my server; one of the HDDs is already dead after 3 years, while the SSDs are both still fine. And I abuse them with ~10 GB of used swap space, so they see pretty much constant writes.

Btw, an HDD is also limited, not in writes but in hours, as the mechanical parts wear out until one day the disk cannot spin up anymore.

Western Digital by any chance?

Only if you let them spin down. :slight_smile: I have hard drives here still going after 10 years. I remember one six-year-old hard drive that failed to spin up after being powered off… you had to pick it up and drop it on a table to break it loose. Apparently the lubricant organizes itself into chains over time and will freeze the drive up when it stops. Encouraged one to make good backups.

At any rate, I said it was irrational. I won’t even tell you how long I let the water cooling unit sit next to my PC before I finally was comfortable enough running water into my case. (Years ago… I don’t water cool anymore.)

I may upgrade it at some point to an SSD for the main drive. Because I have multiple drives in it then that’s a little less painful. Maybe then I can finally RAID0 them, too.

This is even worse:

you might have several drives running that, after a power outage, will never spin up again, because as soon as the bearing gets cold it becomes solid.
This is why I do at least a yearly backup, then a power-down and restart, to weed those drives out before it becomes critical.

I never was comfortable enough for water cooling; instead I have a 1 kg cooler on the CPU.

Yeah, it’s not added, it’s multiplied… it’s the survival chances that get multiplied. The chance of both drives surviving is the product of the individual chances, so the combined failure chance is not quite double the single-drive one… it’s only close for low failure rates. 99% of 99% is 98.01%, for example. But 73% of 73% is 53%… which didn’t double the 27% failure rate (47% combined, not 54%).

Anyway, statistics also say that a single drive should have about a 73% failure rate after 5 years but in my experience, that’s not really the case either. A lot of these statistics are collected by folks like google who really do hammer their drives 24/7.

(Though 100% of my Western Digital drives would fail at about 3 years to the day until I stopped buying them. That was 20 years ago, though… I have a long brand-shunning memory.)

Just two things about statistics:
You are somewhat both correct:
The probability that at least one drive fails is roughly P(Drive1) + P(Drive2) (exactly it is P(Drive1) + P(Drive2) - P(Drive1)·P(Drive2), but the correction term is tiny for small probabilities), hence really about double the probability (example: when you have 10000000… drives, chances are high that one fails at some point), whereas the probability of one particular drive failing is still the low one.

The probability that both drives fail is P(Drive1) · P(Drive2), hence even lower than the probability of a single drive failing (example: all of those 100… drives fail).
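Both statements are easy to verify numerically. A small sketch, assuming independent failures (the helper names are made up for illustration):

```python
# "At least one of two independent drives fails" (the RAID0 data-loss case)
# vs. "both fail". For small probabilities the bare sum p1 + p2 is a good
# approximation of the first case, because the overlap term p1*p2 is tiny.

def p_at_least_one(p1: float, p2: float) -> float:
    return p1 + p2 - p1 * p2   # inclusion-exclusion, not the bare sum

def p_both(p1: float, p2: float) -> float:
    return p1 * p2

print(p_at_least_one(0.01, 0.01))  # ~0.0199, close to 0.02 but slightly less
print(p_both(0.01, 0.01))          # ~0.0001, far smaller
```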

That being said, I’ve never really had a hard drive failure so far, but I’m just a regular user and probably wouldn’t even notice some read faults, I guess (though one drive always clicks when it wakes from sleep).

Back that up… the heads are dying. Do not use it for anything critical.

I haven’t had a drive fail except through stupidity since I stopped buying WD 20 years ago. Usually I replace them for some other reason after 6 years or so… or just upgrade the system and never turn the old one on again. So, yeah, normal day-to-day usage even for a developer is not so bad.

More than anyone ever wanted to know about failure probability:
http://www.mathpages.com/home/kmath498/kmath498.htm

And a handy app for calculating RAID failure, including RAID0:
http://www.raid-failure.com/raid0-failure.aspx
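For the curious, calculators like that one boil down to a few lines. A rough sketch assuming independent drives and a constant annualized failure rate, which is a simplification since real drives follow a bathtub curve; the 5% AFR below is an illustrative number, not a measured spec:

```python
# Rough RAID0 reliability over time from an annualized failure rate (AFR).
# The array survives only if every member drive survives the whole period.

def raid0_survival(afr: float, drives: int, years: float) -> float:
    per_drive_survival = (1 - afr) ** years   # chance one drive lasts
    return per_drive_survival ** drives       # chance all of them last

print(raid0_survival(afr=0.05, drives=2, years=5))  # ~0.60
```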

And that’s why you use RAID6 :wink: because one parity drive is not enough anymore to assume a recovery will go well.
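The RAID6 point can be made concrete with the binomial distribution: the array survives as long as at most two drives fail. A sketch, again assuming independent failures (which understates correlated failures during a rebuild); the probabilities below are illustrative:

```python
from math import comb

def raid_survival(p: float, drives: int, parity: int) -> float:
    """Probability the array survives: at most `parity` drives fail,
    with independent per-drive failure probability p over the period."""
    return sum(comb(drives, k) * p**k * (1 - p)**(drives - k)
               for k in range(parity + 1))

# Six drives, 10% per-drive failure probability over the period:
print(raid_survival(0.10, drives=6, parity=2))  # RAID6 tolerates 2 failures
print(raid_survival(0.10, drives=6, parity=1))  # RAID5 tolerates only 1
```

With these numbers RAID6 survives about 98.4% of the time versus roughly 88.6% for RAID5, which is the gap the parity drive argument is about.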

Funnily, the one drive that just died on me was a WD :slight_smile: Though that was only last year and it was a 1 TB drive, so it was already 6+ years old.

The oldest drive I have still running is from 1995 for a retro gaming computer :wink:

That one will probably outlast everything else you mention anyway XD Different quality in those days.

@pspeed You’re bashing WD, what do you recommend instead? I use two WDs and they run fine (albeit only for a few months since I bought them).

Also, I recommend against Toshiba drives: they fail after the slightest knock, which is obviously very bad for a laptop (or even a desktop if it is within reach of something bumping into it).

This is what we can call an epic off-topic!

:smile:


I buy Seagate. Others hate Seagate. My hatred of WD is based on experiences from 1995… so if you have good luck with WD then keep using them. I did Maxtor for a while after WD and killed two of those drives through stupidity. I haven’t killed a Seagate yet that I can recall… but I do have a stack of them that I’ve pulled from machines as I upgraded them. They worked fine at the time, and I labeled them with a Sharpie and put them in antistatic bags in a stack “just in case” I ever wanted something off of them again. :smile:

Indeed.

When did you say you were sharing all those great assets with us? xD

I guess, when you’ve shared assets that look as good as these! :chimpanzee_winktongue:

-O- that’s mean :chimpanzee_cry: , I can’t even make a line in Blender xD.

The 20-year-old drive is a Toshiba in a Toshiba 430 :wink:

I had a Toshiba laptop. I put it down on a table with reasonable force. The hard drive began smoking and making terrible sounds. I replaced it with a new Toshiba hard drive. Most of its sectors went bad within about 3 months. I replaced that with a WD; no issues since.

Hi man,
Nice work! But I think you forgot to include a license with your code; try to add one so I can take a look at it and improve it if I have some time. :smile:

I believe it’s a matter of luck and batches. I keep my stuff in a RAID6, and when I built it I bought three Seagate Green drives (I wanted them low-power). In the course of three years or less they all died one by one (mechanical problems, judging by the noises); luckily I had time to replace each one as soon as it died. I bought WD Reds this time, hoping they last longer.
I don’t hate Seagate, I’m just not buying them for a while :smile:

Hey, thanks for the feedback.
Well, I put the code on GitHub just because I didn’t know what to do with it. It’s really quite messy and I myself feel horrified about some of the things I put in this project: I implemented a custom geometry batching system, but it’s a caveman job; the console with custom commands is just a toy (I’d like it to be a Groovy console with fancy scripting support); the materials management is far from optimal. I have plenty of ideas for it and little time to implement them.
But the algorithms are there and they work. I took inspiration from the Cube2 engine, and I think some things can be done in a different way.
Anyway, I put it under the BSD license, the same as jMonkeyEngine, so feel free to take a look :wink: