WEBVTT

00:00:07.120 --> 00:00:14.160
So, I just had a bit of a scare, guys. I thought Murphy's Law really had it in

00:00:11.599 --> 00:00:18.800
for me today. This is the stack of eight 3 TB hard drives that I'm using to

00:00:16.240 --> 00:00:23.600
upgrade my storage server that I archive all my footage on and all of that. And

00:00:21.279 --> 00:00:28.000
uh it appeared to me not that long ago, you can see the network is critical,

00:00:25.760 --> 00:00:32.000
which is bad. It appeared to me that I had lost two hard drives at the same

00:00:29.840 --> 00:00:37.520
time, meaning that the redundancy that Windows Home Server V1 provides um was

00:00:35.280 --> 00:00:42.320
not applicable to any data that was on both of the failed hard drives. Um

00:00:40.320 --> 00:00:46.879
however, it looks like upon further inspection, the Western Digital 1 TB

00:00:44.640 --> 00:00:53.120
Black that's in there is back up and running and only the Hitachi right here

00:00:49.680 --> 00:00:55.120
is actually dead. So, um, that was a

00:00:53.120 --> 00:00:59.760
real relief because it means that I do have to, uh, repair the network or the,

00:00:58.399 --> 00:01:06.080
uh, the backup database in all likelihood. Oh, no. It looks like the backup database is okay. Awesome. Those

00:01:03.280 --> 00:01:09.799
were on the WD. Um, but I do have to, uh, I do have to remove the Hitachi

00:01:08.240 --> 00:01:16.240
drive at some point. So, yeah, that's

00:01:13.720 --> 00:01:20.000
um, very disappointing and very frustrating because it takes a few hours

00:01:18.000 --> 00:01:24.560
to get that done. And I was really hoping to get these new drives in there

00:01:22.400 --> 00:01:29.119
and get them uh get them RAIDed up and get Windows Home Server V2 on here. I

00:01:26.560 --> 00:01:33.280
had to This is just packaging for the new drives. Had to pull out some

00:01:31.600 --> 00:01:37.040
existing drives. This one failed a little while ago. The Wildfire was just

00:01:35.119 --> 00:01:44.560
in there for testing purposes. There's a few Seagates. These are old Seagates.

00:01:40.840 --> 00:01:47.280
7200.10s. All of them survived. Um which

00:01:44.560 --> 00:01:53.520
is a testament to these particular drives, I guess. Two 320s and a 250. So,

00:01:49.840 --> 00:01:56.320
those are kicking it. Well, kicking

00:01:53.520 --> 00:02:00.719
back, not working anymore. Well, they work, but they're not going to continue

00:01:58.000 --> 00:02:04.399
to work for me. Um, whatever. You guys get the point. So, this Hitachi is going

00:02:02.399 --> 00:02:07.920
to come out. Uh, that WD is going to move slots, and I'm going to be putting

00:02:06.240 --> 00:02:11.039
Oops, sorry. I'm going to be putting the eight new drives in the eight bays at

00:02:09.840 --> 00:02:15.120
the bottom. And then I'm going to be going with uh, you know what? Maybe I'll

00:02:12.959 --> 00:02:19.200
throw the Wildfire in. It's got Toggle NAND, so it should be pretty reliable for

00:02:17.280 --> 00:02:23.840
the uh boot drive of Windows Home Server V2 or 2011, whatever you guys want to

00:02:21.680 --> 00:02:27.680
call it. Vail. Um, it's been brought to my attention that you don't have to use

00:02:25.440 --> 00:02:31.200
a 240 gig drive. You can get away with a 120 with a little edit during the

00:02:29.760 --> 00:02:34.879
installation process. So, that's a really good thing. And, uh, oh yeah,

00:02:33.360 --> 00:02:38.879
right. The kicker for all of this was that when the two drives were out, it

00:02:36.879 --> 00:02:46.400
told me the backup database had failed. And um I actually just bricked the OS of

00:02:43.760 --> 00:02:52.000
my wife's computer and was about to use the home server backup restore utility

00:02:49.760 --> 00:02:55.040
to get her computer back up and running. So I thought I had lost pretty much

00:02:53.440 --> 00:02:59.200
everything. But now that that one drive is working, I'm in uh pretty good shape.

00:02:57.360 --> 00:03:02.560
So thanks for coming along for the ride, guys. And I'll keep you posted on my

00:03:00.400 --> 00:03:05.920
Windows Home Server upgrade. Don't forget to subscribe to Linus Tech Tips

00:03:04.080 --> 00:03:11.440
for more unboxings, reviews, and other, excuse me, other computer videos.

00:03:09.440 --> 00:03:16.560
So, none of the new drives got detected at all. They're all detecting as zero

00:03:13.680 --> 00:03:21.200
gigabytes, which um stands to reason since the firmware I'm running on my

00:03:18.080 --> 00:03:23.760
controller is older than my cats. So,

00:03:21.200 --> 00:03:28.440
I'm uh updating the firmware. All I got to do apparently is

00:03:25.879 --> 00:03:30.760
this.

00:03:30.760 --> 00:03:37.280
And apparently that didn't work. So, I'll

00:03:35.920 --> 00:03:43.200
give it another crack. I'll get it updated and then we'll see how things go once we get booted back

00:03:40.799 --> 00:03:50.040
into Windows and create the array. I think I'm going to go with a RAID

00:03:45.959 --> 00:03:52.319
6. That worked. The file name just got

00:03:50.040 --> 00:03:57.760
truncated. So, this is my first boot after updating the firmware. This is

00:03:54.799 --> 00:04:03.040
new. I hope that's a good sign. All right. So, I'm into my RAID

00:04:00.200 --> 00:04:08.480
configuration. Physical drives. Let's see if they Oh, they are detected now.

00:04:06.799 --> 00:04:12.280
All right. So, I guess we might as well do a quick tutorial on how to create a

00:04:11.200 --> 00:04:18.400
RAID volume on an Areca RAID card. So, we're

00:04:16.000 --> 00:04:21.320
going to call this uh

00:04:21.320 --> 00:04:25.919
RAID six. I don't have another RAID six.

00:04:24.720 --> 00:04:32.040
All my other drives are just pass through drives, which just means they're

00:04:28.000 --> 00:04:36.040
standalone drives. RAID set was created

00:04:32.040 --> 00:04:36.040
successfully. Cool.

00:04:38.040 --> 00:04:44.759
Um, so let me

00:04:41.240 --> 00:04:48.960
see. Okay, so I could expand

00:04:44.759 --> 00:04:50.800
it. I could hm activate incomplete raid

00:04:48.960 --> 00:04:56.560
set. I guess that's pretty much it. I could create hot spares. I can rescue

00:04:53.040 --> 00:04:56.560
raid sets. Delete hot

00:04:57.000 --> 00:05:02.800
spares. Oh, neat. That's actually not a bad idea. I should probably use one as a

00:05:01.360 --> 00:05:06.639
hot spare since I don't really need all the capacity to go uh to go with it

00:05:05.280 --> 00:05:12.440
right now. So, what a hot spare will do is if a drive fails, it'll automatically

00:05:09.280 --> 00:05:15.240
go right in and rebuild the

00:05:12.440 --> 00:05:21.280
uh rebuild the array.

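The hot-spare behavior described here can be sketched in a few lines of Python. This is illustrative pseudologic only, not the actual controller firmware; the function and drive names are made up:

```python
def on_drive_failure(members, spares, failed):
    # Drop the failed member; if a hot spare is assigned, promote
    # it into the array and start a rebuild onto it automatically.
    members = [d for d in members if d != failed]
    if not spares:
        return members, "array degraded (no spare available)"
    spare = spares.pop(0)
    members.append(spare)
    return members, f"rebuilding onto {spare}"

members, status = on_drive_failure(["d0", "d1", "d2"], ["spare0"], "d1")
print(status)  # rebuilding onto spare0
```

The point is that no human intervention is needed: the spare jumps in the moment a member drops out, which shrinks the window where the array is running without redundancy.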
00:05:17.560 --> 00:05:21.280
So, let's have a

00:05:21.880 --> 00:05:29.680
look at the actually Oh, no, not this one. Sorry.

00:05:26.400 --> 00:05:29.680
Let's have a look

00:05:29.880 --> 00:05:36.960
at the volume that has just been created. Disk Management. Here we

00:05:41.400 --> 00:05:45.199
go. Uh,

00:05:46.840 --> 00:05:50.800
refresh. Rescan disks

00:05:50.919 --> 00:05:59.400
maybe. There it is. No, wait. That's not it. 20 gigs. Oh, let's see if we can

00:05:56.960 --> 00:06:03.840
find it. I wonder if the OS is even compatible. Haven't done this in a

00:06:01.440 --> 00:06:10.680
while. So, uh, for one thing, I screwed up when I created it, and I accidentally

00:06:06.080 --> 00:06:14.000
created it with only, uh, seven drives.

00:06:10.680 --> 00:06:15.919
So, okay, there we go. Now, it has

00:06:14.000 --> 00:06:20.000
member disks, eight out of eight. Now, we have to create a volume set. So, we

00:06:18.160 --> 00:06:26.800
select the RAID set to create a volume set. Then we make a volume name

00:06:22.880 --> 00:06:29.840
and we're going to call it RAID six

00:06:26.800 --> 00:06:31.680
again. Okay. Volume RAID level. This is

00:06:29.840 --> 00:06:36.600
where we can actually change the RAID level. Volume capacity maximum

00:06:35.280 --> 00:06:40.720
18 terabytes. Excellent.

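That 18 TB maximum lines up with how RAID 6 capacity works: two drives' worth of space goes to dual parity. A quick sanity check in Python (the function name is mine, purely for illustration):

```python
def raid6_usable_tb(num_drives, drive_tb):
    # RAID 6 stores two parity blocks per stripe, so two drives'
    # worth of capacity is lost to redundancy.
    if num_drives < 4:
        raise ValueError("RAID 6 needs at least four drives")
    return (num_drives - 2) * drive_tb

print(raid6_usable_tb(8, 3))  # eight 3 TB drives -> 18
```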
00:06:41.319 --> 00:06:48.560
Um yeah, these are

00:06:44.919 --> 00:06:50.160
4K. Foreground initialization should be

00:06:48.560 --> 00:06:54.170
fine. Let's go with default for all this

00:06:52.840 --> 00:06:56.199
stuff.

00:06:56.199 --> 00:07:02.840
Okay. Volumes to be created

00:06:59.400 --> 00:07:06.639
one. Here we go. Volume set has been

00:07:02.840 --> 00:07:09.199
created. Now we should be able to see it

00:07:06.639 --> 00:07:09.199
in disk

00:07:10.919 --> 00:07:16.620
management in theory.

00:07:18.560 --> 00:07:23.520
Theories don't always work out that way. Give me a bit. Ah, yes, it's

00:07:21.759 --> 00:07:27.759
initializing. I'll be back once it's done. That takes a while. All right,

00:07:25.039 --> 00:07:32.440
there we go. It is in a RAID state normal now, which means that I can go

00:07:30.319 --> 00:07:38.880
ahead and open Disk Management. Aha, welcome to the

00:07:36.240 --> 00:07:45.199
Initialize and Convert Disk Wizard. Next, disk 16

00:07:42.199 --> 00:07:46.120
initializing. Finish.

00:07:45.199 --> 00:07:52.479
So there is my

00:07:48.599 --> 00:07:54.160
16 terabyte volume which has been split up

00:07:52.479 --> 00:07:58.639
and I forget how this works. Yeah, we need to convert to a GPT disk so that we

00:07:56.800 --> 00:08:04.400
can make it the full size instead of being limited.

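The limit he's converting past comes from MBR's 32-bit sector addressing: with 512-byte sectors, a partition tops out at exactly 2 TiB, which is why a big volume shows up chopped into pieces until the disk is GPT. A quick check of the arithmetic:

```python
# MBR stores partition start and length as 32-bit sector counts,
# so with 512-byte sectors a partition maxes out at 2 TiB.
# GPT uses 64-bit sector counts, removing the limit in practice.
SECTOR_BYTES = 512
mbr_max_bytes = (2**32) * SECTOR_BYTES
print(mbr_max_bytes / 2**40)  # 2.0 (TiB)
```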
00:08:01.560 --> 00:08:10.520
So can be only be accessed from Windows server blah blah blah. Okay, got it.

00:08:08.960 --> 00:08:17.440
Primary partition assign X for

00:08:14.440 --> 00:08:17.440
extreme.

00:08:17.560 --> 00:08:26.000
Next. Great. Whoops. Apparently I have caps

00:08:21.919 --> 00:08:27.960
lock on already. RAID 6. Perform a

00:08:26.000 --> 00:08:33.039
quick format.

00:08:30.440 --> 00:08:34.580
Finish. Because the cluster count is higher than

00:08:35.959 --> 00:08:40.000
expected. That's interesting.

00:08:43.880 --> 00:08:50.480
Yeah. Why don't we try not a quick Oh, that's going to take forever. Let's try

00:08:48.000 --> 00:08:54.920
one more time. Oh, okay. Well, let's see if I can figure this out now. Got to

00:08:52.320 --> 00:09:00.560
love extreme hardware. Always just works. Found a great uh article on the

00:08:58.560 --> 00:09:06.160
support site for Microsoft for the default cluster sizes for NTFS. And it

00:09:03.279 --> 00:09:11.279
looks like even though my volume is greater than 16 terabytes, it is not

00:09:09.200 --> 00:09:14.240
defaulting to 8 kilobytes. So, as soon as I recreate

00:09:17.240 --> 00:09:24.959
it using an 8

00:09:20.839 --> 00:09:27.800
kilobyte, here we go. Setting. We should

00:09:24.959 --> 00:09:35.040
be able to get access to the drive formatting and healthy local disk

00:09:32.000 --> 00:09:36.480
X. There we go. So, now let's run a

00:09:35.040 --> 00:09:42.399
quick benchmark and find out how fast this RAID 6 is. Now, we've all been

00:09:38.720 --> 00:09:45.360
spoiled by SSDs when it comes to huge

00:09:42.399 --> 00:09:48.680
ATTO scores, but uh I'm still pretty optimistic so far. Looking at this guy

00:09:47.839 --> 00:09:54.000
right here, holy

00:09:51.080 --> 00:10:01.680
cow, we've already reached 1 gigabyte per second in sustained reads at

00:09:57.560 --> 00:10:04.240
16K. At 32K, we're up to over 1.2

00:10:01.680 --> 00:10:08.399
gigabytes per second reads. The writes are slower because we're going to be

00:10:05.680 --> 00:10:13.040
controller limited on those. So, as fast as your RAID controller is, that's as fast as

00:10:10.320 --> 00:10:18.360
you can write to a RAID six. Um, a RAID five would be faster on the writes. Holy

00:10:15.680 --> 00:10:22.680
smokes. We're up over 1.6 gigs per second. And it looks like that's

00:10:20.640 --> 00:10:27.920
probably where we're going to peak. So,

00:10:24.680 --> 00:10:29.920
wow, 1.5 on that one. So, yeah, we

00:10:27.920 --> 00:10:36.320
peak at around 700 megs per second write, and around 1.5 or 1.6 gigs

00:10:34.000 --> 00:10:39.519
per second read. Just ridiculous. Okay, I'll be back once the benchmarks are done.

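For context on those numbers, a back-of-envelope division (assuming the sequential reads are striped evenly across all eight spindles) puts each mechanical drive at roughly 200 MB/s:

```python
# Rough per-drive math: a 1.6 GB/s sequential read spread across
# eight drives works out to about 200 MB/s per spindle.
array_read_mb_s = 1600
drive_count = 8
print(array_read_mb_s / drive_count)  # 200.0
```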
00:10:38.000 --> 00:10:44.800
So, there you go, guys. That's what we ended up with. Uh, now let's do another

00:10:41.760 --> 00:10:46.160
run at a deeper Q depth. So, that should

00:10:44.800 --> 00:10:51.839
give us some interesting results. These are just staggering staggering numbers

00:10:48.720 --> 00:10:54.079
for a mechanical setup. Well, not much

00:10:51.839 --> 00:10:58.160
of an impact on scores. I am curious though to see how this array performs in

00:10:56.399 --> 00:11:03.279
RAID five as opposed to RAID six. So, I'll try RAID 5 with a hot spare, which

00:11:00.560 --> 00:11:07.040
basically gives similar data protection to RAID six because you could have two

00:11:05.040 --> 00:11:10.640
drives fail as long as they don't fail at exactly the same time and the hot

00:11:08.720 --> 00:11:14.959
spare would swoop right in and take over for the one that failed, whereas RAID 6

00:11:12.320 --> 00:11:18.079
can take two failures at the same time. I just want to see how much of a

00:11:16.560 --> 00:11:21.519
performance difference we see in these write

00:11:19.920 --> 00:11:26.399
performance numbers with RAID 5. So, thank you for checking out this little

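The failure-tolerance distinction he draws here (RAID 6 survives two simultaneous failures; RAID 5 plus a hot spare only survives a second failure if the rebuild finishes first) can be sketched like this, purely as an illustration:

```python
def array_survives(layout, failure_pattern):
    # "simultaneous": two drives die before any rebuild completes.
    # "staggered": the rebuild finishes between the two failures.
    if layout == "raid6":
        # Dual parity tolerates two failures even at the same time.
        return failure_pattern in ("single", "staggered", "simultaneous")
    if layout == "raid5+hotspare":
        # Single parity plus a spare: the spare rebuilds after the
        # first failure, so only staggered double failures survive.
        return failure_pattern in ("single", "staggered")
    raise ValueError(layout)

print(array_survives("raid6", "simultaneous"))           # True
print(array_survives("raid5+hotspare", "simultaneous"))  # False
```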
00:11:23.040 --> 00:11:29.040
RAID 6 experiment and uh stay tuned for

00:11:26.399 --> 00:11:32.800
more on my Windows Home Server upgrade. Don't forget to subscribe to Linus Tech

00:11:30.720 --> 00:11:32.800
Tips.
