1
00:00:07,120 --> 00:00:14,160
So, I just had a bit of a scare, guys. I thought Murphy's Law really had it in

2
00:00:11,599 --> 00:00:18,800
for me today. This is the stack of eight 3 TB hard drives that I'm using to

3
00:00:16,240 --> 00:00:23,600
upgrade my storage server that I archive all my footage on and all of that. And

4
00:00:21,279 --> 00:00:28,000
uh it appeared to me not that long ago, you can see the network is critical,

5
00:00:25,760 --> 00:00:32,000
which is bad. It appeared to me that I had lost two hard drives at the same

6
00:00:29,840 --> 00:00:37,520
time, meaning that the redundancy that Windows Home Server V1 provides um was

7
00:00:35,280 --> 00:00:42,320
not applicable to any data that was on both of the failed hard drives. Um

8
00:00:40,320 --> 00:00:46,879
however, it looks like upon further inspection, the Western Digital 1 TB

9
00:00:44,640 --> 00:00:53,120
Black that's in there is back up and running and only the Hitachi right here

10
00:00:49,680 --> 00:00:55,120
is actually dead. So, um, that was a

11
00:00:53,120 --> 00:00:59,760
real relief because it means that I do have to, uh, repair the network or the,

12
00:00:58,399 --> 00:01:06,080
uh, the backup database in all likelihood. Oh, no. It looks like the backup database is okay. Awesome. Those

13
00:01:03,280 --> 00:01:09,799
were on the WD. Um, but I do have to, uh, I do have to remove the Hitachi

14
00:01:08,240 --> 00:01:16,240
drive at some point. So, yeah, that's

15
00:01:13,720 --> 00:01:20,000
um, very disappointing and very frustrating because it takes a few hours

16
00:01:18,000 --> 00:01:24,560
to get that done. And I was really hoping to get these new drives in there

17
00:01:22,400 --> 00:01:29,119
and get them uh get them RAIDed up and get Windows Home Server V2 on here. I

18
00:01:26,560 --> 00:01:33,280
had to This is just packaging for the new drives. Had to pull out some

19
00:01:31,600 --> 00:01:37,040
existing drives. This one failed a little while ago. The Wildfire was just

20
00:01:35,119 --> 00:01:44,560
in there for testing purposes. There's a few Seagates. These are old Seagates.

21
00:01:40,840 --> 00:01:47,280
7200.10s. All of them survived. Um which

22
00:01:44,560 --> 00:01:53,520
is a testament to these particular drives, I guess. two 320s and a 250. So,

23
00:01:49,840 --> 00:01:56,320
those are kicking it. Well, kicking

24
00:01:53,520 --> 00:02:00,719
back, not working anymore. Well, they work, but they're not going to continue

25
00:01:58,000 --> 00:02:04,399
to work for me. Um, whatever. You guys get the point. So, this Hitachi is going

26
00:02:02,399 --> 00:02:07,920
to come out. Uh, that WD is going to move slots, and I'm going to be putting

27
00:02:06,240 --> 00:02:11,039
Oops, sorry. I'm going to be putting the eight new drives in the eight bays at

28
00:02:09,840 --> 00:02:15,120
the bottom. And then I'm going to be going with uh, you know what? Maybe I'll

29
00:02:12,959 --> 00:02:19,200
throw the Wildfire in. It's got Toggle NAND, so it should be pretty reliable for

30
00:02:17,280 --> 00:02:23,840
the uh boot drive of Windows Home Server V2 or 2011, whatever you guys want to

31
00:02:21,680 --> 00:02:27,680
call it. Vail. Um, it's been brought to my attention that you don't have to use

32
00:02:25,440 --> 00:02:31,200
a 240 gig drive. You can get away with a 120 with a little edit during the

33
00:02:29,760 --> 00:02:34,879
installation process. So, that's a really good thing. And, uh, oh yeah,

34
00:02:33,360 --> 00:02:38,879
right. The kicker for all of this was that when the two drives were out, it

35
00:02:36,879 --> 00:02:46,400
told me the backup database had failed. And um I actually just bricked the OS of

36
00:02:43,760 --> 00:02:52,000
my wife's computer and was about to use the home server backup restore utility

37
00:02:49,760 --> 00:02:55,040
to get her computer back up and running. So I thought I had lost pretty much

38
00:02:53,440 --> 00:02:59,200
everything. But now that that one drive is working, I'm in uh pretty good shape.

39
00:02:57,360 --> 00:03:02,560
So thanks for coming along for the ride, guys. And I'll keep you posted on my

40
00:03:00,400 --> 00:03:05,920
Windows Home Server upgrade. Don't forget to subscribe to Linus Tech Tips

41
00:03:04,080 --> 00:03:11,440
for more unboxings, reviews, and other excuse me, other computer videos.

42
00:03:09,440 --> 00:03:16,560
So, none of the new drives got detected at all. They're all detecting as zero

43
00:03:13,680 --> 00:03:21,200
gigabytes, which um stands to reason since the firmware I'm running on my

44
00:03:18,080 --> 00:03:23,760
controller is older than my cats. So,

45
00:03:21,200 --> 00:03:28,440
I'm uh updating the firmware. All I got to do apparently is

46
00:03:25,879 --> 00:03:30,760
this.

47
00:03:30,760 --> 00:03:37,280
And apparently that didn't work. So, I'll

48
00:03:35,920 --> 00:03:43,200
give it another crack. I'll get it I'll get it updated and then we'll see how things go once we get booted into back

49
00:03:40,799 --> 00:03:50,040
into Windows and create the array. I think I'm going to go with a RAID

50
00:03:45,959 --> 00:03:52,319
6. That worked. The file name just got

51
00:03:50,040 --> 00:03:57,760
truncated. So, this is my first boot after updating the firmware. This is

52
00:03:54,799 --> 00:04:03,040
new. I hope that's a good sign. All right. So, I'm into my RAID

53
00:04:00,200 --> 00:04:08,480
configuration. Physical drives. Let's see if they Oh, they are detected now.

54
00:04:06,799 --> 00:04:12,280
All right. So, I guess we might as well do a quick tutorial on how to create a

55
00:04:11,200 --> 00:04:18,400
RAID volume on an Areca RAID card. So, we're

56
00:04:16,000 --> 00:04:21,320
going to call this uh

57
00:04:21,320 --> 00:04:25,919
RAID six. I don't have another RAID six.

58
00:04:24,720 --> 00:04:32,040
All my other drives are just pass through drives, which just means they're

59
00:04:28,000 --> 00:04:36,040
standalone drives. RAID set was created

60
00:04:32,040 --> 00:04:36,040
successfully. Cool.

61
00:04:38,040 --> 00:04:44,759
Um, so let me

62
00:04:41,240 --> 00:04:48,960
see. Okay, so I could expand

63
00:04:44,759 --> 00:04:50,800
it. I could hm activate incomplete raid

64
00:04:48,960 --> 00:04:56,560
set. I guess that's pretty much it. I could create hot spares. I can rescue

65
00:04:53,040 --> 00:04:56,560
raid sets. Delete hot

66
00:04:57,000 --> 00:05:02,800
spares. Oh, neat. That's actually not a bad idea. I should probably use one as a

67
00:05:01,360 --> 00:05:06,639
hot spare since I don't really need all the capacity to go uh to go with it

68
00:05:05,280 --> 00:05:12,440
right now. So, what a hot spare will do is if a drive fails, it'll automatically

69
00:05:09,280 --> 00:05:15,240
go right in and rebuild the

70
00:05:12,440 --> 00:05:21,280
uh rebuild the array.
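
[Editor's note] The hot-spare behavior described here can be sketched as a toy model. This is illustrative Python only, not anything from the Areca firmware; the drive names are made up:

```python
# Toy model of a hot spare: when a member drive fails, the spare is
# promoted into the array and a rebuild onto it begins automatically.
def fail_drive(members, spares, failed):
    """Simulate a member-drive failure in an array with hot spares."""
    members = [d for d in members if d != failed]
    if spares:
        replacement = spares.pop(0)   # the spare swoops right in
        members.append(replacement)
        return members, spares, replacement   # rebuild target
    return members, spares, None              # no spare: run degraded

members, spares, rebuilding = fail_drive(
    ["d1", "d2", "d3", "d4", "d5", "d6", "d7"], ["d8"], failed="d3")
# after the failure, d8 has replaced d3 and is being rebuilt
```

The point of the model: the array never waits for a human to swap hardware, which is why a hot spare shrinks the window where a second failure can cost data.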

71
00:05:17,560 --> 00:05:21,280
So, let's have a

72
00:05:21,880 --> 00:05:29,680
look at the actually Oh, no, not this one. Sorry.

73
00:05:26,400 --> 00:05:29,680
Let's have a look

74
00:05:29,880 --> 00:05:36,960
at the volume that has just been created. Disc management. Here we

75
00:05:41,400 --> 00:05:45,199
go. Uh,

76
00:05:46,840 --> 00:05:50,800
refresh. Rescan disks

77
00:05:50,919 --> 00:05:59,400
maybe. There it is. No, wait. That's not it. 20 gigs. Oh, let's see if we can

78
00:05:56,960 --> 00:06:03,840
find it. I wonder if the OS is even compatible. Haven't done this in a

79
00:06:01,440 --> 00:06:10,680
while. So, uh, for one thing, I screwed up when I created it, and I accidentally

80
00:06:06,080 --> 00:06:14,000
created it with only, uh, seven drives.

81
00:06:10,680 --> 00:06:15,919
So, okay, there we go. Now, it has

82
00:06:14,000 --> 00:06:20,000
member disks, eight out of eight. Now, we have to create a volume set. So, we

83
00:06:18,160 --> 00:06:26,800
select the RAID set to create a volume set. Then we make a volume name

84
00:06:22,880 --> 00:06:29,840
and we're going to call it RAID six

85
00:06:26,800 --> 00:06:31,680
again. Okay. Volume RAID level. This is

86
00:06:29,840 --> 00:06:36,600
where we can actually edit the uh change the RAID level. Volume capacity maximum

87
00:06:35,280 --> 00:06:40,720
18 terabytes. Excellent.
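
[Editor's note] That 18 TB figure checks out: RAID 6 spends two drives' worth of capacity on parity, so eight 3 TB drives leave (8 - 2) x 3 TB usable. A quick sanity check in illustrative Python:

```python
def usable_tb(drives, drive_tb, parity_drives):
    """Usable capacity of a parity RAID set: total minus parity overhead."""
    return (drives - parity_drives) * drive_tb

raid6 = usable_tb(8, 3, parity_drives=2)  # RAID 6: two parity drives -> 18 TB
raid5 = usable_tb(8, 3, parity_drives=1)  # RAID 5: one parity drive  -> 21 TB
```

Note the same math shows why a RAID 5 plus one hot spare (discussed later in the video) lands at the same 18 TB of usable space as this RAID 6.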

88
00:06:41,319 --> 00:06:48,560
Um yeah, these are

89
00:06:44,919 --> 00:06:50,160
4K. Foreground initialization should be

90
00:06:48,560 --> 00:06:54,170
fine. Let's go with default for all this

91
00:06:52,840 --> 00:06:56,199
stuff.

92
00:06:56,199 --> 00:07:02,840
Okay. Volumes to be created

93
00:06:59,400 --> 00:07:06,639
one. Here we go. Volume set has been

94
00:07:02,840 --> 00:07:09,199
created. Now we should be able to see it

95
00:07:06,639 --> 00:07:09,199
in disk

96
00:07:10,919 --> 00:07:16,620
management in theory.

97
00:07:18,560 --> 00:07:23,520
Theories don't always work out that way. Give me a bit. Ah, yes, it's

98
00:07:21,759 --> 00:07:27,759
initializing. I'll be back once it's done. That takes a while. All right,

99
00:07:25,039 --> 00:07:32,440
there we go. It is in a RAID state normal now, which means that I can go

100
00:07:30,319 --> 00:07:38,880
ahead and open Disk Management. Aha, welcome to the

101
00:07:36,240 --> 00:07:45,199
Initialize and Convert Disk Wizard. Next, disk 16

102
00:07:42,199 --> 00:07:46,120
initializing. Finish.

103
00:07:45,199 --> 00:07:52,479
So there is my

104
00:07:48,599 --> 00:07:54,160
16 terabyte volume which has been split up

105
00:07:52,479 --> 00:07:58,639
and I forget how this works. Yeah, we need to convert to a GPT disk so that we

106
00:07:56,800 --> 00:08:04,400
can make it the full size instead of being limited.
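
[Editor's note] The limit in question is the MBR partition table: it addresses sectors with 32-bit LBAs, so with 512-byte sectors a partition tops out at 2 TiB, which is why a 16+ TB volume needs GPT. The arithmetic, as a Python sketch:

```python
SECTOR = 512                    # bytes per sector on these drives
mbr_max_bytes = 2**32 * SECTOR  # MBR's 32-bit LBA ceiling on capacity
mbr_max_tib = mbr_max_bytes / 2**40
# GPT stores 64-bit LBAs instead, so its ceiling (2**64 sectors) is far
# beyond any 16-18 TB array.
```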

107
00:08:01,560 --> 00:08:10,520
So, it can only be accessed from Windows Server, blah blah blah. Okay, got it.

108
00:08:08,960 --> 00:08:17,440
Primary partition assign X for

109
00:08:14,440 --> 00:08:17,440
extreme.

110
00:08:17,560 --> 00:08:26,000
Next. Great. Whoops. Apparently I have caps

111
00:08:21,919 --> 00:08:27,960
lock on already. RAID six. Perform a

112
00:08:26,000 --> 00:08:33,039
quick format.

113
00:08:30,440 --> 00:08:34,580
Finish. Because the cluster count is higher than

114
00:08:35,959 --> 00:08:40,000
expected. That's interesting.
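
[Editor's note] That "cluster count is higher than expected" error is NTFS's cluster ceiling biting: a volume can hold at most 2^32 clusters, so at the default 4 KB cluster size anything past 16 TiB will not format. Roughly, in illustrative Python (18 TB is this array's capacity):

```python
MAX_CLUSTERS = 2**32            # NTFS addresses clusters with 32 bits

def cluster_count(volume_bytes, cluster_bytes):
    """How many clusters a volume of this size would need."""
    return volume_bytes // cluster_bytes

volume = 18 * 10**12            # ~18 TB RAID 6 volume
four_kb_fails = cluster_count(volume, 4096) > MAX_CLUSTERS   # too many clusters
eight_kb_fits = cluster_count(volume, 8192) <= MAX_CLUSTERS  # fits the limit
```

This matches what happens next in the video: reformatting with an 8 KB cluster size succeeds.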

115
00:08:43,880 --> 00:08:50,480
Yeah. Why don't we try not a quick Oh, that's going to take forever. Let's try

116
00:08:48,000 --> 00:08:54,920
one more time. Oh, okay. Well, let's see if I can figure this out now. Got to

117
00:08:52,320 --> 00:09:00,560
love extreme hardware. Always just works. Found a great uh article on the

118
00:08:58,560 --> 00:09:06,160
Microsoft support site about the default cluster sizes for NTFS. And it

119
00:09:03,279 --> 00:09:11,279
looks like even though my volume is greater than 16 terabytes, it is not

120
00:09:09,200 --> 00:09:14,240
defaulting to 8 kilobytes. So, as soon as I recreate

121
00:09:17,240 --> 00:09:24,959
it using an 8

122
00:09:20,839 --> 00:09:27,800
kilobyte, here we go. Setting. We should

123
00:09:24,959 --> 00:09:35,040
be able to get access to the drive formatting and healthy local disk

124
00:09:32,000 --> 00:09:36,480
X. There we go. So, now let's run a

125
00:09:35,040 --> 00:09:42,399
quick benchmark and find out how fast this RAID 6 is. Now, we've all been

126
00:09:38,720 --> 00:09:45,360
spoiled by SSDs when it comes to huge

127
00:09:42,399 --> 00:09:48,680
ATTO scores, but uh I'm still pretty optimistic so far. Looking at this guy

128
00:09:47,839 --> 00:09:54,000
right here, holy

129
00:09:51,080 --> 00:10:01,680
cow, we've already reached 1 Gigabyte per second in sustained reads at

130
00:09:57,560 --> 00:10:04,240
16K. At 32K, we're up to over 1.2

131
00:10:01,680 --> 00:10:08,399
gigabytes per second reads. The writes are slower because we're going to be

132
00:10:05,680 --> 00:10:13,040
controller limited on those. So, as fast as your RAID controller is, is as fast as

133
00:10:10,320 --> 00:10:18,360
you can write to a RAID six. Um, a RAID five would be faster on the writes. Holy

134
00:10:15,680 --> 00:10:22,680
smokes. We're up over 1.6 gigs per second. And it looks like that's

135
00:10:20,640 --> 00:10:27,920
probably where we're going to peak. So,

136
00:10:24,680 --> 00:10:29,920
wow. 1.5 on that one. So, so yeah, we

137
00:10:27,920 --> 00:10:36,320
peak at around 700 megs per second write, and around 1.5 or 1.6 gigs

138
00:10:34,000 --> 00:10:39,519
per second read. Just ridiculous. Okay, I'll be back once it's done the benchmarks.

139
00:10:38,000 --> 00:10:44,800
So, there you go, guys. That's what we ended up with. Uh, now let's do another

140
00:10:41,760 --> 00:10:46,160
run at a deeper queue depth. So, that should

141
00:10:44,800 --> 00:10:51,839
give us some interesting results. These are just staggering, staggering numbers

142
00:10:48,720 --> 00:10:54,079
for a mechanical setup. Well, not much

143
00:10:51,839 --> 00:10:58,160
of an impact on scores. I am curious though to see how this array performs in

144
00:10:56,399 --> 00:11:03,279
RAID five as opposed to RAID six. So, I'll try RAID 5 with a hot spare, which

145
00:11:00,560 --> 00:11:07,040
basically gives similar data protection to RAID six because you could have two

146
00:11:05,040 --> 00:11:10,640
drives fail as long as they don't fail at exactly the same time and the hot

147
00:11:08,720 --> 00:11:14,959
spare would swoop right in and take over for the one that failed, whereas RAID 6

148
00:11:12,320 --> 00:11:18,079
can take two failures at the same time. I just want to see how much of a

149
00:11:16,560 --> 00:11:21,519
performance difference we see in these write

150
00:11:19,920 --> 00:11:26,399
performance numbers with RAID 5. So, thank you for checking out this little

151
00:11:23,040 --> 00:11:29,040
RAID 6 experiment and uh stay tuned for

152
00:11:26,399 --> 00:11:32,800
more on my Windows Home Server upgrade. Don't forget to subscribe to Linus Tech

153
00:11:30,720 --> 00:11:32,800
Tips.
