1
00:00:00,240 --> 00:00:06,560
it's actually kind of amazing how many wires got crossed on this project but we

2
00:00:04,640 --> 00:00:09,920
are finally doing it we are building

3
00:00:08,000 --> 00:00:13,679
petabyte project number two

4
00:00:11,519 --> 00:00:17,840
and not a moment too soon we may not even have enough space left on our

5
00:00:15,679 --> 00:00:23,600
servers to offload the footage that we are recording right now and Linus you

6
00:00:21,119 --> 00:00:28,320
might say you could just stop being such a digital hoarder and oh i don't know

7
00:00:26,080 --> 00:00:30,320
delete some bloody data but

8
00:00:29,119 --> 00:00:34,000
i actually have the perfect counterargument to that

9
00:00:32,480 --> 00:00:37,200
you sound like my wife just let me have my fun

10
00:00:35,840 --> 00:00:42,879
and we're gonna have some fun today ladies and gentlemen because i

11
00:00:39,920 --> 00:00:49,039
accidentally have over three petabytes of hard drives

12
00:00:47,120 --> 00:00:52,800
SmartDeploy makes it easy to handle daily IT tasks like Windows imaging

13
00:00:51,120 --> 00:00:56,160
patching updating apps and migrating user data you can do it all over your

14
00:00:54,559 --> 00:01:01,480
existing network or the cloud without leaving your desk get your free offer at

15
00:00:58,199 --> 00:01:09,520
smartdeploy.com/Linus

16
00:01:09,520 --> 00:01:16,799
alright so the first problem was entirely my fault actually i told

17
00:01:14,080 --> 00:01:23,200
Seagate that our goal was to show off one petabyte of usable space in a single

18
00:01:20,240 --> 00:01:27,520
4U enclosure instead of doing it in two enclosures like we did last time and i

19
00:01:25,520 --> 00:01:33,200
told them that to do that i would need 75 of their 16 terabyte hard drives to

20
00:01:31,119 --> 00:01:38,240
account for the space that we'd lose to formatting overhead and parity data

21
00:01:36,240 --> 00:01:43,840
so that's true in five ZFS RAID-Z2 arrays we would be

22
00:01:41,840 --> 00:01:48,960
able to lose up to two drives per vdev so that's up

23
00:01:46,880 --> 00:01:55,439
to a maximum of 10 of our 75 drives before we would

24
00:01:52,240 --> 00:01:57,079
actually lose any data and that would

25
00:01:55,439 --> 00:02:03,280
still yield over 950 terabytes of accessible space

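For reference, the layout being described would look roughly like this as a ZFS command, a minimal sketch assuming hypothetical /dev/sd* device names (only the 75-drive count and the five-vdev split come from the video):

   # five 15-wide RAID-Z2 vdevs = 75 drives, with 2 parity drives per vdev
   # raw: 75 x 16 TB = 1200 TB
   # usable: (75 - 5 x 2) x 16 TB = 1040 TB, roughly 946 TiB before filesystem overhead
   zpool create vault \
     raidz2 sda sdb sdc sdd sde sdf sdg sdh sdi sdj sdk sdl sdm sdn sdo \
     raidz2 sdp sdq sdr sds sdt sdu sdv sdw sdx sdy sdz sdaa sdab sdac sdad
   # ...and three more raidz2 lines like those to reach all five vdevs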
26
00:02:00,719 --> 00:02:08,560
one small problem though the custom 75-drive chassis that i

27
00:02:06,159 --> 00:02:12,319
thought i asked 45Drives for they were like yeah yeah there bud your

28
00:02:10,879 --> 00:02:16,000
server's in the mailbox you're welcome bud they're Eastern Canadian they really

29
00:02:14,080 --> 00:02:20,080
do sound like that it's amazing and i was like so is it the 75-drive custom

30
00:02:18,720 --> 00:02:25,040
one and they're like what are you talking about there bud i apparently

31
00:02:22,400 --> 00:02:28,000
never clarified i needed 75 bays so it has

32
00:02:26,080 --> 00:02:32,000
60. so it looks like we're going to have about

33
00:02:29,160 --> 00:02:37,040
750 terabytes of usable space but hold on hold on guys the title is not

34
00:02:34,239 --> 00:02:40,560
clickbait i am still going to have one petabyte of raw capacity in here the

35
00:02:39,280 --> 00:02:44,080
difference is that we're going to make up some of that shortfall with solid

36
00:02:43,040 --> 00:02:50,080
state all right so let's take a look at the drives that Seagate sent over here

37
00:02:47,599 --> 00:02:53,440
so there's actually more boxes here than i expected

38
00:02:51,760 --> 00:02:59,040
which is interesting so we're all we're all learning things

39
00:02:56,000 --> 00:03:00,720
today this can go

40
00:02:59,040 --> 00:03:06,159
here so these are the ones that are right these

41
00:03:03,040 --> 00:03:08,959
are the ones that we can do first this

42
00:03:06,159 --> 00:03:13,840
is so hilarious oh my god there's so many of them that this is actually like

43
00:03:11,440 --> 00:03:16,400
like this will actually build up pretty high

44
00:03:16,720 --> 00:03:23,840
i'm not quite sure how it happened but this was error number two Seagate's Iron

45
00:03:21,680 --> 00:03:27,920
Wolf NAS drives are designed for network-attached storage use they're spec'd for a

46
00:03:25,920 --> 00:03:32,720
million hours mean time between failure and 180 terabytes of access per year

47
00:03:30,959 --> 00:03:36,799
they've got a three-year warranty and they feature Seagate's AgileArray a

48
00:03:34,879 --> 00:03:41,200
combination of hardware and firmware features that make them perform better

49
00:03:38,480 --> 00:03:44,400
in RAID arrays they've got RV sensors and better vibration tolerance to

50
00:03:42,799 --> 00:03:49,120
improve performance and reliability in multi-drive arrays and a combination of

51
00:03:46,720 --> 00:03:53,840
solid performance and power consumption across a wide variety of workloads

52
00:03:51,280 --> 00:03:59,040
including video editing which is our primary concern around here we actually

53
00:03:56,159 --> 00:04:04,080
end up editing video directly off of the Vault more often than you'd probably

54
00:04:01,200 --> 00:04:08,480
think because whether it's a big project and the Whonnock server has no room or

55
00:04:06,000 --> 00:04:13,760
because a sponsor wants a change after the fact or whatever the case may be so

56
00:04:10,799 --> 00:04:17,359
thank you Seagate appreciate you fam these are great drives and we've

57
00:04:15,040 --> 00:04:21,199
recommended them loads of times except for one small problem i really don't

58
00:04:19,600 --> 00:04:25,759
know where the communication wires got crossed but these

59
00:04:22,639 --> 00:04:30,479
are rated for use in enclosures of up to

60
00:04:25,759 --> 00:04:30,479
eight drives at a time okay then

61
00:04:32,479 --> 00:04:38,479
so Seagate then sent over a few boxes of

62
00:04:35,680 --> 00:04:44,240
their IronWolf Pro drives putting us up to a total of over two

63
00:04:41,280 --> 00:04:51,680
petabytes of storage but those are also only meant to have up

64
00:04:47,600 --> 00:04:53,680
to 24 drives in an enclosure i mean

65
00:04:51,680 --> 00:04:57,520
honestly speaking i would have been perfectly comfortable with the IronWolf

66
00:04:55,440 --> 00:05:01,040
Pros they've got an extra two years of warranty compared to the regular

67
00:04:58,880 --> 00:05:05,440
IronWolf they've got included data rescue service and they've got a greater

68
00:05:03,120 --> 00:05:09,840
rating for both their per-year use and mean time between failure but the thing is

69
00:05:08,400 --> 00:05:14,160
we're supposed to be setting a good example for you guys and when i

70
00:05:11,840 --> 00:05:19,360
clarified hey guys so the plan is actually to put all the drives into one

71
00:05:16,800 --> 00:05:23,120
system they sent over the big dogs

72
00:05:21,919 --> 00:05:31,120
meet the Exos X16 in its top current capacity

73
00:05:27,680 --> 00:05:35,270
of 16 terabytes

74
00:05:31,120 --> 00:05:36,360
each of these is rated for a massive

75
00:05:36,360 --> 00:05:42,639
550 terabytes per year of access and two

76
00:05:40,560 --> 00:05:46,560
and a half million hours mean time between failure with all of the

77
00:05:44,479 --> 00:05:51,600
vibration sensing and mitigation technology at Seagate's disposal to rate

78
00:05:49,199 --> 00:05:58,640
them then for an unlimited number of drives per enclosure

79
00:05:55,120 --> 00:06:02,720
oh let's add them to the pile right

80
00:06:02,720 --> 00:06:08,960
i have never seen this much storage in one place in my

81
00:06:07,840 --> 00:06:13,039
life this is over three

82
00:06:12,000 --> 00:06:17,919
raw petabytes of storage

83
00:06:15,840 --> 00:06:24,960
225 drives times 16 terabytes each the bad

84
00:06:22,639 --> 00:06:28,400
news is seagate says that i have to use the earlier shipments of our drives for

85
00:06:26,720 --> 00:06:31,759
other stuff or send them back so make sure you're subscribed so you don't miss

86
00:06:29,759 --> 00:06:34,639
some more NAS-building collabs with other YouTubers that's one of the ideas

87
00:06:33,600 --> 00:06:39,759
that i had anyway let's use this opportunity to take a

88
00:06:37,280 --> 00:06:44,800
closer look at our enclosure now i am wicked excited about this server

89
00:06:43,039 --> 00:06:52,479
so this is an early model this is a prototype of

90
00:06:48,319 --> 00:06:52,479
their next-generation Storinator

91
00:06:53,360 --> 00:06:55,840
there we go

92
00:06:59,599 --> 00:07:07,759
now from the outside this looks like a regular old plain Storinator from 45

93
00:07:05,599 --> 00:07:11,199
Drives typical all-sheet-metal construction all that

94
00:07:10,000 --> 00:07:13,440
good stuff but

95
00:07:14,840 --> 00:07:20,560
oh that's different

96
00:07:18,240 --> 00:07:24,400
they have stepped up their game so they actually went from a cable-based

97
00:07:22,400 --> 00:07:30,000
backplane system where every single port was individually wired, to these PCB

98
00:07:28,000 --> 00:07:34,160
backplanes so this dramatically simplifies the cabling since they're

99
00:07:31,520 --> 00:07:39,840
just running one quad-port SAS cable for each of the four bays and it also should

100
00:07:37,919 --> 00:07:43,599
theoretically improve reliability they've also put in some logic to

101
00:07:41,919 --> 00:07:48,080
stagger the spin-ups of the drives so you don't get that same kind of power surge

102
00:07:45,759 --> 00:07:53,599
when you first turn on a Storinator and all up to 60 drives are like

103
00:07:51,520 --> 00:07:56,400
start like ramping up very cool

104
00:07:54,960 --> 00:08:01,599
i'm not ready to actually build this thing up yet though because

105
00:07:58,720 --> 00:08:05,599
there is another surprise now i don't know how much of this came

106
00:08:03,440 --> 00:08:08,879
about because of my request or how much they were working on already

107
00:08:08,960 --> 00:08:16,160
but what's this then 45Drives

108
00:08:12,879 --> 00:08:19,199
has finally joined team red

109
00:08:16,160 --> 00:08:21,199
that's right so we've got an AMD EPYC

110
00:08:19,199 --> 00:08:25,520
processor in here i'm actually not a hundred percent sure exactly what the

111
00:08:23,120 --> 00:08:30,680
model number is and then it's equipped with

112
00:08:27,599 --> 00:08:34,719
what are we looking at here just shy of

113
00:08:30,680 --> 00:08:36,880
128 gigs of RAM now there was a bit of a

114
00:08:34,719 --> 00:08:40,080
mishap on our unit and this is like an engineering sample board that Gigabyte

115
00:08:38,719 --> 00:08:43,279
provided so i couldn't really get a new one two of these memory slots are dead

116
00:08:42,080 --> 00:08:47,279
but this is a hard drive based storage

117
00:08:45,200 --> 00:08:51,360
system so i'm not actually too worried about the extra couple of channels of

118
00:08:49,200 --> 00:08:57,519
memory killing our system performance like it did in our all-NVMe NAS

119
00:08:54,959 --> 00:09:02,240
with that said i did allude to needing some SSDs in order to make up the

120
00:09:00,000 --> 00:09:07,680
difference in capacity between the 60-drive chassis and the 75-drive one that i

121
00:09:04,480 --> 00:09:09,040
thought 45Drives was working on that is

122
00:09:07,680 --> 00:09:13,360
where these come in and i need to actually find out what the devil they are

123
00:09:11,360 --> 00:09:18,320
all right here we go so there's a feature of ZFS called adaptive

124
00:09:15,120 --> 00:09:19,760
replacement cache or ARC and essentially

125
00:09:18,320 --> 00:09:24,560
what it does is it takes the most frequently used data from your hard

126
00:09:21,760 --> 00:09:28,320
drives and then stores a second copy in your system memory so that you don't

127
00:09:26,720 --> 00:09:32,160
have to go all the way out to your spinning disks in order to access it and

128
00:09:30,480 --> 00:09:36,000
that's especially important for something like running VMs or a database

129
00:09:34,320 --> 00:09:40,480
where a lot of the lookups are going to be to the same few entries

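If you want to watch the ARC doing exactly this on a box with OpenZFS installed, the stock tools will show it; a quick sketch:

   # one-shot report of ARC size, target size, and hit ratios
   arc_summary
   # or sample ARC accesses, hits, and misses once per second
   arcstat 1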
130
00:09:38,640 --> 00:09:42,800
we're already using that on the existing petabyte project you know the one with the two

131
00:09:41,920 --> 00:09:49,360
bays what we are not using is something

132
00:09:45,200 --> 00:09:51,680
called L2ARC or level two ARC the thing is

133
00:09:49,360 --> 00:09:56,720
sure you can add hundreds of gigabytes or even terabytes of memory to a system

134
00:09:54,080 --> 00:10:02,080
now but the cost can be quite prohibitive so that

135
00:09:59,440 --> 00:10:08,560
is where SSDs come into play so this right here is a 7.68

136
00:10:05,120 --> 00:10:12,080
terabyte SSD so nearly an 8-terabyte SSD

137
00:10:08,560 --> 00:10:14,720
from Micron that's yeah it's only SATA 6

138
00:10:12,080 --> 00:10:18,880
gigabit per second but even though that's obviously a lot slower than

139
00:10:16,720 --> 00:10:24,079
system memory it's much faster than spinning disks and easily enough to

140
00:10:21,200 --> 00:10:28,959
saturate our 10 gigabit or even a 40 gigabit network connection and then

141
00:10:26,480 --> 00:10:32,399
importantly it is much cheaper than just chucking more RAM into the system

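Hooking those SSDs up as L2ARC is a one-liner per pool; a minimal sketch, reusing the hypothetical pool name from above and made-up device names for the Micron drives:

   # attach two of the 7.68 TB SATA SSDs as cache (L2ARC) devices
   # cache devices only ever hold copies of data, so they need no redundancy
   zpool add vault cache sdba sdbb

One caveat worth knowing: every block cached in L2ARC needs a small header kept in RAM, which is part of why a huge L2ARC eventually stops paying off.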
142
00:10:31,279 --> 00:10:37,600
now something to bear in mind here is that

143
00:10:34,720 --> 00:10:41,440
we ended up putting six of these in here in order to get our

144
00:10:39,440 --> 00:10:46,399
raw capacity up to the petabyte that we promised but L2ARC

145
00:10:44,640 --> 00:10:51,279
actually does not scale especially well with a ton of capacity

146
00:10:49,279 --> 00:10:55,120
so it's possible that what we'll end up doing is only using some of it cheating

147
00:10:53,519 --> 00:11:02,000
a little bit but for the time being hey we got a petabyte of capacity also

148
00:10:59,279 --> 00:11:05,680
just like ARC it only has a copy of the data on it so it doesn't actually count

149
00:11:03,839 --> 00:11:11,600
towards your total capacity but these are just minor details like you know

150
00:11:08,320 --> 00:11:11,600
which way the drive goes in for

151
00:11:12,839 --> 00:11:20,079
example i know i'm gonna get judged so hard for this but

152
00:11:17,440 --> 00:11:24,160
while i could put a dual 10 gigabit mezzanine card in right here in this

153
00:11:22,240 --> 00:11:29,680
open gap in the back i have actually decided

154
00:11:26,000 --> 00:11:31,920
to use one of our ancient ConnectX-2 40

155
00:11:29,680 --> 00:11:35,839
gig InfiniBand cards because i had a slot that didn't have a

156
00:11:34,000 --> 00:11:38,560
cover on it and i don't know what else to do with this ancient thing anyway and

157
00:11:37,839 --> 00:11:45,279
it's i mean it's 40 gig even though it's an older card it's plenty for

158
00:11:43,200 --> 00:11:49,680
hard drives so that's in there with its 3D-printed

159
00:11:47,120 --> 00:11:52,880
bracket from that video a long time ago this is a really nice feature so i've

160
00:11:51,360 --> 00:11:58,000
actually done this the janky way just like hot-gluing or double-sided-taping a

161
00:11:54,800 --> 00:11:59,279
fan on top of my HBA cards so these are

162
00:11:58,000 --> 00:12:02,800
the controller cards for all these drives we're going to plug in but hey

163
00:12:01,120 --> 00:12:06,320
now there's a you know an officially sanctioned way to do it

164
00:12:04,399 --> 00:12:11,040
sweet so we've got a cooling fan to take care of all of our add-in cards down

165
00:12:08,160 --> 00:12:15,279
here very nice wow now that they've got so much space here they could definitely

166
00:12:13,680 --> 00:12:19,680
modify the chassis and do a third fan here if they really wanted to that's

167
00:12:17,200 --> 00:12:23,920
freaking awesome this is an example of a really old Storinator this is from

168
00:12:21,360 --> 00:12:28,160
about five years ago you can see there's a lot of oh there's a lot of things

169
00:12:25,920 --> 00:12:32,560
about it that are a lot worse so this horrible horrible mounting system

170
00:12:30,639 --> 00:12:35,600
for the drives with like these rods holding the thing

171
00:12:34,800 --> 00:12:40,720
down that wasn't great you can see there's a lot less space in here oh right i

172
00:12:39,600 --> 00:12:44,079
haven't even talked about the new mounting system for the drives

173
00:12:42,720 --> 00:12:47,760
okay we're gonna do that but first i want you guys to check out this is how

174
00:12:45,920 --> 00:12:52,000
they used to cable it up what a nightmare we've actually done a

175
00:12:49,839 --> 00:12:57,279
swap for a dead port on one of these things and it was not a lot of fun

176
00:12:54,399 --> 00:13:01,519
this right here is apparently a 3D-printed like kind of friction mount and

177
00:12:59,680 --> 00:13:05,360
then that works with the spring mount that they already had on some of the

178
00:13:02,800 --> 00:13:09,120
newer chassis along with that PCB backplane which apparently makes it

179
00:13:06,880 --> 00:13:12,320
easier to align the slots perfectly so that it's easier to put the drives in

180
00:13:10,480 --> 00:13:17,200
and out so let's see if that actually worked out

181
00:13:18,639 --> 00:13:27,360
all right so it used to be fine anyway but oh oh okay

182
00:13:23,839 --> 00:13:29,920
that's not bad so all that remains now

183
00:13:27,360 --> 00:13:34,079
is to install 60 drives well 58 i already did two

184
00:13:31,760 --> 00:13:38,240
you can tell this one is very prototype it's got lots of scratches and dings and

185
00:13:36,000 --> 00:13:41,760
stuff i believe this really was their working sample of it

186
00:13:40,079 --> 00:13:47,399
before they sent it over to me you know you could call it used or you could call

187
00:13:44,079 --> 00:13:47,399
it pre-tested

188
00:13:51,760 --> 00:13:55,760
home stretch and

189
00:13:56,320 --> 00:14:06,959
960 terabytes of raw spinning storage

190
00:14:03,120 --> 00:14:08,320
along with 46 terabytes of SSD storage

191
00:14:06,959 --> 00:14:12,639
for a total of one petabyte

192
00:14:10,480 --> 00:14:18,000
in a single chassis but i'm not quite done yet right now the

193
00:14:15,440 --> 00:14:22,959
one thing that's missing here is a SLOG device so i already talked about read

194
00:14:20,639 --> 00:14:26,959
caching which doesn't really have any dangers associated with it because

195
00:14:24,639 --> 00:14:32,240
you're just making copies of this data to put in your RAM or on your L2ARC but

196
00:14:30,160 --> 00:14:36,959
write caching write caching is something you can do with ZFS and you can use your

197
00:14:34,639 --> 00:14:41,920
memory for it but the problem is that in the event of a sudden power loss which

198
00:14:39,360 --> 00:14:45,600
who knows could happen any in-flight data that's sitting in RAM but hasn't

199
00:14:43,600 --> 00:14:50,480
been committed to your hard drives yet will be lost so it might be worthwhile

200
00:14:48,079 --> 00:14:54,959
adding something like an Optane SSD to one of our PCI Express slots over here to

201
00:14:52,800 --> 00:14:58,639
handle caching data that is being written so that we won't lose it in the

202
00:14:57,199 --> 00:15:03,279
event that it's sitting in RAM limbo and hasn't been

203
00:15:00,480 --> 00:15:06,639
committed to persistent storage

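Adding a SLOG is the same kind of one-liner; a sketch assuming a hypothetical Optane device node. One hedge worth stating: a SLOG only protects and accelerates synchronous writes, async writes still stage in RAM regardless:

   # add a fast, power-loss-safe device as the separate intent log (SLOG)
   zpool add vault log nvme0n1
   # or mirror it, since a SLOG that dies during a crash can mean lost in-flight sync writes:
   # zpool add vault log mirror nvme0n1 nvme1n1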
204
00:15:05,040 --> 00:15:10,480
all of that though is going to be reserved for part two where Anthony and i are going to team up to get this thing up and running

205
00:15:08,959 --> 00:15:15,120
on the network so we can start offloading some of the data from the

206
00:15:12,000 --> 00:15:18,240
original Vault to the new consolidated

207
00:15:15,120 --> 00:15:20,639
Vault yes my friends the capacity of two

208
00:15:18,240 --> 00:15:26,160
vaults is now one vault isn't technology amazing

209
00:15:23,519 --> 00:15:29,440
speaking of amazing technology Pulseway is a real-time remote monitoring and

210
00:15:27,680 --> 00:15:33,680
management software that helps you fix problems on the go by sending commands

211
00:15:31,440 --> 00:15:38,320
from any mobile device it's compatible with Windows Mac and Linux and Pulseway's

212
00:15:36,639 --> 00:15:43,360
single app gives you remote desktop functionality you can get real-time

213
00:15:40,160 --> 00:15:45,199
status system resources logged in users

214
00:15:43,360 --> 00:15:49,759
network performance you can manage Windows updates and more with Pulseway

215
00:15:47,680 --> 00:15:54,560
you can create and deploy custom scripts to automate your IT tasks and you can

216
00:15:52,000 --> 00:15:58,079
scan install and update all your systems on the go it's super cool and

217
00:15:56,240 --> 00:16:02,240
super powerful and you can try it for free at pulseway.com or through our link

218
00:16:00,560 --> 00:16:08,240
in the video description so thanks for watching guys if you're looking for something else to watch why

219
00:16:05,519 --> 00:16:12,320
not check out the epic saga that was our NVMe storage server upgrade yeah we're

220
00:16:10,800 --> 00:16:17,440
basically replacing the whole server room right now if you guys didn't sort

221
00:16:14,560 --> 00:16:17,440
of pick up on that
