WEBVTT

00:00:00.080 --> 00:00:05.920
oh is this Whonnock? this is— no, i know, the Dell is New Whonnock. yes. well no

00:00:03.679 --> 00:00:10.800
the Dell is— you called it New Whonnock

00:00:08.320 --> 00:00:15.839
confirmed maybe i'm just a sucker for punishment but after this thing kicked

00:00:13.120 --> 00:00:20.240
my ass for over a year when i was trying to deploy it as our main video editing

00:00:18.000 --> 00:00:26.160
NAS at the office, it's back baby

00:00:21.920 --> 00:00:28.560
New New Whonnock 2, but this time i'm

00:00:26.160 --> 00:00:33.520
gonna be deploying it at home thanks to our friends at Kioxia who sponsored this

00:00:30.960 --> 00:00:40.320
video by sending over— that's right, the Intel drives are out and these Kioxia

00:00:36.719 --> 00:00:43.280
CD6 drives are in. they sent over 12 of

00:00:40.320 --> 00:00:48.800
their 4 terabyte CD6s which is gonna make this the ballin'-est NAS on the block

00:00:46.559 --> 00:00:53.920
not four, eight. they sent over eight terabyte drives

00:00:50.800 --> 00:00:56.480
well, seven point six i think. holy— this

00:00:53.920 --> 00:00:58.800
is gonna be like a hundred terabyte NVMe NAS

00:00:57.440 --> 00:01:02.399
for my house how did i get this out of the office

00:01:09.760 --> 00:01:16.880
the first thing we're going to need to do today is downgrade this server because

00:01:15.520 --> 00:01:24.880
as much as i would have loved to take off with 256 gigs of ECC DDR4 memory i think Jake

00:01:22.799 --> 00:01:29.200
would have crapped a brick when all of a sudden he goes to like perform

00:01:26.799 --> 00:01:33.840
maintenance on a server upgrade it and all of our high spec memory is gone

00:01:32.000 --> 00:01:39.840
so let's get all that out of here and while we're at it our 32-core EPYC 7502P

00:01:37.759 --> 00:01:43.520
that's gonna have to go too the good news though is that the lowest end chip

00:01:42.000 --> 00:01:48.720
that we had sitting in the office to replace it with was a 7402P

00:01:45.920 --> 00:01:51.759
which is still really freaking fast and 24 cores i think, yeah

00:01:50.640 --> 00:01:56.320
nice. EPYC Rome is such an amazing platform

00:01:54.000 --> 00:02:02.560
because even the lowest end chips still have 128 lanes of PCI Express Gen 4.

00:02:00.320 --> 00:02:07.200
that means that even if i populated all 24 bays on this chassis, each drive could have

00:02:04.640 --> 00:02:11.920
access to the full bandwidth of a PCI Express Gen 4 x4 link. now on this

00:02:09.840 --> 00:02:15.760
particular board four of my bays are gonna be running at PCI Express Gen

00:02:13.920 --> 00:02:20.160
three speed i think it might be the ones connected here yeah it's this card right

00:02:18.239 --> 00:02:24.800
here that's in a gen three slot but that's not a limitation of the CPU these

00:02:22.000 --> 00:02:30.160
things are absolutely no slouch. honestly by the time we populated all 24

00:02:27.440 --> 00:02:34.720
of those bays with Gen 4 NVMe drives we would run into memory bandwidth

00:02:31.920 --> 00:02:40.080
bottlenecks just trying to read or write to them so it's a non-issue for us

00:02:37.840 --> 00:02:44.959
especially because i'm not even using 3200 megahertz memory which is the

00:02:42.080 --> 00:02:50.879
maximum spec for EPYC Rome. i ended up with these, just about the most pinner

00:02:47.840 --> 00:02:54.319
DDR4 ECC that you can get. these are

00:02:50.879 --> 00:02:56.640
eight gig 2666 modules they're from

00:02:54.319 --> 00:03:02.560
Crucial so i expect them to be reliable, but they are not going to be fast. next

00:02:59.599 --> 00:03:06.720
order of business finally is an actual upgrade one of the things that my server

00:03:04.640 --> 00:03:10.560
needs to do that most storage servers wouldn't is transcode video on the fly

00:03:09.200 --> 00:03:16.319
i've been pretty upfront about the fact that i have an extensive blu-ray collection but i consider the act of

00:03:14.080 --> 00:03:21.680
grabbing a physical disk and putting it into a drive to be a very last decade

00:03:19.360 --> 00:03:26.239
kind of thing to do so in spite of the fact that ripping your own blu-rays is

00:03:23.840 --> 00:03:30.400
kind of a legal gray area i consider it to be morally okay and besides like what

00:03:28.480 --> 00:03:34.560
am i what am i gonna do have a bunch of blu-ray players all over the house like

00:03:32.319 --> 00:03:40.000
physically go and get the disc to put in the player? like, come on. all of it is going to be

00:03:37.519 --> 00:03:44.239
served from here but because not every device is capable of playing back a full

00:03:42.400 --> 00:03:50.159
quality blu-ray that's where your GPU comes in so this

00:03:46.400 --> 00:03:51.920
is just a pretty basic GTX 1050 from MSI

00:03:50.159 --> 00:03:57.760
that happens to be low profile so it'll fit in our case and this is gonna handle

00:03:54.319 --> 00:03:59.599
downscaling our original, like, 4K HDR or

00:03:57.760 --> 00:04:02.640
whatever to something that's a little more

00:04:00.799 --> 00:04:06.159
smartphone friendly or whatever it is that you need of course in order to get it in here

00:04:04.959 --> 00:04:12.080
we're going to have to do a little bit of reconfiguring, with step one being, yep,

00:04:09.760 --> 00:04:16.320
removing the fans now this might seem like a bad idea but

00:04:14.640 --> 00:04:21.359
given that we're installing this card in a server and these heatsink fins are

00:04:18.880 --> 00:04:26.080
actually oriented the correct way i would expect this card to run just fine

00:04:24.240 --> 00:04:29.280
even without the fans on it so we're going to go ahead and pop it in here and

00:04:27.440 --> 00:04:32.400
obviously we'll find out if we're wrong about that later next we've got to

00:04:31.040 --> 00:04:37.600
upgrade the onboard networking because believe it or not this server only comes

00:04:34.800 --> 00:04:42.000
with gigabit Ethernet. but that's perfectly normal because while Gigabyte

00:04:39.600 --> 00:04:45.280
could ship it with onboard 10 gig for example

00:04:43.040 --> 00:04:50.960
anyone rolling a server like this is going to be running 100 gig 200 gig or

00:04:48.960 --> 00:04:55.280
even higher in order to take advantage of the insane speeds of the drives that

00:04:53.360 --> 00:04:58.560
you can hook up to it so those 10 gig chip sets would be

00:04:56.880 --> 00:05:03.680
completely wasted cost because they wouldn't be used even for us this 10 gig

00:05:01.520 --> 00:05:08.560
card is going to be a placeholder because we still don't know exactly what

00:05:05.759 --> 00:05:12.479
kind of switch or realistically switches we're gonna use so we haven't bothered

00:05:10.800 --> 00:05:16.479
sourcing any fiber optic transceivers yet now one quirk of this particular

00:05:14.160 --> 00:05:20.800
motherboard is that this top slot is labeled optional in the manual and as

00:05:18.880 --> 00:05:25.440
far as we can tell in its current configuration it is not working we think

00:05:22.960 --> 00:05:29.360
that's because this mezzanine slot here takes the bandwidth that would otherwise

00:05:27.039 --> 00:05:32.880
be allocated to it so four of my slots at the front of the

00:05:30.960 --> 00:05:36.320
case are actually not going to be usable so i'm just going to electrical tape the

00:05:34.240 --> 00:05:40.560
connectors here and tuck them away in the chassis but that's okay because

00:05:38.880 --> 00:05:44.639
remember how i said that if we actually populated all 24 of these bays with the

00:05:42.800 --> 00:05:48.240
drives that we're using we'd run into memory bandwidth bottlenecks before we

00:05:46.800 --> 00:05:54.560
actually were able to use up the full speed well that's true so there's no point

00:05:52.000 --> 00:05:59.120
filling all the bays anyway of course what we end up using depends on what we

00:05:56.880 --> 00:06:04.800
decide to do with the storage in this server because the sky is the freaking

00:06:01.840 --> 00:06:10.880
limit now you know blu-rays are 50 gigs they're big but that is nothing for an

00:06:07.759 --> 00:06:14.000
NVMe 1.4 compliant drive that is capable

00:06:10.880 --> 00:06:17.440
of throughput in excess of six gigabytes

00:06:14.000 --> 00:06:20.000
per second so we could do everything

00:06:17.440 --> 00:06:25.520
from use it as a storage server which obviously i'll do and i can stream my

00:06:22.639 --> 00:06:30.160
plex movies to as many devices as i freaking want but i could do more than

00:06:27.759 --> 00:06:34.720
that for example all of the computers in the house instead of actually having

00:06:32.479 --> 00:06:39.680
drives installed on them they could all network boot off of this array so that

00:06:37.440 --> 00:06:43.840
all of their storage is safe and centrally stored in one place that'd be

00:06:42.319 --> 00:06:48.479
really cool because not only are the drives built to an enterprise standard

00:06:45.840 --> 00:06:52.479
that means full power loss protection a variety of different security options

00:06:50.240 --> 00:06:58.319
available and depending on whether you go for the regular CD6 series or the

00:06:55.680 --> 00:07:03.360
read intensive CD6 series, anywhere from three down to one full drive write

00:07:01.919 --> 00:07:10.319
per day of endurance. but because we're going to be running them in a ZFS array we get access to all

00:07:07.599 --> 00:07:14.720
kinds of cool ZFS features like for example the ability to really quickly

00:07:12.400 --> 00:07:18.800
and easily create snapshots say if i wanted to back up some or all of my data

00:07:16.960 --> 00:07:22.960
to a server at the office i could totally do that or file system level

00:07:21.120 --> 00:07:28.479
compression which would allow me to effectively stretch the i think it's

00:07:25.880 --> 00:07:33.199
approximately like 90 something terabytes of raw flash storage that's a

00:07:30.880 --> 00:07:37.360
lot but hey if you can have more and it costs you literally nothing then

00:07:35.759 --> 00:07:40.880
that's definitely the way to go you ready
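Linus's "ninety-something terabytes" and the endurance figures above can be sanity-checked with quick arithmetic. This is a back-of-envelope sketch assuming twelve 7.68 TB drives; the five-year window is an illustrative assumption, not a figure from the video:

```shell
# Raw flash across twelve 7.68 TB CD6 drives, in TB and in TiB
# (most tools report TiB, which is why a smaller number shows up on screen):
awk 'BEGIN{ printf "raw capacity:   %.2f TB (%.1f TiB)\n", 12*7.68, 12*7.68e12/1024^4 }'

# Endurance as total writes per drive: DWPD x capacity(TB) x 365 x years
awk 'BEGIN{ printf "1 DWPD over 5y: %.0f TB written per drive\n", 1*7.68*365*5 }'
awk 'BEGIN{ printf "3 DWPD over 5y: %.0f TB written per drive\n", 3*7.68*365*5 }'
```

92.16 TB of raw flash works out to 83.8 TiB, which lines up with the pool size the guys see later.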

00:07:39.120 --> 00:07:43.520
let's power this puppy on. these blower fans, they're gonna

00:07:42.560 --> 00:07:46.479
go just give them a second

00:07:46.560 --> 00:07:53.759
there they go see that's why we weren't worried about

00:07:50.800 --> 00:07:58.319
taking the fans off of the GPU because they're going there's your GPU

00:07:56.560 --> 00:08:02.160
fan look at this baller with this like 100

00:08:00.319 --> 00:08:07.199
bills— actually no, but we should mention lttstore.com. hey, got this

00:08:04.560 --> 00:08:11.599
new CPU reflective design a new lanyard whoa new colors of lanyards

00:08:09.160 --> 00:08:15.039
lttstore.com. while the CLI side of things might sound a little daunting

00:08:13.199 --> 00:08:19.280
fortunately for people like Linus you can get ZFS on Linux really easily with

00:08:17.520 --> 00:08:22.319
the UI. plus it's actually available as a package in the Community Apps store, so

00:08:21.039 --> 00:08:28.000
all we gotta do is search up Unraid Community Apps and you can copy the URL, paste it into Unraid in

00:08:26.400 --> 00:08:31.120
the plugin installer give that a few seconds

00:08:29.680 --> 00:08:35.519
literally a few seconds it's already done yep and then bam you have a new

00:08:33.279 --> 00:08:37.760
apps tab now there's a lot of really cool

00:08:36.399 --> 00:08:42.240
Docker stuff, there's a lot of really cool you know Unraid plugins in here but

00:08:40.159 --> 00:08:46.880
for us all we gotta do is type ZFS, hit return

00:08:44.800 --> 00:08:51.200
hit install and now we have ZFS

00:08:49.120 --> 00:08:55.839
just that easy. it's still downloading actually

00:08:53.120 --> 00:08:55.839
give it a second

00:08:59.839 --> 00:09:07.040
okay it's done. because of a limitation of Unraid we still do need to have an

00:09:04.560 --> 00:09:10.720
array so we just grabbed a couple of old SATA drives that are like i wouldn't put

00:09:09.360 --> 00:09:15.760
anything important on them, they have reallocated sectors already, like

00:09:13.200 --> 00:09:18.720
they're just there to exist. yeah according to the Unraid guys they're

00:09:17.040 --> 00:09:22.480
actually gonna fix that in a few versions from now give them you know a

00:09:20.560 --> 00:09:26.080
couple months and yeah theoretically this should be a lot easier and there's

00:09:24.080 --> 00:09:30.560
going to be a UI for ZFS as well, which would be sweet. yeah but ZFS is

00:09:28.720 --> 00:09:33.440
pretty simple actually. if we go into our command prompt we can just type

00:09:32.000 --> 00:09:38.959
zfs list — see, no datasets. zpool list — no

00:09:36.640 --> 00:09:43.360
pools. we have to make our dataset. yeah, are we just going to do a single vdev

00:09:40.959 --> 00:09:46.880
so that's where ZFS can get a little complicated for some people. how do you

00:09:45.279 --> 00:09:51.040
want to configure it? i'm probably just going to go single vdev, i'm thinking a

00:09:48.800 --> 00:09:54.800
single vdev with RAIDZ2, so that would give us two parity drives. yeah, yeah. so

00:09:52.880 --> 00:09:59.680
we're gonna lose 15 terabytes or whatever one parity drive is probably

00:09:56.640 --> 00:10:01.360
fine you think so literally like data

00:09:59.680 --> 00:10:05.360
center drives and i'm gonna back this up to the office anyway

00:10:03.120 --> 00:10:09.200
fine, fine. okay we're going to call our zpool lambo because— 'cause it's

00:10:10.640 --> 00:10:16.640
so we're gonna go down to RAIDZ1 which means one parity drive, yeah, and

00:10:15.440 --> 00:10:20.000
/mnt/lambo is where it's gonna be stored
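A minimal sketch of what that pool creation could look like from Unraid's console — the exact command isn't shown on screen, and the device names here are placeholders, not the real ones on Jake's system:

```shell
# Hypothetical device names; on a real box check `ls /dev/nvme*n1` first.
zpool create -m /mnt/lambo lambo raidz1 \
  /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1  /dev/nvme3n1 \
  /dev/nvme4n1 /dev/nvme5n1 /dev/nvme6n1  /dev/nvme7n1 \
  /dev/nvme8n1 /dev/nvme9n1 /dev/nvme10n1 /dev/nvme11n1

zpool list lambo   # raw pool size, roughly the 83.8 TiB seen in the video
zfs list lambo     # the pool's root dataset, mounted at /mnt/lambo
```

`raidz1` is the single-parity layout the guys settled on; swapping it for `raidz2` is the two-parity-drive option Jake floated first.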

00:10:18.800 --> 00:10:24.959
if you don't know what it means to have one parity drive it means that one of

00:10:22.399 --> 00:10:29.360
these 12 drives could outright fail which is pretty unlikely and all the

00:10:27.440 --> 00:10:33.519
data would still be completely intact in fact the speed of the array would

00:10:31.519 --> 00:10:36.800
probably be not even degraded to the point where it would be affected you

00:10:35.120 --> 00:10:41.600
probably wouldn't notice. there we go, 83.8 terabytes raw. horrific. but

00:10:40.160 --> 00:10:45.680
after parity it'll be a little bit less than that, you get 72 terabytes. that's

00:10:43.760 --> 00:10:49.440
crazy, that's a lot of storage. yeah, what was my old one? okay, while he does

00:10:47.760 --> 00:10:54.480
that we're gonna set up a few other things so one of the nice things about

00:10:51.279 --> 00:10:57.200
ZFS is we can create multiple datasets

00:10:54.480 --> 00:11:01.680
within our pool of storage and those are useful for a number of reasons primarily

00:10:59.600 --> 00:11:05.200
that we can define different settings for different data sets they're almost

00:11:03.040 --> 00:11:08.959
like folders but i can say my movie folder doesn't get compression because

00:11:07.360 --> 00:11:13.920
video doesn't compress very well but then i could say my vm storage folder or

00:11:11.680 --> 00:11:20.079
dataset in this example does get compression. so my old Unraid server was

00:11:16.240 --> 00:11:22.240
64 terabytes of spinning hard drives and

00:11:20.079 --> 00:11:27.279
i had uh two terabytes in RAID 1 as a cache. oh

00:11:25.360 --> 00:11:31.120
that's pretty nice okay there we go so those are all our data sets
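The datasets Jake just finished making, with the per-dataset compression behavior described above, might be created something like this — the dataset names are illustrative, not taken from the video:

```shell
# datasets behave like folders but carry their own settings
zfs create lambo/movies
zfs create lambo/vms

# video is already compressed, so skip it; VM images compress well
zfs set compression=off lambo/movies
zfs set compression=lz4 lambo/vms

# confirm what each dataset ended up with
zfs get compression lambo/movies lambo/vms
```

Properties are inherited, so anything set on `lambo` itself flows down to new datasets unless overridden like this.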

00:11:29.200 --> 00:11:35.440
now we have to do the not-so-fun part of moving all of the stock Unraid things

00:11:33.440 --> 00:11:39.519
over to the ZFS pool. some of it we could probably leave on those crappy

00:11:36.959 --> 00:11:41.839
drives, like ISOs. come on. you could, but now that we've already made the thing we

00:11:40.720 --> 00:11:46.560
might as well just change it it's not actually that hard we just have to go in

00:11:44.000 --> 00:11:49.839
to VM manager, turn it off

00:11:48.240 --> 00:11:53.120
you know what the most fun part of building something totally overkill like

00:11:51.600 --> 00:11:57.600
this is trying to find a use for it because you

00:11:55.360 --> 00:12:01.680
end up no i'm serious though you end up exploring all these cool new use cases

00:11:59.839 --> 00:12:05.920
that you would never have had any reason to explore yeah like the just the idea

00:12:04.240 --> 00:12:09.839
of having all the storage for all the computers in the house just on this one

00:12:07.920 --> 00:12:14.480
why would anyone do that once we figure out how to do it you don't have to do it

00:12:12.079 --> 00:12:19.120
with such crazy overkill hardware yeah well that's a whole other video so get

00:12:16.560 --> 00:12:22.399
subscribed so you don't miss that cool so now we should see some usage on

00:12:20.720 --> 00:12:28.320
our zpool. used: six megabytes. nice
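The office backup Linus mentioned earlier maps onto ZFS snapshots plus send/receive. A sketch — the dataset, hostname, and target pool names here are all hypothetical:

```shell
# instant, copy-on-write snapshot of a media dataset
zfs snapshot lambo/media@2021-07-01

# full replication to a pool at the office
zfs send lambo/media@2021-07-01 | ssh office-nas zfs recv backup/media

# later backups only ship the blocks that changed since the last snapshot
zfs send -i @2021-07-01 lambo/media@2021-07-08 | ssh office-nas zfs recv backup/media
```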

00:12:24.639 --> 00:12:30.959
ZFS is not really designed for NVMe

00:12:28.320 --> 00:12:34.639
storage it works but there are some caveats and things you kind of have to

00:12:32.480 --> 00:12:38.880
do to make sure it plays nicely. the people that develop ZFS never had in

00:12:36.560 --> 00:12:43.920
mind that your storage was going to be as fast as the memory on the system like

00:12:41.519 --> 00:12:48.240
storage now is probably faster than cache was on a CPU when ZFS was created. so

00:12:46.720 --> 00:12:53.680
there's a few things we need to do. for one, the ARC cache in ZFS, which is really

00:12:51.839 --> 00:12:57.760
great for accelerating hard drives we're going to set that to be metadata only so

00:12:56.000 --> 00:13:02.720
it doesn't store any actual files on it just the metadata of the files if you

00:12:59.839 --> 00:13:07.279
use ARC with NVMe it's probably going to hurt your array, especially at this speed

00:13:04.959 --> 00:13:11.279
these drives are what you might use as an L2ARC cache

00:13:08.720 --> 00:13:16.399
on a hard drive array. yeah, yeah. so level one ARC would be RAM, but like Jake was

00:13:13.839 --> 00:13:21.040
saying, even RAM is not that much faster. another thing

00:13:18.720 --> 00:13:25.200
that's important: we have to enable autotrim. i think a lot of the time now ZFS

00:13:23.279 --> 00:13:29.279
is smart enough to do it by itself, but we're going to do it just in case. if the

00:13:27.200 --> 00:13:33.360
drive isn't trimmed there's a lot of wasted extra writes that can happen. it's

00:13:31.360 --> 00:13:36.639
called write amplification, with every write. so you want to keep your house in

00:13:34.800 --> 00:13:40.160
order effectively we're also going to disable access time which is something

00:13:38.800 --> 00:13:43.200
you might use for very specific use cases for us doesn't matter at all
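The three tweaks Jake walks through here — metadata-only ARC, autotrim, and no access-time updates — are each a single property. A sketch, assuming the pool is named lambo as in the video:

```shell
# ARC caches metadata only; the drives are fast enough to serve the data itself
zfs set primarycache=metadata lambo

# keep TRIM continuous to limit write amplification on the flash
zpool set autotrim=on lambo

# don't burn a write updating access times on every read
zfs set atime=off lambo
```

`primarycache` and `atime` are dataset properties and inherit downwards; `autotrim` is set on the pool.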

00:13:42.000 --> 00:13:50.160
that's going to save us some reads and writes. you know what, i'm just going to set compression on

00:13:47.600 --> 00:13:54.880
across the entire array, because the nice thing about LZ4 compression on Unraid is

00:13:52.880 --> 00:13:59.040
if it's got a big file and it's not compressing it will just give up so

00:13:57.360 --> 00:14:02.240
chances are it's not really going to hurt our performance at all if there's

00:14:00.560 --> 00:14:05.680
problems down the road we can set it to off for, say, the Plex library and then on

00:14:04.000 --> 00:14:09.760
for the VMs like i said before. yeah, i still can't believe that i ended up with

00:14:07.600 --> 00:14:14.639
what was supposed to be New New Whonnock at the office. no, New Whonnock. not New New

00:14:12.480 --> 00:14:19.360
Whonnock? no, just— oh is this Whonnock? this is New Whonnock. yes. no, the Dell

00:14:17.600 --> 00:14:23.120
is— you called it New Whonnock

00:14:21.839 --> 00:14:27.839
confirmed. oh cool, do you see 'em? yeah, it's just

00:14:24.959 --> 00:14:31.760
everything's there: ZFS dockers, ZFS media, system. copy a file over there, shall we?

00:14:29.920 --> 00:14:36.399
you got a big file i got a big file i got a 100 gig screen cap that i

00:14:33.680 --> 00:14:40.399
accidentally forgot to stop recording so here you go there's a there's your big

00:14:38.560 --> 00:14:45.440
big media file oh i need permission ah oh yes i got a

00:14:43.120 --> 00:14:50.079
whoops. so normally this would be managed through the Unraid GUI but, well, it would

00:14:48.240 --> 00:14:54.240
it would just work by default. but /mnt/lambo, i think we have to

00:14:52.399 --> 00:14:59.199
chown nobody:users

00:14:56.399 --> 00:15:04.639
let's just say everything in this folder. sure. cool, that's right. all

00:15:01.519 --> 00:15:04.639
right try again
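The permission fix they just applied is presumably something along these lines — Unraid shares default to nobody:users ownership, which is why the first copy attempt was rejected:

```shell
# hand everything on the pool to Unraid's default share owner
chown -R nobody:users /mnt/lambo

# owner and group get read/write; capital X keeps directories traversable
chmod -R u=rwX,g=rwX /mnt/lambo
```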

00:15:05.440 --> 00:15:11.839
hey there it goes not bad wow man what a

00:15:08.800 --> 00:15:13.440
big improvement this is gonna be oh well

00:15:11.839 --> 00:15:18.480
there's your limiting factor. it's probably SMB actually, yeah

00:15:16.240 --> 00:15:24.639
so it looks like we're single-thread limited on here. either way this

00:15:21.440 --> 00:15:27.040
is way faster than you have right now we

00:15:24.639 --> 00:15:31.600
should find out how much performance we still have left on the table here just

00:15:29.120 --> 00:15:34.000
by doing a quick benchmark. oh, i've got about a minute left transferring

00:15:32.880 --> 00:15:38.880
these files and then you can go ahead and benchmark it what if i just do it right now anyways sure
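The video doesn't show which tool Jake runs for the "quick benchmark"; a local sequential-write test with fio could look like this (fio must be installed, and results will vary a lot with block size and thread count):

```shell
# 8 threads each writing 10 GB sequentially into the mounted pool
fio --name=seqwrite --directory=/mnt/lambo --rw=write \
    --bs=1M --size=10G --numjobs=8 --ioengine=psync \
    --group_reporting
```

Running it on the server itself, like Linus does here, takes the network and SMB out of the picture entirely.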

00:15:37.760 --> 00:15:45.440
i mean i guess that's what we want to know right i mean we're writing at like 15 to

00:15:43.120 --> 00:15:48.720
17 gigabytes a second is that fast enough for you i don't even feel it you

00:15:47.279 --> 00:15:54.079
don't even feel it yeah well it went down to like 10 12

00:15:50.720 --> 00:15:57.199
12, 14. that's literally a hundred times

00:15:54.079 --> 00:15:59.040
faster than what i have now 100 times

00:15:57.199 --> 00:16:02.560
i'm gonna need 100 gigabit networking at home well it's a little late we're not going

00:16:01.120 --> 00:16:06.320
to do that we could do that we're not going to do that 25 should we do that

00:16:04.240 --> 00:16:11.040
now obviously 10 to 15 gigabytes a second is a lot less than the sum of the

00:16:08.880 --> 00:16:15.920
speed of all of these drives in a normal data center environment you might be

00:16:13.120 --> 00:16:19.440
able to get a lot closer to raw speed from these drives depending on how

00:16:18.000 --> 00:16:23.279
they're deployed somewhere in the neighborhood of more like 60 to 70

00:16:21.440 --> 00:16:27.360
gigabytes a second one of the big challenges though one of the best

00:16:24.720 --> 00:16:31.600
benchmarks of how good a Plex server is, is of course whether you can enable

00:16:29.279 --> 00:16:36.160
subtitles. i'm on the outdoor access point now, we're playing

00:16:33.279 --> 00:16:40.880
original quality with subtitles baked in so i'm just gonna leave this here

00:16:38.560 --> 00:16:46.399
right. EPYC gives zero F's about this workload

00:16:44.560 --> 00:16:50.079
we could conceivably have a dozen people doing the same thing at the same time

00:16:48.320 --> 00:16:56.000
basically and the best part of all of this zero funky behavior between these drives

00:16:53.519 --> 00:16:59.279
and this server, a match freaking made in heaven. which leaves— no, not that one

00:16:58.000 --> 00:17:04.640
i was gonna pull the one that had the nice CD6 logo on it. anyway, Jake is doing

00:17:02.320 --> 00:17:10.079
the demo where we show that video playback is completely uninterrupted

00:17:07.520 --> 00:17:14.160
even by the complete loss of one of our drives which is very unlikely to happen
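On the command-line side, the pulled-drive demo corresponds to `zpool status` reporting a degraded pool and, eventually, a replace. Device names here are placeholders:

```shell
zpool status lambo   # pulled drive shows UNAVAIL, pool state DEGRADED

# after slotting in a replacement drive:
zpool replace lambo /dev/nvme4n1 /dev/nvme12n1
zpool status lambo   # resilver runs in the background; the pool stays online
```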

00:17:11.919 --> 00:17:18.640
there we go, look: unavailable, degraded. but it works. look, it did have a

00:17:16.640 --> 00:17:21.679
little hiccup but hiccup get it i get it but look now

00:17:20.160 --> 00:17:26.400
they're working again i think i might be in love just like i love telling you

00:17:23.679 --> 00:17:30.880
about our sponsor Kioxia. you can get all the details on their CD6 series

00:17:28.720 --> 00:17:35.200
enterprise drives at the link in the video description but the main things

00:17:32.799 --> 00:17:40.400
you need to know: high quality NAND, lightning fast PCI Express Gen 4

00:17:37.280 --> 00:17:42.000
interface and of course their long time

00:17:40.400 --> 00:17:46.080
proven track record for delivering reliable enterprise grade drives if you

00:17:44.799 --> 00:17:51.840
guys are looking for another video to watch you can maybe check out the last time we tried to deploy this server with

00:17:49.840 --> 00:17:58.160
someone else's drives, when it— it did not go well

00:17:54.080 --> 00:17:58.160
that's a bit of an understatement
