WEBVTT

00:00:00.800 --> 00:00:06.480
oh damn it what is that a total of 128 cores

00:00:04.160 --> 00:00:12.800
64. but good try and a terabyte of memory that's 256 gigs oh Supermicro

00:00:10.080 --> 00:00:18.880
told us that we're not allowed to build this server ourselves they have to build

00:00:15.679 --> 00:00:22.160
it for us naturally we said no so we are

00:00:18.880 --> 00:00:23.840
going to be taking one of the thin 1u

00:00:22.160 --> 00:00:27.920
storage servers from the petabyte of flash project and seeing just how fast

00:00:26.080 --> 00:00:36.640
we can drag race it with a handful of Kioxia CD6 drives Optane acceleration

00:00:31.480 --> 00:00:38.879
64 EPYC cores the fastest we can get and

00:00:36.640 --> 00:00:43.680
is this 200 gigabit per second network it's 200 gigabits and 256 gigs of memory

00:00:42.079 --> 00:00:47.920
the Optane's just for boot this thing's gonna be crazy fast almost as fast as i

00:00:45.920 --> 00:00:51.920
can segue to our sponsor GlassWire are you having poor quality video meetings

00:00:49.920 --> 00:00:56.960
use GlassWire and instantly see what apps are wasting your bandwidth during

00:00:53.760 --> 00:01:01.760
your meeting and block them get 25% off

00:00:56.960 --> 00:01:01.760
today using code Linus at the link below

00:01:09.280 --> 00:01:16.560
in a way a server is a lot more like a laptop than it is like a commodity

00:01:14.320 --> 00:01:22.320
desktop made of off-the-shelf components because they tend to be way more

00:01:19.280 --> 00:01:24.640
tailored to a specific use case and

00:01:22.320 --> 00:01:30.000
they're not really as flexible say you want to build a storage server there's a

00:01:27.600 --> 00:01:34.079
dozen different ways to skin that cat pardon the expression for example here

00:01:31.759 --> 00:01:38.560
at Linus Media Group our primary concern is getting as much capacity as possible

00:01:36.240 --> 00:01:43.360
at the lowest possible price yeah so we're willing to give up some compute in

00:01:40.720 --> 00:01:47.360
favor of stuffing more drives into a single chassis that's why our storage

00:01:45.439 --> 00:01:52.720
servers tend to be this thick or this thick the reality is that 4k or even 8k

00:01:50.720 --> 00:01:56.479
video editing is pretty demanding especially if you've got you know 10 or

00:01:54.560 --> 00:02:01.840
a dozen editors working off the server at once compared to enterprise or

00:01:59.600 --> 00:02:08.319
scientific applications it's not even close so that is why any

00:02:05.840 --> 00:02:14.800
good server deployment starts with the chassis this right here is the Supermicro

00:02:10.479 --> 00:02:16.720
SuperServer 1124US and it is all

00:02:14.800 --> 00:02:23.360
about density not storage density because if we went

00:02:19.760 --> 00:02:24.319
with a 2u remember that dual layer old

00:02:23.360 --> 00:02:30.080
Whonnock they absolutely could pack in more drives but they choose not to because

00:02:28.480 --> 00:02:33.680
you're going to run into performance bottlenecks if you don't have enough

00:02:31.520 --> 00:02:36.400
compute and that's the density that we're increasing here by going with

00:02:35.120 --> 00:02:42.640
these 1Us each layer of this is 12 drives yes

00:02:40.160 --> 00:02:46.879
but 2 CPUs so no bottlenecks right that's the idea

00:02:45.280 --> 00:02:50.480
Supermicro only sells these as a complete system these days meaning that

00:02:48.800 --> 00:02:54.959
it must leave their warehouse with a minimum of two CPUs four sticks of RAM

00:02:53.120 --> 00:02:59.680
and at least one storage drive and the intention there is for them to be able

00:02:56.959 --> 00:03:02.879
to ensure quality and compatibility and then as a side benefit obviously they

00:03:01.440 --> 00:03:05.840
make some money off the parts but because of the petabyte of flash project we

00:03:04.560 --> 00:03:08.640
were able to get our hands on some bare bones ones so let's take a closer look

00:03:07.760 --> 00:03:13.440
wow built in on board you've got dual SFP

00:03:11.519 --> 00:03:20.000
ports are those 10 gig oh all four of those are 10 gig all four of these ports

00:03:15.760 --> 00:03:22.560
RJ45 and SFP are 10 gig dual USB 3s

00:03:20.000 --> 00:03:28.959
we've got an IPMI management port serial VGA that'll have that VGA as well as two

00:03:26.400 --> 00:03:32.319
PCIe 16x slots back here and what do we got for power there's three there's

00:03:30.560 --> 00:03:34.799
three oh there's a third one oh look at that oh wait there's actually four

00:03:33.599 --> 00:03:39.280
there's there's one more like hidden inside we'll see that later oh cool okay let's have a look at our power supply

00:03:38.000 --> 00:03:47.760
obviously dual up to 64 core CPUs wow

00:03:43.920 --> 00:03:49.760
1200 watt power supply huge strictly

00:03:47.760 --> 00:03:53.280
speaking they didn't actually send us a bare bones they sent us a completed

00:03:51.440 --> 00:03:58.159
one and we took it apart oh do i ever have the story for you on a

00:03:56.239 --> 00:04:02.319
call for this project the Supermicro guy was like you know taking out a PCIe

00:04:00.319 --> 00:04:06.400
card that's easy but you know get to a CPU there's there's pins and thermal

00:04:04.159 --> 00:04:11.360
paste i'm like bro i have probably taken out slash

00:04:08.400 --> 00:04:16.799
installed at least a thousand CPUs yeah is it wrong for me to just love looking

00:04:14.720 --> 00:04:21.440
at thermal solutions for super thin systems like this really you know what

00:04:18.000 --> 00:04:23.280
i'm looking at the RAM slots it's like

00:04:21.440 --> 00:04:26.720
sixty percent of the width of the server is just RAM slots it's a forest of

00:04:25.199 --> 00:04:32.240
memory slots why aren't we putting more memory in then um fetch me more memory

00:04:29.520 --> 00:04:36.720
no no no the thing with EPYC is we want to have all of the channels filled out

00:04:34.000 --> 00:04:41.360
so that's 8 per CPU but once you add more it can be harder to hit the same

00:04:38.800 --> 00:04:45.759
speed and the same latency and speed and latency of your memory is

00:04:43.360 --> 00:04:50.880
super super important if you're running software raid which is exactly what

00:04:48.479 --> 00:04:54.240
we're going to be doing with ZFS we're using ZFS right well for now just

00:04:52.800 --> 00:04:57.680
to test it but the actual deployment is going to be using WekaFS which is a

00:04:56.160 --> 00:05:01.280
different thing that costs hundreds of thousands of dollars but seems to be

00:04:59.360 --> 00:05:04.479
software raid too so yeah i guess

00:05:02.960 --> 00:05:08.720
whoa what oh what the hell

00:05:06.479 --> 00:05:14.479
that's cool you dropped something it comes out as one big fat mama of a

00:05:11.840 --> 00:05:16.960
module here i love it i don't know whether that's right

00:05:17.440 --> 00:05:23.039
now this is a fun fact small fans not

00:05:20.960 --> 00:05:26.560
great at moving a ton of air because they've got these little tiny tiny

00:05:25.440 --> 00:05:31.039
blades but what they are really good at is generating a ton of static pressure

00:05:29.759 --> 00:05:34.800
which is really important in a deployment like this see look at the

00:05:33.120 --> 00:05:38.720
front of this chassis it's going to be all full of drives in there right and in

00:05:37.039 --> 00:05:43.440
order to fill it with drives you've got to have a backplane for them to connect

00:05:40.479 --> 00:05:47.840
to well that backplane has a hard PCB and you can see that there's only tiny

00:05:46.160 --> 00:05:52.240
little gaps in it wherever they were able to get a little hole to draw air

00:05:50.479 --> 00:05:56.000
through the front of this chassis they need to generate enormous static

00:05:54.320 --> 00:06:02.400
pressure in order for there to be enough airflow to force over the CPUs memory

00:05:59.120 --> 00:06:04.880
power supply and PCI Express cards did i

00:06:02.400 --> 00:06:08.800
say power supply power supplies they're redundant in the event that one fails

00:06:07.039 --> 00:06:12.560
and it's also super useful for connecting your server to two

00:06:10.479 --> 00:06:16.880
independent power sources in case your power source fails side note Jake i

00:06:14.960 --> 00:06:21.520
think this might be the thickest PCB i've ever seen holy sh i mean you want

00:06:19.280 --> 00:06:24.240
rigidity obviously especially somewhere where there's going to be mechanical

00:06:22.800 --> 00:06:27.840
strains on the device it's like two sticks of RAM yeah thickness here let's

00:06:26.160 --> 00:06:32.479
get a shot of this just for context here's a stick of memory i think it's

00:06:29.360 --> 00:06:34.000
more like three Jake what that PCB is is

00:06:32.479 --> 00:06:39.919
almost an eighth of an inch thick that's crazy oh this is interesting you can see that

00:06:37.280 --> 00:06:44.720
in order to avoid recycling any of the hot air back to the other side they've

00:06:42.080 --> 00:06:48.160
got these little like rubber curtain things anywhere where cables

00:06:46.639 --> 00:06:52.080
have to pass between the front of the chassis and the back and that's not the

00:06:50.400 --> 00:06:56.479
only cable management trick it's got sleeved power runs up this side but

00:06:54.400 --> 00:07:02.240
the front enclosures also need PCI Express connections for the NVMe drives

00:06:59.039 --> 00:07:04.720
and all of those are flat connectors

00:07:02.240 --> 00:07:11.199
check this out that run right in between these memory slots to these sick

00:07:08.400 --> 00:07:15.360
freaking PCIe connectors that go into the motherboard and do they have any

00:07:13.360 --> 00:07:19.199
cards for them no they just all come directly off the board yeah there's the

00:07:17.360 --> 00:07:23.599
little ones over here too of course you can add even more NVMe storage if you

00:07:21.120 --> 00:07:28.000
wanted to there's the three excuse me four PCIe slots here oh wow

00:07:26.160 --> 00:07:31.840
this is see here this little guy he's right here oh there it is that's where

00:07:30.000 --> 00:07:38.319
we're gonna put our Optane oh wait actually no it doesn't fit ah oh god no

00:07:34.400 --> 00:07:41.120
it's fine oh cool okay so this is a dual

00:07:38.319 --> 00:07:48.240
riser on this side you've got a simple PCIe 16x to 16x slot and knowing AMD

00:07:45.759 --> 00:07:52.400
EPYC it's probably running at full speed that's right actually all of this is

00:07:50.000 --> 00:07:58.560
just going to be full speed PCIe gen4 and then over on this other side we've

00:07:54.000 --> 00:08:02.479
got i believe this is a PCIe 32x slot oh

00:07:58.560 --> 00:08:04.879
that is crazy Jake it is a 32x slot you

00:08:02.479 --> 00:08:10.720
can see they've actually got the pins that correspond to each of the 16x slots

00:08:08.000 --> 00:08:16.479
silk screened onto the PCB well if you think that one's crazy look up here

00:08:12.319 --> 00:08:18.720
that's amazing i love it there's one 16x

00:08:16.479 --> 00:08:22.800
another 16x and then a what is that an eight it's an 8x right over here i think some

00:08:20.720 --> 00:08:28.240
of the NVMes run off of this oh you know what the 8x is running these SFP ports

00:08:26.319 --> 00:08:33.680
at the back oh no bottlenecks and then

00:08:30.960 --> 00:08:39.760
one slot and then those are two more 8x NVMe connections running to the front

00:08:35.440 --> 00:08:41.919
yep there's so much PCIe and again very

00:08:39.760 --> 00:08:46.480
purpose built right yeah you can put a GPU in here though look

00:08:43.839 --> 00:08:51.440
GPU power if you wanted like an A100 or something again if you had a very

00:08:48.480 --> 00:08:55.680
purpose-built specific use case oh my god i mean we did see a storage

00:08:53.519 --> 00:09:00.000
deployment recently where GPU acceleration was used for raid

00:08:58.320 --> 00:09:03.680
parity data i have a bit of an update on that one Wendell

00:09:01.600 --> 00:09:09.040
informed us that there could be some issues with that particular solution we

00:09:05.839 --> 00:09:12.000
tested it there is an issue

00:09:09.040 --> 00:09:15.440
basically we we stopped the array edited one of the drives i think we edited 32

00:09:13.760 --> 00:09:19.279
bytes of it to be something different started it up

00:09:17.040 --> 00:09:24.240
and it didn't fix it it just has no error handling what it is is it's depending

00:09:22.640 --> 00:09:28.480
on the drive to tell it that there's an error Jake i just realized something i was

00:09:26.640 --> 00:09:32.000
trying to figure out why the front of the slot was over here and i was like

00:09:30.240 --> 00:09:38.000
right that's where the power pins are yeah so it's got normal size power pins

00:09:35.440 --> 00:09:41.600
and then these itty-bitty higher density data pins now the goal today is to see

00:09:40.080 --> 00:09:46.640
how the system would perform if you were just to set it up yourself with something like ZFS but even with a more

00:09:45.040 --> 00:09:52.000
optimized and actually specifically built for NVMe file system like WekaFS

00:09:49.680 --> 00:09:56.000
you still need a lot of CPU compute to handle things like networking the actual

00:09:54.000 --> 00:09:59.600
connection to the NVMe drives themselves and any sort of networking overhead

00:09:57.600 --> 00:10:05.360
fortunately AMD stepped up to the plate and provided 12 of their EPYC 7543

00:10:02.800 --> 00:10:09.600
processors so those are 32 cores each for a total of 64 cores in each of our six

00:10:07.839 --> 00:10:15.040
servers these are configurable to a max TDP of 240 watts and a max boost clock

00:10:12.399 --> 00:10:20.160
of 3.7 gigahertz they're not quite as fast as the 75F3s we had in the GRAID

00:10:18.000 --> 00:10:23.600
server but they're still plenty potent for what we're trying to do here so uh

00:10:21.920 --> 00:10:27.279
let's get them installed i got i got to prove Supermicro right here i know how

00:10:25.200 --> 00:10:32.399
to do this david i swear i swear i can put a CPU in

00:10:29.200 --> 00:10:34.800
i don't know watch me screw this oh

00:10:32.399 --> 00:10:38.320
ah you saw nothing you know my my hands aren't what

00:10:36.800 --> 00:10:41.789
they used to

00:10:44.839 --> 00:10:52.959
be all right david i'm doing the the most dangerous part here thermal paste

00:10:50.880 --> 00:10:56.399
don't want to mess this up oh i already messed it up

00:10:54.720 --> 00:11:00.000
is there treasure under that look at these bad boys it's crazy to

00:10:58.000 --> 00:11:04.320
think that this could handle a 280 watt CPU like there's just underneath here

00:11:02.480 --> 00:11:08.800
there's going to be a massive vapor chamber that just spans the entire thing

00:11:06.720 --> 00:11:14.399
now it's time for the tedious process of installing all of these sticks of memory

00:11:11.680 --> 00:11:18.000
the DIMMs are made by Samsung they are 16 gigs each and they run at 3,200 mega

00:11:16.320 --> 00:11:23.440
transfers per second but the most important thing about them is that they're qualified by Supermicro for

00:11:21.040 --> 00:11:27.120
this particular server and i get it you know who would want to run unqualified

00:11:25.040 --> 00:11:33.360
memory in their mission critical server it's like drinking from a non-LTT Store

00:11:30.320 --> 00:11:33.360
qualified water bottle
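
NOTE
The 256-gig total falls straight out of filling every channel. A quick
sanity check in Python, using the DIMM size quoted above and the eight
channels per CPU mentioned earlier:
  sockets, channels_per_cpu, gb_per_dimm = 2, 8, 16
  # 2 CPUs x 8 memory channels x one 16 GB DIMM per channel
  print(sockets * channels_per_cpu * gb_per_dimm)  # -> 256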

00:11:34.560 --> 00:11:43.200
crazy i think the craziest thing about this memory setup is that it's not even that

00:11:39.680 --> 00:11:45.040
crazy 256 gigs of ECC error-correcting

00:11:43.200 --> 00:11:50.160
memory would be mind-blowing for a desktop but for a server this is

00:11:47.760 --> 00:11:55.120
pedestrian this is a storage server we don't actually need to put enormous data

00:11:52.320 --> 00:12:00.560
sets in memory for these CPUs or GPUs to crunch away at these are to make sure

00:11:57.120 --> 00:12:02.320
that each of our CPUs gets two full fans

00:12:00.560 --> 00:12:07.839
worth of dedicated airflow blowing through them in our final deployment as part of our

00:12:05.360 --> 00:12:12.880
petabyte of flash storage project oh we're gonna have six of these acting

00:12:10.079 --> 00:12:18.880
as NVMe over Fabrics hosts and it's kind of similar to iSCSI in that your storage

00:12:16.160 --> 00:12:22.639
is in one box over here and then it's connected via networking to your compute

00:12:21.680 --> 00:12:27.839
box but NVMe over Fabrics was designed

00:12:24.959 --> 00:12:33.680
specifically with NVMe devices in mind so it's way more performant but to push

00:12:31.760 --> 00:12:39.040
that kind of speed you need to make decent use of the drives right and that

00:12:36.160 --> 00:12:44.079
requires a lot of networking a hundred gig
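
NOTE
A minimal sketch of what attaching to one of these NVMe-oF targets could
look like from a client, shelling out to nvme-cli from Python. Every
value here (transport, address, NQN) is a hypothetical placeholder, not
the deployment's actual config:
  import subprocess
  # hypothetical fabric details; a real setup would use the addresses
  # and NQN exported by the storage server
  subprocess.run(["nvme", "connect",
                  "--transport=rdma",
                  "--traddr=10.0.0.10",
                  "--trsvcid=4420",
                  "--nqn=nqn.2021-01.example:flash-server-1"],
                 check=True)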

00:12:40.240 --> 00:12:44.079
huh can i get aha

00:12:45.880 --> 00:12:53.600
400 gig is what we're targeting with dual NVIDIA ConnectX-6 series cards and

00:12:52.399 --> 00:12:57.600
i couldn't help noticing that one of these has a half height bracket on it oh

00:12:55.760 --> 00:13:01.519
you want it on this side yeah it's for cooling for more cooling get them

00:12:59.600 --> 00:13:04.240
separated it's just gonna hang there teamwork innit i'll go at it from one

00:13:02.880 --> 00:13:07.639
side you go at it from the other yeah i think they called that

00:13:08.160 --> 00:13:13.519
i thought you wanted to put this one on this one it doesn't fit oh yeah it's

00:13:12.079 --> 00:13:17.120
fine we can just take too much too much cable oh careful

00:13:18.480 --> 00:13:23.760
yeah we really need to not break any of these if we're going to hit our petabyte

00:13:21.760 --> 00:13:29.680
of flash storage and also we don't want them to be able to say i told you so

00:13:25.360 --> 00:13:29.680
they did tell us not to bother them

00:13:29.760 --> 00:13:35.760
this may be the most overkill boot drive of all time

00:13:33.440 --> 00:13:38.720
especially like it doesn't it's it's not redundant though so it's like actually

00:13:37.120 --> 00:13:44.399
not that great hi in fact many server motherboards most

00:13:42.000 --> 00:13:47.920
even have an internal USB port that is exactly for that that is what it's for

00:13:46.160 --> 00:13:51.440
really yeah it's for just running an OS off of USB but just using a cheap thumb

00:13:49.760 --> 00:13:54.800
drive and plugging it in a lot of them also have an internal like little

00:13:53.279 --> 00:13:57.760
powered SATA thing and that's what we'll be using for this one

00:13:56.399 --> 00:14:02.959
uh we're gonna in the real deployment yeah i took it out oh well where does it

00:13:59.680 --> 00:14:04.000
go oh is it the SuperDOM one here yeah

00:14:02.959 --> 00:14:07.760
um something about that doesn't look right yeah i did not put this in right

00:14:06.399 --> 00:14:12.240
yeah oh my god every time you didn't screw this

00:14:09.600 --> 00:14:14.480
in oh i forgot oh i can still access it where'd the screws go just because

00:14:13.519 --> 00:14:16.800
they're through like a

00:14:17.440 --> 00:14:23.839
no that one i screwed oh i didn't screw that one in either last but not least

00:14:21.839 --> 00:14:28.800
storage the actual deployment of this cluster is going to be making use of 12

00:14:25.920 --> 00:14:34.639
CD6 15 terabyte drives per server but because those drives already have like

00:14:31.279 --> 00:14:36.720
specific demo data assigned to specific

00:14:34.639 --> 00:14:40.320
slots we had to be very careful about taking them out did you see my little

00:14:38.000 --> 00:14:44.800
diagram oh no i didn't oh oh yeah baby

00:14:41.760 --> 00:14:46.240
oh my gosh it's perfect okay i mean i

00:14:44.800 --> 00:14:49.839
labeled the drives i didn't want to screw it up that's fair

00:14:48.240 --> 00:14:53.440
that's fair because that would be catastrophic so instead we're going to

00:14:51.360 --> 00:14:56.240
be using these seven terabyte CD6s that we already had laying around and we're

00:14:54.800 --> 00:15:00.399
going to be installing TrueNAS to run ZFS on them just to see like if you were

00:14:58.480 --> 00:15:05.279
to buy this server and these drives yeah how much could you get without spending

00:15:02.399 --> 00:15:08.959
$400,000 on a file system yeah that

00:15:06.480 --> 00:15:13.199
i mean we're expecting really impressive results even without the fancy file

00:15:10.800 --> 00:15:18.720
system because these are PCIe gen4 drives that are capable of in excess of

00:15:16.240 --> 00:15:22.000
what is it over six gigs a second these are exactly six gigabytes six gigabytes a

00:15:20.959 --> 00:15:27.040
second of throughput so put together that's around 75 gigabytes a second the

00:15:25.360 --> 00:15:30.000
interesting thing is the 15 terabyte ones are a little bit slower i think

00:15:28.480 --> 00:15:34.160
they're five and a half gigabytes a second so it works out to be closer to

00:15:31.519 --> 00:15:38.240
like 65 gigabytes a second which is a lot closer to the 50 gigabytes a second that our network can do
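
NOTE
The aggregate figures are just drive count times per-drive sequential
read speed. A rough Python check, assuming Kioxia's spec-sheet numbers
rather than anything measured here:
  drives = 12
  print(drives * 6.2)  # 7.68 TB CD6 at ~6.2 GB/s -> ~74 GB/s combined
  print(drives * 5.5)  # 15.36 TB CD6 at ~5.5 GB/s -> ~66 GB/s combined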

00:15:36.639 --> 00:15:42.880
shout out Supermicro by the way for

00:15:40.079 --> 00:15:46.160
these tool-less sleds this was so fast compared to when i built that Simply

00:15:44.320 --> 00:15:50.959
Double server where you had to screw them all in all of them even with one screw

00:15:49.360 --> 00:15:56.480
boy does that ever add a lot of time you kind of have to do two

00:15:52.720 --> 00:15:59.440
so that's like oh 96 screws he's not

00:15:56.480 --> 00:16:05.839
even doing it right and he's complaining are we done that was it that's it

00:16:02.320 --> 00:16:07.920
wow it's so cute freaking crazy i mean

00:16:05.839 --> 00:16:13.440
cute is like not giving it enough credit 400 gigabit per second okay shall we

00:16:11.199 --> 00:16:16.639
plug her in captain it's gonna complain that i only plug in one of these and

00:16:14.880 --> 00:16:20.000
then we'll tell it to shut up this one doesn't have a shut up port but it does

00:16:18.240 --> 00:16:24.320
have a shut up function called unplugging the power supply oh uh well

00:16:22.800 --> 00:16:26.959
we could just plug it in okay we're gonna plug the power supply into our

00:16:25.920 --> 00:16:32.240
server oh that was a little rough oh i got the wrong power cable

00:16:30.320 --> 00:16:37.120
one moment please when i was living at Yvonne's house like

00:16:34.079 --> 00:16:38.560
with her parents yeah i had a GPU make

00:16:37.120 --> 00:16:44.320
that noise when i forgot to plug in the PCIe power yeah like an 800 and it made her dog

00:16:41.839 --> 00:16:50.399
throw up this thing is surprisingly quiet for a

00:16:47.040 --> 00:16:53.120
1u i mean i know it's idling but still

00:16:50.399 --> 00:16:56.399
i guess if it's not doing anything yeah well it's nice

00:16:54.399 --> 00:17:01.040
to not have to hear it just oh some of them the power supplies

00:16:58.560 --> 00:17:05.360
no never mind that's still not bad two 32 core processors

00:17:03.199 --> 00:17:09.360
all right we see 13 drives looks good we got our boot drive and our 12

00:17:07.600 --> 00:17:13.679
7 ish terabyte drives it's time to make our pool should we do realistic

00:17:12.079 --> 00:17:17.360
or should we do full send i actually don't think a stripe is going to be that

00:17:15.199 --> 00:17:22.319
much faster honestly we might be best to just do like two

00:17:19.600 --> 00:17:26.959
RAID-Z1s two RAID-Z1 vdevs would allow us to have two drive failures

00:17:24.720 --> 00:17:29.919
before we actually experienced any data loss and the way Jake's going to

00:17:28.079 --> 00:17:33.679
configure it is with two six-drive vdevs that we will then combine into a

00:17:32.000 --> 00:17:37.440
single pool all right what do what do you want to call this

00:17:35.200 --> 00:17:43.280
i am speed 69 tebibytes you know what it's even

00:17:39.919 --> 00:17:44.880
dot 84 like that's that's double 420
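
NOTE
A minimal sketch of the layout Jake describes — two six-drive RAID-Z1
vdevs in one pool — written as Python shelling out to zpool. The device
names and pool name are hypothetical, and TrueNAS would do this through
its UI instead:
  import subprocess
  drives = [f"/dev/nvme{i}n1" for i in range(12)]  # hypothetical names
  subprocess.run(["zpool", "create", "iamspeed",   # hypothetical pool name
                  "raidz1", *drives[:6],
                  "raidz1", *drives[6:]], check=True)
  # 10 of 12 drives hold data (one per vdev goes to parity); assuming
  # these are the 7.68 TB capacity tier, that's where 69-point-84 comes from:
  print(10 * 7.68e12 / 2**40)  # -> ~69.85 TiB raw, before overhead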

00:17:43.280 --> 00:17:49.039
you know we're going to make a couple tweaks here

00:17:46.960 --> 00:17:52.960
they've actually updated it so atime is off by default but atime it's like

00:17:51.200 --> 00:17:57.679
that it records the access time of the data you only need that for like very

00:17:54.720 --> 00:18:01.679
specific use cases or diagnostics i would think yeah but uh you don't want

00:18:00.240 --> 00:18:05.919
it it's not good for performance unless you actually need it uh we're gonna go

00:18:03.760 --> 00:18:11.039
from 128k to one meg because that's kind of closer to our use case of like video

00:18:08.720 --> 00:18:14.480
which is big files yeah if you were to host like a database or something where

00:18:12.640 --> 00:18:18.080
you have lots of random reads that are small like especially like a text-based

00:18:16.160 --> 00:18:22.960
database uh you would probably want a smaller record size but for us one meg

00:18:20.000 --> 00:18:27.360
is ideal 128 is the default for a reason yeah that's excellent for a mixed use

00:18:25.280 --> 00:18:32.080
case yeah okay and we want to do one more thing uh we're going to set the ARC

00:18:29.520 --> 00:18:36.960
that is the RAM cache of ZFS to just be metadata only if you use it for files as

00:18:34.960 --> 00:18:40.000
well when you have such fast backend storage you can actually lose

00:18:38.480 --> 00:18:43.760
performance yeah so setting it to metadata only gives us a little bit of

00:18:41.760 --> 00:18:48.160
acceleration from it but not the same kind that ARC would for hard drives
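
NOTE
The three tweaks just described, sketched as Python calling the zfs CLI.
The dataset name is hypothetical, and on TrueNAS these would normally be
set through the UI:
  import subprocess
  for prop in ("atime=off",               # skip access-time updates on reads
               "recordsize=1M",           # large records suit big video files
               "primarycache=metadata"):  # keep ARC to metadata only
      subprocess.run(["zfs", "set", prop, "iamspeed"], check=True)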

00:18:46.000 --> 00:18:52.640
so we're running an I/O depth of 32 which is somewhat unrealistic but two threads

00:18:50.320 --> 00:18:57.200
per NVMe so 24 threads total at a 128k block size you make the server

00:18:55.200 --> 00:19:02.400
go fast your way i'll make it go fast my way here we go ready
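
NOTE
Roughly the sequential test being kicked off here, sketched with fio.
Only the geometry — 24 jobs, queue depth 32, 128k blocks — comes from
the description above; the mountpoint, file size, and runtime are
hypothetical:
  import subprocess
  subprocess.run(["fio", "--name=seqread", "--directory=/mnt/iamspeed",
                  "--rw=read", "--bs=128k", "--iodepth=32", "--numjobs=24",
                  "--ioengine=libaio", "--size=8G", "--time_based",
                  "--runtime=60", "--group_reporting"], check=True)
  # swapping --rw=read for --rw=write gives the sequential write test
  # they run afterwards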

00:18:59.360 --> 00:19:05.120
did you just unplug the network oh uh

00:19:02.400 --> 00:19:07.440
maybe maybe but i did it really fast yeah i think you did just unplug the

00:19:06.400 --> 00:19:12.720
network cool well it's fine now 15 17 20.

00:19:11.600 --> 00:19:18.640
now we've done you know 18

00:19:15.840 --> 00:19:22.240
20 gigabytes a second on a ZFS pool before

00:19:19.679 --> 00:19:26.559
but what you have to consider now is that we've done that on servers that

00:19:24.000 --> 00:19:30.480
were generally double the thickness so in a cluster deployment where density

00:19:29.360 --> 00:19:35.679
is key you're able to get effectively double

00:19:32.720 --> 00:19:40.320
the performance of your drives by having two 1Us by adding all that compute

00:19:38.320 --> 00:19:44.320
that's the point of this and that's what made these ideal for our petabyte of

00:19:42.480 --> 00:19:49.520
flash project we're ramping up baby almost 30 gigs a second oh wow

00:19:46.960 --> 00:19:51.440
look at the CPU usage those cores are

00:19:50.240 --> 00:19:56.880
going i bet you if we switch our tests to a one meg block size

00:19:54.720 --> 00:20:02.559
uh leaving the array at one meg this is going to go even faster yeah there's 22

00:19:59.760 --> 00:20:06.480
20. see if it ramps up even higher five threads at a hundred percent

00:20:05.120 --> 00:20:10.000
well there's more than that if you look at it like realistically so this is a

00:20:08.320 --> 00:20:12.640
write test sequential write we're looking at around 20 gigabytes a second

00:20:11.760 --> 00:20:17.440
as well what that tells us is that we are still

00:20:15.039 --> 00:20:21.760
CPU limited because in theory these drives don't write as fast as they read

00:20:19.679 --> 00:20:25.120
these are a more read optimized data center drive that's actually really

00:20:23.840 --> 00:20:30.240
impressive considering that we're dealing with parity data here though and

00:20:27.120 --> 00:20:32.480
is that a fast bump the threads up a bit

00:20:30.240 --> 00:20:37.280
man i am excited to see what this thing can do when there's another five of them

00:20:35.520 --> 00:20:41.760
in a cluster yep okay so this is a random read 4k block

00:20:40.000 --> 00:20:46.400
size we're doing four threads per drive and a 64q depth this is not only going

00:20:44.320 --> 00:20:51.520
to be a petabyte of flash it's going to be the highest performance setup that i

00:20:49.360 --> 00:20:54.480
would ever see look at that oh dog crap that is

00:20:52.640 --> 00:20:59.039
oh that's a shame a hundred and fifty thousand IOPS individually

00:20:57.039 --> 00:21:02.480
these drives will do more than that millions they'll do a million each
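
NOTE
The random test as described — 4k blocks, four jobs per drive for 48
total, queue depth 64 — with the same hypothetical paths as before:
  import subprocess
  subprocess.run(["fio", "--name=randread", "--directory=/mnt/iamspeed",
                  "--rw=randread", "--bs=4k", "--iodepth=64", "--numjobs=48",
                  "--ioengine=libaio", "--size=8G", "--time_based",
                  "--runtime=60", "--group_reporting"], check=True)
  # --rw=randwrite gives the random write test; with a 1M recordsize,
  # every 4k write becomes a read-modify-write of a whole record, which
  # is a big part of why the write IOPS crater below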

00:21:00.720 --> 00:21:07.679
that's that one meg record size kind of hurting us sheesh look at our cores they're just

00:21:06.320 --> 00:21:13.360
pegged wow

00:21:10.000 --> 00:21:14.480
yep 47 threads at 100%

00:21:13.360 --> 00:21:20.159
right now poor thing and what about a random write

00:21:17.440 --> 00:21:26.480
poor drives we're just abusing them oh that's embarrassing 20,000 IOPS this

00:21:23.679 --> 00:21:29.440
is literally slower than a hard drive but no it's not a hard drive

00:21:28.080 --> 00:21:34.000
sequentially if we were doing 4k random writes to a

00:21:31.600 --> 00:21:37.600
hard drive it'd be way slower than this so that's something

00:21:36.159 --> 00:21:42.720
you got to keep in mind about these numbers it's not as simple as just

00:21:40.320 --> 00:21:45.840
megabytes a second the kind of data that you're hitting your storage device with

00:21:44.000 --> 00:21:49.360
makes an enormous difference really that brings us back to what was kind of the

00:21:47.360 --> 00:21:54.559
whole point of this video doesn't it yeah that servers

00:21:51.600 --> 00:21:58.799
have to be designed for the application that they're intended for and these are

00:21:56.400 --> 00:22:03.280
absolutely perfect for what we will be doing with them but not perfect for what

00:22:01.360 --> 00:22:07.120
we do with our regular servers here like we wouldn't replace New Whonnock with one

00:22:05.760 --> 00:22:12.400
of these that's the thing with software raid man random reads and writes are just

00:22:09.679 --> 00:22:15.840
not it but you know what is it our sponsor

00:22:13.919 --> 00:22:19.840
Manscaped the new Manscaped Ultra Premium Collection is an all-in-one skin

00:22:17.760 --> 00:22:23.520
and hair care kit for the everyday man and covers you from head to toe there's

00:22:21.919 --> 00:22:27.679
the two in one shampoo and conditioner their body wash with cologne scent

00:22:25.360 --> 00:22:33.039
hydrating body spray deodorant and a free gift moisturizing lip balm

00:22:31.360 --> 00:22:36.960
your man maintenance just got easier and best of all all Manscaped products in

00:22:34.960 --> 00:22:42.159
the Ultra Premium Collection are cruelty-free paraben-free and vegan visit

00:22:39.679 --> 00:22:47.280
manscaped.com/tech or click on the link below for 20% off and free shipping

00:22:45.520 --> 00:22:52.240
if you guys enjoyed this video go check out part one where we got into way more

00:22:49.600 --> 00:22:55.760
depth about the complete configuration including taking a close look at the

00:22:54.880 --> 00:23:02.400
eight GPU server that is going to act as the

00:22:58.880 --> 00:23:03.679
head controller for the six of these

00:23:02.400 --> 00:23:08.000
that we're gonna have stacked i wanna do a video like this on that server and

00:23:05.440 --> 00:23:12.400
like boot it into Windows just for feeds NVIDIA specifically told us not to mine

00:23:10.400 --> 00:23:15.919
on it let's just do it like could they hate us any more at this point
