WEBVTT

00:00:00.240 --> 00:00:03.280
i called it

00:00:03.360 --> 00:00:09.840
called what okay but i did though i mean sort of i'm

00:00:07.839 --> 00:00:13.679
pretty sure it was me and i'm also pretty sure it existed before that video

00:00:11.759 --> 00:00:21.600
even came out it doesn't matter the point is meet the SupremeRAID SR-1000

00:00:18.240 --> 00:00:23.760
it looks like an NVIDIA T1000

00:00:21.600 --> 00:00:29.519
workstation GPU in fact it even has the letters T1000 printed on it and the same

00:00:27.199 --> 00:00:32.719
mini DisplayPort ports are in there but they're blocked

00:00:30.960 --> 00:00:37.760
by solid metal

00:00:34.559 --> 00:00:39.760
that's because this GPU is not meant for

00:00:37.760 --> 00:00:43.680
graphics and before you say you know where this is going no it's not for

00:00:42.000 --> 00:00:50.480
cryptocurrency either so what the heck is it through some kind of

00:00:47.120 --> 00:00:53.760
software funkery GRAID is using this

00:00:50.480 --> 00:00:55.920
GPU to act as a freaking storage

00:00:53.760 --> 00:01:00.719
accelerator and if they're to be believed which i'm not sure if i do yet

00:00:58.559 --> 00:01:06.080
this thing with the right array of NVMe drives can supposedly sustain transfer

00:01:02.559 --> 00:01:08.000
speeds of over 100 gigabytes per second

00:01:06.080 --> 00:01:12.960
of sequential throughput holy sh

00:01:10.720 --> 00:01:19.920
is what you might say if i didn't segue to our sponsor KIOXIA their BG5 NVMe SSD

00:01:17.360 --> 00:01:25.360
brings PCIe gen 4 performance to an affordable price for systems and

00:01:22.240 --> 00:01:27.280
notebooks they even make a 2230 sized

00:01:25.360 --> 00:01:31.280
one so your PCs can be lighter and smaller than ever check them out at the

00:01:29.200 --> 00:01:36.479
link in the video description i have so many questions about this

00:01:34.320 --> 00:01:40.720
but before we can even start to answer them we need a little bit of background

00:01:38.479 --> 00:01:44.640
combining multiple storage devices has been a staple of computing for decades

00:01:42.880 --> 00:01:50.560
and generally falls under the umbrella of technologies that we call raid or

00:01:47.680 --> 00:01:54.640
redundant array of independent disks raid can serve a variety of purposes

00:01:53.200 --> 00:02:00.159
improving speed data protection capacity or usually some

00:01:57.840 --> 00:02:04.399
combination of all three of those compared to a single drive

00:02:02.000 --> 00:02:09.679
now traditionally high performance raid required dedicated co-processors

00:02:06.880 --> 00:02:13.440
typically found on hardware raid cards you would slot one of those bad boys

00:02:11.599 --> 00:02:18.160
into your motherboard connect all of your drives to it and it would handle

00:02:15.280 --> 00:02:22.640
both the high throughput of these many disk arrays as well as the parity

00:02:20.959 --> 00:02:27.599
calculations that are required by popular raid configurations like raid 5

00:02:25.360 --> 00:02:31.280
and raid 6. if you want to learn more we actually have a Techquickie on this

00:02:28.800 --> 00:02:34.720
subject from almost 10 years ago but
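
As an aside, the parity calculation mentioned above reduces, in the simplest RAID 5 case, to an XOR across the data blocks of a stripe. A minimal Python sketch for illustration only, not GRAID's or any card's actual implementation:

```python
# Minimal RAID 5 parity sketch: the parity block is the XOR of the data
# blocks in a stripe, so any single lost block can be rebuilt from the rest.
from functools import reduce

def parity(blocks):
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

data = [b"\x01\x02", b"\x04\x08", b"\x10\x20"]  # one stripe across 3 data drives
p = parity(data)                                 # what the parity drive would store

# "Lose" drive 1, then rebuild its block from the survivors plus parity.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == data[1]
```

Real raid 5 and raid 6 implementations also rotate parity across drives and use Reed-Solomon math for the second parity block, which is exactly the per-write work that used to live on dedicated raid silicon.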

00:02:32.400 --> 00:02:40.080
raid cards have a problem as we've transitioned from mechanical

00:02:36.800 --> 00:02:42.800
drives to solid state and then to NVMe

00:02:40.080 --> 00:02:46.319
storage devices have gotten so fast that raid cards haven't been able to keep

00:02:44.480 --> 00:02:50.480
pace turning them into a performance bottleneck so the current meta is to

00:02:48.239 --> 00:02:57.280
connect your storage devices directly to your CPU via PCI Express this improves

00:02:54.080 --> 00:03:00.080
both the throughput and latency but

00:02:57.280 --> 00:03:05.920
requires the CPU to handle those parity calculations and any other overhead this

00:03:02.800 --> 00:03:08.000
is called software raid and

00:03:05.920 --> 00:03:12.959
in some ways it's actually kind of a big step backward first of all CPUs are

00:03:10.879 --> 00:03:17.280
freaking expensive and in more ways than you might think a lot of enterprise

00:03:15.440 --> 00:03:22.480
software is licensed according to how many CPUs or how many cores are present

00:03:20.239 --> 00:03:27.280
in your server so you better believe that big businesses are all about

00:03:24.560 --> 00:03:31.280
squeezing the absolute most out of every box

00:03:28.159 --> 00:03:33.280
also CPUs are generalized processors i

00:03:31.280 --> 00:03:37.680
mean you can brute force it here's us hitting 20 gigabytes a second in

00:03:35.280 --> 00:03:43.519
software raid but the issue is that even with a 32 core EPYC processor we're

00:03:40.000 --> 00:03:46.080
looking at a lot of utilization here

00:03:43.519 --> 00:03:51.120
just to manage storage and if you compare that to the theoretical combined

00:03:48.159 --> 00:03:56.319
read speed of around 75 gigabytes a second for our 12 KIOXIA CD6 drives you

00:03:54.400 --> 00:04:00.799
can see that we were leaving a lot of performance on that table
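
That ~75 GB/s figure checks out against the per-drive spec. A quick sanity check, assuming the CD6-R's rated sequential read of roughly 6.2 GB/s per drive (a number taken from KIOXIA's public spec sheet, so treat it as an assumption here):

```python
# Sanity check on the "around 75 gigabytes a second" combined read figure.
drives = 12
per_drive_gbps = 6.2  # assumed: KIOXIA CD6-R rated sequential read, GB/s
combined = round(drives * per_drive_gbps, 1)
print(combined)  # 74.4
```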

00:03:58.239 --> 00:04:06.080
if i'm a server vendor that's too many wasted CPU cycles that my customers now

00:04:03.519 --> 00:04:09.840
can't allocate to something useful or can't rent out to their customers

00:04:08.879 --> 00:04:14.239
that is where GRAID comes into play

00:04:12.239 --> 00:04:21.440
the most obvious difference here right out of the gate is that there is no port

00:04:17.040 --> 00:04:22.479
to plug a drive into never mind 8 or 16

00:04:21.440 --> 00:04:27.600
drives instead the drives connect directly to

00:04:25.120 --> 00:04:31.759
the CPU's PCIe lanes just like they would with software raid so this server

00:04:30.160 --> 00:04:36.160
from Gigabyte handles all of that through a backplane here in the front of

00:04:33.440 --> 00:04:42.400
the chassis then the GRAID card just plugs into any available PCIe gen 4 slot

00:04:39.840 --> 00:04:46.160
we'll do that a little bit later and all the storage communication happens over

00:04:44.240 --> 00:04:51.199
the PCIe bus no direct connection between our

00:04:48.479 --> 00:04:56.479
raid card and our drives weird i guess we won't need any of our

00:04:53.280 --> 00:04:58.400
new cable ties from lttstore.com

00:04:56.479 --> 00:05:04.880
they're available in so many colors and you might be thinking gee Linus even at gen

00:05:01.360 --> 00:05:07.039
4 speeds an x16 PCIe slot can only push

00:05:04.880 --> 00:05:11.120
around 32 gigabytes a second in either direction how could this thing possibly

00:05:09.360 --> 00:05:15.600
do over 100 that's the special sauce
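
That ~32 GB/s ceiling is easy to verify: PCIe gen 4 signals at 16 GT/s per lane with 128b/130b line encoding, so:

```python
# PCIe gen 4: 16 GT/s per lane, 128b/130b encoding, 16 lanes.
transfers_gt = 16            # GT/s per lane
encoding = 128 / 130         # usable fraction after line encoding
lanes = 16
gbytes_per_lane = transfers_gt * encoding / 8  # GB/s, one direction
total = round(lanes * gbytes_per_lane, 1)
print(total)  # 31.5 -- the "around 32 gigabytes a second" per direction
```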

00:05:13.120 --> 00:05:20.240
none of the storage data actually goes through the card

00:05:17.280 --> 00:05:25.199
that's the old way of doing raid cards this card only handles the raid

00:05:22.400 --> 00:05:29.199
calculations and directing the system where to read and write from

00:05:27.120 --> 00:05:33.039
all of the actual data flow just goes directly between the drives and the

00:05:31.680 --> 00:05:38.240
system memory no man in the middle and it does all of

00:05:35.680 --> 00:05:42.160
this while using barely any CPU resources

00:05:39.919 --> 00:05:46.479
or so they claim what i think this means is that we could

00:05:44.000 --> 00:05:50.400
plug this GRAID card into any system and it would just work we are definitely

00:05:48.880 --> 00:05:53.759
going to try that later but for now we're going to stick to the validated

00:05:51.919 --> 00:06:01.919
server that Gigabyte sent over for performance testing our cpus are a pair

00:05:57.039 --> 00:06:04.960
of EPYC 75F3 monsters they're only 32

00:06:01.919 --> 00:06:06.720
core but they'll hit 280 watts of max

00:06:04.960 --> 00:06:11.600
power consumption and will boost to 4 gigahertz and then we paired these with

00:06:09.199 --> 00:06:17.199
some equally monstrous memory Micron sent over a metric whack ton of 3200 mega

00:06:14.960 --> 00:06:21.680
transfer per second ECC RAM giving us a total of one terabyte of system memory

00:06:20.240 --> 00:06:25.759
shouldn't be a bottleneck right i don't think so it should be fine
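
A rough check on why the memory shouldn't be a bottleneck, assuming DDR4-3200 held at full speed (as forced in the BIOS later) across both sockets' eight channels:

```python
# Aggregate memory bandwidth for dual EPYC with DDR4-3200 on all channels.
mt_per_s = 3200              # assumed DDR4-3200, kept at full speed at 2 DPC
bytes_per_transfer = 8       # 64-bit memory channel
channels = 8 * 2             # 8 channels per socket, 2 sockets
per_channel_gbps = mt_per_s * bytes_per_transfer / 1000  # 25.6 GB/s
total_gbps = round(channels * per_channel_gbps, 1)
print(total_gbps)  # 409.6 -- comfortably above a ~100 GB/s storage target
```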

00:06:23.360 --> 00:06:29.759
then for our drives we're using KIOXIA CD6-Rs

00:06:27.360 --> 00:06:34.000
they're a well-balanced enterprise gen 4 NVMe drive and with 12 of them in here

00:06:32.319 --> 00:06:38.240
we should be looking at raw sequential performance of around 75 gigabytes a

00:06:36.639 --> 00:06:41.840
second before we can set up an array though we need to install the GRAID

00:06:39.919 --> 00:06:45.440
SupremeRAID software and also we need to like finish putting all the RAM in

00:06:43.199 --> 00:06:50.319
this system it still blows me away how low profile of a cooler they can use for

00:06:47.680 --> 00:06:54.960
these 280 watt cpus but that's the thing is under this giant heat spreader all

00:06:52.560 --> 00:06:58.560
the dies right it's a chiplet design so they're actually like freaking spread

00:06:56.800 --> 00:07:01.120
out like it's it's huge it's a lot of surface area to transfer that heat

00:07:00.160 --> 00:07:05.440
brother these are these are 80 watt fans consume

00:07:04.160 --> 00:07:10.240
80 watts that's uh each 12 volts 7 amp

00:07:09.120 --> 00:07:12.800
i think that might be part of the equation

00:07:12.880 --> 00:07:21.199
wow that is a heavy vapor chamber i love it

00:07:17.680 --> 00:07:23.840
for how small this thing is like

00:07:21.199 --> 00:07:26.880
it's a heavy boy oh my god this is a lot of memory

00:07:24.880 --> 00:07:30.960
it's gorgeous a freaking terabyte

00:07:29.280 --> 00:07:33.840
why are you whispering because it's a terabyte of memory i don't want to wake it

00:07:32.319 --> 00:07:38.319
up as of recording the video this only runs

00:07:36.080 --> 00:07:41.360
on Linux server operating systems so we're going to be firing it up with do

00:07:39.840 --> 00:07:43.520
we have an SSD in here or something back here

00:07:44.560 --> 00:07:52.560
yeah just a little SATA SSD there cool so we're gonna be running Ubuntu Server

00:07:49.000 --> 00:07:54.080
20.04 LTS uh Jake has already gone ahead

00:07:52.560 --> 00:07:58.479
and installed that onto our boot drive as well as the required NVIDIA drivers

00:07:56.639 --> 00:08:04.000
and SupremeRAID itself this is cool i am liking this like super

00:08:01.199 --> 00:08:07.599
over engineered airflow director here pretty sure they just ripped it off of

00:08:05.280 --> 00:08:11.199
a Dell but oh all right

00:08:09.120 --> 00:08:14.400
that goes hey yeah that's not even close to full send yet

00:08:13.280 --> 00:08:19.039
either the whole process was surprisingly easy i literally just had to copy paste some commands from their

00:08:17.199 --> 00:08:22.960
user guide and it looks to be working so i think we can make a raid now

00:08:21.120 --> 00:08:27.840
should we start with raid zero we'd obviously have to start with raid zero

00:08:24.960 --> 00:08:33.839
raid zero is not really a great use case for this because raid zero has no parity

00:08:30.800 --> 00:08:36.640
data to calculate it's just taking each

00:08:33.839 --> 00:08:42.080
bit writing it to the next drive in the sequence and attempting to multiply your

00:08:39.200 --> 00:08:46.480
capacity and your speed you get no extra resiliency whatsoever in fact it's worse

00:08:44.320 --> 00:08:50.240
because if any one drive fails all the data is

00:08:47.680 --> 00:08:54.000
gone oh hey look at that you can totally use this with SATA and SAS drives too i
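
The raid zero layout just described can be sketched in a few lines of Python (purely illustrative):

```python
# RAID 0 sketch: blocks go round-robin across the drives, so capacity and
# speed add up, but losing any one drive destroys every file in the array.
def stripe(blocks, n_drives):
    drives = [[] for _ in range(n_drives)]
    for i, block in enumerate(blocks):
        drives[i % n_drives].append(block)
    return drives

layout = stripe(list(range(8)), 4)   # 8 blocks across 4 drives
print(layout)  # [[0, 4], [1, 5], [2, 6], [3, 7]]
# Drop any one sub-list and a quarter of every file is unrecoverable.
```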

00:08:52.640 --> 00:08:58.560
don't know if you'd want to because i think the limit is 32 drives per

00:08:56.640 --> 00:09:02.959
controller thing i don't know what if you could put multiple i

00:09:00.560 --> 00:09:06.880
guess why why yeah good point why why why it's for NVMe yes okay so here let's

00:09:05.120 --> 00:09:11.920
see list NVMe drive let's see if they all show up so we got 7.7

00:09:09.760 --> 00:09:16.720
it's probably tebibytes or what capacity are those drives 7.68 oh so

00:09:14.720 --> 00:09:19.279
then that is terabytes cool there's a bit of an

00:09:17.839 --> 00:09:22.399
it's interesting because it's very similar to like zfs but there's kind of

00:09:21.519 --> 00:09:27.680
a structure so you start with your physical drives you know you got your NVMe drives you

00:09:25.600 --> 00:09:30.880
can also connect NVMe over Fabrics drives which is pretty cool and have the

00:09:29.120 --> 00:09:34.880
controller in this and your drives and some other JBOD pretty sick some are

00:09:33.200 --> 00:09:37.600
across we're not going to do that yet there is still a limit of 32 drives so

00:09:36.480 --> 00:09:42.320
it's not like you're going to connect 200 right but 32 with that done you can

00:09:40.880 --> 00:09:47.200
go ahead and make your drive group which is kind of like a zfs zpool or just

00:09:45.200 --> 00:09:51.440
right it's like your array you can pick your raid level at this stage uh you can

00:09:50.000 --> 00:09:54.720
have multiple drive groups i think you can have four um so you could have like

00:09:53.440 --> 00:10:00.480
say you had 16 drives you could have like four groups of four sure those would all be discrete

00:09:58.320 --> 00:10:05.120
unlike zfs they're not it's not like having four vdevs that you then combine

00:10:02.880 --> 00:10:10.000
into a pool yeah it's like having four pools yeah let's go back a bit and

00:10:06.959 --> 00:10:11.839
actually create our physical drives

00:10:10.000 --> 00:10:16.320
it says create that's the command but really it's like take me take me over

00:10:14.399 --> 00:10:19.839
we're unbinding it from the operating system and giving it to the GRAID

00:10:18.640 --> 00:10:26.000
controller there's a cool little feature here you can go like /dev/nvme0-11 oh

00:10:24.240 --> 00:10:30.240
that's cool and it makes them all you don't have to do it one by one there you

00:10:27.839 --> 00:10:35.040
go made them all successfully and then we can just check the list to see if

00:10:31.839 --> 00:10:36.399
they're all there ah cool yes this is

00:10:35.040 --> 00:10:41.279
pretty solid documentation from what i've seen so far compared to the documentation for Microsoft Storage

00:10:39.440 --> 00:10:45.760
Spaces oh god compared to the documentation for anything Microsoft all

00:10:43.839 --> 00:10:50.079
right create drive group uh we're gonna do raid and then

00:10:48.240 --> 00:10:54.800
pd id so that's a physical disk so we'll go zero to eleven

00:10:52.480 --> 00:11:01.320
it's doing stuff let's see if we can see it now g-raid

00:10:56.800 --> 00:11:01.320
list drive_group

00:11:02.320 --> 00:11:08.480
92 terabytes that's not bad just like that hey yeah that's fast a little bit

00:11:06.480 --> 00:11:11.680
of an interesting tidbit here we don't have a usable file system yet this is

00:11:10.720 --> 00:11:16.720
just the array it's not like ZFS where there's a file system built in instead

00:11:15.279 --> 00:11:20.640
we actually have to make a virtual disk so you can make

00:11:19.120 --> 00:11:24.000
a number of them you could have like a 10 terabyte one you could have like a 50

00:11:22.480 --> 00:11:30.000
terabyte one sure we're just gonna make one that's the whole thing this is just block level storage yeah so we'll make

00:11:28.160 --> 00:11:32.640
big virtual disks that's the full size and then we'll have to put a file system

00:11:31.440 --> 00:11:38.160
on it as well so let's do that create virtual drive

00:11:36.240 --> 00:11:42.480
okay so our drive group is zero so we'll say zero and then i'm not gonna specify

00:11:40.560 --> 00:11:45.760
a size and i think yeah that'll make a full-size one

00:11:43.760 --> 00:11:50.959
and there we go we have our virtual drive it says it's optimal and it's 92

00:11:48.560 --> 00:11:54.160
terabytes okay now i got to make a file system on

00:11:52.480 --> 00:11:57.839
it i already like like made these commands so i can just copy paste them

00:11:56.160 --> 00:12:01.839
got the file system working i think it was a little bit angry about how i had

00:12:00.079 --> 00:12:07.440
previously had a file system on these disks and just deleted it made a new one

00:12:04.320 --> 00:12:09.920
anyways i deleted everything rebooted it

00:12:07.440 --> 00:12:13.680
created it again now it's happy i also realized that because we now have two

00:12:12.000 --> 00:12:17.920
DIMMs per channel previously when i was just kind of tinkering with this i just

00:12:15.360 --> 00:12:21.760
had 16 sticks in which is one per channel now we have twice that

00:12:20.079 --> 00:12:27.040
usually that means your memory speed is going to go down fortunately on the Gigabyte servers you can force it to be

00:12:25.200 --> 00:12:31.920
full speed captain we're doing 32 queue depth one meg sequential

00:12:29.600 --> 00:12:35.519
read and that's with 24 threads so two per NVMe drive that's usually pretty

00:12:36.839 --> 00:12:40.560
standard oh i hear it

00:12:41.600 --> 00:12:47.920
holy it is straight up just immediately okay

00:12:46.240 --> 00:12:52.000
it did go down a bit it's it's still twice as fast as what

00:12:50.079 --> 00:12:56.720
we've seen with CPU raid though yeah look at why it's fast the CPU usage is

00:12:54.320 --> 00:13:00.320
only like and it's barely touching the CPU three four percent that might even

00:12:58.560 --> 00:13:05.040
just be like fio but actually it says it's system stuff so it probably isn't

00:13:02.399 --> 00:13:08.800
oh that's crazy it looks like we've looked you can hear

00:13:06.720 --> 00:13:12.320
the fans going though yeah it knows it's doing something it looks like we've

00:13:10.480 --> 00:13:16.160
leveled off around 40 gibibytes a second which is yeah it's basically twice what

00:13:14.959 --> 00:13:21.760
you could get with ZFS pretty much that's insane what's going to be more

00:13:19.600 --> 00:13:27.360
interesting is the writes because in like traditional raid 5 it's really CPU

00:13:24.720 --> 00:13:31.200
intensive to write you'll get like a 10th the performance of your read

00:13:28.880 --> 00:13:35.040
speed so if that's still good that will be this is raid 0 anyway

00:13:33.519 --> 00:13:40.399
though yeah so do we even care should we just switch to raid five switch to raid five okay okay

00:13:39.040 --> 00:13:43.920
to be clear there are potential disadvantages of

00:13:42.240 --> 00:13:48.240
going with the solution like this one of the great things about zfs is its

00:13:46.079 --> 00:13:52.720
resiliency we have actually made significant progress big shout out

00:13:50.639 --> 00:13:57.519
to Wendell from Level1Techs by the way on the data restoration project from our

00:13:55.440 --> 00:14:01.519
failed zfs pools and uh we're gonna have an update for

00:13:58.959 --> 00:14:05.760
you guys we're down from like 169 million file errors to like six thousand

00:14:04.320 --> 00:14:10.480
so make sure you're subscribed so you don't miss that and i cannot necessarily

00:14:08.240 --> 00:14:13.920
say the same thing about whatever this is doing well the other thing is we're

00:14:11.760 --> 00:14:18.160
also locked into like their ecosystem now this is very like proprietary

00:14:15.600 --> 00:14:22.959
software we were able to take these zfs vdevs and pools and just like

00:14:20.800 --> 00:14:27.360
import them into TrueNAS yeah even the ones i just did delta one and two those

00:14:25.199 --> 00:14:33.040
pools are from like 2015 new hardware new software new version of zfs

00:14:30.720 --> 00:14:36.880
i imported those 2016 pools it took like literally 30 minutes for it to import

00:14:34.639 --> 00:14:41.760
which is a scary 30 minutes but it just it did it do you want to do raid 5 or

00:14:38.320 --> 00:14:43.680
raid 10. raid 5 raid 10 is lame i mean

00:14:41.760 --> 00:14:47.360
it might be lame but it's fast well no it's not okay i shouldn't say it's lame

00:14:45.199 --> 00:14:52.800
there's a time and a place for raid 10. let me walk you through with raid 10 i

00:14:49.839 --> 00:14:56.560
get only 6 drives worth of capacity that's it the rest is all redundant

00:14:55.120 --> 00:15:01.040
which is great because all these could fail and i'd still have all my data but

00:14:58.880 --> 00:15:07.279
it's bad because that's expensive with raid 5 i get 11 drives worth of data

00:15:04.800 --> 00:15:11.199
but i can only sustain one failure on these kinds of solid state enterprise

00:15:09.440 --> 00:15:15.839
class devices though they usually all fail at the same time
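
The capacity trade-off just described works out like this for 12 of the 7.68 TB drives, and it also matches the "92 terabytes" the drive group reported earlier:

```python
# Usable capacity for 12 x 7.68 TB drives under the raid levels discussed.
n, size_tb = 12, 7.68
raid0  = round(n * size_tb, 2)        # stripe everything
raid10 = round(n // 2 * size_tb, 2)   # half the drives are mirrors
raid5  = round((n - 1) * size_tb, 2)  # one drive's worth goes to parity
print(raid0, raid10, raid5)  # 92.16 46.08 84.48
```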

00:15:13.760 --> 00:15:19.199
should be okay yeah you're going to want to have a backup you got a backup that's

00:15:17.920 --> 00:15:24.480
for i mean what's our backup within about 30 seconds or two minutes or something like that anyway yeah yeah

00:15:22.720 --> 00:15:31.600
we're on our raid 5 array now i'm going to be doing basically the same test

00:15:27.040 --> 00:15:34.000
wow same 32 queue depth one meg read sequential

00:15:31.600 --> 00:15:36.959
whoa watch my whole face you see that it's only 18 gigabytes a second

00:15:35.680 --> 00:15:40.800
that's not that bad wait what really 18 gigs a second what's

00:15:39.279 --> 00:15:47.519
our CPU usage 1.7 1.9 okay wow but wait now it's

00:15:45.199 --> 00:15:51.279
getting faster wait what happened did you know that was gonna happen yeah

00:15:49.199 --> 00:15:56.959
oh well i watched it happen earlier it does like two steps okay so it's written

00:15:54.800 --> 00:16:00.240
you know like 30 gigs and then it starts going

00:15:59.040 --> 00:16:07.040
and then there'll be kind of one more bump where it'll go like above 30 gigabytes a second wow

00:16:04.000 --> 00:16:09.600
that's freaking crazy with raid 5.

00:16:07.040 --> 00:16:13.440
still at 2.5 in CPU usage there we go 35 gigs a

00:16:12.639 --> 00:16:18.639
second we're at almost three percent CPU now remember this is a read test so it'll be

00:16:17.360 --> 00:16:22.320
interesting to see what the write is like yeah that's that's pretty quick 35.

00:16:20.720 --> 00:16:26.000
that's really fast 35 gigabytes a second should we switch to this we might

00:16:24.160 --> 00:16:30.720
switch to this i want to try to put it in the Whonnock server because then we can

00:16:27.839 --> 00:16:34.320
use the same SSDs right yeah that would be even faster but i'd have

00:16:32.320 --> 00:16:40.079
to do that on a weekend that's Jake asking for overtime on camera

00:16:37.360 --> 00:16:44.480
yeah no i don't want to work i have zero desire to do that okay let's

00:16:43.040 --> 00:16:48.560
try writes now because that's really where we're gonna see the difference wow

00:16:46.320 --> 00:16:53.440
CPU usage is like 20 percent that's pretty hefty yeah

00:16:51.120 --> 00:16:57.440
and we're only doing five gigs a second that's not actually that great less

00:16:55.920 --> 00:17:03.040
impressive so CPU usage is actually going down

00:17:00.800 --> 00:17:08.559
while performance goes up yeah it's more like 12 CPU right now 11

00:17:07.280 --> 00:17:13.360
10. it's like it takes a second to like

00:17:11.679 --> 00:17:19.679
what am i doing where am i where am i putting stuff yeah like it needs a ramp

00:17:15.360 --> 00:17:20.559
up he's over there i'm coming back

00:17:19.679 --> 00:17:26.959
okay seems like it's leveled off around nine gigabytes a second with around nine to

00:17:24.160 --> 00:17:30.000
ten percent CPU usage so pretty good CPU usage

00:17:27.919 --> 00:17:34.000
still very strange i mean it's very acceptable

00:17:32.400 --> 00:17:39.200
in terms of performance i mean that's gibibytes not gigabytes

00:17:37.520 --> 00:17:42.160
so it's probably closer to about 10 gigabytes a second

00:17:40.640 --> 00:17:46.400
should we try like a random test it'd be interesting to see how many iops because that's another thing software raid will

00:17:45.039 --> 00:17:51.440
struggle with yeah let's do that look at this guys like those wide stance here man

00:17:50.160 --> 00:17:57.039
spreading just trying to be a little more ergonomic here this is gonna be 4k

00:17:54.480 --> 00:18:00.480
random read we're doing 48 threads a little bit more and a q depth of 64 this

00:17:59.200 --> 00:18:04.960
time okay let's see oh that's that's

00:18:03.200 --> 00:18:10.320
usage though this is like the absolute

00:18:07.440 --> 00:18:15.200
most punishing test you can do and we're pulling off

00:18:11.600 --> 00:18:18.400
six and a half million iops on a raid 5

00:18:15.200 --> 00:18:20.799
and actually 25 gigs a second at 4k

00:18:18.400 --> 00:18:25.440
block size holy that's insane

00:18:22.880 --> 00:18:30.320
oh my god so the theoretical performance of these drives would put us at around

00:18:27.440 --> 00:18:34.000
12 million iops like raw to each of them that's

00:18:31.320 --> 00:18:36.880
insane pretty good if we were on an Intel based system we might actually be

00:18:35.360 --> 00:18:42.799
able to get a little bit more uh or with Intel drives but yeah

00:18:39.760 --> 00:18:43.679
dang that CPU usage is staying

00:18:42.799 --> 00:18:49.200
high i can tell you just from the temperature of the backplate though that GPU is at

00:18:47.280 --> 00:18:53.919
work we can look at it actually so it's at 70 degrees

00:18:52.240 --> 00:18:59.440
and considering the kind of airflow going over it right now the interesting

00:18:56.960 --> 00:19:03.120
thing is the GPU usage just it stays at 100 even if you're not using the array

00:19:01.360 --> 00:19:08.320
it's kind of weird i wonder if that's like uh

00:19:04.480 --> 00:19:10.720
look the fan is spinning it's 55

00:19:08.320 --> 00:19:14.559
it just has like ah it's like you you have like a little

00:19:12.160 --> 00:19:19.280
desk fan inside of like a like a hurricane yeah a wind tunnel just going

00:19:16.960 --> 00:19:23.760
past it oh thank you for the cooling yeah

00:19:22.080 --> 00:19:27.360
let's try rights same specs everything else let's just

00:19:25.280 --> 00:19:31.600
give it a sec to no no sec

00:19:28.799 --> 00:19:36.640
no sex no i mean i mean that's a camera

00:19:35.120 --> 00:19:40.960
and that's the kind of operation we're in that's one

00:19:38.320 --> 00:19:46.880
yeah it's writing though that's way harder so doing one million iops writing

00:19:44.000 --> 00:19:52.080
so these drives are only rated for 85k random write well that's really wow

00:19:50.000 --> 00:19:58.480
so that's that's actually almost perfect scaling yeah that's freaking incredible

00:19:55.360 --> 00:20:01.360
so if we do 85 times 12

00:19:58.480 --> 00:20:05.520
it's almost perfect scaling this that's probably the most impressive

00:20:03.039 --> 00:20:11.520
test we've seen so far then yeah that's crazy
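
The "85 times 12" arithmetic, using the 85k random-write rating quoted above for these drives:

```python
# Per-drive rated 4K random write IOPS, scaled across the 12-drive array.
per_drive_iops = 85_000          # rating quoted in the video for these drives
theoretical = per_drive_iops * 12
measured = 1_000_000             # roughly what the array is sustaining
print(theoretical)               # 1020000
print(round(measured / theoretical, 2))  # 0.98 -> "almost perfect scaling"
```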

00:20:08.400 --> 00:20:13.679
what's cool about the writes still

00:20:11.520 --> 00:20:17.280
maxing out these drives though is that because people are actively editing off

00:20:15.840 --> 00:20:21.679
of these drives while you are dumping copious amounts of

00:20:19.520 --> 00:20:26.080
data onto them this could make a huge difference to

00:20:23.520 --> 00:20:30.080
footage ingest this is freaking crazy we're still gonna

00:20:27.840 --> 00:20:35.760
run into a huge bottleneck that is Samba but maybe once they have SMB Direct on

00:20:32.080 --> 00:20:37.600
Linux like RDMA support

00:20:35.760 --> 00:20:41.120
we kind of have to deploy this yeah i think so

00:20:39.280 --> 00:20:44.720
let's find out if we can oh yeah okay so that's why the bench is

00:20:43.440 --> 00:20:49.679
here i got a Threadripper bench here we're gonna just put that card in it i got us a little NVMe

00:20:48.799 --> 00:20:55.679
drive we're kind of clashing brands here this is like wearing Adidas and Nike in the

00:20:53.039 --> 00:20:59.760
same outfit we got our our Liqid drive this has got four NVMe drives on it and

00:20:57.600 --> 00:21:03.840
then just like a little PLX switch so we'll put that in there and then we can

00:21:01.360 --> 00:21:08.559
raid those four drives okay it's even less sophisticated than i thought yeah

00:21:06.080 --> 00:21:11.120
it's just that's not on a normal card there's no like it's just this piece of

00:21:10.159 --> 00:21:16.720
metal so then they just

00:21:13.280 --> 00:21:18.799
probably sourced a random PCB it looks

00:21:16.720 --> 00:21:23.360
the same as the regular NVIDIA one just because like these slots are the same

00:21:20.559 --> 00:21:27.039
but there's no cutouts yeah okay sure i'm gonna go get a cooling fan because

00:21:24.960 --> 00:21:31.919
that looks awful which one the one in there oh he's fine

00:21:29.440 --> 00:21:36.159
no he's not fine Jake he'll be all right

00:21:33.280 --> 00:21:40.240
he's a good guy so let's see describe license and see if our license is still

00:21:37.840 --> 00:21:45.679
valid license is still valid well that's awesome interesting

00:21:42.720 --> 00:21:48.480
okay yeah it's these are NVMes interesting

00:21:47.360 --> 00:21:52.799
oh these don't support 4k block size

00:21:51.919 --> 00:21:58.960
oh do we do we think this is all we really needed to know it boots it runs

00:21:57.120 --> 00:22:04.080
it probably works let's just boot into Windows and we'll get the last the last

00:22:01.600 --> 00:22:09.520
answer holy sh it's there Linus what NVIDIA T1000 in device manager it's just

00:22:06.960 --> 00:22:13.200
a freaking GPU i just want to see if this BIOS version matches any of the

00:22:11.600 --> 00:22:18.559
BIOS versions in the TechPowerUp database because if it does chances are oh

00:22:17.679 --> 00:22:26.720
hello uh yeah i'm pretty sure that just worked

00:22:22.480 --> 00:22:28.799
it says display connected no way oh my god it

00:22:26.720 --> 00:22:32.400
just works it's a graphics card well i mean we knew it was a graphics

00:22:30.559 --> 00:22:37.039
card it's a functional display outputting graphics card

00:22:35.440 --> 00:22:40.400
okay i got to see this BIOS thing now it's a PNY T1000 so they must be who

00:22:39.280 --> 00:22:47.760
the uh PNY is the exclusive manufacturer of

00:22:42.640 --> 00:22:49.440
Quadro or excuse me NVIDIA RTX

00:22:47.760 --> 00:22:52.960
works it's exactly the same in every other metric

00:22:50.799 --> 00:22:57.679
GPU accelerated raid what world are we even living in

00:22:55.039 --> 00:23:02.640
it's just a regular ass GPU and it's not even a crazy powerful one like this is a

00:23:00.080 --> 00:23:06.960
basically what like a 1650 yeah something like that literally a 1650

00:23:04.640 --> 00:23:11.360
same same silicon basically so what can we just run can we just run GRAID on

00:23:09.600 --> 00:23:15.559
like freaking like an A6000

00:23:15.760 --> 00:23:22.159
so maybe it just completely doesn't care what GPU should we put a different GPU

00:23:19.919 --> 00:23:26.240
in that should i go get a GPU there's got to be some ballad i'm going to get a

00:23:24.080 --> 00:23:29.840
GPU okay well what i'll be back do we have an a anything i'll just get like a

00:23:28.080 --> 00:23:35.200
Quadro or something or a T would be better Turing excuse me jesus

00:23:33.919 --> 00:23:39.360
it took a little while to generate the rendering kernels but uh

00:23:36.960 --> 00:23:43.760
it blenders oh i think um i found something it's pretty fast too oh my god

00:23:42.159 --> 00:23:46.159
are we going to have GRAID check that license

00:23:46.559 --> 00:23:51.440
so let's see let's first see if

00:23:49.600 --> 00:23:54.799
the server is okay it's not running on the GPU you see

00:23:53.520 --> 00:23:59.600
it would it would show the running process so let's see it might not work

00:23:57.679 --> 00:24:05.360
i don't think it's going to work bummer that would be so funny

00:24:03.039 --> 00:24:11.360
i think the license is done per GPU oh i wonder if it

00:24:09.200 --> 00:24:15.120
as part of applying the license that binds it to it

00:24:12.960 --> 00:24:18.559
it's relatively unsophisticated it makes you kind of wonder why they would even

00:24:16.640 --> 00:24:22.880
bother at that point but money no no i don't mean that like

00:24:20.720 --> 00:24:26.960
licensing is a waste of time i just mean like it's relatively unsophisticated if

00:24:24.799 --> 00:24:31.840
you wanted to spoof that yeah you can do it yeah yeah

00:24:28.240 --> 00:24:33.919
huh oh i'm disappointed i wanted to like

00:24:31.840 --> 00:24:37.679
throw eight times the compute at it and see what it would do

00:24:36.000 --> 00:24:41.279
oh well to be fair the game is not launching watch GRAID reach out to us

00:24:40.000 --> 00:24:46.320
after this and be like yeah we can hook you up with that and there we go continue campaign sure

00:24:44.720 --> 00:24:51.440
five percent how do i play this uh with a controller this is a good game

00:24:49.520 --> 00:24:55.279
you should totally play it though it's really there's actually no way to

00:24:52.799 --> 00:24:58.320
play with the keyboard i don't know it kind of looks like i like wasd or

00:24:57.279 --> 00:25:04.559
like shift god damn it all right i'm glad i told Rocket League to keep downloading

00:25:02.000 --> 00:25:07.279
this doesn't launch either what the hell yeah the T1000 is behaving kind of

00:25:06.000 --> 00:25:12.799
weird but it can game we launched Broforce it's fine and you know what else is fine our sponsor

00:25:11.200 --> 00:25:17.039
thanks to TELUS for sponsoring today's video we've done a ton of upgrades to my

00:25:15.039 --> 00:25:22.320
house over the past few months and probably the most important one to me is

00:25:19.279 --> 00:25:24.559
our new TELUS PureFibre X connection

00:25:22.320 --> 00:25:28.799
TELUS PureFibre X comes with TELUS's Wi-Fi 6 access point and can get you

00:25:26.720 --> 00:25:33.679
download and upload speeds of up to two and a half gigabit per second that is

00:25:30.960 --> 00:25:37.600
the fastest residential freaking internet speeds that you can get in

00:25:35.279 --> 00:25:41.919
western canada and is perfect for multiple devices and simultaneous

00:25:39.760 --> 00:25:47.520
streaming with the new consoles out it means that you can download a new 50

00:25:44.480 --> 00:25:51.600
Gigabyte game in less than three minutes

00:25:47.520 --> 00:25:53.840
or a 5 Gigabyte 1080p movie in just 16

00:25:51.600 --> 00:25:57.760
seconds assuming that the connection on the other side can keep up with you

00:25:55.760 --> 00:26:01.760
you'll also get to enjoy an upload speed that's up to 25 times faster than

00:26:00.240 --> 00:26:05.679
competitors which means that your streams will be crystal clear i mean you

00:26:03.840 --> 00:26:10.960
could be streaming in like freaking 8k at that point so get unparalleled speed

00:26:08.159 --> 00:26:16.240
on Canada's fastest internet technology with TELUS PureFibre X by going to

00:26:12.960 --> 00:26:18.159
telus.com/purefibrex

00:26:16.240 --> 00:26:22.480
if you guys enjoyed this video this little SSD from Liqid here is hardly

00:26:19.919 --> 00:26:28.240
the most potent we built a server using five i think of their like way bigger

00:26:25.600 --> 00:26:34.799
eight SSD Honey Badgers it's called the Badger Den and it's freaking amazing

00:26:31.600 --> 00:26:34.799
that's crazy
