100TB at over 1GB/s - The "Storinator" is back!
Linus Tech Tips
·2016-05-06
1,704 words · ~8 min read
0:00
way back before the Big Move I built a
0:04
100-plus-terabyte storage server to
0:08
replace the awful "store data on
0:11
disconnected drives on a shelf in the bathroom" system that we were rocking
0:16
before thanks Seagate for the awesome drives and thanks 45drives.com for the
0:20
rockin' personalized Storinator server but some of you may have noticed that I
0:25
never followed up on the performance testing that I promised to do on that
0:30
machine I was supposed to be showing off
0:33
1-gigabyte-per-second transfers with the
0:36
10-gigabit network setup what gives well
0:40
today we finally get the whole
0:51
story the MasterCase 5 by Cooler Master gives you the freedom to truly make your
0:56
mid-tower PC case your own with a variety of modular parts and accessories
1:00
check out the link in the video description to learn more so the short
1:04
version is this in spite of 45 Drives
1:07
telling me that they had customers with similar configs saturating a 10-gigabit link
1:12
or more I couldn't even get half of that
1:16
and it made no sense really I did a lot of tinkering with this box before
1:20
eventually deploying it different network cards different drive
1:24
configurations and finally got to the point where whether it was FreeNAS
1:29
hardware or PEBKAC I had to roll it out
1:33
because we needed to put our data somewhere and I was just going to have
1:37
to live with the results that I got I mean I know I know poor Linus only has
1:42
300 to 350 megabyte per second speeds to
1:46
his over 100 terabytes of safe storage
1:50
boohoo but this disrupted my plans for
1:53
our storage infrastructure in a bigger way than you might think in addition to
1:58
archiving old stuff to the server my intention was to have our daily-use NAS
2:04
the SSD one that you probably remember from this video doing nightly syncs or
2:09
even hourly checkpoints if we could get away with it so we'd have two full
2:13
copies of all of our mission-critical data so I wanted the magnetic NAS to be
2:19
fast enough to handle that and any
2:22
random data that our editors needed to read from it from old projects which we
2:27
were not able to do so while I've had
2:31
four months to diagnose this and ponder what could be wrong because it's had up
2:36
to 60 terabytes of important data on it
2:40
with nowhere else to offload that I've had no choice but to just limp along at
2:46
300 to 350 megabytes per second until
2:52
today Seagate sent us 35 of their new 8
2:57
terabyte Enterprise Capacity drives and no these are not the shingled-platter
3:02
archival ones these are rocking, capable of well in excess of 200
3:06
megabytes per second transfer speeds rated at 2 million hours mean time
3:11
between failure and with a 5-year warranty to back it up proper Enterprise
3:16
capacity drives so I immediately tore
3:19
them out of their packaging and began building pyramids no just kidding
3:24
well actually okay I did build pyramids but what I actually built with them
3:27
after the pyramids was two additional servers each to hold a copy of our 60
3:33
terabytes of data while I worked on the
3:36
Vault so one of those machines is actually eventually going to be a NAS
3:39
unit at my house and the other one is going to be an off-site backup server
3:44
for this puppy but each of those will get their own videos later so with the
3:49
data safely stored on a hardware RAID 6 and on a software Btrfs RAID 5 each
3:54
of those transfers took over a day by the way I wiped the FreeNAS box and began
4:00
trying things so first I tried six-drive
4:03
vdevs since that's a more optimal number for RAID-Z2 nope still shoddy transfer
4:09
speeds next I tried 10-drive vdevs no
4:13
difference again finally in desperation
4:16
I tried a 27-drive RAID 0 an
4:21
experimental class configuration that no one should trust to hold any data no
4:26
matter how amazing the drives are and
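As an aside from me, not from the video: the capacity trade-off between those layouts is simple arithmetic. A rough Python sketch, where the per-layout vdev counts are my assumption since the video only gives vdev widths:

```python
# Back-of-the-envelope math (mine, not from the video) for the pool layouts
# tried above, assuming 8 TB drives at roughly 200 MB/s each.

DRIVE_TB = 8
DRIVE_MBPS = 200  # approximate sequential speed per drive

def raidz2_usable(drives_per_vdev, num_vdevs):
    """RAID-Z2 keeps (n - 2) data drives per vdev; two go to parity."""
    return (drives_per_vdev - 2) * num_vdevs * DRIVE_TB

# 6-drive RAID-Z2 vdevs (4 vdevs of 6 drives = 24 drives, my assumption)
six_wide = raidz2_usable(6, 4)      # (6-2) * 4 * 8 = 128 TB usable

# 10-drive RAID-Z2 vdevs (2 vdevs of 10 drives, my assumption)
ten_wide = raidz2_usable(10, 2)     # (10-2) * 2 * 8 = 128 TB usable

# 27-drive stripe (RAID 0): all capacity, zero redundancy
stripe = 27 * DRIVE_TB              # 216 TB, one failure loses everything

# Even one 6-wide vdev should stream roughly (6-2) * 200 = 800 MB/s on paper,
# so raw disk speed was never a plausible explanation for the slow transfers.
print(six_wide, ten_wide, stripe)
```

The point of the sketch: every layout tried had far more raw disk bandwidth than the observed 300-350 MB/s, which is why the bottleneck had to be elsewhere.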
4:30
same thing which after talking to the folks at 45 Drives about my findings
4:35
revealed that the issue is probably a
4:38
software one because they've seen NFS shares just fly in a similar
4:43
configuration to mine which doesn't do me any good because this is a Windows
4:47
environment and we need SMB shares and
4:50
so I had to keep investigating because if I'm going to be running around saying
4:53
this NAS unit and these drives are capable of over a gigabyte per second of
4:57
transfer speed we use them here at Linus Media Group I mean I'm basically
5:01
endorsing the things it's not good enough for me for 45 Drives to see it in
5:06
their lab I need to see it so I've been
5:09
chatting a lot with the unRAID guys ever since they helped us do the Two Gamers
5:13
One CPU project which you should definitely check out if you haven't
5:16
already and they offered to spend some time configuring an experimental raid
5:22
5 Btrfs array in unRAID and
5:25
tuning both the network settings as well
5:28
as the SMB share settings so our initial
5:32
test on a vanilla unRAID server was frankly pretty ho-hum actually fairly
5:37
similar there's that poor SMB optimization outside of Windows
5:41
platforms rearing its ugly head again
5:44
some 4-kilobyte-packet and jumbo-frame tuning to the network card, tuning of
5:48
unRAID's networking configuration and boom
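The video doesn't show the actual settings Lime Tech applied; purely as an illustration of what jumbo-frame and Samba tuning tends to look like on a Linux server, something like this (option values and the interface name are my placeholders):

```ini
# Illustrative smb.conf [global] tuning only -- NOT the actual unRAID
# settings, which aren't shown in the video
[global]
    use sendfile = yes
    aio read size = 4096
    aio write size = 4096
    socket options = TCP_NODELAY SO_RCVBUF=262144 SO_SNDBUF=262144

# Pair this with jumbo frames on both ends of the 10GigE link, e.g.:
#   ip link set dev eth0 mtu 9000
```

Jumbo frames only help if every hop on the path (NICs and switch) is set to the same large MTU, which is part of why this kind of tuning is fiddly.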
5:51
that my friends is the cleanest 10 gigabit transfer that I've actually ever
5:57
seen now not a lot of Lime Tech
6:00
customers are running 10GigE but from
6:03
valuable R&D for down the road when that gear becomes more common but I mean even
6:08
then this is not the kind of config that most people will encounter even on
6:13
unRAID I actually don't intend to continue to run it like this uh Btrfs
6:18
RAID 5 and RAID 6 are both in the experimental stage but the good news
6:23
here is that what I realized after running the slow FreeNAS configuration for
6:28
so long was that generally speaking I don't need
6:33
more than the 200 to 220 megabyte per
6:36
second transfer speeds that my individual drives are capable of in a
6:41
normal unRAID array and that the only
6:44
thing that needs to be lightning fast performance-wise is the new footage and
6:49
projects that we offload to it relatively little of which is created on
6:53
a daily basis so we devised a new plan
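To put numbers on that realization (my arithmetic, not the video's): a 10-gigabit link carries about 1.25 gigabytes per second of raw payload, so a single drive at the top of the 200-220 MB/s range covers well under a fifth of it:

```python
# Rough link-vs-drive arithmetic (mine, not from the video).

LINK_GBPS = 10                      # 10GigE line rate, in gigabits per second
link_mbps = LINK_GBPS * 1000 / 8    # = 1250 MB/s of raw payload, pre-overhead

single_drive_mbps = 220             # top of the stated 200-220 MB/s range

# One drive covers less than a fifth of the link, so only the freshly
# offloaded footage actually needs a faster path than a single spindle.
fraction = single_drive_mbps / link_mbps
print(link_mbps, fraction)
```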
6:57
and to help us realize the new plan Kingston stepped up and offered to send
7:02
us eight of their E50 enterprise-grade 480 gig SSDs with power-loss protection
7:09
these drives will act as a 2-terabyte RAID
7:13
10 write cache that will be capable of the full 10-gigabit transfer rate for fast
7:19
updates throughout the day and that then flushes nightly to the hard drives when
7:25
no one is using them all of this can be completely transparent to the user so
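unRAID handles this cache-to-array migration with its built-in "mover"; as a generic stand-in for the idea (paths and schedule are hypothetical, not from the video), the nightly flush amounts to a scheduled job like:

```
# Hypothetical sketch of the nightly flush -- unRAID's "mover" performs
# roughly this job; paths and schedule here are placeholders.
# crontab entry: at 03:00 daily, migrate cached files to the hard-drive array
0 3 * * * rsync -a --remove-source-files /mnt/cache/ /mnt/array/
```

The key property is the one described above: writes land on fast SSDs during the day and drain to the slower array while no one is working, so users never see the slow tier.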
7:30
the only time we'll ever see sub 1 GB
7:33
per second transfers is when we're accessing cold data or when doing a
7:38
massive dump of over 2 terabytes at a
7:41
time another cool side note is that this might turn out to be a better way to
7:45
leverage the extra horsepower that this OP server is leaving on the table anyway
7:51
because she never touches more than about 20% CPU usage so I could take a
7:56
couple of cores and turn them into a network rendering box or game server or
8:01
something else and on the subject then
8:04
of our server being OP I guess that brings us to the conclusion it turns out
8:08
that the hardware is fine but SMB shares on
8:12
non Windows platforms take some tuning and optimization that if you're willing
8:17
to endure the dense documentation and condescending attitude of the FreeNAS
8:21
community you could probably achieve there but instead I ended up working
8:25
directly with Lime Tech to have it baked into an upcoming release of unRAID 6 and
8:28
I'm super happy with the new
8:32
config if you're building a mobile app and searching for a simple payment
8:36
solution you might want to check out Braintree with the Braintree v.zero SDK
8:41
with just one small snippet of code you can be all set up and ready in less
8:46
than 10 minutes to take online payments and if you're having any trouble they
8:50
have support staff ready to walk you through the process over the phone if
8:53
you need them their code supports Android iOS and JavaScript clients and
8:58
they have SDKs in seven programming languages and they make it easy to offer
9:03
multiple mobile payment types including PayPal Apple Pay Bitcoin Venmo cards and
9:10
more all with a simple integration they've got quick knowledgeable
9:14
developer support if you have any questions and to learn more all you got
9:17
to do is go over to braintreepayments.com/Linus and if you use that link you can
9:23
also get your first $50,000 in transactions with no fees
9:28
whatsoever so check it out today at the link in the video description so thanks
9:32
for watching guys if this video sucked you know what to do but if it was
9:36
awesome get subscribed hit the like button
9:42
or even consider supporting us directly by using our affiliate code to shop at
9:46
Amazon instructions for which are up here buying a cool shirt like this one
9:49
or with a direct monthly contribution through our forum it gets you a little contributor tag now that you're done
9:54
doing all that stuff you're probably wondering what to watch next so click that little button in the top right
9:58
corner to check out the ultimate showdown between AMD's 8-core CPU and
10:04
Intel's 8-core server CPU anyway it's complicated you guys should check it out
10:08
the video is there