1
00:00:00,200 --> 00:00:07,000
Computer problems are a fact of life. And sometimes the fix is as simple as just turning it off

2
00:00:07,000 --> 00:00:11,280
and turning it back on again, but other times it's not.

3
00:00:11,280 --> 00:00:15,440
And when the system you're talking about is running an air traffic control system,

4
00:00:15,440 --> 00:00:19,120
controlling a bunch of ATMs, or say routing 911 calls,

5
00:00:19,120 --> 00:00:22,440
keeping them up and running can be a matter of life and death.

6
00:00:22,440 --> 00:00:25,840
Now, the stakes aren't nearly as high for us,

7
00:00:25,840 --> 00:00:30,360
but this server here runs multiple apps that we rely on every day,

8
00:00:30,360 --> 00:00:35,520
accelerates our game downloads with Steam caching, and it runs our DNS.

9
00:00:35,520 --> 00:00:39,760
If that service goes down, it breaks literally everyone in the company's internet,

10
00:00:39,760 --> 00:00:44,480
which, my boss informs me, isn't great. So how do we make it more reliable?

11
00:00:44,480 --> 00:00:48,480
It's already a server. We build more servers.

12
00:00:48,480 --> 00:00:52,440
And what's really cool about this is everything we're about to show you,

13
00:00:52,440 --> 00:00:58,000
courtesy of Intel, who sponsored this video and sent over their new Emerald Rapids Xeon CPUs

14
00:00:58,000 --> 00:01:02,760
can be done on nearly any computer, even your dad's old Dell.

15
00:01:02,760 --> 00:01:06,320
That is, as long as you have more than one. So if one leaves for cigarettes,

16
00:01:06,320 --> 00:01:07,560
we can still play catch.

17
00:01:09,640 --> 00:01:12,880
More than one Dell, not more than one dad.

18
00:01:12,880 --> 00:01:16,320
Oh. Well, anyways, I'm done. Do you want to check this out?

19
00:01:16,320 --> 00:01:21,400
Yeah, let's have a look. You got your lovely cat picture, your crab rave on that computer.

20
00:01:21,400 --> 00:01:24,560
Watch this. Like I can, yeah, I can interact with this.

21
00:01:24,560 --> 00:01:27,180
Let's just give it a second. Okay. It's going.

22
00:01:28,140 --> 00:01:31,860
Now it's on this computer and like no bamboozle. Here, look, watch.

23
00:01:31,860 --> 00:01:35,780
Whoa, buddy. Watch, watch, watch. Boom, unplugged.

24
00:01:35,780 --> 00:01:39,500
I can just completely interact with this as I normally would.

25
00:01:39,500 --> 00:01:43,460
So what's going on here? What you guys just saw was the programs,

26
00:01:43,460 --> 00:01:46,500
the lovely drawing, the entire operating system,

27
00:01:46,500 --> 00:01:50,980
just teleporting from the computer over here to the one over here.

28
00:01:50,980 --> 00:01:55,700
No trickery. This is possible thanks to the magic of virtualization.

29
00:01:55,700 --> 00:02:01,300
We've talked about it before, but if you're not familiar, virtualization allows you to slice up a single machine

30
00:02:01,300 --> 00:02:04,860
into multiple less powerful virtual machines.

31
00:02:04,860 --> 00:02:10,100
And this setup leverages that technology to allow us to move these virtual machines

32
00:02:10,100 --> 00:02:13,280
between multiple physical computers.

33
00:02:13,280 --> 00:02:17,380
That way if one breaks, another one can immediately take its place.

34
00:02:17,380 --> 00:02:20,600
And the best part is that, while this all sounds super fancy,

35
00:02:20,600 --> 00:02:24,060
All the software we're using is both open source and free.

36
00:02:24,060 --> 00:02:28,300
And we're going to show you guys how the setup works in a little bit. First, I want to take a look at the servers

37
00:02:28,300 --> 00:02:35,220
we're going to be using for our setup. Gigabyte sent over four of their R163-SG2-AAC1 servers.

38
00:02:35,740 --> 00:02:39,240
These are bare bones. So we're going to have to add a few of our own parts,

39
00:02:39,240 --> 00:02:43,580
but we should be able to build this in what? Like five minutes?

40
00:02:43,580 --> 00:02:47,100
I'd like to see you try. This guy, we're going to add some of our own parts,

41
00:02:47,100 --> 00:02:50,380
starting with a pair of Patriot 480 gig SATA SSDs

42
00:02:50,380 --> 00:02:55,540
that will function as a mirrored boot drive. This kind of per machine redundancy

43
00:02:55,540 --> 00:03:00,700
isn't strictly speaking necessary because we could lose an entire machine

44
00:03:00,700 --> 00:03:04,480
in our configuration without having any issues. But having them in pairs

45
00:03:04,480 --> 00:03:10,040
potentially makes our lives easier in the future, since if one of them fails, we can just replace it

46
00:03:10,040 --> 00:03:13,260
and then rebuild it from the other one. Then on the other side of the machine,

47
00:03:13,260 --> 00:03:17,180
we're installing two of these Kioxia CD6 7TB drives

48
00:03:17,180 --> 00:03:22,260
for fast bulk storage. That leaves us six more SATA bays to do nothing with

49
00:03:22,260 --> 00:03:25,180
and two more NVMe bays for potential future expansion.

50
00:03:27,020 --> 00:03:33,060
Moving back, let's get our CPU installed. We're using a Xeon Platinum 8562Y+ in each node.

51
00:03:33,060 --> 00:03:37,140
These were graciously provided by Intel and with 32 cores, 64 threads

52
00:03:37,140 --> 00:03:41,420
and 4.1 gigahertz max turbo clock speeds. These are going to give us a ton of compute

53
00:03:41,420 --> 00:03:46,100
to share between our virtual machines, all at a modest 300 watt TDP.

54
00:03:46,100 --> 00:03:51,340
We're going to have it and the rest of the parts linked in the video description. Now, I've never installed in this socket before,

55
00:03:51,340 --> 00:03:56,780
so good luck me. Step one is to install the carrier on the CPU

56
00:03:56,780 --> 00:04:02,260
and you can tell which one of the three you're supposed to use by the little marking right there on the CPU IHS.

57
00:04:02,260 --> 00:04:07,780
Line up our little golden triangle with our gigantic gargantuan hole in the whole thing triangle.

58
00:04:07,780 --> 00:04:12,120
Oh, this is adorable. It's got a cute little arm so you can break the thermal paste seal with the cooler

59
00:04:12,120 --> 00:04:15,700
so you can get the cooler and the CPU separated more easily. Love to see it.

60
00:04:15,700 --> 00:04:20,620
Speaking of thermal paste, we're going to be using a Honeywell PTM7950 pad,

61
00:04:20,660 --> 00:04:25,620
available at lttstore.com. This stuff is absolutely perfect for a server install

62
00:04:25,620 --> 00:04:31,520
because it lasts not forever, but for a very, very long time without maintenance.

63
00:04:31,520 --> 00:04:37,260
Now, you might think, okay, go ahead, put it onto the CPU socket. But you'd be wrong.

64
00:04:37,260 --> 00:04:40,640
Instead, I'm going to install it onto the cooler.

65
00:04:40,640 --> 00:04:46,700
We're going to see how to do that in a sec. So, arrow to arrow.

66
00:04:46,700 --> 00:04:51,700
So maybe, ah, ah, ah, hey, there we go.

67
00:04:54,180 --> 00:04:55,840
Damn, look at that vapor chamber.

68
00:04:57,500 --> 00:05:02,060
Love me a vapor chamber. Okay, we're going to make sure all these are clicked into place.

69
00:05:02,060 --> 00:05:06,460
Look for our little arrow here. Line that up with the arrow on the socket

70
00:05:07,500 --> 00:05:11,460
and make sure that the locks are in their unlocked position

71
00:05:11,460 --> 00:05:15,140
then you should be able to just... That's it, it's locked.

72
00:05:15,140 --> 00:05:18,340
Oh, that's it. Okay, next comes something you don't see me do very often

73
00:05:18,340 --> 00:05:21,500
and that is use a screwdriver other than the LTT screwdriver.

74
00:05:21,500 --> 00:05:24,620
And that's because these need to be torqued to a specific value.

75
00:05:24,620 --> 00:05:26,700
That is 6.9 inch pounds.

76
00:05:29,820 --> 00:05:34,500
Nice. It's so cool to think that if I was doing this, you know, performing maintenance on the server,

77
00:05:34,500 --> 00:05:38,260
replacing a bad RAM stick, our entire operation could be chugging along

78
00:05:38,260 --> 00:05:43,620
as if nothing happened. Speaking of RAM, we've gone with four 96 gig Micron

79
00:05:43,660 --> 00:05:47,220
5,600 megatransfer per second registered ECC DIMMs.

80
00:05:47,220 --> 00:05:51,100
That's a somewhat unconventional choice because especially in a server,

81
00:05:51,100 --> 00:05:55,220
giving up half of the memory channels means that we will be giving up some performance,

82
00:05:55,220 --> 00:05:58,660
but we don't really need all of the performance for now

83
00:05:58,660 --> 00:06:03,500
and 384 gigs is a ton of capacity for our needs at the moment.

84
00:06:03,500 --> 00:06:09,940
And of course, if anything changes, we can always add more without any downtime to our services.

85
00:06:09,940 --> 00:06:13,580
The only thing that's really important here then is making sure that we install our sticks

86
00:06:13,580 --> 00:06:18,900
in the correct slots, which is not always super intuitive, so make sure to consult the manual.

87
00:06:18,900 --> 00:06:22,340
We don't need a GPU for now, though we could add one in the future.

88
00:06:22,340 --> 00:06:26,300
So that means all that's really left is these NVIDIA ConnectX-6 cards.

89
00:06:26,300 --> 00:06:29,340
Now, 100 gig networking might seem a bit overkill,

90
00:06:29,340 --> 00:06:35,020
but because our setup uses high speed drives in four servers and we want to be able

91
00:06:35,020 --> 00:06:39,340
to withstand two server failures, anytime we're writing data,

92
00:06:39,340 --> 00:06:42,580
it has to be simultaneously written to the drives

93
00:06:42,580 --> 00:06:48,060
on at least three machines. That ensures we have three up-to-date copies

94
00:06:48,060 --> 00:06:51,660
in the event of an unexpected failure. Now, if you were doing this at home,

95
00:06:51,660 --> 00:06:56,460
you obviously wouldn't want to spend this kind of money, but the good news is that you can do this

96
00:06:56,460 --> 00:07:01,180
with as few as two machines. And if you're not trying to run a high speed

97
00:07:01,180 --> 00:07:07,740
caching server for 100 people, 10 or 25 gig cards are available for a fraction of the price

98
00:07:07,740 --> 00:07:12,460
and you can connect them directly to each other without an expensive switch.

99
00:07:12,460 --> 00:07:15,700
I mean, even one gig could work for light applications

100
00:07:15,700 --> 00:07:19,300
like ensuring that your home automation system never goes down.

101
00:07:19,300 --> 00:07:23,300
Enough chitchat though. Let's get on with the demo and show you what happens

102
00:07:23,300 --> 00:07:27,140
if one of these things goes to heaven in a live environment.

103
00:07:27,140 --> 00:07:31,460
But not before we get them in the rack and set up, specifically here in the lab server room,

104
00:07:31,460 --> 00:07:36,020
because if you didn't notice earlier, the studio server room is kind of running out of space,

105
00:07:36,020 --> 00:07:40,100
at least until these machines are up and running and we can take the machine they're replacing out.

106
00:07:40,100 --> 00:07:44,460
Let's go grab the servers. Unfortunately, the rest of the machines are now magically built off of camera

107
00:07:44,460 --> 00:07:47,900
and we can just slide them in. What the hell is going on?

108
00:07:47,900 --> 00:07:52,260
Oh, there we go. Beautiful. These Gigabyte chassis come with nice tool-less rails.

109
00:07:52,260 --> 00:07:56,500
So installing these in our nice ginormous Hammond rack

110
00:07:56,500 --> 00:08:00,380
should be pretty easy. Yeah, look at that. Ooh, it's getting close.

111
00:08:00,380 --> 00:08:04,620
I can taste it. We just need networking. Like we mentioned before, a hundred gig,

112
00:08:04,620 --> 00:08:07,780
but what we didn't mention before is that each is getting two of them,

113
00:08:07,780 --> 00:08:10,900
specifically one to each of the network switches

114
00:08:10,900 --> 00:08:15,700
in the rack, that way if one of those switches has a problem, the servers will stay up

115
00:08:15,700 --> 00:08:20,260
and we even get an added bonus, because thanks to some fancy Dell magic called VLT,

116
00:08:20,260 --> 00:08:25,020
we get the throughput of both of these cables. So 200 gig to each server.

117
00:08:25,020 --> 00:08:29,220
Pretty sick. All that's left then is power

118
00:08:29,220 --> 00:08:33,020
and like any other good server, IPMI, which is a management interface

119
00:08:33,020 --> 00:08:37,780
that allows us to control the machines. Even if they're not working, like if they have a hardware problem,

120
00:08:37,780 --> 00:08:42,100
We can still access them. We can turn them on, turn them off. It's kind of magic.

121
00:08:42,100 --> 00:08:45,140
If you have a server that doesn't have IPMI, I don't know.

122
00:08:45,140 --> 00:08:50,820
I don't even know if that's a server really. There are two main elements to making this setup work.

123
00:08:50,820 --> 00:08:53,860
Clustering the hypervisor, which controls our virtual machines

124
00:08:53,860 --> 00:08:57,860
and clustering the storage, which you can skip if you have existing network storage

125
00:08:57,860 --> 00:09:00,940
you wanna use instead. If you're not interested in how to set this up,

126
00:09:00,940 --> 00:09:04,340
you can skip ahead to here to see what it's like when it's up and running.

127
00:09:04,340 --> 00:09:08,220
This isn't gonna be a perfect step-by-step guide, but with the documentation you could find

128
00:09:08,220 --> 00:09:11,980
down in the description, you should be able to replicate this setup pretty easily.

129
00:09:11,980 --> 00:09:15,500
Starting with networking, we added both of our 100 gig ports to a bond,

130
00:09:15,500 --> 00:09:19,420
created a bridge, and then added VLANs for three different networks.

131
00:09:19,420 --> 00:09:22,620
One for our VMs to use, one for cluster communication,

132
00:09:22,620 --> 00:09:27,340
and one for the storage. They can all technically run on the same network,

133
00:09:27,340 --> 00:09:31,540
but the cluster needs low latency and the storage ideally uses jumbo frames.

134
00:09:31,540 --> 00:09:37,020
So splitting it up like this is best practice. You'll also need to add each node's cluster network IP address

135
00:09:37,020 --> 00:09:40,320
to the hosts file on each node. With the networking up and running,

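For anyone following along, the bond, bridge, and VLAN layout described here might look roughly like this in a Proxmox node's /etc/network/interfaces. This is a sketch only: the interface names, VLAN IDs, and addresses are made-up examples, not our actual config.

```text
# Sketch -- names, VLAN IDs and addresses are illustrative placeholders
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1
    bond-mode 802.3ad

auto vmbr0
iface vmbr0 inet manual
    bridge-ports bond0
    bridge-vlan-aware yes

auto vmbr0.10
iface vmbr0.10 inet static
    # VM traffic
    address 10.0.10.11/24

auto vmbr0.20
iface vmbr0.20 inet static
    # cluster communication (wants low latency)
    address 10.0.20.11/24

auto vmbr0.30
iface vmbr0.30 inet static
    # storage traffic (jumbo frames)
    address 10.0.30.11/24
    mtu 9000
```

Each node would then also get one /etc/hosts line per peer on the cluster network, e.g. `10.0.20.12 node2`.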
136
00:09:40,320 --> 00:09:44,040
we enabled the no-subscription repo and disabled the enterprise repo,

137
00:09:44,040 --> 00:09:47,440
which is not recommended by the Proxmox team for production.

138
00:09:47,440 --> 00:09:50,600
They want you to pay for the enterprise repo, which is a bit more stable,

139
00:09:50,600 --> 00:09:55,000
but the free one is totally fine for a home setup. Run any pending updates before proceeding,

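On a stock install, that repo switch is just a couple of one-liners. This sketch assumes Proxmox VE 8 on Debian Bookworm; adjust the codename for your version.

```shell
# Disable the enterprise repo (it errors out without a subscription):
sed -i 's/^deb/#deb/' /etc/apt/sources.list.d/pve-enterprise.list
# Enable the free no-subscription repo:
echo "deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription" \
    > /etc/apt/sources.list.d/pve-no-subscription.list
# Then run the pending updates:
apt update && apt full-upgrade
```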
140
00:09:55,000 --> 00:09:59,460
then make sure you have a reliable and ideally local time server configured

141
00:09:59,460 --> 00:10:03,560
on each of your individual servers as the clustering software wants the time

142
00:10:03,560 --> 00:10:06,760
very closely in sync to stay happy. With that out of the way,

143
00:10:06,760 --> 00:10:11,640
we can set up our cluster, which handles syncing the configuration and management of any virtual machines

144
00:10:11,640 --> 00:10:15,920
between our physical machines, and it also orchestrates migrating

145
00:10:15,920 --> 00:10:21,700
or restoring them when a machine goes down. Creating the cluster actually just takes a few clicks,

146
00:10:21,700 --> 00:10:25,100
but you might want to consider the size of your setup before you continue.

147
00:10:25,100 --> 00:10:29,560
That's because in order to make sure everything stays in sync in case of an issue with a machine,

148
00:10:29,560 --> 00:10:35,100
you need the majority of servers online and available to be able to say, hey, I see that one's offline,

149
00:10:35,100 --> 00:10:38,840
but we're still good. They call this quorum.

150
00:10:38,840 --> 00:10:42,680
If you have an even number of machines, let's say four, like we do,

151
00:10:42,680 --> 00:10:46,820
and each server gets the default single say or vote,

152
00:10:46,820 --> 00:10:52,960
the minimum possible majority is then three servers. So that means we can only withstand one going down,

153
00:10:52,960 --> 00:10:56,080
which is the same amount of redundancy you'd get if you had three machines,

154
00:10:56,080 --> 00:10:59,920
because you can only lose one and still have two. If you only have two computers,

155
00:10:59,920 --> 00:11:02,920
then you only ever have a majority when both are online,

156
00:11:02,920 --> 00:11:05,920
which obviously doesn't work, that's not safe.

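The quorum arithmetic being described is simple enough to sketch: with one vote per node, a strict majority is n/2 + 1 votes, so you can tolerate n minus that many failures.

```shell
# One vote per node; a strict majority of votes is needed for quorum.
for n in 2 3 4 5; do
    majority=$(( n / 2 + 1 ))
    can_lose=$(( n - majority ))
    echo "$n nodes: majority = $majority, can lose $can_lose"
done
```

Which is why four nodes tolerate only one failure, the same as three, and two nodes tolerate none.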
157
00:11:05,960 --> 00:11:09,080
But you can skirt around this by adding a third machine,

158
00:11:09,080 --> 00:11:12,340
like say a Raspberry Pi to be a tiebreaker,

159
00:11:12,340 --> 00:11:16,120
but that's kind of beyond the scope of this video. Once you're ready, select the cluster network

160
00:11:16,120 --> 00:11:19,680
in the creation menu, and then join the other machines to the cluster.

161
00:11:19,680 --> 00:11:23,300
Once they're in, you should be able to see them in the web GUI of any of the machines.

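If you'd rather use a terminal than the web GUI, creating and joining the cluster comes down to a couple of `pvecm` commands. The cluster name and IPs below are examples only.

```shell
# On the first node, create the cluster on the dedicated cluster network:
pvecm create homelab-cluster --link0 10.0.20.11
# On each additional node, join via the first node's cluster IP:
pvecm add 10.0.20.11 --link0 10.0.20.12
# Verify membership and quorum from any node:
pvecm status
```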
162
00:11:23,300 --> 00:11:28,840
Now on to clustering our storage. By default, Proxmox is very heavily integrated with Ceph,

163
00:11:28,840 --> 00:11:32,800
an open source distributed storage system that's pretty easy to set up and maintain.

164
00:11:32,800 --> 00:11:37,800
With that in mind, newbies should start with Ceph, and you can follow the great tutorial on their wiki,

165
00:11:37,800 --> 00:11:43,600
but it isn't the most performant in a small cluster like this. So we're gonna be using something called LINSTOR with DRBD,

166
00:11:43,600 --> 00:11:48,000
or Distributed Replicated Block Devices, another open source storage system.

167
00:11:48,000 --> 00:11:53,200
It requires a bit more manual configuration, but they do have a purpose-built tutorial for Proxmox

168
00:11:53,200 --> 00:11:56,880
and they host the packages for free, with an optional paid enterprise version

169
00:11:56,880 --> 00:12:02,080
that operates on a similar model as Proxmox itself. Unlike Ceph, it doesn't handle its own storage devices,

170
00:12:02,080 --> 00:12:05,640
so we mirrored our two Kioxia SSDs with ZFS first,

171
00:12:05,640 --> 00:12:08,920
and then pointed LINSTOR to that. Once it's installed and configured,

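Roughly, that storage layering looks like this per node. The pool, node, and group names here are placeholders, not our real ones; the LINBIT tutorial for Proxmox covers the full procedure.

```shell
# Mirror the two NVMe drives into a ZFS pool on each node:
zpool create -f vmpool mirror /dev/nvme0n1 /dev/nvme1n1
# From the LINSTOR controller, register that pool on each node...
linstor storage-pool create zfs node1 vmstore vmpool
# ...and make a resource group that keeps three replicas of every volume:
linstor resource-group create vmgroup --storage-pool vmstore --place-count 3
```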
172
00:12:08,920 --> 00:12:12,880
then you can add the clustered storage to Proxmox, create a virtual machine with that storage,

173
00:12:12,880 --> 00:12:17,240
and it'll automatically be replicated in real time to the number of other nodes you specify.

174
00:12:17,240 --> 00:12:21,480
And if you happen to migrate a VM to a server that doesn't have a copy on it,

175
00:12:21,480 --> 00:12:27,520
it'll automatically stream the data over the network from one of those nodes in what they call diskless mode.

176
00:12:27,520 --> 00:12:30,280
But let's just try it. Hey.

177
00:12:31,520 --> 00:12:35,040
Pretty nice, right? Looking good. It's like even cable-managed.

178
00:12:35,040 --> 00:12:38,560
I know, right? So 200 gig on each of them? Nice.

179
00:12:38,560 --> 00:12:43,000
Who are you people, and what have you done with our infra team? I made one small adjustment just for you.

180
00:12:43,000 --> 00:12:46,320
Look at the drives. They're in the same spot. No, they're not.

181
00:12:46,320 --> 00:12:50,080
The top one's different. I hate you so much. Why would you do that?

182
00:12:50,080 --> 00:12:53,400
But more importantly, does it work? Yeah, obviously.

183
00:12:53,400 --> 00:12:56,880
Okay, well, here's your Windows desktop. Obviously, he says. Well, what?

184
00:12:56,880 --> 00:13:00,640
Editor, a super cut of things not working here, please.

185
00:13:00,640 --> 00:13:03,960
Jake, we have a leak. Oh, God. One failure.

186
00:13:03,960 --> 00:13:07,440
You just downgraded my Wi-Fi. Four drives aren't working?

187
00:13:07,440 --> 00:13:11,320
Did you actually break it? Anyways, you see our Windows, right?

188
00:13:11,320 --> 00:13:14,800
Yeah. Our Windows is running right now on number four,

189
00:13:14,800 --> 00:13:21,560
which is the bottom server. Yes. Now, obviously, we're remoting into the machine over Wi-Fi.

190
00:13:21,560 --> 00:13:25,840
Okay, the video playback's a little bit choppy. That's not gonna affect the type of workload

191
00:13:25,840 --> 00:13:31,680
you would normally be running on something like this, like a DNS server, or like are we finally

192
00:13:31,680 --> 00:13:34,720
doing Active Directory? We will, not today.

193
00:13:34,720 --> 00:13:39,120
Not today, but we can now. But this is the kind of setup that you want

194
00:13:39,120 --> 00:13:42,560
for something like AD. Live playing the video, let's migrate to number one,

195
00:13:42,560 --> 00:13:46,360
which is the top one. The process will be a little bit faster,

196
00:13:46,360 --> 00:13:50,580
but basically what it's doing is copying the memory,

197
00:13:50,580 --> 00:13:54,600
like the RAM, what's actually in memory. And then once it's done most of it,

198
00:13:54,600 --> 00:14:00,320
it pauses the operating system for a split second, copies the last tiny little bit, and boom.

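Under the hood, that same live migration can be kicked off from the shell with one command. The VM ID and node name below are examples.

```shell
# Move running VM 100 to node "server1" without shutting it down:
qm migrate 100 server1 --online
```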
199
00:14:00,320 --> 00:14:03,400
That is so cool. You're exactly where you were before

200
00:14:03,400 --> 00:14:06,920
because the storage is already there. Right.

201
00:14:06,920 --> 00:14:10,400
So in terms of actual downtime, like interruption to that experience.

202
00:14:10,400 --> 00:14:13,480
17 seconds. No, 270 milliseconds.

203
00:14:13,480 --> 00:14:16,840
Oh, I thought you were pointing at the other thing. No, 17 seconds is that whole process.

204
00:14:16,840 --> 00:14:20,200
Oh yeah, yeah, well that's kind of downtime, I guess.

205
00:14:20,200 --> 00:14:23,640
No, because if there was somebody using this like as a virtual desktop, for instance.

206
00:14:24,640 --> 00:14:27,720
They would see like a quarter of a second blink,

207
00:14:27,720 --> 00:14:33,680
and otherwise like nothing changed. I wanted to show a more realistic to us demo.

208
00:14:33,680 --> 00:14:36,840
Sure. Come hither. Here's a Plex server.

209
00:14:36,840 --> 00:14:40,080
We've got some videos on it, and this is on server number one.

210
00:14:40,080 --> 00:14:45,760
Okay. Let's play a video. Now we go and move our Plex server to a different machine.

211
00:14:45,760 --> 00:14:49,640
So it's copying the RAM at 2.5 gigabytes a second.

212
00:14:49,640 --> 00:14:53,400
So that's like 2.8 gigabytes a second, that's pretty good. We haven't done any actual,

213
00:14:53,400 --> 00:14:57,760
oh, it's already done. And no interruption, because video playback,

214
00:14:57,760 --> 00:15:02,840
like many other applications, uses buffers to hide small interruptions in the service.

215
00:15:02,840 --> 00:15:06,840
In this case, downloading the video in small chunks a little bit at a time.

216
00:15:06,840 --> 00:15:13,800
Yeah, roughly 10 second chunks it looks like here, which is plenty to cover that 146 milliseconds of downtime.

217
00:15:13,800 --> 00:15:16,840
Wow. You want to try a Steam download with LanCache?

218
00:15:16,840 --> 00:15:21,120
I mean, we should? Yeah, why not? Yep, we're CPU bottlenecked for sure.

219
00:15:21,160 --> 00:15:25,360
Using 80 to 90% of a 24-core Threadripper.

220
00:15:25,360 --> 00:15:28,480
But I realized I made a little bit of an oopsie here. Like you can see the CPU usage,

221
00:15:28,480 --> 00:15:32,720
we're using 4% of our eight CPUs that I assigned to this Steam cache.

222
00:15:32,720 --> 00:15:35,800
We can see our network traffic's going up. Sick.

223
00:15:35,800 --> 00:15:38,920
Except I made this as a container, not a VM.

224
00:15:38,920 --> 00:15:42,840
And the thing with containers, they're great. They're a little bit lighter weight, better performance,

225
00:15:42,840 --> 00:15:46,800
but they run within the kernel of the main system.

226
00:15:46,800 --> 00:15:51,720
It'll shut down that container and then just reboot it on the other machine. Right, which means it's fine,

227
00:15:51,720 --> 00:15:58,200
but there'll be a longer downtime delay. But way less than, hey, is that thing working?

228
00:15:58,200 --> 00:16:01,840
Oh, I think the internet's not working. Somebody should go look at that. Yeah.

229
00:16:01,840 --> 00:16:05,440
Trying to figure out what's going on, fixing the machine, getting the machine back going. So cool.

230
00:16:05,440 --> 00:16:10,120
You're talking about the matter of a couple minutes maybe. Yeah. Now for the most impressive demo yet,

231
00:16:10,120 --> 00:16:14,000
the unexpected migration. Which one am I yanking?

232
00:16:14,000 --> 00:16:18,080
Okay, so number one has three VMs on it. They're all in the high availability.

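Marking a guest as highly available like this is a one-liner per VM. The VM ID here is an example.

```shell
# Ask the cluster to keep VM 100 running, restarting it on a
# surviving node if its current host dies:
ha-manager add vm:100 --state started
# Check what the HA stack is doing:
ha-manager status
```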
233
00:16:18,080 --> 00:16:21,280
Jake's chain. Ah. What?

234
00:16:21,280 --> 00:16:24,320
It means teasing. Oh, I get it. Okay, sorry.

235
00:16:24,320 --> 00:16:28,200
Which one? I wasn't even listening to you. Number one. Number one.

236
00:16:28,200 --> 00:16:33,080
And we'll see how fast it does. We're looking at server one from server two.

237
00:16:33,080 --> 00:16:34,920
So go for it. Ah.

238
00:16:40,640 --> 00:16:45,400
From my understanding, this process takes a minute or two.

239
00:16:45,400 --> 00:16:48,600
Okay. Let's go. It's already detected the node is offline.

240
00:16:48,600 --> 00:16:54,400
Sure is. If you're doing scheduled maintenance, you can actually go and just shut off a machine.

241
00:16:54,400 --> 00:16:58,400
And then it will just be like, oh crap, I need to move all those things before I shut off.

242
00:16:58,400 --> 00:17:02,120
Which is a little bit nicer. In this case, it has to be sure

243
00:17:02,120 --> 00:17:07,080
the server is down. So all three of those VMs are yelling at the dead server.

244
00:17:07,080 --> 00:17:11,120
Say, hello? What happened? Are you alive? What's going on?

245
00:17:11,120 --> 00:17:14,680
I can hear them. Hello? What happened? Are you alive?

246
00:17:14,680 --> 00:17:21,360
What's going on? Oh, it did something. So in theory, it should distribute them evenly

247
00:17:21,360 --> 00:17:24,600
because that's the option that's set right now. Right.

248
00:17:24,600 --> 00:17:29,760
In terms of its workload, you mean? Yeah. There is also a mode that does like resource checking.

249
00:17:29,760 --> 00:17:34,440
Sure. But right now it's just going, how many VMs are on each one, and filling that number so it's even.

250
00:17:34,440 --> 00:17:38,080
That is so cool. Okay, so what service was running on that one?

251
00:17:38,080 --> 00:17:42,400
Was that the Steam cache? So we should go download a game. You could go do Plex right now too.

252
00:17:42,400 --> 00:17:50,240
Let's go do it. Let's go do it. Come on, let's go. And no movie magic, but also magic, virtualization magic.

253
00:17:51,440 --> 00:17:55,160
This is flipping awesome. And it's going to be an absolute game changer

254
00:17:55,160 --> 00:17:58,520
for the way that we manage our infrastructure. And like I said at the beginning,

255
00:17:58,520 --> 00:18:04,340
I think the coolest thing about it is that this type of architecture doesn't even have to run

256
00:18:04,340 --> 00:18:08,680
on the kind of Emerald Rapids latest server technology

257
00:18:08,680 --> 00:18:11,880
that Intel and Gigabyte and Micron and NVIDIA

258
00:18:11,880 --> 00:18:15,960
all sent over here. So the takeaway for you guys is whether it's for work

259
00:18:15,960 --> 00:18:19,920
or whether it's just for your home automation or your Plex server at home,

260
00:18:19,920 --> 00:18:25,200
something like this is absolutely attainable with potentially very little financial outlay.

261
00:18:25,200 --> 00:18:28,440
Go buy some used, like, eighth-gen Intel Core processors.

262
00:18:28,440 --> 00:18:32,520
Those are pretty cheap. Some cheap DDR4 and you're off to the races.

263
00:18:32,520 --> 00:18:36,120
Or if you're doing this more properly for your business, check out Intel Emerald Rapids

264
00:18:36,120 --> 00:18:40,520
and their whole line of Xeon and GPU products down below.

265
00:18:41,600 --> 00:18:44,760
Where were you pointing? Down below, that's the description.

266
00:18:46,480 --> 00:18:48,480
Get your mind out of the description.
