1
00:00:00,880 --> 00:00:09,519
165,000 CPU cores, 20 million GPU cores,

2
00:00:06,480 --> 00:00:12,400
and a cool petabyte of RAM. I wouldn't

3
00:00:09,519 --> 00:00:18,160
normally describe myself as a furry, but the new Fir supercomputer has

4
00:00:15,280 --> 00:00:22,720
definitely awakened some feelings that I can't say I've ever felt before.

5
00:00:19,920 --> 00:00:29,359
Feelings like wanting to go deep inside it, to gently remove its panels, and

6
00:00:25,920 --> 00:00:31,039
maybe some light screwing. And thanks to

7
00:00:29,359 --> 00:00:34,480
our friends here at Simon Fraser University in beautiful British

8
00:00:32,640 --> 00:00:40,399
Columbia, we are going to be doing just that. Going deep under the hood of the

9
00:00:37,120 --> 00:00:42,640
CPU and GPU compute deployment that is

10
00:00:40,399 --> 00:00:47,440
going to be serving tens of thousands of scientists and researchers in fields all

11
00:00:44,879 --> 00:00:52,559
the way from AI to zoology all over the country for years to come. This will be

12
00:00:49,760 --> 00:00:57,039
our first up-close look at a real-world deployment that uses direct-die liquid

13
00:00:55,039 --> 00:01:03,199
cooling to increase cooling efficiency from about 30% to over 90%. Or at least

14
00:01:01,120 --> 00:01:09,200
it'll be the first data-center-grade deployment. Mine doesn't count, and it

15
00:01:06,240 --> 00:01:16,880
doesn't look nearly as sexy. But what is sexy is this segue to our sponsor MSI.

16
00:01:14,000 --> 00:01:20,799
MSI's MAG B850 Tomahawk Max Wi-Fi motherboard has bells, whistles, and a

17
00:01:19,280 --> 00:01:25,200
bunch of other things that make noise. With support for the latest AMD CPUs,

18
00:01:22,960 --> 00:01:31,799
DDR5, and a slew of unique features, it's a performance monster. Check it out

19
00:01:27,280 --> 00:01:31,799
today using our link down below.

20
00:01:38,960 --> 00:01:47,280
In the row behind me are 640 NVIDIA H100

21
00:01:43,759 --> 00:01:49,340
80 GB GPUs, each with an estimated cost of

22
00:01:47,280 --> 00:01:50,640
around $31,000.
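For a rough sense of scale, here is the back-of-the-envelope math on that row, using only the figures quoted in this video. The 4-GPUs-per-node and 20-nodes-per-rack split comes up later in the tour, so the node and rack counts here are derived, not confirmed, numbers.

```python
# Back-of-the-envelope sketch using the figures quoted in the video.
gpus = 640
price_per_gpu = 31_000      # USD, estimated per-GPU cost from the video
gpus_per_node = 4           # H100 SXM5 GPUs per Lenovo node (from later in the tour)
nodes_per_rack = 20         # "less than half of the maximum density"

print(f"GPU spend: ${gpus * price_per_gpu:,}")                      # ~$19.8 million
print(f"Nodes: {gpus // gpus_per_node}")                            # 160
print(f"Racks: {gpus // gpus_per_node // nodes_per_rack}")          # 8
```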

23
00:01:50,640 --> 00:01:58,719
Even at less than half of the maximum density, just 20 nodes per rack, the

24
00:01:56,399 --> 00:02:02,399
team here had to reroute power from elsewhere in the building and

25
00:02:00,640 --> 00:02:07,600
significantly upgrade the building's cooling system just to accommodate the

26
00:02:05,040 --> 00:02:11,920
incredible power requirements of these NVIDIA Hoppers. This is actually a

27
00:02:10,319 --> 00:02:16,400
common theme that I hear from basically anyone in the data center space. I mean,

28
00:02:14,239 --> 00:02:20,879
we tried to build for the future, but we couldn't have possibly seen this coming.

29
00:02:19,040 --> 00:02:25,920
And there's no sign of things slowing down. We'll get to that later, though.

30
00:02:23,680 --> 00:02:30,319
First, the most exciting part of the tour. They pulled one of their spares

31
00:02:27,920 --> 00:02:38,160
out of the rack for us to crack open and get up close and personal with. And oh

32
00:02:33,760 --> 00:02:41,040
my god, look at this thing.

33
00:02:38,160 --> 00:02:47,360
It's heavy. I guess when you got this much copper in you, like, wow.

34
00:02:44,879 --> 00:02:52,000
It's kind of scary handling it. I mean, this 1U node alone is worth more

35
00:02:49,680 --> 00:02:56,239
than my car. And a rack of these is worth more than my house.

36
00:02:54,560 --> 00:03:02,000
It's a little sketchy, but I want you guys to be able to see it. The CPUs are

37
00:02:58,879 --> 00:03:05,360
EPYC Genoa. So, last-generation Zen

38
00:03:02,000 --> 00:03:09,120
4-based, but Genoa still supports up to

39
00:03:05,360 --> 00:03:12,080
12-channel DDR5 memory and 128 lanes of

40
00:03:09,120 --> 00:03:18,319
PCIe Gen 5, which is plenty to keep these GPU cores fed. If more CPU compute

41
00:03:15,680 --> 00:03:25,120
is needed, there is support for dual CPU sockets, but the team at SFU

42
00:03:21,519 --> 00:03:27,200
found that eight CPU cores per GPU was

43
00:03:25,120 --> 00:03:35,599
plenty for their purposes, and they opted for a single 48-core CPU and 1.152

44
00:03:31,920 --> 00:03:37,519
TB of RAM in each of their nodes. Now,

45
00:03:35,599 --> 00:03:42,159
for a closer look at the GPUs. Unfortunately, I'm not allowed to take

46
00:03:39,519 --> 00:03:49,680
the coolers off them. But under each of these four cold plates is an NVIDIA H100

47
00:03:45,680 --> 00:03:54,560
SXM5 80 gig GPU, giving us a total of

48
00:03:49,680 --> 00:03:57,519
320 GB of VRAM per node. And guys,

49
00:03:54,560 --> 00:04:05,760
that's not just any VRAM. That is HBM3 running on a 5,120-bit bus for a total

50
00:04:01,280 --> 00:04:08,640
bandwidth per GPU of 3.36 terabytes per

51
00:04:05,760 --> 00:04:15,280
second. For context, a top-of-the-line consumer card, the RTX 5090, achieves

52
00:04:11,760 --> 00:04:17,519
just over half of that bandwidth. This

53
00:04:15,280 --> 00:04:24,880
kind of power does come with drawbacks, however, like for example, heat. Each of

54
00:04:20,959 --> 00:04:27,040
these is rated for 700 W of power

55
00:04:24,880 --> 00:04:30,720
consumption through the SXM socket that's underneath them. And that is

56
00:04:28,560 --> 00:04:35,759
where the incredible cooling solution in this Lenovo node comes in. As a liquid

57
00:04:33,280 --> 00:04:41,120
cooling nerd, I got to say, guys, this is the coolest part for me. I mean, did

58
00:04:38,479 --> 00:04:49,199
you notice that there isn't a single fan in sight anywhere in this machine? That

59
00:04:44,240 --> 00:04:52,720
is because everything, CPUs, GPUs, VRM,

60
00:04:49,199 --> 00:04:56,560
network interface, SSD caddy, even the

61
00:04:52,720 --> 00:04:58,400
system memory is directly liquid cooled.

62
00:04:56,560 --> 00:05:03,120
All of it. This feels a little bit like doing the maze in a Highlights magazine.
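A quick heat-budget sketch helps explain why every component ends up on the liquid loop. The 700 W per GPU and 20 nodes per rack figures are from this tour; the allowance for CPU, memory, NICs, and VRMs per node is an assumed placeholder.

```python
# Rough node/rack heat budget. 700 W per GPU and 20 nodes per rack are
# from the video; the non-GPU allowance per node is an assumption.
gpu_tdp_w = 700
gpus_per_node = 4
other_node_load_w = 700      # assumed CPU + RAM + NIC + VRM budget
nodes_per_rack = 20

node_w = gpu_tdp_w * gpus_per_node + other_node_load_w
rack_w = node_w * nodes_per_rack
print(f"Per node: ~{node_w / 1000:.1f} kW, per rack: ~{rack_w / 1000:.0f} kW")
# ~3.5 kW per node and ~70 kW per rack, which lines up with the rack feeds
# mentioned later and is far more than air could comfortably move through
# a chassis this dense.
```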

63
00:05:00,800 --> 00:05:07,520
So, here's our inlet over here, which splits into two main loops that go

64
00:05:05,360 --> 00:05:11,039
through the system. The primary loop, which we can tell because it has a

65
00:05:09,039 --> 00:05:16,080
thicker pipe coming off of it, goes straight to the middle of our four GPUs,

66
00:05:13,520 --> 00:05:21,919
where this manifold splits fresh incoming water out to our four GPUs. Two

67
00:05:19,680 --> 00:05:26,720
of them just spit right back into the outlet here, while the other two run up

68
00:05:24,479 --> 00:05:32,240
to this networking board and then consolidate back to the outlet. That's

69
00:05:29,520 --> 00:05:38,240
our primary loop. Our secondary loop comes through here handling some of the

70
00:05:34,320 --> 00:05:42,320
power delivery and then carries over to

71
00:05:38,240 --> 00:05:44,080
interesting. It splits out doing the RAM

72
00:05:42,320 --> 00:05:49,039
next. I am not 100% sure what to make of that

73
00:05:46,880 --> 00:05:53,759
because I would think RAM would be a tertiary priority in terms of cooling,

74
00:05:51,600 --> 00:05:58,080
but that's what they've done. We go through the RAM splitting into three

75
00:05:56,160 --> 00:06:03,919
different tubes that sit between our DIMMs down both rows. Then one side

76
00:06:01,520 --> 00:06:09,280
handles this network caddy here and the other side handles our SSD caddy. Then

77
00:06:07,039 --> 00:06:15,520
each of those come back to one of the CPUs which come out into the middle here

78
00:06:12,560 --> 00:06:19,600
and then run back to the outlet here. Maybe not the way I would have laid it

79
00:06:17,280 --> 00:06:23,840
out. There's a lot of 90° turns in here, meaning a lot of restriction. But I'm

80
00:06:22,160 --> 00:06:27,600
sure the engineers at Lenovo know what they're doing. There's a ton of other

81
00:06:25,680 --> 00:06:31,840
cool stuff to unpack here, too. You probably noticed there's no power

82
00:06:28,880 --> 00:06:35,840
supply. That's because it uses these chunky connectors here at the back to

83
00:06:34,080 --> 00:06:40,240
plug into a backplane in the back of the rack. As for the coolant

84
00:06:37,919 --> 00:06:45,440
connections, well, according to the manufacturer, these do have a little bit

85
00:06:42,720 --> 00:06:52,000
of natural leakage, but uh it's on the order of molecules, which is pretty damn

86
00:06:48,880 --> 00:06:54,080
impressive. There are sensors all over

87
00:06:52,000 --> 00:06:58,479
the motherboard to detect any kind of leakage. Now, the team here wasn't sure

88
00:06:56,400 --> 00:07:02,240
about the exact chemistry of the coolant they're using, but they did tell me that

89
00:07:00,240 --> 00:07:06,400
it has antimicrobial properties to prevent anything from growing in the

90
00:07:03,759 --> 00:07:10,639
loop. There's some other fun stuff. There's a little stylus in here.

91
00:07:08,479 --> 00:07:14,639
Apparently, this is meant to uh assist in removing memory, which is great. I'd

92
00:07:13,440 --> 00:07:20,960
actually love to see more gaming motherboards come with that.

93
00:07:17,120 --> 00:07:23,120
I thought this lone 7.68 TB NVMe drive

94
00:07:20,960 --> 00:07:30,960
was interesting, too. I mean, the networking is 400 gigabit per second times two to

95
00:07:27,759 --> 00:07:33,440
the 2 petabytes of NVMe storage, not to

96
00:07:30,960 --> 00:07:38,000
mention 49 petabytes of spinning rust that's right over there. But according

97
00:07:35,759 --> 00:07:42,639
to the team here, occasionally they need node local storage to improve GPU

98
00:07:40,560 --> 00:07:46,880
performance a little bit. So, you'd never boot off of this or anything, but

99
00:07:44,639 --> 00:07:52,000
it's nice to have there as a scratch. Also, the button cell in here is mounted

100
00:07:49,520 --> 00:07:56,800
in a vertical caddy because the density is so high in this 1U node that

101
00:07:54,879 --> 00:08:01,360
they just couldn't give up the space that it would have taken to mount it

102
00:07:58,240 --> 00:08:03,919
parallel to the board. I also spotted a

103
00:08:01,360 --> 00:08:08,000
micro SD header. If anyone out there works in the data

104
00:08:05,680 --> 00:08:12,400
center and knows what that's for, I haven't seen it before. And Jim and I

105
00:08:09,919 --> 00:08:17,520
just assumed that I typed. Oh, there was one other thing we wanted

106
00:08:13,919 --> 00:08:19,199
to look at. These big power bad boys. We

107
00:08:17,520 --> 00:08:24,560
couldn't see them until we got that shroud off. So these, they're just bus

108
00:08:22,400 --> 00:08:29,120
bars. They're going from the power supply here, which is a DC-to-DC power supply,

109
00:08:27,039 --> 00:08:34,800
>> 48 volts, it looks like, and they're going over to our GPUs. Damn. What's

110
00:08:32,560 --> 00:08:39,839
interesting to me that I just noticed is that there's a clear delineation between

111
00:08:37,120 --> 00:08:44,320
the NVIDIA-engineered parts of this with the black PCB, which are completely

112
00:08:42,399 --> 00:08:48,240
separate from everything else, and the Lenovo-engineered parts of this. So

113
00:08:46,399 --> 00:08:53,279
Lenovo is acting like more of a system integrator around this compute block

114
00:08:51,440 --> 00:08:58,880
here. Like you can even see the silk screening on the PCB is distinctly

115
00:08:56,160 --> 00:09:05,200
NVIDIA, and Lenovo's just doing their DC-to-DC power. So, it's just power in here

116
00:09:02,000 --> 00:09:08,000
and then PCIe in here in the form of

117
00:09:05,200 --> 00:09:13,920
these four MCIO connectors right here. This is essentially like plugging a GPU

118
00:09:10,720 --> 00:09:15,200
into your Legion gaming PC.
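Since it really is just PCIe between the Lenovo side and the NVIDIA compute block, a quick sketch of the bandwidth involved puts that in perspective. The assumption here is that each of the four GPUs gets a full Gen 5 x16 link over those MCIO connectors, which is plausible given Genoa's 128 Gen 5 lanes but wasn't confirmed on the tour.

```python
# What "just PCIe in here" means in bandwidth terms.
# Assumption: each GPU gets a PCIe Gen 5 x16 host link via the MCIO connectors.
lanes = 16
gen5_gt_per_s = 32.0            # GT/s per lane for PCIe 5.0
encoding = 128 / 130            # 128b/130b line-coding overhead
pcie_gb_per_s = lanes * gen5_gt_per_s * encoding / 8   # GB/s per direction

hbm3_gb_per_s = 3360            # ~3.36 TB/s per GPU, quoted earlier in the video
print(f"PCIe Gen 5 x16: ~{pcie_gb_per_s:.0f} GB/s per direction")          # ~63 GB/s
print(f"On-package HBM3 is ~{hbm3_gb_per_s / pcie_gb_per_s:.0f}x faster")  # ~53x
```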

119
00:09:13,920 --> 00:09:20,720
>> The GPU house. >> With extra steps. >> Yeah.

120
00:09:17,519 --> 00:09:23,200
>> Before we poke around in one of the 192

121
00:09:20,720 --> 00:09:28,640
core CPU nodes that they've got, let's take a look at one of the racks that

122
00:09:24,880 --> 00:09:31,200
these boys slide into.

123
00:09:28,640 --> 00:09:35,760
They're still using a very similar rear-door chilled-liquid rack like we saw

124
00:09:33,839 --> 00:09:44,399
with their air-cooled nodes when we did a tour, oh lordy, 8 years ago. Um,

125
00:09:40,800 --> 00:09:46,720
anywho, the point is that chilled

126
00:09:44,399 --> 00:09:52,800
16.5°C water comes from the evaporative cooling towers outside. Then hot air

127
00:09:50,160 --> 00:09:57,600
from the power supplies and any of the networking equipment that's in the rack

128
00:09:54,880 --> 00:10:05,040
runs through here and wow is that ever hot. Then it spits out nice comfortable

129
00:10:02,240 --> 00:10:10,560
room temperature air on the other side. Each of these racks is fed by dual

130
00:10:07,760 --> 00:10:18,399
three-phase 60 amp feeds for a total of about 70,000 watts per rack. Now, if SFU

131
00:10:15,200 --> 00:10:21,360
had the power and cooling in this 1960s

132
00:10:18,399 --> 00:10:26,240
bunker, they could juice these up to 180,000 watts per rack. But they don't.
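Out of curiosity, here is one way the "about 70,000 watts" figure could shake out from dual three-phase 60 amp feeds. The current is from the tour; the line-to-line voltage and the 80% continuous-load derating are assumptions, since the actual service voltage wasn't stated.

```python
import math

# Three-phase power sketch for the ~70 kW rack figure.
# 60 A and two feeds are from the video; voltage and derating are assumed.
volts_line_to_line = 415     # assumed service voltage
amps_per_feed = 60
feeds = 2
derate = 0.8                 # typical continuous-load derating (assumed)

watts = math.sqrt(3) * volts_line_to_line * amps_per_feed * derate * feeds
print(f"~{watts / 1000:.0f} kW per rack with these assumptions")   # ~69 kW
```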

133
00:10:23,920 --> 00:10:32,640
Hence the empty rack space. Since we have this open, oh wow, that is a big

134
00:10:30,720 --> 00:10:37,680
difference between the cold side and the hot side going into the back of

135
00:10:35,200 --> 00:10:42,240
these backplanes for the servers. I don't have to ask which one's the

136
00:10:39,120 --> 00:10:43,760
supply. That's the cold side, which

137
00:10:42,240 --> 00:10:48,000
since we're on the subject, this is a perfect time to look at the cooling

138
00:10:45,519 --> 00:10:54,160
distribution system. This is the Liebert XDU from Vertiv. It can do 600,000

139
00:10:51,839 --> 00:10:59,360
watts of cooling capacity per one of these cooling distribution units or

140
00:10:56,320 --> 00:11:03,120
CDUs. Water comes in the supply side

141
00:10:59,360 --> 00:11:05,040
here. This thick boy, she's chilly.

142
00:11:03,120 --> 00:11:10,959
That's coming from the cooling towers outside. Then that runs all the way down

143
00:11:08,800 --> 00:11:15,200
to the bottom here to the heat exchanger in the front. This liquid-to-liquid heat

144
00:11:13,200 --> 00:11:20,079
exchanger does exactly what it says on the tin. Taking that cold water from the

145
00:11:17,440 --> 00:11:24,800
primary loop that goes outside and using it to chill the warm water that is

146
00:11:22,399 --> 00:11:30,959
coming directly off of the blocks that are going to our nodes. This unit uses

147
00:11:27,600 --> 00:11:33,600
dual redundant pumps. And if we go back

148
00:11:30,959 --> 00:11:38,560
around to the back, uses these manifolds and valves to control flow to up to six

149
00:11:36,640 --> 00:11:43,680
different racks. And it's very easy to tell which is the cold side that's being

150
00:11:40,880 --> 00:11:48,720
chilled, and which is the hot side here. Wow.

151
00:11:45,920 --> 00:11:51,760
I want one. Yvonne, can I have one? Probably the coolest part is this little

152
00:11:50,240 --> 00:11:55,600
touchscreen display on the front that much more succinctly illustrates what I

153
00:11:53,680 --> 00:11:59,040
just said. Here's your primary loop. Here's your secondary loop. Here's all

154
00:11:57,360 --> 00:12:04,640
your flow rates, all your temperatures, and here's an alarm that they assure me

155
00:12:01,040 --> 00:12:06,480
is totally fine.
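Those flow rates and temperatures aren't just for show; they're exactly what determines how much heat a CDU can actually move. Here is a minimal sketch of that relationship using the 600 kW rating quoted above and an assumed 10 degree loop temperature rise.

```python
# A CDU's capacity is just Q = m_dot * c_p * deltaT.
# 600 kW is the rating from the video; the 10 K secondary-loop rise is assumed.
capacity_w = 600_000
cp_water = 4186              # J/(kg*K), specific heat of water
delta_t_k = 10.0             # assumed loop temperature rise

kg_per_s = capacity_w / (cp_water * delta_t_k)     # ~14.3 kg/s of water
gpm = kg_per_s / 1000 * 15850.3                    # m^3/s -> US gallons per minute
print(f"~{kg_per_s:.1f} L/s (~{gpm:.0f} GPM) to move 600 kW at a 10 K rise")
```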

156
00:12:04,640 --> 00:12:11,120
This data is super important because if they accidentally had water that was too

157
00:12:08,800 --> 00:12:14,240
cool going into the servers behind me, then they could end up with

158
00:12:12,000 --> 00:12:18,880
condensation, which hopefully I don't have to explain why that's super super

159
00:12:16,320 --> 00:12:22,639
bad. Everything's hooked up using Aquatherm tubing from Germany. The admin here

160
00:12:21,040 --> 00:12:27,120
spoke with some other facilities that used stainless steel and one of them got

161
00:12:24,880 --> 00:12:30,720
rust in their cooling system, which is a big, big mess. They've been really, really

162
00:12:28,959 --> 00:12:34,560
happy with their offerings. Now, let's go check out a CPU node. Contrary to

163
00:12:33,120 --> 00:12:39,200
what NVIDIA would like everyone to believe, not everything runs best on a

164
00:12:36,720 --> 00:12:45,920
GPU, even today. And that's where these come in. Each of these 1U chassis

165
00:12:42,720 --> 00:12:51,680
contains two nodes. And each node

166
00:12:45,920 --> 00:12:55,440
contains 192 Zen 5 EPYC Turin cores

167
00:12:51,680 --> 00:12:59,519
with 768 gigs of memory. So that's a

168
00:12:55,440 --> 00:13:04,160
total of nearly 400 cores in each of

169
00:12:59,519 --> 00:13:06,399
these 1Us. Holy freaking
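Before moving on, the core math checks out against the per-rack and per-island figures that come up in a moment; here is the quick tally, using only numbers quoted in this tour.

```python
# Core-count tally from the figures quoted in the tour (the 72-nodes-per-rack
# and three-rack-island numbers are mentioned a little later in the video).
cores_per_node = 192
nodes_per_1u = 2
nodes_per_rack = 72
racks_per_island = 3

print(f"Per 1U: {cores_per_node * nodes_per_1u} cores")                             # 384
print(f"Per rack: {cores_per_node * nodes_per_rack:,} cores")                       # 13,824
print(f"Per island: {cores_per_node * nodes_per_rack * racks_per_island:,} cores")  # 41,472
```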

170
00:13:04,160 --> 00:13:12,160
For networking, they actually don't go as heavy on these, using 200 gig

171
00:13:08,880 --> 00:13:15,120
connections and NDR to dynamically share

172
00:13:12,160 --> 00:13:19,680
that 200 Gbit link between the two nodes depending on their needs. This approach

173
00:13:17,760 --> 00:13:23,839
does have the drawback of meaning that if the primary node goes down, we lose

174
00:13:22,160 --> 00:13:28,720
network connection to the secondary one. But I have to assume that the cost

175
00:13:25,760 --> 00:13:34,240
savings outweigh the disadvantages in this case. In terms of loop layout, this

176
00:13:31,760 --> 00:13:40,160
one is much simpler coming in to both sides and then out of both sides. But

177
00:13:37,600 --> 00:13:44,399
just like the GPU nodes, the goal here is to get a water tube up against pretty

178
00:13:42,959 --> 00:13:48,639
much anything in the server that generates heat because there are no fans

179
00:13:47,040 --> 00:13:52,639
whatsoever. One cool thing we missed on the GPU node

180
00:13:50,720 --> 00:13:56,399
was we never got to look under the little cooling plates that the SSDs and

181
00:13:54,639 --> 00:14:01,519
network cards sit on. So, here's what it looks like. It pretty much looks like a

182
00:13:59,120 --> 00:14:05,519
heat pipe except instead of being full of what is usually a vapor and sometimes

183
00:14:03,920 --> 00:14:10,160
a liquid that circulates just within itself, it's just full of water or other

184
00:14:08,320 --> 00:14:15,279
coolant that is circulating to an external system. Now, let's take a look

185
00:14:12,639 --> 00:14:20,959
at the racks that these live in. Each of these racks contains 72 of the nodes

186
00:14:18,959 --> 00:14:26,120
that I just showed you guys, top to freaking bottom, with roughly 13,824

187
00:14:26,160 --> 00:14:33,120
cores. Every three racks is an island

188
00:14:30,480 --> 00:14:39,600
with a non-blocking 800 gig connection between islands. So, 41,000 cores can

189
00:14:36,639 --> 00:14:43,760
represent a single job with no blocking. They have some other specialized nodes

190
00:14:41,920 --> 00:14:47,600
like the storage ones, including the ones on the other side of the aisle that

191
00:14:45,519 --> 00:14:51,839
hold data for our local particle collider, TRIUMF. We've got a whole

192
00:14:49,519 --> 00:14:55,839
video about that. Along with some 8 TB RAM nodes, which are, I think, pretty

193
00:14:53,760 --> 00:14:59,839
self-explanatory. They're for jobs that would overflow on a regular node as long

194
00:14:58,320 --> 00:15:05,519
as you don't mind them having a few bugs. And finally, a single AMD MI300X

195
00:15:03,600 --> 00:15:09,199
node to, I don't know what, keep NVIDIA on their

196
00:15:07,199 --> 00:15:13,680
toes, or... We could do it. We could buy more than

197
00:15:11,279 --> 00:15:18,160
one of these. You better not charge too much, especially when you factor in

198
00:15:15,519 --> 00:15:22,959
modern security needs. There are six zones of security to get to some of the

199
00:15:20,320 --> 00:15:26,560
cages that actually have biometric locks on them where not only do you need to

200
00:15:24,480 --> 00:15:30,800
know the pin code, but you have to put your hand under it and it will check if

201
00:15:28,880 --> 00:15:35,279
that hand is attached to a living person. There's cameras everywhere in

202
00:15:33,440 --> 00:15:39,120
the data center with full visibility in all directions. And our tour guide today

203
00:15:37,519 --> 00:15:42,560
actually said that there's someone monitoring them so often that it's

204
00:15:40,880 --> 00:15:47,680
become a bit of a game where they'll send non-flattering pictures of him

205
00:15:44,720 --> 00:15:51,920
moving around in the data center just to make sure he knows they're watching.

206
00:15:49,680 --> 00:15:56,160
Cooling everything are these evaporative cooling towers behind me. The three that

207
00:15:54,480 --> 00:15:59,360
are closest to the building, they were there the last time we were here, but I

208
00:15:58,000 --> 00:16:04,639
couldn't show them to you for reasons that involve red tape and approvals. So,

209
00:16:02,240 --> 00:16:08,480
here they are. I still can't get any closer to them for reasons that involve

210
00:16:06,160 --> 00:16:12,399
red tape and approvals, but hey, we can check out the acoustic damping that's on

211
00:16:10,560 --> 00:16:17,199
these ones on the other side. That's impressive. Here I am right at the

212
00:16:14,639 --> 00:16:22,959
intake next to these sound baffles. And for context, here's the untreated ones.

213
00:16:20,000 --> 00:16:28,320
The total cooling capacity is about 4.7 megawatts, which is way more than

214
00:16:26,079 --> 00:16:32,399
what's needed for the machines inside. But just like power and storage, you

215
00:16:30,480 --> 00:16:36,240
want to have some extra for resiliency in the event of an equipment failure.
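To put that headroom in rough numbers, dividing the tower capacity by the ~70 kW rack feeds quoted earlier gives a feel for the margin. This is purely illustrative, since the facility's actual total IT load wasn't given on the tour.

```python
# Illustrative headroom check: tower capacity vs. the rack feed figure quoted earlier.
tower_capacity_w = 4_700_000     # ~4.7 MW of evaporative cooling
rack_feed_w = 70_000             # ~70 kW per rack

print(f"Roughly {tower_capacity_w // rack_feed_w} fully loaded racks' worth of heat")  # ~67
```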

216
00:16:34,480 --> 00:16:39,920
Hey, what's one of these worth? Maybe I'll pick one up for the office.

217
00:16:37,759 --> 00:16:43,680
>> $1.2 million each. >> Oh,

218
00:16:41,680 --> 00:16:50,320
never mind. Frankly, I'd rather have one of these anyway. To augment the original

219
00:16:46,720 --> 00:16:52,720
pumps for Cedar, which could do 800 gallons

220
00:16:50,320 --> 00:16:59,440
per minute of coolant, they added these two new ones that do 1,500

221
00:16:56,320 --> 00:17:01,759
gallons per minute. They also now have

222
00:16:59,440 --> 00:17:06,880
two mechanical chillers which can be useful during the times of year that we

223
00:17:03,519 --> 00:17:09,039
get outside temperatures above 33°C. So

224
00:17:06,880 --> 00:17:12,720
for maybe 6 hours a day, they'll switch over to mechanical chilling to help out

225
00:17:11,199 --> 00:17:18,480
their evaporative cooling tower. Probably the coolest thing about this gear though is how smart it is. They've

226
00:17:16,559 --> 00:17:22,880
got telemetry capture for things like temperature and flow rates, and it all

227
00:17:20,559 --> 00:17:25,919
feeds into a third party called Kaizen that helps with logging and determining

228
00:17:24,640 --> 00:17:31,039
if something's gone wrong with the system. Fun fact, by the way, the two-foot-

229
00:17:29,120 --> 00:17:36,000
thick concrete floor that I'm standing on is so burdened by all of this heavy

230
00:17:33,919 --> 00:17:42,640
equipment and coolant that it actually deflects half an inch in the center. Is

231
00:17:39,440 --> 00:17:44,960
that Is that okay? Am I going to break

232
00:17:42,640 --> 00:17:48,799
it? I mean, the place used to be the power distribution center for the

233
00:17:46,559 --> 00:17:53,840
southern half of our province, and it's built like a bunker, but that's not

234
00:17:51,039 --> 00:18:01,039
enough. Who paid for it all? Fir had a total budget of about $82 million, which,

235
00:17:57,120 --> 00:18:03,760
oh, I assume is Canadian dollarydoos. Uh, so

236
00:18:01,039 --> 00:18:08,480
a little under $60 million US, and that came from a combination of the Digital

237
00:18:05,440 --> 00:18:10,880
Research Alliance of Canada, the BCKDF, and

238
00:18:08,480 --> 00:18:14,480
vendor in-kind contributions, which I just learned are a vendor giving significant

239
00:18:12,960 --> 00:18:18,080
discounts. How how do I get signed up for that program? Is that only for

240
00:18:16,080 --> 00:18:21,919
educational institutions? Anyway, they did ask us to shout out a

241
00:18:20,320 --> 00:18:27,280
couple of companies who helped them out: Lenovo, DDN, and Vertiv on the cooling

242
00:18:25,440 --> 00:18:30,640
side. Um, and they didn't ask us to shout these guys out, but we're going to

243
00:18:28,480 --> 00:18:34,000
do it anyway. A shout out for our sponsor, Vessi. We're partnering up

244
00:18:32,480 --> 00:18:39,120
with them to bring you an exclusive giveaway extravaganza all month long. By

245
00:18:37,440 --> 00:18:43,440
entering, you'll get a chance to win one of seven prize packs, each loaded with

246
00:18:41,120 --> 00:18:46,400
Vessi gear, LTT Store merch, and more. We're talking footwear like their

247
00:18:44,720 --> 00:18:51,120
Stormburst high-tops or bags like our very own commuter backpack, Sony XM6

248
00:18:48,799 --> 00:18:55,120
headphones, and some prizes so special we can't even tell you about them yet.

249
00:18:52,880 --> 00:18:57,679
It's 100% free to enter. But if you're looking for some stylish shoes to

250
00:18:56,400 --> 00:19:01,360
protect yourself from the rain right now, well, we've got a special discount

251
00:18:59,679 --> 00:19:05,280
for you that's valid until the end of the month, 20% off. So, head over to

252
00:19:03,600 --> 00:19:10,000
vessi.com/LMG to enter the giveaway today and get an

253
00:19:07,360 --> 00:19:14,720
instant 20% off code sent directly to your inbox. Act fast, though. Everything

254
00:19:12,000 --> 00:19:19,360
closes on October 31st. Once again, that's vessi.com/LMG.

255
00:19:17,520 --> 00:19:22,559
Take part in an exclusive giveaway and secure your 20% off code.

256
00:19:20,960 --> 00:19:26,000
>> If you guys enjoyed this video, why not check out the tour we did of TRIUMF,

257
00:19:24,160 --> 00:19:31,520
the particle accelerator that is just down the road. Well, down the hill, down

258
00:19:28,960 --> 00:19:35,120
the really long road. Canada's only road. It's pretty long.
