
This set of tables shows where the ITL patch panels are connected as of the restructuring in Fall 2018.

ITL Network Diagram

Implementation Details

This is the current patching diagram, showing the color of the cable, the group it travels with, and the switch it terminates at.

There are 4 dead ports and 4 unused ports (R3 - R6, temporary ports near the front of the ITL for courses, presentations, and projects). There are also 2 special ports that are NOT TO BE TOUCHED: R1 and R2.

Row 1: A1 - F4

| Port | Switch | Color | Grouping |
|------|--------|-------|----------|
| A1 | ITL 1 | Green | A1 - B4 |
| A2 | ITL 1 | Green | A1 - B4 |
| A3 | ITL 1 | Green | A1 - B4 |
| A4 | ITL 1 | Green | A1 - B4 |
| B1 | ITL 1 | Green | A1 - B4 |
| B2 | ITL 1 | Green | A1 - B4 |
| GAP | --- | --- | --- |
| B3 | ITL 1 | Green | A1 - B4 |
| B4 | ITL 1 | Green | A1 - B4 |
| C1 | ITL 1 | Green | C1 - D2 |
| C2 | ITL 1 | Green | C1 - D2 |
| C3 | ITL 1 | Green | C1 - D2 |
| C4 | ITL 1 | Green | C1 - D2 |
| GAP | --- | --- | --- |
| D1 | ITL 1 | Green | C1 - D2 |
| D2 | ITL 1 | Green | C1 - D2 |
| D3 | ITL 1 | Green | D3 - E4 |
| D4 | ITL 1 | Green | D3 - E4 |
| E1 | ITL 1 | Green | D3 - E4 |
| E2 | ITL 1 | Green | D3 - E4 |
| GAP | --- | --- | --- |
| E3 | ITL 1 | Green | D3 - E4 |
| E4 | ITL 1 | Green | D3 - E4 |
| F1 | ITL 2 | Purple | F1 - F4 |
| F2 | ITL 2 | Purple | F1 - F4 |
| F3 | ITL 2 | Purple | F1 - F4 |
| F4 | ITL 2 | Purple | F1 - F4 |

Row 2: G1 - L4

| Port | Switch | Color | Grouping |
|------|--------|-------|----------|
| G1 | ITL 2 | Purple | G1 - G4 |
| G2 | ITL 2 | Purple | G1 - G4 |
| G3 | ITL 2 | Purple | G1 - G4 |
| G4 | ITL 2 | Purple | G1 - G4 |
| H1 | ITL 2 | Purple | H1 - I2 |
| H2 | ITL 2 | Purple | H1 - I2 |
| GAP | --- | --- | --- |
| H3 | ITL 2 | Purple | H1 - I2 |
| H4 | ITL 2 | Purple | H1 - I2 |
| I1 | ITL 2 | Purple | H1 - I2 |
| I2 | ITL 2 | Purple | H1 - I2 |
| I3 | ITL 2 | Purple | I3 - J4 |
| I4 | ITL 2 | Purple | I3 - J4 |
| GAP | --- | --- | --- |
| J1 | ITL 2 | Purple | I3 - J4 |
| J2 | ITL 2 | Purple | I3 - J4 |
| J3 | ITL 2 | Purple | I3 - J4 |
| J4 | ITL 2 | Purple | I3 - J4 |
| K1 | ITL 3 | Blue | K1 - L4 |
| K2 | ITL 3 | Blue | K1 - L4 |
| GAP | --- | --- | --- |
| K3 | ITL 3 | Blue | K1 - L4 |
| K4 | ITL 3 | Blue | K1 - L4 |
| L1 | ITL 3 | Blue | K1 - L4 |
| L2 | ITL 3 | Blue | K1 - L4 |
| L3 | ITL 3 | Blue | K1 - L4 |
| L4 | ITL 3 | Blue | K1 - L4 |

Row 3: M1 - Q4

| Port | Switch | Color | Grouping |
|------|--------|-------|----------|
| M1 | ITL 3 | Blue | M1 - N1 |
| M2 | ITL 3 | Blue | M1 - N1 |
| M3 | ITL 3 | Blue | M1 - N1 |
| M4 | ITL 3 | Blue | M1 - N1 |
| N1 | ITL 3 | Blue | M1 - N1 |
| N2 | N/A | --- | BAD PORT |
| GAP | --- | --- | --- |
| N3 | ITL 3 | Blue | N3 - O4 |
| N4 | ITL 3 | Blue | N3 - O4 |
| O1 | ITL 3 | Blue | N3 - O4 |
| O2 | ITL 3 | Blue | N3 - O4 |
| O3 | ITL 3 | Blue | N3 - O4 |
| O4 | ITL 3 | Blue | N3 - O4 |
| GAP | --- | --- | --- |
| O5 | N/A | --- | BAD PORT |
| O6 | N/A | --- | BAD PORT |
| P1 | ITL 4 | Green | P1 - P5 |
| P2 | ITL 4 | Green | P1 - P5 |
| P3 | ITL 4 | Green | P1 - P5 |
| P4 | ITL 4 | Green | P1 - P5 |
| GAP | --- | --- | --- |
| P5 | ITL 4 | Green | P1 - P5 |
| P6 | ITL 1 | Green | P6 |
| Q1 | N/A | --- | BAD PORT |
| Q2 | ITL 2 | Purple | Q2 - Q4 |
| Q3 | ITL 2 | Purple | Q2 - Q4 |
| Q4 | ITL 2 | Purple | Q2 - Q4 |

Row 4: Q5 - U4

| Port | Switch | Color | Grouping |
|------|--------|-------|----------|
| Q5 | ITL 4 | Green | Q5 - Q6 |
| Q6 | ITL 4 | Green | Q5 - Q6 |
| R1 | Note 1 | White | R1 |
| R2 | Note 2 | Orange | R2 |
| R3 | N/C | N/C | --- |
| R4 | N/C | N/C | --- |
| GAP | --- | --- | --- |
| R5 | N/C | N/C | --- |
| R6 | N/C | N/C | --- |
| S1 | ITL 4 | Green | S1 - S4, Q5 - Q6 |
| S2 | ITL 4 | Green | S1 - S4, Q5 - Q6 |
| S3 | ITL 4 | Green | S1 - S4, Q5 - Q6 |
| S4 | ITL 4 | Green | S1 - S4, Q5 - Q6 |
| GAP | --- | --- | --- |
| S5 | ITL 4 | Green | S5 - T4 |
| S6 | ITL 4 | Green | S5 - T4 |
| T1 | ITL 4 | Green | S5 - T4 |
| T2 | ITL 4 | Green | S5 - T4 |
| T3 | ITL 4 | Green | S5 - T4 |
| T4 | ITL 4 | Green | S5 - T4 |
| GAP | --- | --- | --- |
| T5 | ITL 4 | Green | T5 - U4 |
| T6 | ITL 4 | Green | T5 - U4 |
| U1 | ITL 4 | Green | T5 - U4 |
| U2 | ITL 4 | Green | T5 - U4 |
| U3 | ITL 4 | Green | T5 - U4 |
| U4 | ITL 4 | Green | T5 - U4 |

Notes on Ports R1 and R2

  • R1: For Polycom. To VoIP box, and then to OIT sc-334-c2960, port 22
  • R2: For Echo360. To OIT sc-334-c2960, port 21

Row 5: U5 - U6

| Port | Switch | Color | Grouping |
|------|--------|-------|----------|
| Q5 | ITL 1 | Green | S1 - S4, Q5 - Q6 |
| Q6 | ITL 1 | Green | S1 - S4, Q5 - Q6 |

Inception and Planning

In the Fall of 2018, Ryan S. (aka Stew) suggested that we re-wire the ITL. Jared D. had been itching to do this for over a year, so, with freshmen on hand, we began at around 6pm by removing the existing cabling, which did not take very long. We then spent at least an hour or two deciding how we wanted to set up the ITL, since there were many possible configurations.

Constraints

The main constraints were as follows:

  • Rewire the ITL so that there are at most 6-7 computers per switch
  • Be able to split the ITL into 2-4 parts during competitive hackathons
  • Be able to completely isolate the ITL from the rest of the COSI network by disconnecting only 4 uplink cables
  • Restrict port utilization so that every switch can be a 24-port switch

Reasoning

The reasoning for the constraints is as follows:

Only 6-7 Computers per Switch

We wanted only 6-7 computers per switch for several reasons. First, it lets us use the dumb 1 Gb/s switches, which can carry only a single 1 Gb/s uplink each. Since the uplinks from ITL 1 through ITL 4 all connect directly to swm1, and that switch has a 4 Gb/s LAG to swf1 (the 10 Gb/s fiber network), each computer gets the fastest shared connection possible to the rest of COSI and the campus network. This is particularly useful for ITL deployments: by re-balancing the cables and switching the cloning process from SSHFS to NFS, we cut the clone time from ~8 hours down to ~1.5 hours.
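As a rough sketch of that arithmetic (the link speeds are the ones described above; the function name and script are illustrative only, not part of any lab tooling):

```python
# Worst-case per-host bandwidth under full contention. Link speeds come
# from this document; everything else here is an illustrative assumption.

UPLINK_GBPS = 1.0   # single 1 Gb/s uplink per dumb ITL switch
LAG_GBPS = 4.0      # 4 Gb/s LAG from swm1 to swf1, shared by all 4 uplinks

def worst_case_share(hosts_per_switch, switches=4):
    """Per-host bandwidth (Gb/s) if every host transmits at once."""
    at_switch = UPLINK_GBPS / hosts_per_switch           # contention on the uplink
    at_lag = LAG_GBPS / (switches * hosts_per_switch)    # contention on the LAG
    return min(at_switch, at_lag)

for n in (7, 12, 24):
    print(f"{n:2d} hosts/switch -> {worst_case_share(n):.3f} Gb/s each")
```

Note that with four 1 Gb/s uplinks behind a 4 Gb/s LAG, neither hop becomes the bottleneck before the other, so fewer hosts per switch raises the worst-case share proportionally.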

Split the ITL into 2-4 parts

During hackathons, we would like to be able to split the room up. The idea is that we could pit two (or more) parts of the ITL against one another on separate servers on separate subnets, or keep them from interacting entirely by using a NAT.
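For illustration only (the 10.0.0.0/24 prefix below is a made-up example, not the actual COSI addressing plan), carving one lab subnet into 2 or 4 isolated segments can be sketched with Python's stdlib `ipaddress` module:

```python
import ipaddress

# Hypothetical ITL subnet; the real allocation lives in the IP database.
lab = ipaddress.ip_network("10.0.0.0/24")

# Two halves (/25) for a head-to-head split, or four quarters (/26).
halves = list(lab.subnets(prefixlen_diff=1))
quarters = list(lab.subnets(prefixlen_diff=2))

print([str(n) for n in halves])    # ['10.0.0.0/25', '10.0.0.128/25']
print([str(n) for n in quarters])
```

Each resulting prefix could then be handed to its own server (or NAT'd separately) so the segments cannot reach one another.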

Be able to isolate the ITL from the COSI network using 4 uplink cables

In 2017, Jared D. removed the old air-gapped ITL network. Back in 2015, each computer had two network interfaces: one on the public internet and one on the airgap network. The idea was that you would disconnect the ITL from the public internet and use the airgap network to spread viruses within the ITL. At the time, there was an incident with an IPv6 bug in Windows: a lab exercise that was supposed to affect only the ITL (on the assumption that the network was disconnected) was not actually isolated, and it crashed the networking stack of every single Windows computer on the entire campus. The reason for this occurrence is not known; my guess is that a port died on the airgap network, or a computer got unplugged and replugged. Whatever the reason, a computer was still connected to the public internet.

My resolution for this is threefold: make the ITL documentation as clear as possible, stick to it, and make sure there is no separate "airgap" network to get wrong - if you need an airgapped network, use the network you have and disconnect it entirely from the internet. The other major problem we had was that with two live network interfaces, some PXE images failed because they would boot from one interface and try to fetch the image from the other. This obviously caused the boot to fail, so the airgap switch had to be kept off whenever this happened. Add to this that the airgap switch was an old 100 Mb/s switch, and there were more than enough reasons to deprecate it entirely.

Restrict port utilization to 24 port switches

Hardware in COSI tends to go bad over time. 48-port switches are hard to maintain, and dumb switches have major bandwidth problems since they can only have one (non-LAG'd) connection to the uplink. 24-port switches are also more common, easier to find, and cheaper than 48-port switches. Even though we have a 48-port switch, should it ever die, it would be annoying to quickly find a pair of 24-port replacements to fill its place. This also means that if we decide to upgrade swm1 through swm4 in the future, we can use managed switches in the ITL.

Other Planning

We wanted the wall ports near each computer to belong to the same group as much as possible. This means that if a port in the wall dies, the computer can be relocated to a physically nearby port without ending up on a completely different network. It also means that during competitive events, people cannot patch into another network without physically moving to another part of the room. While not optimal, we have done our best to fulfill this design.