The following descriptions should be enough to set up the testbed and rerun the experiments using CCC.

Testbed usage
=======================================================================

Initial setup of the testbed
--------------------------------------------------

The testbed is created using CORE to set up a virtual network. Three machines were used: Pengolodh is the router, Hildigard is the source, and Wilcome is the sink. The CORE setup can be found here: [configuration](cc.imn)

![Topology](topo.png)

Start CORE and load the cc.imn file. Press the green play button to start the network emulator. Wait for the boxes to disappear; the routing should converge soon after.

Start Hildigard and Wilcome if they are not running, otherwise run `dhclient em0` on both machines to receive IP addresses. Check that the machines have the following IP addresses:

* Hildigard: 10.0.1.7
* Wilcome: 10.0.0.5

If this does not match, swap the RJ45 connectors on the router machine.

To connect the router machine to the emulated network, execute the `dummysetup_cc.sh` script found in `~/cc`, provided here:

```sh
#!/usr/bin/env bash
DUMMYBRIDGE=`brctl show | grep dummy0 | cut -f 1`
sudo ifconfig $DUMMYBRIDGE inet 10.0.6.135/24

# If they exist, we need to purge bad routes:
sudo route del -net 10.0.6.0/24 dummy0 2> /dev/null
sudo route del -host 10.0.0.5 2> /dev/null
sudo route del -host 10.0.1.7 2> /dev/null
sudo route del -host 10.0.1.128 2> /dev/null
sudo route del -host 10.0.1.129 2> /dev/null
sudo route del -host 10.0.1.130 2> /dev/null
sudo route del -host 10.0.1.131 2> /dev/null

sudo route add -host 10.0.0.5 gw 10.0.6.1
sudo route add -host 10.0.1.7 gw 10.0.6.1
sudo route add -host 10.0.1.128 gw 10.0.6.1
sudo route add -host 10.0.1.129 gw 10.0.6.1
sudo route add -host 10.0.1.130 gw 10.0.6.1
sudo route add -host 10.0.1.131 gw 10.0.6.1
```

Experiment execution
----------------------------------------------------------------------

Experiment scripts can be found in `~/cc/eval_tools/control`.
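Before launching an experiment, it can help to sanity-check that the host routes added by `dummysetup_cc.sh` are actually in place. The sketch below is hypothetical (not part of the original tooling): it checks a routing-table dump for the expected host entries, here fed a canned sample string instead of live `netstat -rn` output.

```sh
#!/usr/bin/env bash
# Hosts that dummysetup_cc.sh routes into the emulated network.
HOSTS="10.0.0.5 10.0.1.7 10.0.1.128 10.0.1.129 10.0.1.130 10.0.1.131"

# In practice this would come from the live table, e.g. `netstat -rn`;
# a canned sample is used here for illustration.
route_table="10.0.0.5 via 10.0.6.1
10.0.1.7 via 10.0.6.1
10.0.1.128 via 10.0.6.1
10.0.1.129 via 10.0.6.1
10.0.1.130 via 10.0.6.1
10.0.1.131 via 10.0.6.1"

missing=0
for h in $HOSTS; do
    # Each expected host must appear at the start of a line in the table.
    if ! printf '%s\n' "$route_table" | grep -q "^$h "; then
        echo "missing route for $h"
        missing=1
    fi
done
if [ "$missing" -eq 0 ]; then
    echo "all host routes present"
fi
```

Running the same loop against the real routing table (and following up with a `ping` to each host) gives a quick go/no-go before starting a long experiment run.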
Each shell/Python script is a separate experiment. To run an experiment, e.g. the experiment using NEAT and coupled CC, execute `./neat_cc.sh`. Traces are saved to `~/cc/traces/`, named with a combination of the experiment name and the current date/time.

Experiment script structure
----------------------------------------------------------------------

Each experiment consists of a control script in `~/cc/eval_tools/control` and up to three scripts in each of `~/cc/eval_tools/source` and `~/cc/eval_tools/sink`. In the `source` and `sink` folders, a file has the suffix `_pre.sh` if it is executed before the experiment starts, the suffix `.sh` if it is the actual experiment, or `_post.sh` if it is executed after the experiment completes. These scripts are copied to the source and sink machines automatically as part of the experiment execution.

Post-processing
----------------------------------------------------------------------

Plotting scripts can be found in `~/cc/`. To plot the cwnd from a SIFTR log file, execute `python plot_cwnd.py path/to/siftr.log output.png`.

Building and deploying a FreeBSD kernel on the testbed
----------------------------------------------------------------------

- Push the files from any machine where the code is written to the virtual machine running FreeBSD used for compiling the kernel.

```sh
while true; do
    ssh -R 19999:localhost:22 oystedal@vetur.ifi.uio.no nice -11 bash
    sleep 5
done
```

This script sets up a reverse SSH tunnel from a machine (gutu), so that this machine can be reached from any other machine through `vetur.ifi.uio.no`. To connect to this machine, SSH to `vetur` and then execute `ssh kah@localhost -p 19999`. The following script is used to push files to the virtual machine:

```sh
#!/usr/bin/env bash
#rsync --info=progress2 -r --exclude=".git" -e "ssh -A oystedal@vor.ifi ssh -p 19999" ~/nd/freebsd11/ kah@localhost:freebsd11-toasty
rsync --info=progress2 -ra --checksum \
    --exclude=".git" \
    --exclude="bin" \
    --exclude="cddl" \
    --exclude="contrib" \
    --exclude="COPYRIGHT" \
    --exclude="crypto" \
    --exclude="etc" \
    --exclude="gnu" \
    --exclude="include" \
    --exclude="kerberos5" \
    --exclude="lib" \
    --exclude="libexec" \
    --exclude="release" \
    --exclude="rescue" \
    --exclude="sbin" \
    --exclude="secure" \
    --exclude="share" \
    --exclude="targets" \
    --exclude="tests" \
    --exclude="tools" \
    --exclude="usr.bin" \
    --exclude="usr.sbin" \
    -e "ssh -A oystedal@vetur.ifi.uio.no ssh -A kah@localhost -p 19999 ssh" \
    ~/nd/freebsd11/ root@192.168.122.31:freebsd11-toasty
```

- Execute the build script on the virtual machine. The final output is copied to `/root/kernel/`.

```sh
#!/usr/bin/env zsh
cd /root/freebsd11-toasty
NO_CCACHE=yes make -j16 NO_KERNELDEPEND=1 NO_KERNELCLEAN=1 NO_SHARE=1 MK_CTF=no buildkernel
if [[ $? -ne 0 ]]; then exit 1; fi
sleep 3
cd /root/freebsd11-toasty
NO_CCACHE=yes make DESTDIR=/root/kernel installkernel
```

- Pull the kernel files from the VM to Pengolodh (the router/control machine).

```sh
#!/usr/bin/env bash
rsync --info=progress2 -rva --checksum \
    -e "ssh -A oystedal@vetur.ifi.uio.no ssh -A kah@localhost -p 19999 ssh" \
    root@192.168.122.31:kernel .
```

- Execute the `deploy.sh` and `reboot.sh` scripts.

```sh
#!/usr/bin/env bash
PIDS=""
rsync -rav --progress --checksum kernel/ root@10.0.0.5:/ &
PIDS="$! $PIDS"
rsync -rav --progress --checksum kernel/ root@10.0.1.7:/ &
PIDS="$! $PIDS"
wait $PIDS
```

```sh
#!/usr/bin/env bash
nohup ssh root@10.0.0.5 reboot > /dev/null 2> /dev/null &
nohup ssh root@10.0.1.7 reboot > /dev/null 2> /dev/null &
```

- Confirm that the kernel is recent by executing `uname -a` on both machines.
- Run experiments as described above.
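The `uname` check can also be scripted. The sketch below is hypothetical: the `uname_v` sample string is illustrative (in practice it would come from `ssh root@10.0.0.5 uname -v` and `ssh root@10.0.1.7 uname -v`), and it simply extracts the build timestamp so the two machines can be compared at a glance.

```sh
#!/usr/bin/env bash
# Hypothetical helper: pull the build timestamp out of a FreeBSD
# `uname -v` string. The sample string is for illustration only;
# fetch the real one over SSH from each test machine.
uname_v='FreeBSD 11.0-CURRENT #12: Mon May  2 14:03:11 CEST 2016'

# Everything after the first ": " is the build date.
build_date=${uname_v#*: }
echo "kernel built: $build_date"
```

If the printed timestamp predates the build you just deployed, the machine is still running the old kernel and should be redeployed and rebooted.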