Linux.Conf.Au 2016 happened last week, in Geelong, Vic; I wrote about the first day of LCA2016 during the conference, but then the conference activities took over and there was not much time for additional posts. However I wanted to summarise some of my (paper) notes for future reference -- hence this post.

Recordings of most of the talks are up on YouTube and the Linux.Org.Au Conference Video Mirror now (or will be soon) so most of these talks can be watched again. The LCA2016 Programme should give a good idea of what is available (and eventually will hopefully contain direct links to slides and video).

All the recordings are 720p HD (all slide input was via HDMI, which caused a few challenges -- including the Sysadmin Miniconf losing 15 minutes from the schedule trying to get multiple laptops working, which was eventually traced to a particular USB3/HDMI dongle that crashed the recording system). Sadly my 8+ year old travel laptop barely copes with playing 720p -- it can just keep up if the video is fairly still (eg, just slides, or very little movement) but playback is very "stop motion" if there is a lot of movement (such as a speaker walking around/waving their arms as they talk). So watching videos while still travelling is tricky; I think this year will be the year I finally buy a lighter, faster "travel laptop".

Day Two (Tuesday) -- Sysadmin Miniconf

On Tuesday I helped Simon Lyall run the Sysadmin Miniconf Programme. Despite 4 speaker cancellations before the conference week, 1 on the Sunday before, and 1 on the morning of the Miniconf, we managed to start on time, finish on time, and run a full programme (other than the 15 minutes -- an entire talk's worth of time -- lost to video connection issues).

I did not take any notes during the Sysadmin Miniconf -- most of my spare time was used ensuring that we had all the slides for the 2016 Sysadmin Miniconf linked to the programme during the Miniconf (success!).

Fortunately Simon Lyall took detailed notes of most of the talks.

I hope to see some of the talks from the other Tuesday Miniconfs on video.

Day Three (Wednesday)

Catarina Mota's Keynote

From Catarina Mota's keynote I took away two things.

Using Linux Features to make a Hacker's Life Harder

Kayne Naughton introduced the "6 Ds" of security defence, which originate with the military:

  • Detect

  • Deny (eg, firewall)

  • Disrupt (eg, data loss prevention)

  • Degrade (eg, make information obtained less valuable)

  • Deceive (eg, fake data)

  • Destroy (a very military solution!)

He linked that to Lockheed Martin's Cyber Kill Chain (whitepaper (PDF)), which describes a series of stages of a "cyber" (online) attack:

  • Reconnaissance

  • Weaponisation

  • Delivery

  • Exploitation

  • Installation

  • Command and Control

  • Actions on Objectives

with the idea that the earlier in an attack you can deny, disrupt, or deceive the attacker, the easier the attack is to stop. (For instance, it is well known that it is possible to enumerate many staff of many large organisations by careful searching on LinkedIn... and one possible deception is to create some fake profiles, and pay particular attention when those profiles are being probed.)

Other suggestions:

  • Scapy can be used to forge reply packets to cause confusion (see the sketch after this list)

  • shodan.io is often used for reconnaissance, and as a White Hat site it reveals itself, so watching for probes from Shodan.io can be a useful heads-up

  • Metasploit's Meterpreter defaults to TCP/4444 for reverse connections, so that can be a useful destination port to watch for on your network

  • Tarpit an attacker's traffic by rate limiting it, with low-rate packet drops

  • Use a union file system and inotify to allow writes but then immediately (or after a delay) auto-remove them, to frustrate the attacker; they also had a fake Python interpreter that generated bogus errors (eg, about indentation) to waste the attacker's time
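
A couple of those suggestions are easy to prototype together. The following is a rough sketch only (the decoy port choice and output are my own placeholder assumptions, not code from the talk): it uses Scapy to watch for probes to Meterpreter's default TCP/4444 and answers them with forged SYN/ACK replies, so a scanner sees a phantom "open" port. In practice you would also need a firewall rule to stop the kernel sending its own RST for the closed port.

    #!/usr/bin/env python3
    # Sketch: answer probes to a decoy port with forged SYN/ACK replies,
    # so a scanner sees a phantom "open" port. Run as root; also suppress
    # the kernel's own RST for the closed port (eg, with an iptables rule).
    from scapy.all import IP, TCP, send, sniff

    DECOY_PORT = 4444  # Meterpreter's default reverse-connection port

    def forge_reply(pkt):
        # Only react to initial SYN packets aimed at the decoy port.
        if pkt.haslayer(IP) and pkt.haslayer(TCP) and pkt[TCP].flags == "S":
            reply = (IP(src=pkt[IP].dst, dst=pkt[IP].src) /
                     TCP(sport=pkt[TCP].dport, dport=pkt[TCP].sport,
                         flags="SA",  # pretend the connection is accepted
                         seq=123456, ack=pkt[TCP].seq + 1))
            send(reply, verbose=False)
            print("Probe from %s; sent forged SYN/ACK" % pkt[IP].src)

    sniff(filter="tcp dst port %d" % DECOY_PORT, prn=forge_reply, store=False)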

See also the speaker's links to things discussed in the talk, including SQLMAP for automating the extraction of data via SQL injection.

Education and the AGPL: A Case Study

Molly de Blanc did a good job of describing the EdX project ecosystem, at length. EdX started as an MIT/Harvard collaboration for online education, and is now used at dozens of sites around the world.

They appear to have created a good community that contributes to the project both in the form of core contributions (under the AGPL -- so source is required, even when the code is only used in a public hosting environment) and in the form of add-ons (under the Apache licence, so source is not required).

The talk abstract had the thesis that the AGPL encouraged or created this open community, but unfortunately the talk did not really address this -- licence enforcement was barely mentioned, and it appears (like many projects) they are reluctant to enforce their licence robustly for fear of alienating users/contributors. It also appears that they get roughly similar levels of contributions to the core (AGPL, which requires them) and in the form of add-ons (Apache licence, which does not).

Ultimately it was unclear to me whether "create a sharing community" or "the AGPL terms" contributed more to the sharing of source code; and I felt the talk given was not really the one described in the abstract.

Synchronised Playback with GStreamer

Synchronised playback relies on a synchronised clock against which everything can be scheduled, plus timing information embedded in the media stream so that the relative time position within the media can be tracked.

GStreamer supports three clock synchronisation methods:

  • its own network clock protocol (GstNetClientClock)

  • NTP (GstNtpClock)

  • PTP (GstPtpClock)

All of them synchronise to a master clock: the client estimates the (network, etc) delay between the two hosts and adjusts the received time to account for it. PTP's main difference is that it mostly multicasts the time signal, with only periodic two-way traffic to measure the network delay (which is then assumed to be consistent; PTP works best on local wired networks).
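
As a rough illustration of that estimation step, here is the classic NTP-style four-timestamp calculation that this family of protocols builds on (the numbers are made up; this is not code from the talk):

    # t0/t3: client send/receive times; t1/t2: server receive/send times.
    def estimate(t0, t1, t2, t3):
        delay = (t3 - t0) - (t2 - t1)         # round trip minus server processing
        offset = ((t1 - t0) + (t2 - t3)) / 2  # assumes symmetric one-way delay
        return offset, delay

    # Example: client clock 5s behind the server, 0.1s one-way network delay.
    offset, delay = estimate(t0=100.0, t1=105.1, t2=105.2, t3=100.3)
    print(offset, delay)  # -> 5.0 0.2 (client should step forward ~5s)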

Streaming media is then negotiated via RTSP (Real Time Streaming Protocol) and SDP (Session Description Protocol). The actual media is sent over RTP (Real-Time Transport Protocol), which is essentially UDP with an additional header carrying timing information.

Synchronisation is then performed by rtpjitterbuffer, which smooths out the incoming stream (by buffering it) for time-locked playback.
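
Putting those pieces together, here is a minimal receiver sketch using the GStreamer Python bindings; the clock server address/port, the RTP caps, and the latency value are placeholder assumptions on my part, and a matching sender using the same master clock is assumed:

    #!/usr/bin/env python3
    # Sketch: an RTP receiver slaved to a shared network master clock, with
    # rtpjitterbuffer smoothing the incoming stream for time-locked playback.
    import gi
    gi.require_version("Gst", "1.0")
    gi.require_version("GstNet", "1.0")
    from gi.repository import Gst, GstNet, GLib

    Gst.init(None)

    # Slave this pipeline's clock to a master clock served on the network.
    clock = GstNet.NetClientClock.new("net-clock", "192.168.1.10", 8554, 0)

    pipeline = Gst.parse_launch(
        'udpsrc port=5000 caps="application/x-rtp,media=audio,'
        'clock-rate=48000,encoding-name=OPUS,payload=96" '
        "! rtpjitterbuffer latency=200 "  # ms of buffering to absorb jitter
        "! rtpopusdepay ! opusdec ! audioconvert ! autoaudiosink")
    pipeline.use_clock(clock)
    # Use a fixed, shared base time so every receiver schedules playback
    # against the same timeline, not its own local start time.
    pipeline.set_start_time(Gst.CLOCK_TIME_NONE)
    pipeline.set_base_time(0)

    pipeline.set_state(Gst.State.PLAYING)
    GLib.MainLoop().run()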

Inter-stream synchronisation requires more -- RTCP (RTP Control Protocol) provides additional out-of-band information that allows mapping the stream clock to a shared wall clock (NTP clock, etc), so that multiple streams can be synchronised. (RFC 7273 allows signalling which clock is used.)

(The speaker also noted that professional time synchronisation standards like SMPTE 2022/SMPTE Timecode, Ravenna, and AES67 all work in a similar manner.)

Adventures in OpenPower firmware

IBM's OpenPower project has been steadily open sourcing the low level firmware code that runs their modern Power (CPU) based systems.

This includes:

  • hostboot (the initial boot and CPU/memory bring-up firmware)

  • the OCC (On Chip Controller) firmware

  • skiboot (which provides OPAL)

  • petitboot (the Linux-based bootloader)

and lots of documentation.

skiboot implements OPAL (the OpenPower Abstraction Layer), which is essentially the runtime library for the operating system to use.

The future belongs to Unikernels

The speaker admitted fairly early on that the full talk title ("soon Linux will no longer be used in Internet facing production systems") was basically "click bait" -- but the talk still provided a useful overview of "Unikernel" systems: essentially single-application machine images.

The ones mentioned:

  • MirageOS, in OCaml

  • HaLVM, in Haskell

  • Drawbridge from Microsoft Research

  • LING, in Erlang

  • runtime.js, in JavaScript

  • OSv, which is built from stripped-down FreeBSD components and targets hypervisors such as Xen and KVM; runs the JVM and POSIX applications

  • Rump Kernel, which is a stripped down NetBSD with userspace drivers; some of the other Unikernels will run on top of a rump kernel; runs POSIX applications

Towards the end it was mentioned that someone is working on a Linux-based Unikernel approach, with the LKL (Linux Kernel Library).

From the questions it was clear the audience was not convinced that Unikernels would replace Linux; but many people did seem to agree that immutable infrastructure is a useful step where possible (which can be achieved through a variety of means, including containers).