Linux.Conf.Au 2016 started today in Geelong, Vic, with the first of two days of Miniconfs; the second day of Miniconfs is tomorrow, when I will only see the Sysadmin Miniconf, which I am helping to run.
I spent the day between the Multimedia and Music Miniconf and the Open Cloud Symposium, with a bit of "hallway track" time thrown in.
Bdale Garbee: An Open Approach to Whole-House Audio
Bdale recently had to rebuild his house from scratch, because reasons. One of the many opportunities presented was wiring for "whole house" audio. Since the audio controller on offer turned out to be an obvious GPL violator, and Bdale has a long Open Source history, he decided to make his own audio controller.
His design uses:
running Mopidy
sending audio to 9 custom designed/built USB connected Class D amplifier boards
using the TI PCM2705C USB DAC (Digital/Analogue convertor) and TI TPA3118D2 30W stereo (Class-D) amplifier
Due to Mopidy design choices they actually run 9 Mopidy servers (one for each set of room speakers), and mostly use their phones as the control interface (with a native app).
There were some configuration quirks to deal with (including hard coded accounts for Music streaming services), and amplifier power down only happening when USB power down happened (which is not the Linux default), but basically the project solved at least 90% of the family's needs.
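A per-room setup along these lines might look something like the following Mopidy configuration fragment; the ALSA device name and MPD port are my assumptions for illustration, not details from the talk:

```ini
; One of the 9 Mopidy instances, each bound to its own room's USB DAC.
; The device name (RoomAmp1) and port (6601) here are hypothetical.
[audio]
output = alsasink device=hw:CARD=RoomAmp1

[mpd]
hostname = 0.0.0.0
port = 6601
```

Each instance gets its own audio output device and its own control port, which is why nine separate Mopidy servers are needed rather than one.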
(Bdale has some more information on his website; ETA 2016-02-02: LWN also wrote about this talk this week)
Casey West: The 12-Factor Container
Casey West applied the 12 Factor App development methodology to building application containers. The analogy works fairly well, and much of it is obvious.
Amongst Casey's key points:
Run the same image in dev, staging, and production (although possibly dev/staging will be running newer versions of that same image, but once approved the exact bit-identical image will go into production)
Keep dev, staging, and production as similar as possible. In particular run container versions of your dependencies in development as well, to minimise differences.
Use environment variables (and feature flags) to control what happens in each environment; do not use a config.yaml or similar that sits within the image. (For more complex things consider some sort of configuration server; maybe something like Netflix's Eureka, which is designed for Amazon AWS deployments.)
Always specify the precise version of dependencies you want; never say "latest".
Build a "base file system image" (probably your own, unless you really trust some third party one), and then make everything else extend that. It greatly simplifies patching, updates, etc.
There Is No Local Disk (tm). All persistent storage needs to go somewhere else -- eg into some network storage service that you locate based on information from the environment. (Amazon RDS and Amazon S3 were mentioned as good choices in hallway track discussion about this; if you are already hosted in Amazon AWS.)
Do not install on deploy. Build, then deploy, as separate phases. That way all instances of a given container are the same.
Schedule containers on multiple pieces of hardware, to smoke out "same hardware" assumptions.
Debugging production problems can be tricky. Some solutions include migrating the container state somewhere else to inspect, and taking affected container out of the pool (eg, off the load balancer) and then inspecting separately. When done digging around, stop that container and start a new one -- the one you have been digging around in is no longer "pristine" and may behave differently as a result.
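The environment-variable configuration point can be sketched in a few lines of Python; the variable names and defaults here are purely illustrative, not from the talk:

```python
import os

# Configuration is read from the environment at startup, never from a
# config.yaml baked into the image, so the exact same image can run in
# dev, staging, and production. Names and defaults are hypothetical.
DATABASE_URL = os.environ.get("DATABASE_URL", "postgres://localhost/dev")
FEATURE_NEW_UI = os.environ.get("FEATURE_NEW_UI", "false").lower() == "true"

print(DATABASE_URL)
print(FEATURE_NEW_UI)
```

Feature flags fall out of the same mechanism: flipping FEATURE_NEW_UI in one environment requires no rebuild of the image.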
Joel Addison: Conference Recording 2.0: Building a Better System
The previous (eg, LCA) conference system was based on:
EventStreamr, forked from PLUG's EventStreamer, for coordination (note the missing "e" at the end; presumably done to make it easier to search for, but that makes it nearly impossible to find if you do not notice it...)
TwinPact VGA scan converters to capture slides, which are effectively VGA to DV over Firewire adapters
Multiple laptops to capture DV streams from cameras/the TwinPact
dvswitch, to mix streams, live
but the main disadvantage is that the hardware is getting old and hard to obtain (including the TwinPacts, and laptops with Firewire support), and it captures only "Standard Definition" video, not High Definition (ie, SD not HD).
The newer system (being trialed at LCA2015/LCA2016) is:
HDMI input only (VGA supported only via an active VGA to HDMI adapter); the use of Video over USB Type C is being considered as a future extension.
Using Digilent Atlys and, more recently, Numato Opsis boards for the HDMI capture; both are FPGA accelerated HDMI decode boards, and the software for both is still under active development.
recording and mixing managed with Voctomix from the Chaos Computer Congress Video team.
with a "version 2" EventStreamr, which is also still a work in progress (used for PyCon.Au 2015?), having been transliterated into Python and updated. Amongst other things it uses MoviePy to manage the video encoding.
Tycho Andersen: Live Migration of Linux Containers
The talk covered the history of container migration (starting with a huge kernel patch to do it all in-kernel, which was never merged) through to the current situation where a whole bunch of kernel helpers (around 250 patches) enables process/container migration (from one host to another) to be done for many programs without lots of special hardware dependencies (eg, audio hardware state cannot be migrated).
The talk approached the topic from the lxd "container hypervisor" point of view, focusing on the lxd live migration aspects. But lxd and other similar Linux process/container "migration" technologies have now largely settled on CRIU -- Checkpoint/Restore in Userspace -- as their technology to do the migration; apparently CRIU is very fiddly to use without one of the higher level front end tools (like lxd) assisting.
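As a rough sketch of what driving this looks like from the lxd side (the lxc commands are real, but the container and remote names are hypothetical, and the remote is assumed to have been configured beforehand with lxc remote add):

```shell
# lxd drives CRIU under the hood: checkpoint the running processes on
# the source host, transfer the dumped state, restore on the destination.
lxc move mycontainer myremote:mycontainer
```

Whether the live (stateful) path works depends on CRIU being installed on both hosts and on the workload's state being checkpointable -- which is exactly where things like audio hardware state fall down.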