Fundamental Interconnectedness

This is the occasional blog of Ewen McNeill. It is also available on LiveJournal as ewen_mcneill, and Dreamwidth as ewen_mcneill_feed.


I have wanted a 3D printer ever since Vik Olliver spoke at Linux.Conf.Au 2006 about the RepRap project -- a 3D printer that "prints itself" -- and then showed working models at later Linux.Conf.Au conferences. Vik went on to spend years working on the RepRap project, helping open up 3D printing to makers.

The whole idea of a "self replicating machine" was inspiring, and over the years I have seen many makers produce lots of custom one-off designs using 3D printers as the technology has matured. However I had no practical use for a 3D printer, particularly at the early innovation stage, and so I just looked on in admiration from a distance.

Over the past couple of years FDM 3D printing ("Fused Deposition Modelling", sometimes called "Fused Filament Fabrication" or "FFF") has matured to the point that there are numerous 3D printing machines, many of which descend directly or indirectly from the RepRap project.

The "Original Prusa I3 MK3s" is one of the best known for maximum refinement of the RepRap design, starting with the Prusa Mendel, and has great reviews. But it is also US$750 even as a DIY kit, plus shipping, and it is costly enough to require paying GST on the way into New Zealand as well (so all up NZ$1350 or so). That was still more than I could justify for something to use occasionally to make custom parts, and "experiment with 3D design/printing".

Creality Ender 3

After a bit of a rocky start (with controversy around source license compliance, and quality control/cost optimisation choices), the Creality Ender 3 has emerged as one of the most popular, highly reviewed budget 3D printers, and is considered a "good printer for the price", particularly when it is on sale (Ender 3 Unboxing).

I have been aware of Creality for a couple of years, ever since Naomi Wu started working with Creality to improve their relationship with the western maker community (both Creality and Naomi are based in Shenzhen, Guangdong, China). Naomi's work led to the Creality Ender 3 being one of the first Open Source Hardware Association Certified projects out of China (Ender 3 OSHWA certificate -- CN000003; see also Naomi's announcement video), with the firmware source code, PCB design, and mechanical hardware all on GitHub. (Naomi is also responsible for OSHWA CN000001, and encouraging OSHWA CN000004 and OSHWA CN000005 -- the first for her sino:bit project, and the other two for the Creality CR-10 3D Printer.)

A few weeks ago, reminded by the USA "4th of July" sales, I happened to see the Creality Ender 3 on sale at Kiwi 3D for NZ$500, including "free" shipping and GST, and finally ordered one -- spending the NZ$100 discount on some PLA filament to have something to start testing the printer with. (While the printer was available from overseas, including direct from Creality for cheaper -- US$189.99 is a common sale price -- by the time that shipping and GST at import were added, the sale price in New Zealand was less than 20% more than the likely landed price importing it myself -- and by buying from a New Zealand retailer it arrived just over one business day after ordering, with much less hassle.)

The Creality Ender 3 arrives as a "partially assembled" kit -- the base print bed / Y axis, and the extruder unit are assembled, and the remainder is flat packed for end user assembly. I spent an afternoon carefully assembling the kit, following the Tomb of 3D Printed Horrors "Creality Ender 3 assembly and pro build tips" video guide, as well as the "Creality Ender-3 Official Assembly Instruction" PDF (downloaded from Creality's site); the PDF guide has more instructional text than the provided "one large sheet" diagram.

Other than challenges identifying the correct parts needed at each step (there are a lot of subtly different bolt sizes, for instance, with extras supplied for many of them), the assembly went fairly smoothly -- and the "Tomb of 3D Printed Horrors" build video helped identify some things to check during the build at times when they were easy to change, avoiding finding those issues later when they were more difficult to correct. (Of note, my base arrived slightly wobbly on a flat surface, and a couple of the bed levelling knobs and springs had come loose during shipping; but putting the assembled unit on a flat surface and slightly loosening the two bolts holding the right -- LCD panel -- side base on to the unit was enough to allow everything to drop into alignment with no wobble, and the bed levelling springs and knobs were easy to put back onto the base. The only other surprise I found was that the control box fan does not spin all the time, seemingly not even at the start of a print -- but it definitely spins once the printing is under way, so presumably there is a thermal sensor for the control box fan now too. My XT60 power supply connectors appear to be genuine, which was not true of all earlier Creality Ender 3 printers due to a quality control issue.)

After assembly, and manual bed levelling, my Ender 3 printed the "Tomb of 3D Printed Horrors" bed levelling test model (downloaded from their DropBox) successfully, and I declared success on the initial assembly. (The "bed levelling test" was supplied as G-Code, which was convenient for just getting started -- but fortunately also short enough I could manually review it to determine it was safe before sending it to my printer; as Thomas Sanladerer points out G-Code can include instructions which make changes that persist over power cycling the device, or attempt to cause mechanical damage, so verifying G-Code is important if it comes from an unknown origin.)
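As a minimal sketch of that kind of check (assuming Marlin's G-Code dialect -- the specific codes worth flagging will vary by firmware), one can grep a file for commands that make persistent changes before printing it:

```shell
# Hedged sketch: flag Marlin G-Code commands that make persistent changes.
# M500 saves settings to EEPROM; M501/M502 load/reset it; M301/M304 set
# hotend/bed PID values.  Other firmwares may use different codes.
gcode=$(mktemp --suffix=.gcode)
printf 'G28 ; home all axes\nM500 ; save settings to EEPROM\nG1 X10 Y10 F3000\n' > "$gcode"
flagged=$(grep -nE '^[[:space:]]*M(50[0-2]|301|304)([[:space:]]|$)' "$gcode" || true)
echo "$flagged"
```

This only catches a handful of known-dangerous codes, so it is a first pass, not a substitute for reading unfamiliar G-Code.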

Ultimaker Cura

The usual exchange format for 3D printing models is STL, and this is what is available from, eg, Thingiverse and other 3D model sites. STL is a widely supported 3D model format that describes 3D objects in terms of meshes of triangles (and is encoded either in ASCII, or in a binary format; not all STL files are 3D printable but most 3D printable models are provided as STL files).
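For the curious, the binary STL layout is simple enough to poke at from a shell; this sketch builds a tiny (non-printable, all-zero) binary STL and reads its triangle count back:

```shell
# Binary STL layout: 80-byte header, little-endian uint32 triangle count,
# then 50 bytes per triangle (12 four-byte floats + 2-byte attribute count).
stl=$(mktemp --suffix=.stl)
head -c 80 /dev/zero > "$stl"          # blank 80-byte header
printf '\x02\x00\x00\x00' >> "$stl"    # triangle count = 2
head -c 100 /dev/zero >> "$stl"        # 2 * 50 bytes of (zeroed) triangle data
# Read the count back; od reports in host byte order, so this assumes a
# little-endian machine (x86, and most other common platforms these days).
count=$(od -An -tu4 -j80 -N4 "$stl" | tr -d '[:space:]')
echo "$count triangles"
```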

For 3D FDM printing, the STL model needs to be translated by a "slicer", which decides how to move the X / Y / Z axes of the printer to produce a physical representation of the model, and produces a GCode file to instruct the printer on what to do. In software terms the STL file is the source code, the slicer is the compiler, and the GCode is the executable file (which is one reason why GCode from unknown sources cannot be completely trusted; see above).

There are several Open Source 3D slicers, including Cura, Slic3r, and PrusaSlicer.

I decided to start with Cura: Maker's Muse found that the Cura slicer chose movement paths that better reduced filament stringing on the Creality Ender 3, Thomas Pietrowski maintains an Ubuntu PPA for Cura (stable version), and several makers have published Cura profiles for the Ender 3 that worked for them. While writing this blog post, I also found another guide to tuning Cura for the Ender 3 -- 6mm extruder retraction at 25mm/s, without z-hop, seems to be the magic setting to reduce stringing, resulting in a slight "nozzle wipe" over the part. (These days Creality ship a Microsoft Windows only "Creality Slicer" which I have not used; but as far as I can tell Creality used to ship Cura, which is another reason to start with Cura on the Ender 3.)

Unfortunately the latest releases of Cura 4.x will not run on older Ubuntu versions, including Ubuntu 16.04 LTS and Ubuntu 18.04 LTS, because they rely on newer QT library functions. However with a bit of tweaking, I was able to get Cura running in an Ubuntu 19.04 ("Disco") Docker image, using Thomas's Ubuntu PPA packages, following the approach of Steve Baker, but updated to a later Ubuntu version and a later Cura version, and with some startup tweaks. (My cura-docker repository.)

For now I have simply used the built in Creality Ender 3 profile provided with Cura 4.1, for PLA filament, which prints at 200C with a 60C bed. It seems to work reasonably well for me, with the "Kiwi 3D PLA Filament". (I did notice that Cura wanted to "phone home" with usage information, which I dislike, but it is possible to disable the feature early in the application setup by clicking on "more information" on the panel advising it will "send anonymized data" and then choosing to disable that feature; that setting appears to be persisted in the Cura preferences.)


In order to have the whole 3D printing experience, I wanted to try designing my own 3D model and printing it. As with most beginning 3D printer owners, in true RepRap fashion, the first thing one does is print something to improve the printer. Since manual bed levelling is a common issue on Ender 3 printers, and the relatively extended, relatively thin springs are commonly identified as something to upgrade, I decided to tackle that issue first by creating a shim to ensure the springs were more compressed. This had the advantage of being a trivial 3D object to model -- a small washer -- which made it a great "first project". (While writing this blog post I also found someone had created an Ender 3 Bed Spring Stabiliser, which combines the shim with an inner spring support to fill the gap between the inner bolt and the outer spring; I might investigate that further later.)

Since I am a programmer at heart, and I was designing a simple mechanical part, I decided to use OpenSCAD to model the part (conveniently it was packaged for Ubuntu 16.04 LTS, in the Ubuntu repository, so I could just install it; that is a somewhat old version but sufficient for my simple initial needs). OpenSCAD is a compiler for a parametric modelling language -- it translates .scad source files containing combinations of simple objects into (ASCII) STL files. Following the first couple of parts of a four part OpenSCAD tutorial got me started pretty quickly (note: there is no fifth part; by the time they finished the fourth part they decided a fifth part was not needed...).

After some experimentation I came up with this simple OpenSCAD model:

$fa=2;   // default facet angle
$fs=1;   // default facet size

// Parametric shim washer
// Measurements are diameter.  We divide them in half to get required radius.
module shim_washer(od, id, height, offset=0.5) {
    difference() {
        cylinder(r=(od/2), h=height);
        translate([0,0,-offset]) cylinder(r=(id/2), h=height+(2*offset));
    }
}

shim_washer(od=13, id=6, height=2.75);

which I could load into OpenSCAD (using vim for editing, rather than the built in editing pane which can be closed), preview it, then render it (F6), validate it, and then export it as a STL file (File -> Export -> Export as STL...), and then load into Cura for "slicing" to convert it to GCode. (Conveniently, both OpenSCAD and Cura default to auto-reload of the model when their relevant files are written, which means having vim, OpenSCAD, and Cura all open, and saving changes as they are ready works quite well; the one issue I found was that while Cura will auto reload models, if you have duplicated a model for printing, it appears to only reload one of them :-( So I spent a while clearing the build bed and re-duplicating / re-laying out three of them for printing efficiently at once.)

It took a few attempts to get reasonably sized shims to put under the springs -- my first attempt confused diameter and radius, resulting in washers much too wide; my second had the inner hole a little too small; my third had the shim a little too high (at 5mm); and the fourth worked. But each attempt only took about 20 minutes, even with beginner stumbling blocks, so it was fairly quick to iterate to a useful solution. I ended up printing only three shim washers, for the three corners without the bed power support bar -- and settled on 2.75mm as being approximately the same height as that bed support bar. (If I were doing it again I might pick a 3mm height, and would probably also try a 5mm inner diameter; 4mm was definitely too tight, but 6mm is a little loose.)

Overall I was pleased to be able to go from conception to something that is now installed -- printed in black PLA, with 0.2mm layers and 70% infill for strength -- on my 3D printer. (I installed the shims at the bottom of the springs, immediately above the print bed support base, which should leave them far enough away from the bed heater that they do not get hot; and besides, the bed will usually only be 60C, which is well under the PLA melting temperature.)

Next steps

Most likely my next step will be to install OctoPrint, probably via OctoPi on a Raspberry Pi 3B that I have sitting around, to enable "network printing". SneakerNetting images to print back and forth between my computer and the Ender 3 gets old quickly, and the Ender 3 "TF" ("TransFlash", aka MicroSD) slot is a bit fiddly to reach anyway, in addition to the need to put it into/take it out of a USB adapter.

Possibly then followed by 3D printing a few more of the "recommended upgrades" for the Ender 3, such as a filament guide (one of the main design flaws of the Ender 3 is that, as supplied, the filament is practically certain to drag along the Z axis lead screw in normal use, picking up oil -- and removing needed oil from that lead screw).

I might also upgrade the Marlin Firmware (on GitHub) on my Ender 3 with a more recent version, as I believe Creality are still shipping firmware based on the Marlin 1.0.1 firmware, but there is Marlin 1.1.x firmware including Ender 3 example configuration available now. The Creality Ender 3 mainboard apparently does not include an Arduino Bootloader (for space reasons), but it is possible to use another Arduino as an In-circuit Serial Programmer and I have a couple of suitable Arduino boards available. Being able to upgrade the firmware is one of the advantages of Open Source hardware :-)

Posted Sun Jul 21 19:05:07 2019


About 2.5 years ago I bought a Dell XPS 9360 laptop, so that I had a modern Linux laptop to take to conferences. At the time I was optimising for light weight and relatively low cost, so went with an i7 system with 8 GiB RAM and 256 GiB of M.2 storage.

It came with Windows 10 Home (Pro cost extra :-) ), and I configured it to dual boot Windows 10 and Ubuntu Linux 16.04 LTS because I wanted to mostly run Ubuntu Linux, but still have the opportunity to run Windows 10 occasionally (since I have no other Windows systems).

Because I have mostly used the Ubuntu Linux 16.04 install for FPGA development, across a variety of vendors (Xilinx, Lattice) and FPGA models (Spartan6, Artix7, iCE40, iCE40 UP), all of which need different FPGA development tools -- and because most FPGA vendor tools are huge (many GiB for each Xilinx tool set) -- I ran out of storage space on my Linux partition pretty frequently.

While -- like most small thin laptops -- the Dell XPS 9360's CPU, RAM, etc, are not upgradable, as they are soldered directly onto the motherboard, it turns out that the storage is a regular M.2 2280 NVMe drive, and can be swapped for another M.2 NVMe drive. So I decided to replace the M.2 NVMe drive in my Dell XPS 9360 with a larger one to give the laptop a longer lease of life. (I would like more RAM, but in practice for what I do 8 GiB of RAM is sufficient, even if it is not exactly ample. So I can probably live with 8 GiB of RAM indefinitely. And the i7 CPU is still relatively fast for what I do with the laptop.)

After looking around for a while I settled on a 1TB Samsung 970 EVO Plus because:

  • It was available in stock from my local retailer, for a reasonable price

  • 1TB was a large increase in storage space, which would reduce the impact of solid state drive overwrite limits

  • It reviewed pretty well: AnandTech, TomsHardware, PCWorld, StorageReview, Guru 3D, etc.

  • While it is a TLC drive (3 bits per cell), it has both a TurboWrite feature (initial writes go to an SLC (1 bit per cell) section) and a RAM cache to reduce the speed impact. This means it is well tuned to the sort of bursty writes you get on a laptop (and not well tuned for sustained "enterprise" writes). The existing Toshiba drive supplied with the Dell XPS 9360 was also a TLC drive, and performed well enough for my laptop use.

  • People had reported putting other Samsung 9xx EVO drives into Dell XPS 93x0 models (eg, on Reddit, iFixIt, and the Dell forums, etc).

  • Samsung make their own storage chips, and have a pretty reasonable reputation (and so the 5 year warranty offered is likely to actually indicate the quality).

Of note, while the Samsung 970 EVO Plus is capable of up to about 3 GB/s transfer speeds via M.2, the Dell XPS 9360 runs the M.2 interface in a power saving mode, which limits the maximum transfer speed to about 1.8 GB/s. This means one could choose to buy a slower drive and still get the same performance -- but I chose to pay a little bit more now for what is hopefully a more reliable drive, which could potentially be moved to another system later.

Swapping the drive

Physically swapping the M.2 drive requires disassembling the laptop, but I found lots of guides to replacing the M.2 drive, so I was fairly confident that I could physically replace the M.2 drive itself. The major challenge was going to be getting the data from one drive to the other: the Dell XPS 9360 has only one M.2 slot, and M.2 external adapters are not common, so copying directly from the old drive to the new drive was not possible.

In terms of physically replacing the drive I suggest you look at some of the other guides. My only extra hints would be:

  • I used a Torx T5 bit to remove most of the screws, and a Phillips #00 to remove the one under the bottom flap and the one on the M.2 drive itself.

  • There are a lot of plastic clips around all sides of the rear of the laptop, which need to be unclipped with a spudger or similar (I used a plastic spudger that came in a "phone repair" kit, which worked but was not ideal).

  • The clips at the front of the laptop and the sides are smaller, and should be unclipped first.

  • There are large clips across the rear in the hinge area, so the best option is to unclip all the others, and then lift the bottom forwards (away from the hinge area towards the front of the laptop) to unclip those clips (and remember to reinstall the base in the reverse manner: large clips by the hinge first, then around the sides).

  • Make certain you have everything copied off the old drive before opening the laptop to swap out the M.2 drive, as you will want to avoid having to open the laptop more than once.

Other than getting the bottom of the case off (possible, but fiddly and time consuming), swapping the M.2 drive physically is fairly easy, and anyone used to working with PC internals would be able to swap the drive. (Lots of extra care is required in opening the case, though, due to all the plastic clips; fortunately there is no glue.)

Transferring data

Because the hardware limitations prevented a direct copy between the drives my approach was:

  • Make a backup of everything on the laptop (copying everything onto my NAS, both the Windows 10 and Ubuntu Linux partitions, and all the other partitions)

  • Boot into Windows 10 and create both a Recovery Boot USB stick (small, about 1GiB), and a Recovery Install USB stick (about 16GiB) just in case. (I needed the Recovery Boot USB stick to get Windows 10 booting again, so do not skip that one; fortunately I did not need to use the Recovery Install USB stick again.)

  • Boot off a Ubuntu 18.04 Live USB stick, and then use dd to copy each individual partition into its own file on an external hard drive. (To boot from the USB stick, the easiest option is to plug in the USB stick, and then press F12 repeatedly when the Dell logo is displayed after power on until it says "Preparing One Time Boot Menu"; note that on the Dell XPS 9360 the Fn key should not be pressed, as unlike a Mac laptop, those keys are function keys by default, and Fn is needed for the other features.)

  • Use md5sum to verify that the original drive partition contents and the dd copies were bit for bit identical.

  • Make multiple printouts of the partition table of the old drive (in various units), using parted's print command, so I could recreate it again on the new drive (by hand).
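The dd-and-verify step looks roughly like this; the sketch below runs against a scratch file standing in for /dev/nvme0n1pN, so it is safe to try anywhere, but with real device names every character needs triple-checking:

```shell
# Scratch file standing in for a real partition (eg /dev/nvme0n1p1),
# so this sketch is safe to run.  Substitute real devices with great care.
part=$(mktemp)
dd if=/dev/urandom of="$part" bs=1M count=4 status=none

# Copy the "partition" contents into an image file (which would live on
# the external drive in the real procedure).
img=$(mktemp --suffix=.img)
dd if="$part" of="$img" bs=4M conv=fsync status=none

# Verify the copy is bit for bit identical before trusting it.
src_sum=$(md5sum < "$part" | cut -d' ' -f1)
img_sum=$(md5sum < "$img" | cut -d' ' -f1)
[ "$src_sum" = "$img_sum" ] && echo "copy verified"
```

The restore direction is the same commands with if= and of= swapped, followed by the same md5sum comparison.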

Then once I was happy that I had a full copy of the old drive, I powered off the laptop, opened it up (as above), and installed the new Samsung 970 EVO Plus drive.

Once the laptop was back together I:

  • Plugged the Ubuntu 18.04 Live USB stick back in again, and used F12 to bring up the One Time Boot Menu, and booted back into the Live CD.

  • Manually partitioned the new M.2 drive with a gpt partition table, with the partitions the same size as the previous drive, and with the same flags, etc.

  • Used dd to copy the partition contents off the external drive onto the new Samsung drive.

  • Used md5sum to verify that those copies on the Samsung drive partitions were bit for bit identical to what had been copied off the old Toshiba drive.

And then I rebooted, and it failed to boot at all :-(

As best I can tell, even though I maintained the partition contents identically (and the first time, the partition locations and flags identically), the UEFI booting in both Ubuntu Linux 16.04 LTS and in Windows 10 (neither would boot), was relying at least in part on something else -- the Partition UUID which is part of the gpt format, perhaps -- which changed, and that threw off the booting process. (Plus changing the drive clearly caused the UEFI BIOS to forget the boot order sequence it previously had.)

Getting Ubuntu Linux 16.04 LTS to boot again

To fix the booting of Ubuntu Linux 16.04 LTS (via grub) I:

  • Booted off the Ubuntu Linux 18.04 live CD again

  • Mounted the Ubuntu Linux 16.04 root file system (which is in a LVM volume group, inside a LUKS encrypted container) with:

    sudo cryptsetup luksOpen /dev/nvme0n1p4 dell
    sudo pvscan
    sudo mkdir /install
    sudo mount /dev/vg/root /install

    which requires the password for the LUKS volume at the luksOpen stage.

  • Mounted /proc, /sys, /dev, etc inside that:

    sudo mount -t sysfs sys /install/sys
    sudo mount -t proc proc /install/proc
    sudo mount -t devtmpfs udev /install/dev
    sudo mount -t devpts devpts /install/dev/pts
  • Changed into the chroot, and used that to mount the remaining volumes:

    sudo chroot /install
    mount -a

    and checked that /boot and /boot/efi had mounted:

    df -Pm | grep /boot
  • Then updated/reinstalled grub:

    update-grub
    grub-install

    from inside the chroot.

  • Then exited the chroot, and unmounted everything:

    sudo umount /install/dev/pts
    sudo umount /install/dev
    sudo umount /install/proc
    sudo umount /install/sys
    sudo umount /install/boot/efi
    sudo umount /install/boot
    sudo umount /install

And then I rebooted again. This time, to my relief, the laptop booted into grub normally, and then booted into Ubuntu Linux 16.04 LTS normally.

Unfortunately it still could not boot Windows 10 :-( It just kept coming up with the "Recovery" screen ("Your PC/Device needs to be repaired"), even after running sudo update-grub again from within the working Ubuntu Linux 16.04 LTS environment to ensure that Windows 10 was found in the boot environment. Since I knew the file system contents were bit for bit identical, I figured the boot process had become confused (possibly due to the new partition UUIDs).

Getting Windows 10 to boot again

I first tried booting off the Windows 10 Boot Recovery (1GiB) USB stick I had made (using F12 to get a One Time Boot Menu to boot the USB stick), then navigating into Advanced Options / Startup Repair / Windows 10 but that just reported "Startup Repair couldn't repair your PC". So clearly Windows 10 was very confused.

Next I found a Dell guide to Repairing the Windows EFI bootloader, which I tried by going into Advanced Options / Command Prompt from the Windows 10 Boot Recovery USB stick. Unfortunately I got stuck at step 7 of that guide, because the ESP (EFI boot partition) was hidden for some reason, which meant those instructions would not allow me to assign a drive letter to reinstall the Windows 10 boot config. (My guess is the same issue caused the "Startup Repair" to fail.) I do not know why it showed up as Hidden, without a drive letter, as it did not have the hidden flag in the gpt partition table (and I could be certain it was the ESP partition by its exact size).

Fortunately I found another way to assign a drive letter to the ESP partition:

diskpart
list disk
sel disk 0
list partition
select partition 1
assign letter=H
exit
which let me move on.

Unfortunately, the next step, bootrec /FixBoot, then failed with "Access is Denied". Some guides recommend reformatting the ESP partition at this point, but I was reluctant to do that because there were both Ubuntu Linux 16.04 UEFI boot files and Dell UEFI boot files on there (eg, for recovery tools), so I kept looking.

Another guide suggested bootrec /REBUILDBCD, which I tried next, but after scanning the system it reported "The system cannot find the path specified." :-(

With some more hunting online, I found someone who had encountered and fixed that issue, by doing:

cd /d H:\EFI\Microsoft\Boot
ren BCD BCD.bak
bcdboot c:\Windows /l en-nz /s h: /f ALL

where c:\Windows is the Windows 10 directory on the drive letter found as the main Windows 10 drive, en-nz is the preferred locale (en-us seems likely to be the default), /s h: specifies the drive letter assigned to the EFI partition, and /f ALL specifies that the UEFI, BIOS, and NVRAM boot settings should all be updated. For good measure I also tried:

bootrec /fixboot

again, but that still failed ("Access is Denied").

However after exiting out of the recovery shell and rebooting the laptop, it actually automatically booted into the Windows 10 environment using the Windows boot manager. At this point it only booted Windows 10, but I was able to get into the boot manager (eg, F12), ask it to boot Ubuntu Linux 16.04 LTS, and then do:

sudo update-grub
sudo grub-install

inside a Ubuntu Linux 16.04 LTS terminal window, and then the laptop was booting normally, with grub able to boot both Ubuntu Linux 16.04 LTS and Windows 10 as it did on the old drive. Phew.

Expanding the Linux root partition

Once everything was copied onto the new Samsung 970 EVO Plus 1TB drive, and booting successfully, that just left the original purpose: expanding the Linux file system. (Because I hardly use Windows 10 on the laptop, and had not run into space problems on that partition -- about 90 GiB -- I chose to dedicate all the extra space on the new drive to Linux.)

My original plan was just to create an additional partition on the end of the drive (with the remaining 700 GiB of space), and then use LVM to join the two partitions together, given that the root file system was already on LVM -- and that was how I laid out the drive when I first copied the data over. However I realised that with the Linux filesystem inside a LUKS encrypted partition, that would both be more fragile (two LUKS containers, or some data in a LUKS container and the rest outside it), and potentially require entering two passwords on boot (to unlock each partition).

So I spent a while shuffling partitions around so that all the ESP / Windows / Dell partitions were at the start of the drive, followed by the Ubuntu /boot partition (which needs to be outside the LUKS encryption unless you do special EFI boot tricks), followed by the main LUKS / LVM partition at the end of the drive. (It was fiddly to shuffle things around, but fortunately when you have a drive that is four times as big as the original content it is easy to make more partitions to temporarily hold copies of the data you want to move: so it was just more use of dd and md5sum to make sure everything was copied correctly, into the right places. I even had to delete some of the partitions, quit parted, and then recreate the partitions at the identical spot and size to get them to show up in the right partition order that I wanted.)

Once all the partitions were in the right order, and the right size, and the laptop was booting Ubuntu Linux 16.04 LTS and Windows 10 correctly, I was ready to carry on with expanding the Linux root drive. My Linux root drive is:

  • In an ext4 file system (Ubuntu 16.04 LTS default)

  • On a LVM logical volume (LV)

  • Inside a LVM volume group (VG)

  • Inside a LVM physical volume (PV)

  • Inside a LUKS encrypted container

  • Inside a partition on /dev/nvme0n1 (/dev/nvme0n1p7 by the end of all the partition shuffling).

Which means in order to expand the root file system, all of those layers need to be expanded, in the opposite order. That's a lot of layers to potentially go wrong.

Since I still had a couple of recent backups of the laptop drive, as well as the original M.2 drive, which I had recently tested, and I knew how to recover from booting issues, I felt it was worth giving these steps a go. (Seriously: have known-good, tested backups before trying to do this; there is a lot that can go wrong, and typos or interrupted operations could wreak havoc.)

(Several of these instructions suggest creating a temporary partition after the one you are expanding, and writing random data to that partition before expanding the LUKS volume into it: I did not do that because (a) it takes a bunch of time to do, (b) it forces the SSD to assume the entire disk is in use, and copy more data around thus using up SSD drive life due to write amplification, (c) the encrypted partition is already large enough for my level of paranoia about this particular volume being recovered, and (d) the data on this laptop is not that sensitive -- it's mostly just a FPGA development laptop at this point, and almost all of that development is open source on GitHub anyway, so I do not feel the need to do that much to protect it: it is just encrypted because that is what I do with all my computers that I might travel with, to make data recovery by someone else non-trivial.)

Expanding the partition

To expand the partition I booted off the Ubuntu 18.04 Live USB install, and then used parted to do:

resizepart 7 953869MiB

where 953869MiB was 1MiB lower than the maximum size of the drive displayed by parted at the top of the partition table list result (the exact size does not work, I think due to rounding and/or the partition elements starting at 0).
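For a nominal "1 TB" drive (1,000,204,886,016 bytes is a typical capacity, though that exact figure is an assumption here -- use whatever size parted actually reports), the arithmetic works out like this:

```shell
# parted displays sizes rounded to the nearest MiB, so round the same way
# before subtracting the 1 MiB safety margin.
bytes=1000204886016                           # assumed nominal 1 TB capacity
mib_shown=$(( (bytes + 524288) / 1048576 ))   # nearest MiB, as parted shows
echo "resizepart 7 $(( mib_shown - 1 ))MiB"
```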

Then I rebooted back into Ubuntu 18.04 LTS Live CD to expand the LUKS container, while it was open but not mounted.

Expanding the LUKS container

Finding some guides to enlarging a LVM on LUKS install was what convinced me that rearranging the disk partitions to have a single LUKS container was the best option. (Previously I had assumed expanding the LUKS container was not possible, even though I knew all the other steps were possible.)

Once the partition is expanded, reboot back into the Ubuntu Linux Live environment (to ensure that Linux consistently sees the new partition size, with nothing mounted), then open the LUKS container and expand it to the new size of the partition (note the new partition number: I rearranged the partitions, above, to put the LUKS / LVM partition at the end):

sudo cryptsetup luksOpen /dev/nvme0n1p7 dell
ls -l /dev/mapper/dell
sudo vgscan
sudo vgchange -ay
sudo cryptsetup resize dell

That command completes pretty much immediately, and the default new size is "the size of the disk partition" which is exactly what we want here.

Verify the new size of the LUKS container with:

sudo cryptsetup -v status dell

which reports the size in "512 byte sectors" (one of the most useless units for modern drives :-( ); but fortunately dividing by 2048 (2 * 1024) gives us MiB, and we can verify the new size is very close to the partition size (in my case 2 MiB smaller; it also revealed the LUKS container was injecting a 4096 sector offset, which is exactly 2 MiB: 4096 * 512 = 2097152 = 2 * 1024 * 1024; I am not sure if that is a requirement, a default, or something I chose when I first set it up).
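The conversion is easy to script; the sector count below is an illustrative value chosen to match the roughly 832 GiB partition, not the exact figure from my machine:

```shell
# cryptsetup status reports sizes in 512-byte sectors; 2048 sectors = 1 MiB.
sectors=1744830464            # example value, ~832 GiB (assumed)
mib=$(( sectors / 2048 ))
gib=$(( mib / 1024 ))
echo "${mib} MiB (~${gib} GiB)"
```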

Expanding the LVM physical volume (PV)

Expanding the LVM physical volume is just a matter of asking LVM to recognise the additional space:

sudo pvresize /dev/mapper/dell

and it should return almost immediately, reporting "1 physical volume(s) resized / 0 physical volume(s) not resized". We can verify the new physical volume size with:

sudo pvdisplay /dev/mapper/dell

and that should show a "PV Size" around 832 GiB, as well as lots of "Free PE" now the physical volume is much larger.

Expanding the LVM volume group (VG)

The volume group is automatically expanded when it has physical volumes with spare space in them, which we can verify with:

sudo vgdisplay

That should also show a "VG Size" around 832 GiB, as well as lots of "Free PE / Size".

Expanding the LVM logical volume (LV)

My existing install had two logical volumes, created during the original Ubuntu Linux 16.04 LTS install:

  • /dev/vg/root

  • /dev/vg/swap

and unfortunately they were in that order on the disk, as shown by:

sudo lvdisplay

Since I preferred to have my root LV contiguous, I chose to remove the swap logical volume, then expand the root logical volume, then create a new swap logical volume and initialise that again. (Because we are booted into an Ubuntu Live USB environment, none of these are mounted, and the swap usage is obviously transitory anyway, so the contents did not need to be retained.)

To do this I did:

sudo lvchange -an /dev/vg/swap
sudo lvremove /dev/vg/swap

which prompts for confirmation of removing the swap logical volume.

Then I expanded the root logical volume to 512 GiB (chosen to not completely use up the extra disk space to allow more flexibility, but to be about 4 times as big as the existing Linux root filesystem):

sudo lvresize -L 512G /dev/vg/root

and verified that worked as expected with:

sudo lvdisplay /dev/vg/root

which should show a "LV Size" of 512.00 GiB as a result.

Then I made a new swap logical volume, of 2 GiB again:

sudo lvcreate -n swap -L 2G /dev/vg
sudo lvdisplay /dev/vg/swap

and reinitialised the swap space:

sudo mkswap -L swap /dev/vg/swap

(Note that this process changes the UUID of the swap volume, which might need to be fixed up if you are referencing the swap by UUID, eg in /etc/fstab, rather than by LV path or volume label.)
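For reference, the two styles of swap entry look something like this in /etc/fstab (illustrative lines, not copied from my system):

```
# Referenced by LV path: survives the swap LV being recreated:
/dev/mapper/vg-swap  none  swap  sw  0  0
# Referenced by UUID: must be updated after mkswap assigns a new UUID:
#UUID=0db2539f-...   none  swap  sw  0  0
```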

Resizing the ext4 root file system

Now that every layer below the file system is resized, we can resize the ext4 filesystem itself. With ext4 this could actually be done online (while mounted), but because I was still booted into the Ubuntu Linux live environment, I did the resize offline, starting by checking the file system:

sudo e2fsck -f /dev/vg/root
sudo resize2fs -p /dev/vg/root

where the -p is for progress messages, but in practice the resize only took a few seconds on the Samsung 970 EVO Plus drive (as it only moves metadata around). The new file system size is reported in 4KiB blocks (another not very useful unit :-( ), as 134217728 4KiB blocks, which we can check is correct: 134217728 * 4 / 1024 / 1024 = 512 GiB.
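The block-count arithmetic is simple enough to check in the shell:

```shell
# resize2fs reports the new size in 4KiB blocks; convert to GiB:
# 134217728 blocks * 4 KiB/block, divided down twice by 1024:
echo $((134217728 * 4 / 1024 / 1024))   # 512 (GiB)
```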

After that I ran another e2fsck -f /dev/vg/root check out of an abundance of caution, and then mounted the partition to check the expected contents were there (and to verify how the swap partition was mounted, to reduce boot issues):

sudo mkdir /install
sudo mount /dev/vg/root /install
grep swap /install/etc/fstab

Fortunately the swap was being mounted by LV path (/dev/mapper/vg-swap) so it should survive being recreated elsewhere on the disk.

While it was mounted, I also checked the /dev/vg/root filesystem usage, to make sure I now had lots of free space:

sudo df -Pm /install

and that showed I had gone from about 98% used on the root partition to about 27% used. So I unmounted the file system again:

sudo umount /install

A final:

sudo vgdisplay

showed I had a bit over 300 GiB of unallocated space in the volume group saved for later. (And if I do want to expand the root logical volume I would probably remove the swap again, and then recreate it, to keep the root logical volume in one LVM extent for simplicity.)


Once all the expansion steps were done from the Ubuntu Linux live environment, I simply rebooted, and Ubuntu 16.04 LTS and Windows 10 booted fine -- and I had lots more space in my Linux environment.

With a couple of days of effort, and a few hundred dollars for a new M.2 drive, I have managed to change my Dell XPS 9360 laptop from a persistently almost full root file system, to one which is about 27% full (and has about 300 GiB still available to allocate later). That makes it much more useful, potentially for several more years.

About a day of that time was consumed by:

  • Making backups

  • Copying the file system partitions around (especially to/from a USB 3 spinning disk)

  • Checking those backups, and copies

  • Waiting for Windows 10 to create USB Recovery drives (the 16GiB system install recovery drive took over 2.5 hours to write!)

and the remainder was research, getting Ubuntu Linux 16.04 LTS and Windows 10 booting again, etc. (Actually physically swapping the M.2 drive inside the Dell XPS 9360 took maybe half an hour including all the disassembly and reassembly.) I expect if I did it again the process would be faster, as I could have avoided some of the file system rearrangement I did by going directly to the final partition layout, knowing that I was going to have to make everything bootable again anyway.

Of note:

  • One potential advantage of not using the whole SSD is that writes to the drive will be constrained to about the first 60% of the drive, which should reduce the amount of data that the SSD firmware feels it needs to shuffle around (particularly important because by default LUKS does not pass on TRIM/DISCARD requests for security reasons, so any block written to will then be copied around by the SSD firmware forever).

  • It turned out to be almost impossible to find the SSD erase block size, and align anything to those erase blocks (cf XFS Storage Layout Considerations). As best I could tell, erase block sizes seem to be trade secrets now, certainly not appearing in data sheets and in some cases not even available on request; and everything seems to default to aligning to 1 MiB blocks as being sufficient. Aligning to 1 MiB seems likely to be reasonable (especially after 5+ years of OSes doing that automatically, and vendors designing for those OSes), but possibly not the optimal choice in theory. So for single drive systems it probably makes sense just to let everything auto-align to 1 MiB boundaries, and ignore the problem.
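On the TRIM point above: if the security trade-off is acceptable, LUKS can be told to pass discards through. A sketch, assuming the container is opened via /etc/crypttab (the discard option and the --allow-discards flag are standard cryptsetup features; the device and mapping names are from my setup):

```
# /etc/crypttab: add "discard" to the options field:
dell  /dev/nvme0n1p7  none  luks,discard

# Or when opening the container by hand:
#   sudo cryptsetup luksOpen --allow-discards /dev/nvme0n1p7 dell
# and then trim periodically (instead of the "discard" mount option):
#   sudo fstrim -av
```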

Even modern local (internal) storage is basically a "network attached storage server" of its own, with its own ideas about how to store data, and its own storage API. It just happens to speak SATA or SAS or NVMe or similar, rather than Ethernet and TCP/IP.

ETA 2019-04-30: After doing all of this, when I next booted into Windows 10 for an extended period, I found that Windows Update was going to install (no option) the update:

Dell, Inc - Firmware - 9/27/2018 12:00:00 AM - 2.10.0
Status: Pending Install

(with the lovely message "We'll automatically install updates when you aren't using your device, or you can install them now if you want.").

It was not clear to me what this update was. By searching the Microsoft Catalog for 2.10.0 I could find three versions with the matching date (and three 2.10.0 versions with a later update date of 3/25/2019); I think the three versions are for different versions of Windows 10, and my guess is this version is the one that would be installed on my Dell XPS 9360, since I think I have already updated to the latest Windows 10 release. Unfortunately there were no other details about what it was. And it was unclear if the install was being prompted by replacing the drive, or just by the date.

Since Windows 10 was not giving me an option, and I hate having things break randomly in the background, I chose to plug the laptop into the mains power, and let it "Install Now". Naturally it wanted to restart, and when it restarted it then proceeded to update the BIOS and firmware of everything in the laptop :-(

I am unclear whether this user hostile behaviour of forcing compulsory unscheduled firmware updates is Dell behaviour or new Microsoft behaviour (or both), why it was forced to a September 2018 version, and whether it was triggered by replacing the Toshiba M.2 drive that came with the laptop with the Samsung 970 EVO Plus drive -- or just triggered by the date / first sufficiently long Windows boot that something decided it had time to treat my laptop as its own.

Hopefully this unplanned firmware update does not adversely impact the Dell XPS 9360 that I had just spent a couple of days upgrading :-( Fortunately Windows 10 and Ubuntu Linux 16.04 LTS do seem to boot up again.

(After rebooting into Windows again, Dell Update -- not Windows Update -- decided it had a further 12 updates it wanted to install, including a BIOS released 2019-04-22; but fortunately I could choose "Remind Later" to those, so that is what I did.)

Posted Tue Apr 30 14:25:13 2019 Tags:

About 18 months ago I wondered why the XFS FAQ recommended a stripe width half the number of disks for RAID-10, as the underlying rationale did not seem to be properly explained anywhere (see XFS FAQ on sunit, swidth values). The answer turned out to be because the RAID-0 portion of RAID-10 dominated the layout choices.

I suggested extending the FAQ to provide some rationale, but Dave Chinner (the main Linux XFS maintainer) said "The FAQ is not the place to explain how the filesystem optimises allocation for different types of storage", and pointed at a section of the XFS admin doc on alignment to storage geometry, which at the time -- and now, 18 months later -- reads:

==== Alignment to storage geometry

TODO: This is extremely complex and requires an entire chapter to itself.

which is... rather sparse. Because Dave had not had time to write that "chapter to itself".

At the time I offered to write a "sysadmin's view" of the relevant considerations -- an offer which would apparently be greatly appreciated -- but the writing got delayed by actual work.

I eventually posted what I had written to the XFS mailing list in February 2018, where it seems to have been lost in the noise and ignored.

Since it is now nearly a year later, and nothing seems to have happened with the documentation I wrote -- and the mailing list location is not very searchable either -- I have decided to repost it here on my blog as a (slightly) more permanent home. It appears unlikely to be incorporated into the XFS documentation.

So below is that original, year old, documentation draft. The advice below is unreviewed by the XFS maintainers (or anybody, AFAICT), and is just converted from the Linux kernel documentation RST format to Markdown (for my blog). Conversion done with pandoc and a bunch of manual editing, for all the things pandoc missed, or was confused by (headings, lists, command line examples, etc).

I would suggest double checking anything below against other sources before relying on it. If there is no other documentation to check, perhaps ask on the XFS Mailing List instead.

Alignment to storage geometry

XFS can be used on a wide variety of storage technology (spinning magnetic disks, SSDs), on single disks or spanned across multiple disks (with software or hardware RAID). Potentially there are multiple layers of abstraction between the physical storage medium and the file system (XFS), including software layers like LVM, flash translation layers, or hierarchical storage management.

Each of these technology choices has its own requirements for best alignment, and/or its own trade offs between latency and performance, and the combination of multiple layers may introduce additional alignment or layout constraints.

The goal of file system alignment to the storage geometry is to:

  • maximise throughput (eg, through locality or parallelism)

  • minimise latency (at least for common activities)

  • minimise storage overhead (such as write amplification due to read-modify-write -- RMW -- cycles).

Physical Storage Technology

Modern storage technology divides into two broad categories:

  • magnetic storage on spinning media (eg, HDD)

  • flash storage (eg, SSD or NVMe)

These two storage technology families have distinct features that influence the optimal file system layout.

Magnetic Storage: accessing magnetic storage requires moving a physical read/write head across the magnetic media, which takes a non-trivial amount of time (ms). The seek time required to move the head to the correct location is approximately linearly proportional to the distance the head needs to move, which means two locations near each other are faster to access than two locations far away. Performance can be improved by locating data regularly accessed together "near" each other. (See also Wikipedia Overview of HDD performance characteristics.)

4KiB physical sectors HDD: Most larger modern magnetic HDDs (many 2TiB+, almost all 4TiB+) use 4KiB physical sectors to help minimise storage overhead (of sector headers/footers and inter-sector gaps), and thus maximise storage density. But for backwards compatibility they continue to present the illusion of 512 byte logical sectors. Alignment of file system data structures and user data blocks to the start of (4KiB) physical sectors avoids unnecessarily spanning a read or write across two physical sectors, and thus avoids write amplification.

Flash Storage: Flash storage has both a page size (smallest unit that can be written at once), and an erase block size (smallest unit that can be erased) which is typically much larger (eg, 128KiB). A key limitation of flash storage is that only one value can be written to it on an individual bit/byte level. This means that updates to physical flash storage usually involve an erase cycle to "blank the slate" with a single common value, followed by writing the bits that should have the other value (and writing back the unmodified data -- a read-modify-write cycle). To further complicate matters, most flash storage physical media has a limitation on how many times a given physical storage cell can be erased, depending on the technology used (typically in the order of 10000 times).

To compensate for these technological limitations, all flash storage suitable for use with XFS uses a Flash Translation Layer within the device, which provides both wear levelling and relocation of individual pages to different erase blocks as they are updated (to minimise the amount that needs to be updated with each write, and reduce the frequency blocks are erased). These are often implemented on-device as a type of log structured file system, hidden within the device.

For a file system like XFS, a key consideration is to avoid spanning data structures across erase blocks boundaries, as that would mean that multiple erase blocks would need updating for a single change. Write amplification within the SSD may still result in multiple updates to physical media for a single update, but this can be reduced by advising the flash storage of blocks that do not need to be preserved (eg, with the discard mount option, or by using fstrim) so it stops copying those blocks around.

RAID

RAID provides a way to combine multiple storage devices into one larger logical storage device, with better performance or more redundancy (and sometimes both, eg, RAID-10). There are multiple RAID array arrangements ("levels") with different performance considerations. RAID can be implemented both directly in the Linux kernel ("software RAID", eg the "MD" subsystem), or within a dedicated controller card ("hardware RAID"). The filesystem layout considerations are similar for both, but where the "MD" subsystem is used modern user space tools can often automatically determine key RAID parameters and use those to tune the layout of higher layers; for hardware RAID these key values typically need to be manually determined and provided to user space tools by hand.

RAID 0 stripes data across two or more storage devices, with the aim of increasing performance, but provides no redundancy (in fact the data is more at risk as failure of any disk probably renders the data inaccessible). For XFS storage layout the key consideration is to maximise parallel access to all the underlying storage devices by avoiding "hot spots" that are reliant on a single underlying device.

RAID 1 duplicates data (identically) across two or more storage devices, with the aim of increasing redundancy. It may provide a small read performance boost if data can be read from multiple disks at once, but provides no write performance boost (data needs to be written to all disks). There are no special XFS storage layout considerations for RAID 1, as every disk has the same data.

RAID 5 organises data into stripes across three or more storage devices, where N-1 storage devices contain file system data, and the remaining storage device contains parity information which allows recalculation of the contents of any one other storage device (eg, in the event that a storage device fails). To avoid the "parity" block being a hot spot, its location is rotated amongst all the member storage devices (unlike RAID 4, which had a parity hot spot). Writes to RAID 5 require reading multiple elements of the RAID 5 parity block set (to be able to recalculate the parity values), and writing at least the modified data block and parity block. The performance of RAID 5 is improved by having a high hit rate on caching (thus avoiding the read part of the read-modify-write cycle), but there is still an inevitable write overhead.

For XFS storage layout on RAID 5 the key considerations are the read-modify-write cycle to update the parity blocks (and avoiding needing to unnecessarily modify multiple parity blocks), as well as increasing parallelism by avoiding hot spots on a single underlying storage device. For this XFS needs to know both the stripe size on an underlying disk, and how many of those stripes can be stored before it cycles back to the same underlying disk (N-1).

RAID 6 is an extension of the RAID 5 idea, which uses two parity blocks per set, so N-2 storage devices contain file system data and the remaining two storage devices contain parity information. This increases the overhead of writes, for the benefit of being able to recover information if more than one storage device fails at the same time (including, eg, during the recovery from the first storage device failing -- a not unknown event with larger storage devices and thus longer RAID parity rebuild recovery times).

For XFS storage layout on RAID 6, the considerations are the same as RAID 5, but only N-2 disks contain user data.

RAID 10 is a conceptual combination of RAID 1 and RAID 0, across at least four underlying storage devices. It provides both storage redundancy (like RAID 1) and interleaving for performance (like RAID 0). The write performance (particularly for smaller writes) is usually better than RAID 5/6, at the cost of less usable storage space. For XFS storage layout the RAID-0 performance considerations apply -- spread the work across the underlying storage devices to maximise parallelism.

A further layout consideration is that RAID arrays typically need to store some metadata that helps locate the underlying member devices. This metadata may be stored at the start or end of the RAID member devices. If it is stored at the start of the member devices, this may introduce alignment considerations. For instance the Linux "MD" subsystem has multiple metadata formats: formats 0.9/1.0 store the metadata at the end of the RAID member devices, and formats 1.1/1.2 store the metadata at the beginning of the RAID member devices. Modern user space tools will typically try to ensure user data starts on a 1MiB boundary ("Data Offset").

Hardware RAID controllers may use either of these techniques too, and may require manual determination of the relevant offsets from documentation or vendor tools.

Disk partitioning

Disk partitioning impacts on file system alignment to the underlying storage blocks in two ways:

  • the starting sectors of each partition need to be aligned to the underlying storage blocks for best performance. With modern Linux user space tools this will typically happen automatically, but older Linux and other tools often would attempt to align to historically relevant boundaries (eg, 63-sector tracks) that are not only irrelevant to modern storage technology but due to the odd number (63) result in misalignment to the underlying storage blocks (eg, 4KiB sector HDD, 128KiB erase block SSD, or RAID array stripes).

  • the partitioning system may require storing metadata about the partition locations between partitions (eg, MBR logical partitions), which may throw off the alignment of the start of the partition from the optimal location. Use of GPT partitioning is recommended for modern systems to avoid this, or if MBR partitioning is used either use only the 4 primary partitions or take extra care when adding logical partitions.

Modern Linux user space tools will typically attempt to align on 1MiB boundaries to maximise the chance of achieving a good alignment; beware if using older tools, or storage media partitioned with older tools.

Storage Virtualisation and Encryption

Storage virtualisation, such as the Linux kernel LVM (Logical Volume Manager), introduces another layer of abstraction between the storage device and the file system. These layers may also need to store their own metadata, which may affect alignment with the underlying storage sectors or erase blocks.

LVM needs to store metadata on the physical volumes (PV) -- typically 192KiB at the start of the physical volume (check the "1st PE" value with pvs -o name,pe_start). This holds both physical volume information as well as volume group (VG) and logical volume (LV) information. The size of this metadata can be adjusted at pvcreate time to help improve alignment of the user data with the underlying storage.

Encrypted volumes (such as LUKS) also need to store their own metadata at the start of the volume. The size of this metadata depends on the key size used for encryption. Typical sizes are 1MiB (256-bit key) or 2MiB (512-bit key), stored at the start of the underlying volume. These headers may also cause alignment issues with the underlying storage, although probably only in the case of wider RAID 5/6/10 sets. The --align-payload argument to cryptsetup may be used to influence the data alignment of the user data in the encrypted volume (it takes a value in 512 byte logical sectors), or a detached header (--header DEVICE) may be used to store the header somewhere other than the start of the underlying device.
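As a concrete sketch of the --align-payload option: it takes a value in 512-byte sectors, so aligning the payload to a 1 MiB boundary looks like this (the luksFormat invocation is illustrative only, with a placeholder device name -- running it would destroy data on a real device):

```shell
# 1 MiB expressed in 512-byte sectors:
echo $((1024 * 1024 / 512))   # 2048
# so, illustratively:
#   cryptsetup luksFormat --align-payload 2048 /dev/sdXN
```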

Determining su/sw values

Assuming every layer in your storage stack is properly aligned with the underlying layers, the remaining step is to give mkfs.xfs appropriate values to guide the XFS layout across the underlying storage to minimise latency and hot spots and maximise performance. In some simple cases (eg, modern Linux software RAID) mkfs.xfs can automatically determine these values; in other cases they may need to be manually calculated and supplied.

The key values to control layout are:

  • su: stripe unit size, in bytes (use m or g suffixes for MiB or GiB) that is updatable on a single underlying device (eg, RAID set member)

  • sw: stripe width, in member elements storing user data before you wrap around to the first storage device again (ie, excluding parity disks, spares, etc); this is used to distribute data/metadata (and thus work) between multiple members of the underlying storage to reduce hot spots and increase parallelism.

When multiple layers of storage technology are involved, you want to ensure that each higher layer has a block size that is the same as the underlying layer, or an even multiple of the underlying layer, and then give that largest multiple to mkfs.xfs.

Formulas for calculating appropriate values for various storage technology:

  • HDD: alignment to physical sector size (512 bytes or 4KiB). This will happen automatically due to XFS defaulting to 4KiB block sizes.

  • Flash Storage: alignment to erase blocks (eg, 128 KiB). If you have a single flash storage device, specify su=ERASE_BLOCK_SIZE and sw=1.

  • RAID 0: Set su=RAID_CHUNK_SIZE and sw=NUMBER_OF_ACTIVE_DISKS, to spread the work as evenly as possible across all member disks.

  • RAID 1: No special values required; use the values required from the underlying storage.

  • RAID 5: Set su=RAID_CHUNK_SIZE and sw=(NUMBER_OF_ACTIVE_DISKS-1), as one disk is used for parity so the wrap around to the first disk happens one disk earlier than the full RAID set width.

  • RAID 6: Set su=RAID_CHUNK_SIZE and sw=(NUMBER_OF_ACTIVE_DISKS-2), as two disks are used for parity so the wrap around to the first disk happens two disks earlier than the full RAID set width.

  • RAID-10: The RAID 0 portion of RAID-10 dominates alignment considerations. The RAID 1 redundancy reduces the effective number of active disks, eg 2-way mirroring halves the effective number of active disks, and 3-way mirroring reduces it to one third. Calculate the number of effective active disks, and then use the RAID 0 values. Eg, for 2-way RAID 10 mirroring, use su=RAID_CHUNK_SIZE and sw=(NUMBER_OF_MEMBER_DISKS / 2).

  • RAID-50/RAID-60: These are logical combinations of RAID 5 and RAID 0, or RAID 6 and RAID 0 respectively. Both the RAID 5/6 and the RAID 0 performance characteristics matter. Calculate the number of disks holding parity (2+ for RAID 50; 4+ for RAID 60) and subtract that from the number of disks in the RAID set to get the number of data disks. Then use su=RAID_CHUNK_SIZE and sw=NUMBER_OF_DATA_DISKS.

For the purpose of calculating these values in a RAID set only the active storage devices in the RAID set should be included; spares, even dedicated spares, are outside the layout considerations.
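As a worked example, consider a hypothetical RAID 6 set of six active disks with a 64KiB chunk size: two disks' worth of each stripe hold parity, leaving four data disks (the mkfs.xfs invocation and device name are illustrative):

```shell
# RAID 6: sw = active disks - 2 parity disks
echo $((6 - 2))   # 4
# giving, for this hypothetical array:
#   mkfs.xfs -d su=64k,sw=4 /dev/md0
```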

A note on sunit/swidth versus su/sw

Alignment values historically were specified as sunit/swidth values, which give numbers of 512-byte sectors, where swidth is some multiple of sunit. These units were historically useful when all storage technology used 512-byte logical and physical sectors, and sizes were often reported by underlying layers in physical sectors. However they are increasingly difficult to work with for modern storage technology with its variety of physical sector and block sizes.

The su/sw values, introduced later, provide a value in bytes (su) and a number of occurrences (sw), which are easier to work with when calculating values for a variety of physical sector and block sizes.

The conversion between the two forms is:

  • sunit = su / 512
  • swidth = sunit * sw

With the result that swidth = (su / 512) * sw.
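Continuing with a hypothetical su=64KiB, sw=4 array, the equivalent legacy values would be:

```shell
# sunit = su / 512 = 65536 / 512
echo $((65536 / 512))         # 128
# swidth = sunit * sw = 128 * 4
echo $(((65536 / 512) * 4))   # 512
```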

Use of sunit / swidth is discouraged, and use of su / sw is encouraged to avoid confusion.

WARNING: beware that while the sunit/swidth values are specified to mkfs.xfs in 512-byte sectors, they are reported by mkfs.xfs (and xfs_info) in file system blocks (typically 4KiB, shown in the bsize value). This can be very confusing, and is another reason to prefer specifying values with su / sw and ignoring the sunit / swidth options to mkfs.xfs.

Posted Tue Jan 8 14:11:07 2019 Tags: