Recently I ordered a Synology DS216+ II Linux based NAS with two 6TB WD60EFRX (WD Red NAS) drives, as an "end of (business) year" special. I had been considering buying a NAS for a while as I have lots of data collected over years from many different computers scattered over lots of drives (including several copies of that data), and having a definitive central copy of that data would make things a lot easier. My other hope is to finally get rid of the external drive attached to my main workstation (which has been full for a while anyway), as that is the loudest thing near my work area (at least when it spins up; and the drive spin up causes annoying disk IO pauses even on things that should in theory just need the internal SSD).

I went with Synology because I have friends who have used them for years, and I know that I can get an ssh connection into them to check things. In addition the data recovery options for getting data off the disks elsewhere are pretty good -- it is Linux mdadm and lvm under the hood. The DS216+ II happened to be the one on sale, and the bundle turned out to be not that much more expensive (on sale) than buying a DS216j and the drives separately -- so the better RAM and CPU specifications seemed worth the small extra cost, and hot swappable drives are also a useful addition (the DS216j requires opening the case with a screwdriver).

The single Gigabit Ethernet of both models was not a major limitation for me, as my use case is basically "single user", and each of the client machines also has only Gigabit Ethernet (or less); it is very rare that I'm using more than one of those client machines at a time. (Besides, the roughly 100MB/s maximum of a single Gigabit Ethernet link is still faster than the USB2 speed of older drive attachments, around 48 MB/s due to the 480 Mbps bus -- and, eg, the external drive on my main desktop is USB2 attached due to that being what is available on the Apple Thunderbolt Cinema Display monitor I have.) The 6TB WD Red NAS drives were basically chosen based on price/capacity being reasonable, and expecting to only use 3-4TB in the immediate future. (Only WD Red NAS drives were available in the bundle, but I would probably have chosen them anyway.)

Because the DS216+ was ordered as a bundle it arrived with the drives pre-installed, and a note attached to check that they were still properly inserted. It also appears to have been delivered with DSM (DiskStation Manager) pre-installed on the drives -- DSM 6.1-15047 to be precise -- which means that I did not have to go through some of the setup steps. But it also meant that it had been preinstalled with some defaults that I did not necessarily want -- so I chose to delete the disk volume and start again (given that volumes apparently cannot be shrunk, and I do want to leave space for more than one volume at this stage).

Out of the box, the DS216+ found an IP address with DHCP, and then was reachable on http://IP:5000/ and also on http://diskstation.local:5000/ -- the latter being found by Multicast DNS (mDNS)/Bonjour. The default username was admin, and it appears that if you do not complete all the setup steps the default password is no password (ie, enter admin and then just press enter for the password).
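
If the diskstation.local name does not show up by itself, it can be checked from a Linux client with something like this (a sketch; avahi-resolve assumes the avahi-utils package is installed, and macOS clients resolve .local names natively so a plain ping should work there):

avahi-resolve --name diskstation.local
ping -c 3 diskstation.local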

My first setup step was to assign a static DHCP lease for the DS216+ MAC address, and a DNS name, so that I could more easily find it (nas01 in my local domain). The only way I could find to persuade the DS216+ to switch over to the new IP address was to force it to restart ("person" icon -> Restart).

Once that was done, it seemed worth updating to the latest DSM, which is currently 6.1-15047-2 and appears to just contain bug fixes for 6.1-15047. To do that in the DSM interface (http://nas01:5000/) go to Control Panel -> Update & Restore, and it should tell you that a new DSM is available and offer to download it. Clicking on Download will fetch the software off the Internet, and when that finishes clicking on "Update Now" will install the update. After the "are you sure you want to do this now" prompt, warning you that it will reset the DS216+, the update will start and then the DS216+ will restart. It said it would take up to 10 minutes, but actually took about 2 minutes (presumably at least in part due to being a minor software update).

The other "attention needed" task was an update in the Package Center, which is Synology's "app store". It needed me to agree to the Package Center Terms of Service, and then I could see there was an update to the "File Station" application which I assume is in the default install. I also updated that at this point (by clicking on "Update", which seemed to do everything fairly transparently).

At this point it also seemed useful to create a user for myself, and set the "admin" password to something longer than an empty string. Both are done in the Control Panel -> User area. There are a lot of options in the new user creation (around volume access, and quotas), but I left them all at the default other than putting my user into the administrators group so that it could be used via ssh.

With the user/passwords set up, I could ssh into the DS216+ (since ssh seemed to be on by default):

ssh nas01

and look around at how things were set up out of the box.

The DS216+ has a Linux 3.10 kernel:

ewen@nas01:/$ uname -a
Linux nas01 3.10.102 #15047 SMP Thu Feb 23 02:23:28 CST 2017 x86_64 GNU/Linux synology_braswell_216+II
ewen@nas01:/$

with a dual core Intel N3060 CPU:

ewen@nas01:/$ grep "model name" /proc/cpuinfo
model name  : Intel(R) Celeron(R) CPU  N3060  @ 1.60GHz
model name  : Intel(R) Celeron(R) CPU  N3060  @ 1.60GHz
ewen@nas01:/$

The two physical hard drives appear as SATA ("SCSI") disks, along with what looks like a third internal disk:

ewen@nas01:/$ cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: WDC      Model: WD60EFRX-68L0BN1         Rev: 82.0
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi1 Channel: 00 Id: 00 Lun: 00
  Vendor: WDC      Model: WD60EFRX-68L0BN1         Rev: 82.0
  Type:   Direct-Access                    ANSI  SCSI revision: 05
Host: scsi3 Channel: 00 Id: 00 Lun: 00
  Vendor: Synology Model: DiskStation              Rev: PMAP
  Type:   Direct-Access                    ANSI  SCSI revision: 06
ewen@nas01:/$

On the first two disks there are three Linux MD RAID partitions:

ewen@nas01:/$ sudo fdisk -l /dev/sda
Disk /dev/sda: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: FB4736A9-5AAF-4D25-905D-97A8A8035FC2

Device       Start         End     Sectors  Size Type
/dev/sda1     2048     4982527     4980480  2.4G Linux RAID
/dev/sda2  4982528     9176831     4194304    2G Linux RAID
/dev/sda5  9453280 11720838239 11711384960  5.5T Linux RAID
ewen@nas01:/$

ewen@nas01:/$ sudo fdisk -l /dev/sdb
Disk /dev/sdb: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 1322083B-9F26-47B5-825A-56C09FAB9C39

Device       Start         End     Sectors  Size Type
/dev/sdb1     2048     4982527     4980480  2.4G Linux RAID
/dev/sdb2  4982528     9176831     4194304    2G Linux RAID
/dev/sdb5  9453280 11720838239 11711384960  5.5T Linux RAID
ewen@nas01:/$

which are then joined together into three Linux MD software RAID arrays, using RAID 1 (mirroring):

ewen@nas01:/$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid1 sda5[0] sdb5[1]
      5855691456 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sda2[0] sdb2[1]
      2097088 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      2490176 blocks [2/2] [UU]

unused devices: <none>
ewen@nas01:/$

The first is used for the root file system:

ewen@nas01:/$ mount | grep md0
/dev/md0 on / type ext4 (rw,relatime,journal_checksum,barrier,data=ordered)
ewen@nas01:/$

The second is used as a swap volume:

ewen@nas01:/$ grep md1 /proc/swaps
/dev/md1                                partition   2097084 0   -1
ewen@nas01:/$

and the third is used for LVM:

ewen@nas01:/$ sudo pvdisplay
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               vg1000
  PV Size               5.45 TiB / not usable 704.00 KiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              1429612
  Free PE               0
  Allocated PE          1429612
  PV UUID               mcSYoC-774T-T6Qj-bk1g-juLe-bqfi-cPRBCS

ewen@nas01:/$

By default there is one volume group:

ewen@nas01:/$ sudo vgdisplay
  --- Volume group ---
  VG Name               vg1000
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               5.45 TiB
  PE Size               4.00 MiB
  Total PE              1429612
  Alloc PE / Size       1429612 / 5.45 TiB
  Free  PE / Size       0 / 0
  VG UUID               Qw9A2i-F3aQ-txow-XUIk-OP6o-pVCf-sIsz1g

ewen@nas01:/$

with a single volume in it:

ewen@nas01:/$ sudo lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg1000/lv
  LV Name                lv
  VG Name                vg1000
  LV UUID                KRcrco-cOGl-gdOt-GVJ7-IWvc-jogO-ZqyA4G
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 1
  LV Size                5.45 TiB
  Current LE             1429612
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           253:0

ewen@nas01:/$

I believe this is the result of going through the default setup process and choosing a "quick" volume -- resulting in a single volume on RAID. This appears to result in a single data RAID 1, with a single LVM volume group and logical volume -- which cannot be shrunk, or turned into a multi-volume setup, without adding hard drives, which obviously is not possible in a two drive chassis.

After some reading, my aim is a SHR (Synology Hybrid RAID)/RAID 1 disk group, with about a 3.5TB disk volume for the initial storage, and the rest left for future use (either expanding the existing volume or, eg, presenting as iSCSI LUNs). In the case of a two drive system Synology Hybrid RAID is basically just a way to say "RAID 1", but possibly having it recorded on disk that way would allow transferring the disks to a larger (more drive bays) unit later on.

That 3.5TB layout is chosen knowing that the recommended Time Machine Server setup is to use a share out of a common volume, with a disk quota to limit the maximum disk usage -- rather than a separate volume, which was my original idea. (The DS216+ can also create a file-backed iSCSI LUN, but the performance is probably not as good, so I would rather keep my options open to have more than one volume.)

The DS216+ II (unlike the DS216j) will support btrfs as a local file system (on wikipedia), a Linux file system that has been "in development" for about 10 years, designed to compete with the ZFS file system originally developed by Sun Microsystems. Historically btrfs has been fairly untrusted (with multiple people reporting data loss in the early years), but it has been the default file system for SLES 12 since 2014, and it is also now the default file system for the DS216+. Apparently btrfs is also heavily used at Facebook. The stability of btrfs appears to depend on the features you need, with much of the core file system functionality being listed as "OK" in recent kernels -- which is around Linux 4.9 at present, about 4 years newer than the Linux 3.10 kernel (presumably with many patches) running on the DS216+. (Hopefully missing some or all of those 4 years of development does not cause btrfs stability issues...)

Since the btrfs metadata and data checksums seem useful in a large file system, and the snapshot functionality might be useful, I decided to stick with the Synology DS216+ default of btrfs. Hopefully the older Linux kernel (and thus older btrfs code) does not bite me! (The "quotas for shared folders" are also potentially useful, eg, for the Time Machine Server use case.)

Given that I could find (a) no way to shrink a volume, and (b) no way to convert a volume to a disk group (without adding disks, which I cannot do), my next step was to delete the pre-configured, empty, volume so that I could start the disk layout again. To do this go to the main menu -> Storage Manager -> Volume, and choose to remove the volume.

There were two confirmation prompts -- one to remove the volume, and one "are you sure" warning that data will be deleted, and services will restart. Finally it asked for the account password before continuing, which is a useful verification step for such a destructive action (although you do have to remember which user you used to log in, and thus which password applies -- there does not seem to be anything displaying the logged in user).

The removal process is very thorough -- after removal there is no LVM configuration left on the system, and the md2 RAID array is removed as well:

ewen@nas01:/$ sudo lvdisplay
ewen@nas01:/$ sudo vgdisplay
ewen@nas01:/$ sudo pvdisplay
ewen@nas01:/$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md1 : active raid1 sda2[0] sdb2[1]
      2097088 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      2490176 blocks [2/2] [UU]

unused devices: <none>
ewen@nas01:/$

so you are effectively back to a bare system, which amongst other things will mean that the RAID array gets rebuilt from scratch. (I had sort of hoped to avoid that for time reasons -- but at least forcing it to be rebuilt will also force a check of reading/writing the disks, which is a useful step prior to trusting it with "real" data.)

Once you are back to an empty system, it is possible to go back through the volume creation wizard and choose "custom" and "multi-volume", but I chose to explicitly create the Disk Group first, by going to Storage Manager -> Disk Group, and agreeing to use the two disks that it found. There was a warning that all data on the disks would be erased, and then I could choose the desired RAID mode -- I chose Synology Hybrid RAID (SHR) to leave my options open, as discussed above. I also chose to perform the optional disk check given that these are new drives which I have not tested before. Finally it wanted a description for the disk group, which I have called "shr1". (An example with pictures.)

Once that was applied (which took a few seconds as described in the wizard) there was a new md2 raid partition on the disk, which was rebuilding:

ewen@nas01:/$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid1 sdb5[1] sda5[0]
      5855691456 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  0.0% (2592768/5855691456) finish=790.1min speed=123465K/sec

md1 : active raid1 sda2[0] sdb2[1]
      2097088 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      2490176 blocks [2/2] [UU]

unused devices: <none>
ewen@nas01:/$

as well as new LVM physical volumes and volume groups:

ewen@nas01:/$ sudo pvdisplay
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               vg1
  PV Size               5.45 TiB / not usable 704.00 KiB
  Allocatable           yes
  PE Size               4.00 MiB
  Total PE              1429612
  Free PE               1429609
  Allocated PE          3
  PV UUID               l03e6f-X3Wa-zGsW-a6yo-3NKG-5YI9-5ghHit

ewen@nas01:/$ sudo vgdisplay
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               5.45 TiB
  PE Size               4.00 MiB
  Total PE              1429612
  Alloc PE / Size       3 / 12.00 MiB
  Free  PE / Size       1429609 / 5.45 TiB
  VG UUID               RjMnEQ-IKst-3N2V-3vJb-s8GE-15RO-qQOdOc

ewen@nas01:/$

And to my surprise there was even a small LVM logical volume:

ewen@nas01:/$ sudo lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg1/syno_vg_reserved_area
  LV Name                syno_vg_reserved_area
  VG Name                vg1
  LV UUID                4IdgrT-c5A6-3IOo-6Tq6-3rej-9nL9-i2SQou
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 0
  LV Size                12.00 MiB
  Current LE             3
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     384
  Block device           253:0

ewen@nas01:/$

That syno_vg_reserved_area volume seems to appear in other installs too, but I do not know what it is used for (other than perhaps as a marker that there is a "real" Disk Group and multiple volumes).
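
As an aside, it is easy to confirm from the ssh shell that this two drive SHR disk group really is plain Linux md RAID 1 underneath -- the "Raid Level : raid1" line in the mdadm detail output (for the md2 device seen in /proc/mdstat above) shows this:

ewen@nas01:/$ sudo mdadm --detail /dev/md2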

Since even once the MD RAID 1 rebuild picked up to full speed:

ewen@nas01:/$ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid1 sdb5[1] sda5[0]
      5855691456 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  0.6% (40464000/5855691456) finish=585.6min speed=165497K/sec

md1 : active raid1 sda2[0] sdb2[1]
      2097088 blocks [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
      2490176 blocks [2/2] [UU]

unused devices: <none>
ewen@nas01:/$

it was going to take about 10 hours to finish the rebuild, I left the DS216+ to its own devices overnight before carrying on.

As an aside, since there is an implicit "Disk Group" (RAID set, LVM volume group) even in the "One Volume" case, it is not obvious to me why the Synology DSM chose to also delete the original "Disk Group" (RAID set) when the single volume was deleted -- it could have just dropped the logical volume, and left the RAID alone, saving a lot of disk IO. Possibly the quick setup should more explicitly create a Disk Group, so a more easy transition becomes an obvious option, rather than retaining what appears to be two distinct code paths.

By the next morning the RAID array had rebuilt. I then forced an extended SMART disk check on each disk in turn by going to Storage Manager -> HDD/SSD, highlighting the disk in question, and clicking on "Health Info", then setting up the test in the "S.M.A.R.T Test" tab. Each Extended Disk Test took about 11 hours, which I left running while doing other things. I did them approximately one at a time, so that the DS216+ RAID array could still be somewhat responsive -- but ended up with a slight overlap as I started the second one just before going to bed, and the first one had not quite finished by then. (It turns out that I got a bonus second extended disk check on the first disk, because there is a S.M.A.R.T Test scheduled to run once a week on all disks starting at 22:00 on Saturday -- and that must have kicked in on the first disk minutes after the one I manually started in the morning finished, but of course by then the manual one on the second disk was already running.)
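
The same extended test can also be started manually from the ssh shell, one drive at a time (a sketch; smartctl prints the expected completion time and the drive then runs the test in the background):

ewen@nas01:/$ sudo smartctl -d ata -t long /dev/sda
ewen@nas01:/$ sudo smartctl -d ata -t long /dev/sdb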

The results of the S.M.A.R.T tests are visible in the "History" tab of the "Health Info" page for each drive (in Storage Manager -> HDD/SSD), and I also checked them via the ssh connection:

ewen@nas01:/$ sudo smartctl -d ata -l selftest /dev/sda
smartctl 6.5 (build date Feb 14 2017) [x86_64-linux-3.10.102] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%       105         -
# 2  Extended offline    Completed without error       00%        92         -
# 3  Short offline       Completed without error       00%        63         -
# 4  Extended offline    Completed without error       00%        42         -

ewen@nas01:/$ sudo smartctl -d ata -l selftest /dev/sdb
smartctl 6.5 (build date Feb 14 2017) [x86_64-linux-3.10.102] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Self-test log structure revision number 1
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%       105         -
# 2  Short offline       Completed without error       00%        63         -
# 3  Extended offline    Completed without error       00%        42         -

ewen@nas01:/$

just to be sure I knew where to find them later. (That also reveals that there was an Extended test and a short test done before the drives were shipped to me; presumably by the distributor of the "DS216+ and drives" bundle.)

Once that was done, I created a new Volume to hold the 3.5TB of data that I had in mind originally, leaving the remaining space for future expansion. Since there was already a manually created Disk Group, the Storage Manager -> Volume -> Create process automatically selected a Custom setup (and Quick was greyed out). It also automatically selected Multiple Volumes on RAID (and Single Volume on RAID was greyed out), and "Choose an existing Disk Group" (with "Create a new Disk Group" being greyed out) since there are only two disks in the DS216+, both used in the Disk Group created above.

It told me there was 5.45TB available, which is about right for "6" TB drives less some overhead for the DSM software install (about 4.5GB AFAICT -- 2.4GB for root on md0 and 2GB for swap on md1). As described above I chose btrfs for the disk volume, and then 3584 GB (3.5 * 1024) for the size (out of a maximum 5585 GB available, so leaving roughly 2TB free for later use). For the description I used "Shared data on SHR1" (it appears to be used only within the web interface and editable later). After applying the changes there was roughly 3.36 TiB available in the volume (with 58.7MB used by the system -- I assume file system structure) -- and a /dev/vg1/volume_1 volume created in the LVM:

ewen@nas01:/$ sudo lvdisplay
  --- Logical volume ---
  LV Path                /dev/vg1/syno_vg_reserved_area
  LV Name                syno_vg_reserved_area
  VG Name                vg1
  LV UUID                4IdgrT-c5A6-3IOo-6Tq6-3rej-9nL9-i2SQou
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 0
  LV Size                12.00 MiB
  Current LE             3
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     384
  Block device           253:0

  --- Logical volume ---
  LV Path                /dev/vg1/volume_1
  LV Name                volume_1
  VG Name                vg1
  LV UUID                J9FKic-QYdA-mTCK-W01z-dO7V-GDk6-JD41mC
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 1
  LV Size                3.50 TiB
  Current LE             917504
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     4096
  Block device           253:1

ewen@nas01:/$

which shows a 3.5TiB volume. There is 1.95TiB left:

ewen@nas01:/$ sudo vgdisplay
  --- Volume group ---
  VG Name               vg1
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               1
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               5.45 TiB
  PE Size               4.00 MiB
  Total PE              1429612
  Alloc PE / Size       917507 / 3.50 TiB
  Free  PE / Size       512105 / 1.95 TiB
  VG UUID               RjMnEQ-IKst-3N2V-3vJb-s8GE-15RO-qQOdOc

ewen@nas01:/$

for future expansion (either of that volume, or creating new volumes).

The new volume was automatically mounted on /volume1:

ewen@nas01:/$ mount | grep vg1-volume_1
/dev/mapper/vg1-volume_1 on /volume1 type btrfs (rw,relatime,synoacl,nospace_cache,flushoncommit_threshold=1000,metadata_ratio=50)
ewen@nas01:/$

ready to be used (eg, by creating shares).
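
The space available in the new volume can also be checked from the ssh shell with something like this (a sketch; the btrfs numbers will not exactly match the web interface, since btrfs allocates space to data and metadata in chunks):

ewen@nas01:/$ df -h /volume1
ewen@nas01:/$ sudo btrfs filesystem df /volume1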

From here I was ready to create shares for the various data that I wanted to store, which I will do over time. It appears that thanks to choosing btrfs I can have quotas on the shares as well as the users, which may be useful for things like Time Machine backups.
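
My understanding (which I have not verified on this unit) is that those share quotas are implemented with btrfs qgroups, so once a quota is set on a shared folder it should be visible from the ssh shell with something like:

ewen@nas01:/$ sudo btrfs qgroup show /volume1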

ETA, 2017-04-23: Some additional file sharing setup:

  • In Control Panel -> File Services -> SMB/AFP/NFS -> SMB -> Advanced Settings, change the Maximum SMB protocol to "SMB3" and the Minimum SMB Protocol to "SMB2" (Pro Tip: Stop using SMB1!)

  • Also in Control Panel -> File Services -> SMB/AFP/NFS -> SMB -> Advanced Settings, tick "Allow symbolic links within shared folders"

  • In Control Panel -> File Services -> SMB/AFP/NFS -> NFS, tick "Enable NFS" to simplify automatic mounting from Linux systems without passwords. Also tick "Enable NFSv4 support" to allow NFSv4 mounting, which allows more flexibility around authentication and UID/GID mapping than earlier NFS versions (earlier NFS versions basically assumed you had a way to enforce the same UID/GID enterprise wide, via NIS, LDAP or similar). A quick check from a Linux client that NFS is actually enabled is sketched just after this list.
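
As a quick sanity check from a Linux client that NFS is actually enabled and exporting (a sketch, using my nas01 name; the export list will be empty until shared folders have been given NFS permissions):

ewen@client:~$ rpcinfo -p nas01
ewen@client:~$ showmount -e nas01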

Once that is done, new file shares can be created in Control Panel -> Shared Folder -> Create. With btrfs you also get an Advanced -> "Enable advanced data integrity protection" which seems to be on by default, and is useful to have enabled. If you do not want a #recycle directory in your share it is best to untick the "Enable Recycle Bin" option on the first page (that seems most useful on shares intended for Microsoft Windows Systems, and an annoying top level directory anywhere else).
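
Those btrfs checksums are only useful if something periodically verifies them against the data; on a plain Linux btrfs system that is done with a scrub, along the lines of the sketch below (I have not checked whether DSM schedules scrubs itself):

ewen@nas01:/$ sudo btrfs scrub start /volume1
ewen@nas01:/$ sudo btrfs scrub status /volume1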

Once the shared folder is created you can grant access to users/groups, and if NFS is turned on you can also grant access to machines (since NFS clients are authenticated by IP) in the "NFS Permissions" tab. Obviously you then have all the usual unix UID/GID issues after that if you are using NFS v3 or NFSv4 without ID mapping, and do not have synchronised UID/GID values across your whole network (which I do not, not least because the Synology DS216+ makes up its own local uid values).

I had hoped to get NFS v4 ID mapping working, by setting the "NFSv4 domain" to the same string on the Synology DS216+ and the clients (on the Synology it appears to default to an empty string; on Linux clients it effectively defaults to the DNS domain name). But even setting both of those (in /etc/idmapd.conf on Linux) did not result in idmapping happening :-( As best I can tell this is because Linux defaults to sec=sys for NFSv4 mounts, the Synology DS216+ defaults to AUTH_SYS (which turns into sec=sys) for NFS shares, and UID mapping does not happen with sec=sys, because what is passed over the wire is still NFS v3 style UID/GID. (See confirmation from the Linux NFS Maintainer that this is intended by modern NFS; the same confirmation can be found in RFC 7530.) Also of note, in sec=sys (AUTH_SYS) NFS UID/GID values are used for authentication, even if file system UID/GID mapping is happening for what is displayed, which causes confusion. (From my tests no keys appear in /proc/keys, indicating no ID mappings are being created.)

There is no UID/GID mapping because /sys/module/nfs/parameters/nfs4_disable_idmapping is set to "Y" by default on the (Linux) client, and /sys/module/nfsd/parameters/nfs4_disable_idmapping is set to "Y" by default on the Synology DS216+. This is a change from 2012 to the client, and another change from 2012 for the server, apparently for backwards compatibility with NFS v3. These changes appear to have landed in Linux 3.4; and both my Linux client and the Synology have Linux kernels newer than 3.4.

The idea seems to be that if the unix UID/GID (ie, AUTH_SYS) are used for authentication then they should also be used in the file system, as happened in NFS v3 (to avoid files being owned by nobody:nogroup due to mapping failing). The default is thus to disable the id mapping at both ends in the sec=sys / AUTH_SYS case. It is possible to change the default on the Linux client (eg, echo "N" into /sys/module/nfs/parameters/nfs4_disable_idmapping), but I cannot find a way to persistently change it on the Synology DS216+. Which means that NFS v4 id mapping can really only be used with Kerberos-based authentication :-( (In sec=sys mode, you can see the UID/GID going over the wire, so idmap does not work. This is mostly a NFS, and NFS v4 in particular, issue rather than a Synology NAS issue as such.)
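
For reference, the client side change looks something like this sketch (the /etc/modprobe.d file name is my own choice, to make the setting persistent across reboots; as noted above it does not help unless the server end changes too):

ewen@client:~$ cat /sys/module/nfs/parameters/nfs4_disable_idmapping
Y
ewen@client:~$ echo N | sudo tee /sys/module/nfs/parameters/nfs4_disable_idmapping
ewen@client:~$ echo "options nfs nfs4_disable_idmapping=0" | sudo tee /etc/modprobe.d/nfs-idmap.conf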

Anyway, effectively this means that in order to use the UID/GID mapping in NFS v4, you need to set up Kerberos authentication, and then presumably add those Kerberos keys into the Synology DS216+ in Control Panel -> File Services -> SMB/AFP/NFS -> NFS -> Advanced Settings -> Kerberos Settings, and set up the ID mapping. All of which feels like too much work for now. (It seems other Synology users wish UID/GID mapping worked without Kerberos too; it is unfortunate there is no UID:UID mapping option available as a NFS translation layer, but that is not the approach taken by NFS v4. The only reference I found to a NFS server with UID:UID mapping was the old Linux user-mode NFS server with map_static, which is no longer used, and thus not available on a Synology NAS.)

It is possible to set NFS "Squash: Map all users to admin" to create effectively a single UID file share, which is sufficient for some of my simple shares (eg, music), so that is what I have done for now. (See a simple example with screenshots and another example with screenshots; see also Synology notes on NFS Security Flavours.)

Setting "Squash: Map all users to admin" in the UI, turns into all_squash,anonuid=1024,anongid=100 in /etc/exports:

ewen@nas01:/$ sudo cat /etc/exports; echo

/volume1/music  172.21.1.0/24(rw,async,no_wdelay,all_squash,insecure_locks,sec=sys,anonuid=1024,anongid=100)
ewen@nas01:/$

and results in files that are owned by uid 1024, and gid 100, no matter which user created them. I could then mount the share on my Linux client with:

ewen@client:~$ sudo mkdir /nas01
ewen@client:~$ sudo mkdir /nas01/music
ewen@client:~$ sudo mount -t nfs -o hard,bg,intr,rsize=65536,wsize=65536  nas01:/volume1/music /nas01/music/

and then look at with:

ewen@client:~$ ls -l /nas01/music/
total 0
drwxrwxrwx 1 1024 users 1142 Sep 10  2016 flac
ewen@client:~$

For my network that is mostly acceptable for basic ("equal access for all") file shares, as gid 100 is "users" on my Linux machines, and thus most machines have my user in that group. (Unfortunately there is no way in the UI to specify that all access should be squashed to a specific user-specified uid, or I would squash them to my own user in these simple cases. There is also no apparent way to assign uids to the Synology DS216+ users when they are created, so presumably the only way to set the UIDs of users is by having them supplied by a directory server like LDAP.)
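
To make that mount persistent across reboots on the client, an /etc/fstab entry mirroring the manual mount command above would look something like this sketch:

nas01:/volume1/music  /nas01/music  nfs  hard,bg,intr,rsize=65536,wsize=65536  0  0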

The main issue I notice (eg, with rsync) is that attempts to chown files as root or chgrp files as root fail with "Invalid argument" so this will not work for anything requiring "root" ownership. (I found this while rsyncing music onto the share, but all the instances of music files owned by root are mistakes, so I fixed them at the source and re-ran rsync.)

For more complicated shares I probably need either to use SMB mounts, with appropriate username/password authentication to get access to the share as that user (which also effectively results in single-user access to the share, but will properly map the user values for the user I am accessing as), or to dedicate the NFS share to a single machine, in which case it can function without ID mapping, as the file IDs will be used only by that machine.

Note that on OS X cifs:// forces SMBv1 over TCP/445, and we turned SMBv1 off above -- so use smb:// to connect to the NAS from OS X Finder (Go -> Connect to Server... (Apple-K)), which will use SMB 2.0 since OS X 10.9 (Mavericks). (CIFS is rarely used these days; instead SMB2 and SMB3 are used, which also work over TCP/445; TCP/445 was one of the distinguishing things of the original Microsoft CIFS implementation. By contrast the Linux kernel "CIFS" client has supported SMB 2.0 since Linux 3.7, so Linux has hung onto the CIFS name longer than other systems; it now supports CIFS, SMB2, SMB2.1 and SMB3, much of which was implemented by the Samba team.)
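
On Linux, a roughly equivalent mount with the kernel CIFS client, explicitly asking for a newer dialect, might look like this sketch (requires the cifs-utils package; the mount point is just an example, and vers=2.1 can be used instead on kernels too old for vers=3.0):

ewen@client:~$ sudo mkdir -p /nas01/music-smb
ewen@client:~$ sudo mount -t cifs //nas01/music /nas01/music-smb -o username=ewen,vers=3.0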

On a related note, while testing git-annex on a SMB mount I encountered a timeout, so I ended up installing a later version of git-annex. That allowed git annex init to complete, but transferring files around still failed with locking issues. (Possibly the ssh shell, and git server application for the Synology NAS provides another path to getting git annex working? See example of using git server application. Or using that plus a stand alone build of git-annex on the Synology NAS. Another option is the git annex rsync special remote, but that is content only and I think might only have the internal (SHA hash) filenames.)

ETA, 2017-05-26: While trying to patch the Synology for the Samba bug (fixed in 6.1.1-15101-4) I ran into the "Cannot connect to the Internet" issue in the update screen, despite having working IPv4 and IPv6 connectivity (as tested from an ssh session). On a hunch I checked the IPv6 settings, and found that the IPv6 DNS server there was pointed at my ISP supplied home gateway rather than my internal DHCP server (used for IPv4) -- which resulted in the Synology trying to use both. So I tried disabling IPv6, but that did not seem sufficient (it is not clear if it ever tried reconnecting); a reboot with IPv6 disabled did seem to be sufficient. Since I am not actively using IPv6 internally at present, for now I am going to leave IPv6 turned off on the Synology to see if that makes any difference. (My desktops have not had any issues with IPv6 being enabled on my home gateway, but they appear to only be using the internal DNS server AFAICT -- so maybe the issue is the DNS server on the home gateway not responding? In which case perhaps a static IPv6 configuration would fix the issue.)

Unfortunately it does not seem to be well documented precisely what the web interface tries to connect to, and when, to find out if there are updates -- which makes debugging the exact root cause more difficult. However there are forum posts on how to do the upgrade from the ssh shell using the synoupgrade tool, which may help if the problem returns later.

ETA, 2017-07-09: To be able to use network rsync to copy content onto the Synology DS216+, it is necessary to go to Control Panel -> File Services -> rsync and tick "Enable rsync service" -- even for ssh-based rsync to work. Before that, ssh-based rsync failed with:

Permission denied, please try again.
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(235) [sender=3.1.2]

even with a valid, superuser, account used to ssh in; it took some hunting to find this cause. After enabling the rsync service, rsync is running in daemon mode, listening on TCP/873, which may present an additional security risk, although /etc/rsyncd.conf is nearly empty. It is unclear why the rsync service (daemon) and rsync-over-ssh are tied together like this, particularly when the rsync privilege seems to default to "Allow" in the Control Panel -> Privileges panel (and for my user). My guess is that this is somehow linked to the PAM authentication, because the same rsync binary seems to run in both cases.
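
With the rsync service enabled, plain rsync over ssh works as expected again; for example, something like this (the source path is illustrative):

ewen@client:~$ rsync -av --progress /data/music/ ewen@nas01:/volume1/music/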

Note that toggling this rsync service also seems to disconnect active ssh sessions; so you will need to reconnect via ssh.