Tuesday, April 14, 2015

Cross-Compile nano for the DS215j

I know that if you're a hard-core Linux nerd, you're supposed to swear an oath on your grandmother's grave that you'll use nothing but vi for editing text files.  Fortunately, that's not me, so I much prefer something like nano.

Since there is no distribution of nano for DSM, we will have to build it. I searched around for some posts related to it, and lucky for me, I found this guy: http://pcloadletter.co.uk/2014/09/17/nano-for-synology/ Awesome!  So, it must not be too bad, right?  Well, maybe.  I had to take some time to dissect the separate steps in there and figure out what was going on.  There were a couple of extra 'hacky' steps and I wanted to see if and why they were needed.

Here's what I found and what I did, based on that original script.

First, there's the typical setup of the environment variables for cross-compiling:
(The extra floating-point CFLAGS probably aren't necessary here, since nano almost certainly isn't doing any floating-point math...)
export TOOLCHAIN=/usr/local/armv7-marvell-linux-gnueabi-hard
export CC=${TOOLCHAIN}/bin/arm-marvell-linux-gnueabi-gcc
export CXX=${TOOLCHAIN}/bin/arm-marvell-linux-gnueabi-g++
export LD=${TOOLCHAIN}/bin/arm-marvell-linux-gnueabi-ld
export AR=${TOOLCHAIN}/bin/arm-marvell-linux-gnueabi-ar
export RANLIB=${TOOLCHAIN}/bin/arm-marvell-linux-gnueabi-ranlib
export CFLAGS="-I${TOOLCHAIN}/arm-marvell-linux-gnueabi/libc/include -mhard-float -mfpu=vfpv3-d16"
export LDFLAGS="-L${TOOLCHAIN}/arm-marvell-linux-gnueabi/libc/lib"
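
To sanity-check that the toolchain is actually being picked up (this check is my own addition, not part of the original script, and assumes the 'file' utility is installed on the build host), compile a trivial C file and confirm the output is an ARM binary:
echo 'int main(void){return 0;}' > hello.c
${CC} ${CFLAGS} ${LDFLAGS} -o hello hello.c
file hello
# should report something like: ELF 32-bit LSB executable, ARM ...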

Next, there's a library dependency, ncurses, which is already installed on the DiskStation.  We need to build it here so that nano can link against it.  Our friend at pcloadletter pulled that library from the DSM source that's available online and posted just the part we need on Dropbox (thanks!!), so we can download it and configure it for compiling.  Here's what those steps look like:
wget https://dl.dropboxusercontent.com/u/1188556/ncurses-5.x.zip
unzip ncurses-5.x.zip
cd ncurses-5.x
./configure --prefix=/home/ubuntu/ncurses --host=armle-unknown-linux --target=armle-unknown-linux --build=i686-pc-linux --with-shared --without-manpages --without-normal --without-progs --without-debug --enable-widec

So far, it looks almost the same as the original script. I did, however, set the --prefix parameter to point to /home/ubuntu/ncurses, which will be the install target for the 'make install' command. That directory doesn't exist yet, so we need to create it before building:
cd ..
mkdir ncurses
cd ncurses-5.x
make
make install

If that builds correctly, your /home/ubuntu/ncurses and /home/ubuntu/ncurses/lib directories should now contain the wide-character ncurses headers and shared libraries.
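If you want to double-check (my own extra step, not in the original script), run 'file' against the libraries to confirm they were built for ARM rather than for the build host:
ls /home/ubuntu/ncurses/include /home/ubuntu/ncurses/lib
file /home/ubuntu/ncurses/lib/libncursesw.so*
# the actual .so.5.x file should be reported as an ELF 32-bit ARM shared object, not x86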
Now we're (almost) ready to build nano. Let's grab the nano source code, extract it, and create the target directory for the final executable:
cd ~
wget http://www.nano-editor.org/dist/v2.2/nano-2.2.6.tar.gz
tar xvzf nano-2.2.6.tar.gz
mkdir nano

Here's the part where I started to get a little confused with the original script. Why do we need to patch the nano source code, and why do we then need to effectively patch the generated Makefile (using sed, which I had to look up, by the way, since I had no idea what that was...)? After much trial and error and poking around, it looks like there is a mismatch between the relative include paths in the nano source and the paths that the configure script is looking for.

The configure script looks for ncursesw/ncurses.h somewhere on the search path, while parts of the source code refer to ncurses.h without the relative path. So, rather than changing the source code, let's just put both locations on the include search path. We can append the include paths to CFLAGS, append the library path (LDFLAGS) so that it points to the ncurses library we just built, and also set the preprocessor include path (CPPFLAGS, which is what configure's own tests use) like this:
export CFLAGS="${CFLAGS} -I/home/ubuntu/ncurses/include -I/home/ubuntu/ncurses/include/ncursesw"
export LDFLAGS="${LDFLAGS} -L/home/ubuntu/ncurses/lib"
export CPPFLAGS="-I/home/ubuntu/ncurses/include -I/home/ubuntu/ncurses/include/ncursesw"

Now we're ready to (finally) build nano:
cd nano-2.2.6
./configure --prefix=/home/ubuntu/nano --host=armle-unknown-linux --target=armle-unknown-linux --build=i686-pc-linux --enable-utf8 --disable-nls --enable-color --enable-extra --enable-multibuffer --enable-nanorc
make
make install

If we haven't missed anything, nano should build correctly, and your /home/ubuntu/nano/bin directory should now contain the finished executable - with the architecture name tacked onto the front of its name.
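Again, a quick way to verify the result (my own addition, not part of the original script) is to check that the installed binary is an ARM executable:
ls /home/ubuntu/nano/bin
file /home/ubuntu/nano/bin/*nano
# should report: ELF 32-bit LSB executable, ARM ...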

Anyone know what the secret is to creating the final executable without the architecture name being tacked on? Also, now that I've built this, I really have no idea how to package this up and get it to the DiskStation so that all the folders are intact, etc. That's what I'll try to dig into next.
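
For what it's worth, here's my rough guess at what the packaging step will look like - completely untested so far, and the DiskStation address and destination path below are just placeholders:
cd /home/ubuntu
tar czf nano-arm.tar.gz nano        # bundle the install tree so the folder structure stays intact
scp nano-arm.tar.gz admin@diskstation:/volume1/homes/admin/     # address and path are placeholders
# then, over SSH on the DiskStation:
#   tar xzf nano-arm.tar.gz
As for the architecture name on the executable, my guess is that passing --program-prefix="" (or leaving off the --target option) when running configure would stop the rename, but I haven't tried it yet.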

Tuesday, March 31, 2015

Cross Compile in the Cloud

In my previous post I explained why I want to compile my own code for the Synology DiskStation DS215j.  Being an admitted non-Linux-dev noob, I wasn't familiar with the concept of cross-compiling and I had heard of "Toolchains", but I wasn't entirely sure why you wanted or needed one.  Well, now I do.

Synology has a decent 3rd party development guide document on their website that gives a pretty basic explanation of what needs to be setup to cross compile for a DiskStation.  In case you don't know how Google search works, here is the link:
https://global.download.synology.com/download/Document/DeveloperGuide/DSM_Developer_Guide.pdf

Right off the bat, I'm going to need a 32bit Linux dev host.  Said it before and I'll say it again, I'm not a Linux guy, so I don't have a Linux box just laying around and I don't feel like goofing around with dual-booting my personal Windows desktop.  So - since I won't be using it that much, how about a cloud IaaS resource to run Linux?

Through my work I have an MSDN subscription.  (It's a great benefit!) With that subscription comes a certain annual free "allowance" of Azure services - assumed to be used for dev purposes, of course.  That sounds perfect.  Except.  ALL Azure VM instances are 64bit OS images. No 32bit support. Crap.

OK, well, we're also working with Google in my work and I have access to a couple of non-production Google for Work domain instances and I have the ability to provision a limited amount of GAE/GCE resources - free to me.  Great!  Except.  GCE has the same limitation - only 64 bit OS images.  Crap.

So, leaving the realm of "free", I come to AWS.  I have an AWS account already since I have been using Glacier for backup basically since it was initially offered (how can you go wrong for $.01 a month per GB!), but I have never worked with EC2, their IaaS offering.  I do some reading and discover that you can have 32bit OS images on EC2.  Finally!

When you walk through the EC2 instance setup process, you have to pick what Amazon calls an AMI (Amazon Machine Image).  There are currently 22 "quick start" images with various OSs. Only 2 of those choices are 32bit - and they are both Windows Server images.  (Seriously?  Why is anyone running Windows Server as a 32bit OS??)  So the next step is to search the "Community AMIs".  Wow.  So my choices went from 22 to 39,690 and counting.  Seriously - how am I supposed to know what image to pick from a list of almost 40K?

I've actually worked a little with Ubuntu before, so I'll pick Ubuntu and of course it has to be 32bit.

Great - so that narrows my choices to just over 6,100.  Now, what do I do?  I notice that most of these image names are labeled "server".  So - I add a search string for "desktop" instead.  I have no idea what difference that makes - other than I guess a server image wouldn't have a desktop interface pre-installed.  Not that I'm going to be using that, but who knows.

After that, I just pick an Ubuntu Long Term Support (LTS) release - 12.04 LTS.  Again, there are multiple images for this release, so in the end I just picked one from the list. Not until after I already had the image setup did I realize that the Ubuntu website has a page to help you pick the right AMI:
http://cloud-images.ubuntu.com/locator/ec2/  Oh, well.

I picked the smallest instance type available.  (At the time, it was t1.micro, which isn't available anymore for some reason...)  The instance gets billed at $.02/hour, with a 1-hour minimum each time it gets started up.  Not bad at all.  SSH is the default method of accessing the instance and their website does a good job of explaining how to use Putty and/or WinSCP to connect to the instance once you have turned it on.  The only part that may be slightly confusing if you've used other cloud IaaS providers is that the public DNS for your instance changes when you stop it and then restart it. So, you will need to update the WinSCP/Putty connection each time with the info from the AWS instance console.

Connect with Putty and you get the most comforting UI ever.
OK, so nobody said it would be pretty.

First step - find the Synology toolchain that matches your DiskStation.  For a DS215j running DSM 5.1, your choices are in the "DSM 5.1 Tool Chains" folder of Synology's dsgpl project on SourceForge.  I had to do some reading to figure out whether I wanted the softfp version of the toolchain or the hard-float one.  I want the hard-float version because I am looking to take full advantage of the FPU that is part of the Armada 375 architecture.

So - I'm following the doc from Synology and aside from the fact that it hasn't been updated to include any references to the DS215j architecture, it says to extract the toolchain files to /usr/local - as an example.

OK - so
cd /usr/local
wget http://iweb.dl.sourceforge.net/project/dsgpl/DSM%205.1%20Tool%20Chains/Marvell%20Armada%20375%20Linux%203.2.40/gcc464_glibc215_hard_armada375-GPL.tgz
Right?  Nope.  You need root access to write to /usr/local

First rule of Linux, if at first you don't succeed, just try it again with "sudo".
sudo wget http://iweb.dl.sourceforge.net/project/dsgpl/DSM%205.1%20Tool%20Chains/Marvell%20Armada%20375%20Linux%203.2.40/gcc464_glibc215_hard_armada375-GPL.tgz
sudo tar zxpf gcc464_glibc215_hard_armada375-GPL.tgz -C /usr/local/
sudo rm gcc464_glibc215_hard_armada375-GPL.tgz
This downloads the toolchain archive, extracts everything into /usr/local and then, for good measure, cleans up the original file. If all is well, /usr/local now contains an armv7-marvell-linux-gnueabi-hard directory with the cross-compile tools inside.
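As an extra check (my own addition, not in the Synology doc), you can ask the cross-compiler for its version to confirm the extraction worked:
ls /usr/local
/usr/local/armv7-marvell-linux-gnueabi-hard/bin/arm-marvell-linux-gnueabi-gcc --version
# based on the archive name (gcc464), this should report a gcc 4.6.4 cross-compiler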

Now we have an environment to do some dev work.  Next post, we'll see what we can get built...

Stale Optware

The question of "how do I install <xyz> on my DiskStation?" is usually answered by the instruction to install optware on the device.  This is done via a process called Bootstrapping.  There is a wiki page that covers all of this:

http://forum.synology.com/wiki/index.php?title=Overview_on_modifying_the_Synology_Server,_bootstrap,_ipkg_etc#Installing_compiled.2Fbinary_programs_using_ipkg

I followed these instructions and had no problems at all on my DS211j.  The issue now is that no one seems to be updating any of the optware packages and the "current" recommendation on the IPKG Synology forum is to manually bootstrap the device and point it to a stale package that is "somewhat" compatible with the newer ARM-based Synology devices.

I'm sure this is fine for 90% of the users that just want to be able to run "sudo" or whatever, but it bugs me that I have a multi-core ARMv7 device that is running stale, unoptimized ARMv5-compiled code.

So - I'm probably just being a huge nerd, but that goes without saying.  I'd rather do this the hard way (and hopefully learn something in the process).  I won't be installing IPKG optware on my DS215j.  I will compile from source, given the choice (or install someone else's package).  That means I have no build environment on the DS itself and I will need to setup a build environment somewhere else.  Fortunately, Synology provides guidance and tools for that, too.

Friday, March 27, 2015

Out with the Old

So, part of the reason for this blog is that I recently bought a new Synology DiskStation DS215j to replace my 4-year-old DS211j. (Side note: my old 211j just sold on eBay for just over $100, without drives - so those things really hold value!)

As with all things IT, the newer units have more horsepower and more RAM.  Now, mind you, the wonderful thing about Linux is that you can run it on an appliance like the DS and only need 512MB RAM.  That's still amazing to me.  My newest work laptop has 12GB of RAM (GB!) - and I still bump into that limit.  Of course - that's because I have dozens of Chrome tabs open all the time - but that's another story.

The DS211j has a single ARMv5 CPU (the Marvell Kirkwood 88F6281, to be precise: http://www.marvell.com/embedded-processors/kirkwood/assets/88F6281-004_ver1.pdf ) and only 128MB RAM.  It wasn't the fastest thing ever, but it did what I wanted it to.

The DS215j has a dual-core ARMv7 CPU (Marvell Armada 375  http://www.marvell.com/embedded-processors/armada-300/assets/ARMADA_375_SoC-01_product_brief.pdf ) and 512MB RAM.  The Armada 375 System on a Chip (SoC) also includes an onboard Floating Point Unit (FPU) for math-intensive processing, plus support for the NEON extended instruction set.  I'm not 100% sure about the NEON capability, but the FPU is a big deal, particularly for things like audio/video transcoding.

I guess that gets to the question of "why do I have a DiskStation?" and "why would I want to do custom dev work on a DiskStation?"  Most of what I bought the DiskStation for, I can do with the standard packages available from Synology:

  • Backup for all the PCs in the house
  • Shared storage of all the family pictures and video
  • iTunes Server for shared family music
  • DLNA/Media Server, which means all the stored pictures/music/video can be played back on any DLNA device in the house (PS3, "smart" TV, etc)

About a couple of years ago, I heard about SickBeard and SABnzbd.  Of course, they had been around for a while, but I had no idea.  By that time I had figured out, like a lot of people, that we were paying way too much for cable TV that we rarely watched. So - no more cable TV.  Of course, Comcast is still getting our money, since that's currently the fastest Internet provider in our area (c'mon Google Fiber - hurry up!!)

So, now some new things got added to the list of things running on the DiskStation:
  • Sickbeard
  • SABnzbd
  •  ... and ...
And the "and" part is where it gets interesting.  Shows that you pull via Sickbeard come in a variety of formats/containers.  And not all of those containers play well with all the devices that you may want to watch a show on (I'm looking at you, PS3).  So - that means you have to transcode the file to make it watchable on all devices.  That means running something else to do that transcoding.  I'll get to that next.

About this blog

I have worked in the IT industry for 20+ years and my exposure to Linux has been almost zero. At least from a professional standpoint, that's the case.  I have years of experience building code and experimenting with software in the various flavors of Windows over those years, but when it comes to working in Linux I always feel like a complete noob.

Buying a Synology DiskStation about 4 years ago wasn't my first introduction to Linux, but it was the first time I had a dedicated device at home that I could tweak and hack on and actually make something useful for the family.

In working with that first DS211j, I found that whenever I needed some help, the answers usually came from searches that landed on other people's blog posts.  It's only taken me 4 years, but maybe if I blog my various nerdy adventures in Linux, someone else will stumble across my blog and find something useful.  Maybe.

If nothing else, it's a roadmap that I can look back at when I ask the question "how exactly did I get to this point?"