# Introducing mkosi

After blogging about casync I realized I never blogged about the mkosi tool that combines nicely with it. mkosi has been around for a while already, and it's time to make it a bit better known. mkosi stands for Make Operating System Image, and is a tool for precisely that: generating an OS tree or image that can be booted.

Yes, there are many tools like mkosi, and a number of them are quite well known and popular. But mkosi has a number of features that I think make it interesting for a variety of use-cases that other tools don't cover that well.

# What is mkosi?

What are those use-cases, and what precisely sets mkosi apart? mkosi is definitely a tool with a focus on developers' needs for building OS images, for testing and debugging, but also for generating production images with cryptographic protection. A typical use-case would be to add a mkosi.default file to an existing project (for example, one written in C or Python), thus making it easy to generate an OS image for it. mkosi will put together the image with development headers and tools, compile your code in it, run your test suite, then throw away the image again, and build a new one, this time without development headers and tools, and install your build artifacts in it. This final image is then "production-ready", and only contains your built program and the minimal set of packages you configured otherwise. Such an image could then be deployed with casync (or any other tool, of course) to be delivered to your set of servers, or IoT devices, or whatever you are building.

mkosi is supposed to be legacy-free: the focus is clearly on today's technology, not yesteryear's. Specifically this means that we'll generate GPT partition tables, not MBR/DOS ones. When you tell mkosi to generate a bootable image for you, it will make it bootable on EFI, not on legacy BIOS. The GPT images generated follow specifications such as the Discoverable Partitions Specification, so that /etc/fstab can remain unpopulated and tools such as systemd-nspawn can automatically dissect the image and boot from it.

So, let's have a look at the specific images it can generate:

1. Raw GPT disk image, with ext4 as root
2. Raw GPT disk image, with btrfs as root
3. Raw GPT disk image, with a read-only squashfs as root
4. A plain directory on disk containing the OS tree directly (this is useful for creating generic container images)
5. A btrfs subvolume on disk, similar to the plain directory
6. A tarball of a plain directory

When any of the GPT choices above are selected, a couple of additional options are available:

1. A swap partition may be added in
2. The system may be made bootable on EFI systems
3. Separate partitions for /home and /srv may be added in
4. The root, /home and /srv partitions may be optionally encrypted with LUKS
5. The root partition may be protected using dm-verity, thus making offline attacks on the generated system hard
6. If the image is made bootable, the dm-verity root hash is automatically added to the kernel command line, and the kernel together with its initial RAM disk and the kernel command line is optionally cryptographically signed for UEFI SecureBoot
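For illustration, a mkosi.default sketch enabling several of these options might look like this. The [Output] settings shown (Format=, Bootable=, Encrypt=, Verity=) are the ones discussed in this post; the [Partitions] size settings are assumptions, so check mkosi --help for the exact option names your mkosi version supports:

```ini
[Output]
Format=raw_btrfs
Bootable=yes
# LUKS-encrypt partitions ("all" covers the root partition too;
# "data" would cover only /home and /srv)
Encrypt=all
# Protect the root partition with dm-verity
Verity=yes

[Partitions]
# Size settings below are illustrative assumptions
RootSize=2G
SwapSize=512M
HomeSize=1G
SrvSize=1G
```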

Note that mkosi is distribution-agnostic. It currently can build images based on the following Linux distributions:

1. Fedora
2. Debian
3. Ubuntu
4. ArchLinux
5. openSUSE

Note though that not all distributions are supported at the same feature level currently. Also, as mkosi is based on dnf --installroot, debootstrap, pacstrap and zypper, and those tools are not universally packaged on all distributions, you might not be able to build images for all of those distributions on arbitrary host distributions. For example, Fedora doesn't package zypper, hence you cannot build an openSUSE image easily on Fedora, but you can still build Fedora (obviously…), Debian, Ubuntu and ArchLinux images on it just fine.
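Since the set of available bootstrap tools determines which target distributions your host can build, a quick way to check is the following trivial sketch (the four tool names are the ones mkosi relies on, as listed above):

```shell
# Check which of the bootstrap tools mkosi relies on are installed on
# this host; any that are missing limit which target distributions you
# can build images for.
for tool in dnf debootstrap pacstrap zypper; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "$tool: available"
    else
        echo "$tool: missing"
    fi
done
```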

The GPT images are put together in a way that they aren't just compatible with UEFI systems, but also with VM and container managers (that is, at least the smart ones, i.e. VM managers that know UEFI, and container managers that grok GPT disk images) to a large degree. In fact, the idea is that you can use mkosi to build a single GPT image that may be used to:

1. Boot on bare-metal boxes
2. Boot in a VM
3. Boot in a systemd-nspawn container
4. Directly run a systemd service off it, using systemd's RootImage= unit file setting

Note that in all four cases the dm-verity data is automatically used if available to ensure the image is not tampered with (yes, you read that right, systemd-nspawn and systemd's RootImage= setting automatically do dm-verity these days if the image has it.)
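For the fourth case, a unit file using RootImage= could be sketched like this (the image path and service binary are hypothetical placeholders; RootImage= is available since systemd 233):

```ini
# Hypothetical myapp.service — runs a service directly off a GPT disk
# image built by mkosi, instead of off the host's root file system.
[Unit]
Description=Example service running off a mkosi-built image

[Service]
RootImage=/var/lib/machines/image.raw
ExecStart=/usr/bin/myapp
```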

# Mode of Operation

The simplest usage of mkosi is by simply invoking it without parameters (as root):

# mkosi


Without any configuration this will create a GPT disk image for you, will call it image.raw and drop it in the current directory. The distribution used will be the same one as your host runs.

Of course in most cases you want more control over how the image is put together, i.e. select package sets, select the distribution, size partitions and so on. Most of that you can actually specify on the command line, but it is recommended to instead create a couple of mkosi.$SOMETHING files and directories in some directory. Then, simply change to that directory and run mkosi without any further arguments. The tool will then look in the current working directory for these files and directories and make use of them (similar to how make looks for a Makefile…). Every single file/directory is optional, but if they exist they are honored. Here's a list of the files/directories mkosi currently looks for:

1. mkosi.default — This is the main configuration file; here you can configure what kind of image you want, which distribution, which packages and so on.

2. mkosi.extra/ — If this directory exists, then mkosi will copy everything inside it into the images built. You can place arbitrary directory hierarchies in here, and they'll be copied over whatever is already in the image, after it was put together by the distribution's package manager. This is the best way to drop additional static files into the image, or to override distribution-supplied ones.

3. mkosi.build — This executable file is supposed to be a build script. When it exists, mkosi will build two images, one after the other, in the mode already mentioned above: the first version is the build image, and may include various build-time dependencies such as a compiler or development headers. The build script is also copied into it, and then run inside it. The script should then build whatever shall be built and place the result in $DESTDIR (don't worry, popular build tools such as Automake or Meson all honor $DESTDIR anyway, so there's not much to do here explicitly). It may also run a test suite, or anything else you like. After the script has finished, the build image is removed again, and a second image (the final image) is built. This time, no development packages are included, and the build script is not copied into the image again — however, the build artifacts from the first run (i.e. those placed in $DESTDIR) are copied into the image.

4. mkosi.postinst — If this executable script exists, it is invoked inside the image (inside a systemd-nspawn invocation) and can adjust the image as it likes at a very late point in the image preparation. If mkosi.build exists, i.e. the dual-phased development build process is used, then this script will be invoked twice: once inside the build image and once inside the final image. The first parameter passed to the script clarifies which phase it is run in.
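As a sketch, a minimal mkosi.postinst could branch on that phase parameter like this (the parameter values "build" and "final" match current mkosi behavior, but verify them against your mkosi version):

```shell
# Write a minimal mkosi.postinst sketch to disk and make it executable.
cat > mkosi.postinst <<'EOF'
#!/bin/sh
# Runs inside the image (via systemd-nspawn); $1 names the phase,
# "build" or "final" in current mkosi (verify against your version).
case "$1" in
    build) echo "post-install: build image phase" ;;
    final) echo "post-install: final image phase" ;;
    *)     echo "post-install: unknown phase '$1'" >&2; exit 1 ;;
esac
EOF
chmod +x mkosi.postinst

# Outside of mkosi we can still exercise the script directly:
./mkosi.postinst final   # prints "post-install: final image phase"
```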

5. mkosi.nspawn — If this file exists, it should contain a container configuration file for systemd-nspawn (see systemd.nspawn(5) for details), which shall be shipped along with the final image and shall be included in the check-sum calculations (see below).
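A minimal mkosi.nspawn sketch might look like this. The settings shown are standard systemd.nspawn(5) options, picked here purely as an illustration:

```ini
# Hypothetical mkosi.nspawn sketch — shipped alongside the image and
# honored by systemd-nspawn when booting it (see systemd.nspawn(5)).
[Exec]
Boot=yes

[Network]
# Give the container its own virtual Ethernet link instead of sharing
# the host's network namespace.
VirtualEthernet=yes
```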

6. mkosi.cache/ — If this directory exists, it is used as package cache directory for the builds. This directory is effectively bind mounted into the image at build time, in order to speed up building images. The package installers of the various distributions will place their package files here, so that subsequent runs can reuse them.

7. mkosi.passphrase — If this file exists, it should contain a pass-phrase to use for the LUKS encryption (if that's enabled for the image built). This file should not be readable to other users.
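Creating such a file with suitably restrictive permissions could look like this (the pass-phrase is obviously just a placeholder):

```shell
# Create mkosi.passphrase readable only by its owner; mkosi uses its
# contents as the LUKS pass-phrase. The pass-phrase here is a placeholder.
umask 077                  # files created from now on get mode 0600
printf '%s' 'correct horse battery staple' > mkosi.passphrase
```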

8. mkosi.secure-boot.crt and mkosi.secure-boot.key should be an X.509 key pair to use for signing the kernel and initrd for UEFI SecureBoot, if that's enabled.
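For local testing, a throwaway self-signed key pair can be generated with openssl (assuming openssl is installed; the subject name is just an example, and real SecureBoot deployments should of course use properly managed keys):

```shell
# Generate a throwaway self-signed X.509 key pair for SecureBoot testing.
# Real deployments should use properly managed keys instead.
openssl req -new -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj '/CN=mkosi test key/' \
    -keyout mkosi.secure-boot.key -out mkosi.secure-boot.crt
chmod 600 mkosi.secure-boot.key
```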

# How to use it

So, let's come back to our most trivial example, without any of the mkosi.$SOMETHING files around:

# mkosi

As mentioned, this will create a disk image image.raw in the current directory. How do we use it? Of course, we could dd it onto some USB stick and boot it on a bare-metal device. However, it's much simpler to first run it in a container for testing:

# systemd-nspawn -bi image.raw

And there you go: the image should boot up and just work for you. Now, let's make things more interesting, still without any of the mkosi.$SOMETHING files around:

# mkosi -t raw_btrfs --bootable -o foobar.raw
# systemd-nspawn -bi foobar.raw


This is similar to the above, but we made three changes: the image is no longer GPT + ext4, but GPT + btrfs; the system is made bootable on UEFI systems; and the output is now called foobar.raw.

Because this system is bootable on UEFI systems, we can run it in KVM:

qemu-kvm -m 512 -smp 2 -bios /usr/share/edk2/ovmf/OVMF_CODE.fd -drive format=raw,file=foobar.raw


This will look very similar to the systemd-nspawn invocation, except that this uses full VM virtualization rather than container virtualization. (Note that the way to run a UEFI qemu/kvm instance appears to change all the time and is different on the various distributions. It's quite annoying, and I can't really tell you what the right qemu command line is to make this work on your system.)

Of course, it's not all raw GPT disk images with mkosi. Let's try a plain directory image:

# mkosi -d fedora -t directory -o quux
# systemd-nspawn -bD quux


Of course, if you generate the image as plain directory you can't boot it on bare-metal just like that, nor run it in a VM.

A more complex command line is the following:

# mkosi -d fedora -t raw_squashfs --checksum --xz --package=openssh-clients --package=emacs


In this mode we explicitly pick Fedora as the distribution to use, ask mkosi to generate a GPT image with a read-only squashfs root, compress the result with xz, and generate a SHA256SUMS file with the hashes of the generated artifacts. The image will contain the SSH client as well as everybody's favorite editor.

Now, let's make use of the various mkosi.$SOMETHING files. Let's say we are working on some Automake-based project and want to make it easy to generate a disk image off the development tree with the version you are hacking on. Create a configuration file:

# cat > mkosi.default <<EOF
[Distribution]
Distribution=fedora
Release=24

[Output]
Format=raw_btrfs
Bootable=yes

[Packages]
# The packages to appear in both the build and the final image
Packages=openssh-clients httpd
# The packages to appear in the build image, but absent from the final image
BuildPackages=make gcc libcurl-devel
EOF

And let's add a build script:

# cat > mkosi.build <<'EOF'
#!/bin/sh
cd $SRCDIR
./autogen.sh
./configure --prefix=/usr
make -j $(nproc)
make install
EOF
# chmod +x mkosi.build


And with all that in place we can now build our project into a disk image, simply by typing:

# mkosi


Let's try it out:

# systemd-nspawn -bi image.raw


Of course, if you do this you'll notice that building an image like this can be quite slow. And slow build times are actively hurtful to your productivity as a developer. Hence let's make things a bit faster. First, let's make use of a package cache shared between runs:

# mkdir mkosi.cache


Building images now should already be substantially faster (and generate less network traffic) as the packages will now be downloaded only once and reused. However, you'll notice that unpacking all those packages and the rest of the work is still quite slow. But mkosi can help you with that. Simply use mkosi's incremental build feature. In this mode mkosi will make a copy of the build and final images immediately before dropping in your build sources or artifacts, so that building an image becomes a lot quicker: instead of always starting totally from scratch a build will now reuse everything it can reuse from a previous run, and immediately begin with building your sources rather than the build image to build your sources in. To enable the incremental build feature use -i:

# mkosi -i


Note that if you use this option, the package list is not updated anymore from your distribution's servers, as the cached copy is made after all packages are installed, and hence until you actually delete the cached copy the distribution's network servers aren't contacted again and no RPMs or DEBs are downloaded. This means the distribution you use becomes "frozen in time" this way. (Which might be a bad thing, but also a good thing, as it makes things kinda reproducible.)

Of course, if you run mkosi a couple of times you'll notice that it won't overwrite the generated image when it already exists. You can either delete the file yourself first (rm image.raw) or let mkosi do it for you right before building a new image, with mkosi -f. You can also tell mkosi to not only remove any such pre-existing images, but also remove any cached copies of the incremental feature, by using -f twice.

I wrote mkosi originally in order to test systemd, and quickly generate a disk image of various distributions with the most current systemd version from git, without all that affecting my host system. I regularly use mkosi for that today, in incremental mode. The two commands I use most in that context are:

# mkosi -if && systemd-nspawn -bi image.raw


And sometimes:

# mkosi -iff && systemd-nspawn -bi image.raw


The latter I use only if I want to regenerate everything based on the very newest set of RPMs provided by Fedora, instead of a cached snapshot of it.

BTW, the mkosi files for systemd are included in the systemd git tree: mkosi.default and mkosi.build. This way, any developer who wants to quickly test something with current systemd git, or wants to prepare a patch based on it and test it, can check out the systemd repository, simply run mkosi in it, and a few minutes later have a bootable image to test in systemd-nspawn or KVM. casync has similar files: mkosi.default, mkosi.build.

# Random Interesting Features

1. As mentioned already, mkosi will generate dm-verity enabled disk images if you ask for it. For that use the --verity switch on the command line or the Verity= setting in mkosi.default. Of course, dm-verity implies that the root volume is read-only. In this mode the top-level dm-verity hash will be placed alongside the output disk image in a file named the same way, but with the .roothash suffix. If the image is created bootable, the root hash is also included on the kernel command line in the roothash= parameter, which current systemd versions can use to both find and activate the root partition in a dm-verity protected way. BTW: it's a good idea to combine this dm-verity mode with the raw_squashfs image mode, to generate a genuinely protected, compressed image suitable for running on your IoT device.

2. As indicated above, mkosi can automatically create a check-sum file SHA256SUMS for you (--checksum) covering all the files it outputs (which could be the image file itself, a matching .nspawn file using the mkosi.nspawn file mentioned above, as well as the .roothash file for the dm-verity root hash.) It can then optionally sign this with gpg (--sign). Note that systemd's machinectl pull-tar and machinectl pull-raw commands can download these files and the SHA256SUMS file automatically and verify things on download. In other words: what mkosi outputs is perfectly ready for download using these two systemd commands.
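The verification step on the download side can be sketched like this, using a placeholder artifact instead of a real image (machinectl does the equivalent automatically for you):

```shell
# Sketch of checksum verification with a placeholder artifact; mkosi's
# SHA256SUMS file covers every file it outputs (image, .nspawn, .roothash).
echo 'pretend image data' > image.raw
sha256sum image.raw > SHA256SUMS

# This is essentially what a downloader does to verify:
sha256sum -c SHA256SUMS   # prints "image.raw: OK" on success
```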

3. As mentioned, mkosi is big on supporting UEFI SecureBoot. To make use of that, place your X.509 key pair in the two files mkosi.secure-boot.crt and mkosi.secure-boot.key, and set SecureBoot= or --secure-boot. If so, mkosi will sign the kernel/initrd/kernel command line combination during the build. Of course, if you use this mode, you should also use Verity=/--verity=, otherwise the setup makes only partial sense. Note that mkosi will not help you with actually enrolling the keys you use in your UEFI BIOS.

4. mkosi has minimal support for git checkouts: when it recognizes it is run in a git checkout and you use the mkosi.build script, the source tree will be copied into the build image, but with all files excluded by .gitignore removed.

5. There's support for encryption in place. Use --encrypt= or Encrypt=. Note that the UEFI ESP is never encrypted though, and the root partition only if explicitly requested. The /home and /srv partitions are unconditionally encrypted if that's enabled.

6. Images may be built with all documentation removed.

7. The password for the root user and additional kernel command line arguments may be configured for the image to generate.

# Minimum Requirements

Current mkosi requires Python 3.5, and has a number of dependencies, listed in the README. Most notably you need a somewhat recent systemd version to make use of its full feature set: systemd 233. Older mkosi versions are already packaged for various distributions, but much of what I describe above is only available in the most recent release, mkosi 3.

The UEFI SecureBoot support requires sbsign which currently isn't available in Fedora, but there's a COPR.

# Future

It is my intention to continue turning mkosi into a tool suitable for:

1. Testing and debugging projects
2. Building images for secure devices
3. Building portable service images
4. Building images for secure VMs and containers

One of the biggest goals I have for the future is to teach mkosi and systemd/sd-boot native support for A/B IoT style partition setups. The idea is that the combination of systemd, casync and mkosi provides generic building blocks for building secure, auto-updating devices in a generic way, even though all pieces may be used individually, too.

# FAQ

1. Why are you reinventing the wheel again? This is exactly like $SOMEOTHERPROJECT! — Well, to my knowledge there's no tool that integrates this nicely with your project's development tree, and can do dm-verity and UEFI SecureBoot and all that stuff for you. So nope, I don't think this is exactly like $SOMEOTHERPROJECT, thank you very much.

2. What about creating MBR/DOS partition images? — That's really out of focus to me. This is an exercise in figuring out how generic OSes and devices in the future should be built and an attempt to commoditize OS image building. And no, the future doesn't speak MBR, sorry. That said, I'd be quite interested in adding support for booting on Raspberry Pi, possibly using a hybrid approach, i.e. using a GPT disk label, but arranging things in a way that the Raspberry Pi boot protocol (which is built around DOS partition tables), can still work.

3. Is this portable? — Well, depends what you mean by portable. No, this tool runs on Linux only, and as it uses systemd-nspawn during the build process it doesn't run on non-systemd systems either. But then again, you should be able to create images for any architecture you like with it, but of course if you want the image bootable on bare-metal systems only systems doing UEFI are supported (but systemd-nspawn should still work fine on them).

4. Where can I get this stuff? — Try GitHub. And some distributions carry packaged versions, but I think none of them carry the current v3 yet.

5. Is this a systemd project? — Yes, it's hosted under the systemd GitHub umbrella. And yes, during run-time systemd-nspawn in a current version is required. But no, the code-bases are separate otherwise, already because systemd is a C project, and mkosi Python.

6. Requiring systemd 233 is a pretty steep requirement, no? — Yes, but the feature we need kind of matters (systemd-nspawn's --overlay= switch), and again, this isn't supposed to be a tool for legacy systems.

7. Can I run the resulting images in LXC or Docker? — Humm, I am not an LXC nor Docker guy. If you select directory or subvolume as image type, LXC should be able to boot the generated images just fine, but I didn't try. Last time I looked, Docker doesn't permit running proper init systems as PID 1 inside the container, as they define their own run-time without intention to emulate a proper system. Hence, no I don't think it will work, at least not with an unpatched Docker version. That said, again, don't ask me questions about Docker, it's not precisely my area of expertise, and quite frankly I am not a fan. To my knowledge neither LXC nor Docker are able to run containers directly off GPT disk images, hence the various raw_xyz image types are definitely not compatible with either. That means if you want to generate a single raw disk image that can be booted unmodified both in a container and on bare-metal, then systemd-nspawn is the container manager to go for (specifically, its -i/--image= switch).

# Should you care? Is this a tool for you?

Well, that's up to you really.

If you hack on some complex project and need a quick way to compile and run your project on a specific current Linux distribution, then mkosi is an excellent way to do that. Simply drop the mkosi.default and mkosi.build files in your git tree and everything will be easy. (And of course, as indicated above: if the project you are hacking on happens to be called systemd or casync be aware that those files are already part of the git tree — you can just use them.)

If you hack on some embedded or IoT device, then mkosi is a great choice too, as it will make it reasonably easy to generate secure images that are protected against offline modification, by using dm-verity and UEFI SecureBoot.

If you are an administrator and need a nice way to build images for a VM or systemd-nspawn container, or a portable service then mkosi is an excellent choice too.

If you care about legacy computers, old distributions, non-systemd init systems, old VM managers, Docker, … then no, mkosi is not for you, but there are plenty of well-established alternatives around that cover that nicely.

And never forget: mkosi is an Open Source project. We are happy to accept your patches and other contributions.

Oh, and one unrelated last thing: don't forget to submit your talk proposal and/or buy a ticket for All Systems Go! 2017 in Berlin — the conference where things like systemd, casync and mkosi are discussed, along with a variety of other Linux userspace projects used for building systems.

### Audio – Stefan Westerfeld's blog

#### 27.06.2016 beast-0.11.0 released

Beast is a music composition and modular synthesis application. beast-0.11.0 is now available at beast.testbit.eu. Support for Soundfont (.sf2) files has been added. On multicore CPUs, Beast now uses all cores for synthesis, which improves performance. Debian packages also have been added, so installation should be very easy on Debian-like systems. And as always, lots of other improvements and bug fixes went into Beast.

Update: I made a screencast of Beast which shows the basics.

### autostatic.com

#### RPi 3 and the real time kernel

As a beta tester for MOD I thought it would be cool to play around with netJACK which is supported on the MOD Duo. The MOD Duo can run as a JACK master and you can connect any JACK slave to it as long as it runs a recent version of JACK2. This opens a plethora of possibilities of course. I’m thinking about building a kind of sidecar device to offload some stuff to using netJACK, think of synths like ZynAddSubFX or other CPU greedy plugins like fat1.lv2. But more on that in a later blog post.

So first I need to set up a sidecar device and I sacrificed one of my RPi's for that, an RPi 3. Flashed an SD card with Raspbian Jessie Lite and started to do some research on the status of real time kernels and the Raspberry Pi. I compiled real time kernels for the RPi before but you had to jump through some hoops to get those running, so I hoped things would have improved somewhat. Well, that's not the case: after having compiled a first real time kernel, the RPi froze as soon as I tried to run apt-get install rt-tests. After having applied a patch to fix how the RPi folks implemented the FIQ system the kernel compiled without issues:

Linux raspberrypi 4.9.33-rt23-v7+ #2 SMP PREEMPT RT Sun Jun 25 09:45:58 CEST 2017 armv7l GNU/Linux

And the RPi seems to run stable with acceptable latencies:

Histogram of the latency on the RPi with a real time kernel during 300000 cyclictest loops

So that’s a maximum latency of 75 µs, not bad. I also spotted some higher values around 100 but that’s still okay for this project. The histogram was created with mklatencyplot.bash. I used a different invocation of cyclictest though:

cyclictest -Sm -p 80 -n -i 500 -l 300000

And I ran hackbench in the background to create some load on the RPi:

(while true; do hackbench > /dev/null; done) &

Compiling a real time kernel for the RPi is still not a trivial thing to do and it doesn't help that the few howto's on the interwebs are mostly copy-paste work, incomplete, and contain routines that are unclear or even unnecessary. One thing that struck me too is that the howto's about building kernels for RPi's running Raspbian don't mention the make deb-pkg routine to build a real time kernel. This will create deb packages that are just so much easier to transfer and install than rsync'ing the kernel image and modules. Let's break down how I built a real time kernel for the RPi 3.

First you’ll need to git clone the Raspberry Pi kernel repository:

git clone -b 'rpi-4.9.y' --depth 1 https://github.com/raspberrypi/linux.git

This will only clone the rpi-4.9.y branch into a directory called linux without any history so you’re not pulling in hundreds of megs of data. You will also need to clone the tools repository which contains the compiler we need to build a kernel for the Raspberry Pi:

git clone https://github.com/raspberrypi/tools.git

This will end up in the tools directory. Next step is setting some environment variables so subsequent make commands pick those up:

export KERNEL=kernel7
export ARCH=arm
export CROSS_COMPILE=/path/to/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian/bin/arm-linux-gnueabihf-
export CONCURRENCY_LEVEL=$(nproc)

The KERNEL variable is needed to create the initial kernel config. The ARCH variable indicates which architecture should be used. The CROSS_COMPILE variable indicates where the compiler can be found. The CONCURRENCY_LEVEL variable is set to the number of cores to speed up certain make routines like cleaning up or installing the modules (it does not set the number of jobs; that is done with the -j option of make).

Now that the environment variables are set we can create the initial kernel config:

cd linux
make bcm2709_defconfig

This will create a .config inside the linux directory that holds the initial kernel configuration. Now download the real time patch set and apply it:

cd ..
wget https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/patch-4.9.33-rt23.patch.xz
cd linux
xzcat ../patch-4.9.33-rt23.patch.xz | patch -p1

Most howto's now continue with building the kernel, but that would result in a kernel that freezes your RPi: the way the RPi folks implemented the FIQ system causes lock-ups when using threaded interrupts, which is the case with real time kernels. That part needs to be patched, so download the patch and dry-run it:

cd ..
wget https://www.osadl.org/monitoring/patches/rbs3s/usb-dwc_otg-fix-system-lockup-when-interrupts-are-threaded.patch
cd linux
patch -i ../usb-dwc_otg-fix-system-lockup-when-interrupts-are-threaded.patch -p1 --dry-run

You will notice that one hunk fails; you will have to add that stanza manually, so note which hunk it is, for which file, and at which line it should be added. Now apply the patch:

patch -i ../usb-dwc_otg-fix-system-lockup-when-interrupts-are-threaded.patch -p1

And add the failed hunk manually with your favorite editor. With the FIQ patch in place we're almost set for compiling the kernel, but before we can move on to that step we need to modify the kernel configuration to enable the real time patch set. I prefer doing that with make menuconfig. Then select Kernel Features - Preemption Model - Fully Preemptible Kernel (RT) and select Exit twice. If you're asked whether you want to save your config, confirm. In the Kernel features menu you could also set the timer frequency to 1000 Hz if you wish; apparently this could improve USB throughput on the RPi (unconfirmed, needs reference). For real time audio and MIDI this setting is irrelevant nowadays though, as almost all audio and MIDI applications use the hr-timer module, which has a way higher resolution. With our configuration saved we can start compiling. Clean up first, then disable some debugging options which could cause some overhead, compile the kernel, and finally create ready-to-install deb packages:

make clean
scripts/config --disable DEBUG_INFO
make -j$(nproc) deb-pkg

Sit back, enjoy a cuppa and when building has finished without errors deb packages should be created in the directory above the linux one. Copy the deb packages to your RPi and install them on the RPi with dpkg -i. Open up /boot/config.txt and add the following line to it:

kernel=vmlinuz-4.9.33-rt23-v7+

Now reboot your RPi and it should boot with the realtime kernel. You can check with uname -a:

Linux raspberrypi 4.9.33-rt23-v7+ #2 SMP PREEMPT RT Sun Jun 25 09:45:58 CEST 2017 armv7l GNU/Linux

Since Raspbian uses almost the same kernel source as the one we just built, it is not necessary to copy any dtb files. Also, running mkknlimg is not necessary anymore; the RPi boot process can handle vmlinuz files just fine.

The basis of the sidecar unit is now done. Next up is tweaking the OS and setting up netJACK.

The post RPi 3 and the real time kernel appeared first on autostatic.com.

## June 21, 2017

### rncbc.org

#### Vee One Suite 0.8.3 - A Summer'17 release

Howdy!

The Vee One Suite of old-school software instruments, namely synthv1, a polyphonic subtractive synthesizer, samplv1, a polyphonic sampler synthesizer, and drumkv1, yet another drum-kit sampler, are into a hot Summer'17 release!

Still available in dual form:

• a pure stand-alone JACK client with JACK-session, NSM (Non Session management) and both JACK MIDI and ALSA MIDI input support;
• a LV2 instrument plug-in.

The Vee One Suite are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

So here they go again!

## synthv1 - an old-school polyphonic synthesizer

synthv1 0.8.3 (summer'17) released!

synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.

LV2 URI: http://synthv1.sourceforge.net/lv2

change-log:

• Added StartupWMClass entry to desktop file.
• Long overdue, some brand new and fundamental icons revamp.

website:
http://synthv1.sourceforge.net

http://sourceforge.net/projects/synthv1/files

git repos:
http://git.code.sf.net/p/synthv1/code
https://github.com/rncbc/synthv1.git
https://gitlab.com/rncbc/synthv1.git
https://bitbucket.org/rncbc/synthv1.git

## samplv1 - an old-school polyphonic sampler

samplv1 0.8.3 (summer'17) released!

samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.

LV2 URI: http://samplv1.sourceforge.net/lv2

change-log:

• Added StartupWMClass entry to desktop file.
• Long overdue, some brand new and fundamental icons revamp.
• Play (current sample) menu item has been added to sample display right-click context-menu as for triggering it as an internal MIDI note-on/off event.

website:
http://samplv1.sourceforge.net

http://sourceforge.net/projects/samplv1/files

git repos:
http://git.code.sf.net/p/samplv1/code
https://github.com/rncbc/samplv1.git
https://gitlab.com/rncbc/samplv1.git
https://bitbucket.org/rncbc/samplv1.git

## drumkv1 - an old-school drum-kit sampler

drumkv1 0.8.3 (summer'17) released!

drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.

LV2 URI: http://drumkv1.sourceforge.net/lv2

change-log:

• Added StartupWMClass entry to desktop file.
• Long overdue, some brand new and fundamental icons revamp.
• Left-clicking on each element fake-LED now triggers it as an internal MIDI note-on/off event. A Play (current element) menu item has also been added to the element list and sample display right-click context-menu.

website:
http://drumkv1.sourceforge.net

http://sourceforge.net/projects/drumkv1/files

git repos:
http://git.code.sf.net/p/drumkv1/code
https://github.com/rncbc/drumkv1.git
https://gitlab.com/rncbc/drumkv1.git
https://bitbucket.org/rncbc/drumkv1.git

Enjoy && have fun ;)

## June 20, 2017

### Audio – Stefan Westerfeld's blog

#### 20.06.2017 spectmorph-0.3.3 released

A new version of SpectMorph, my audio morphing software, is now available on www.spectmorph.org. The main improvement is that SpectMorph now supports portamento and vibrato. For VST hosts with MPE support (Bitwig), the pitch of each note can be controlled by the sequencer, so sliding from a C major chord to a D minor chord is possible. There is also a new portamento/mono mode, which should work with any host.

### GStreamer News

#### GStreamer 1.12.1 stable release

The GStreamer team is pleased to announce the first bugfix release in the stable 1.12 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.12.0.

See /releases/1.12/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

## June 19, 2017

### Pid Eins

#### All Systems Go! 2017 CfP Open

The All Systems Go! 2017 Call for Participation is Now Open!

We’d like to invite presentation proposals for All Systems Go! 2017!

All Systems Go! is an Open Source community conference focused on the projects and technologies at the foundation of modern Linux systems — specifically low-level user-space technologies. Its goal is to provide a friendly and collaborative gathering place for individuals and communities working to push these technologies forward.

All Systems Go! 2017 takes place in Berlin, Germany on October 21st+22nd.

All Systems Go! is a 2-day event with 2-3 talks happening in parallel. Full presentation slots are 30-45 minutes in length and lightning talk slots are 5-10 minutes.

We are now accepting submissions for presentation proposals. In particular, we are looking for sessions including, but not limited to, the following topics:

• Low-level container executors and infrastructure
• IoT and embedded OS infrastructure
• OS, container, IoT image delivery and updating
• Building Linux devices and applications
• Low-level desktop technologies
• Networking
• System and service management
• Tracing and performance measuring
• IPC and RPC systems
• Security and Sandboxing

While our focus is definitely more on the user-space side of things, talks about kernel projects are welcome too, as long as they have a clear and direct relevance for user-space.

Please submit your proposals by September 3rd. Notification of acceptance will be sent out 1-2 weeks later.

systemd.conf will not take place this year in favor of All Systems Go!. All Systems Go! welcomes all projects that contribute to Linux user space, which, of course, includes systemd. Thus, anything you think was appropriate for submission to systemd.conf is also fitting for All Systems Go!

# Introducing casync

In the past months I have been working on a new project: casync. casync takes inspiration from the popular rsync file synchronization tool as well as the probably even more popular git revision control system. It combines the idea of the rsync algorithm with the idea of git-style content-addressable file systems, and creates a new system for efficiently storing and delivering file system images, optimized for high-frequency update cycles over the Internet. Its current focus is on delivering IoT, container, VM, application, portable service or OS images, but I hope to extend it later in a generic fashion to become useful for backups and home directory synchronization as well (but more about that later).

The basic technological building blocks casync is built from are neither new nor particularly innovative (at least not anymore), however the way casync combines them is different from existing tools, and that's what makes it useful for a variety of use-cases that other tools can't cover that well.

# Why?

I created casync after studying how today's popular tools store and deliver file system images. To briefly name a few: Docker has a layered tarball approach, OSTree serves the individual files directly via HTTP and maintains packed deltas to speed up updates, while other systems operate on the block layer and place raw squashfs images (or other archival file systems, such as ISO 9660) for download on HTTP shares (in the better cases combined with zsync data).

Neither of these approaches appeared fully convincing to me when used in high-frequency update cycle systems. In such systems, it is important to optimize towards a couple of goals:

1. Most importantly, make updates cheap traffic-wise (for this most tools use image deltas of some form)
2. Put boundaries on disk space usage on servers (keeping deltas between all version combinations clients might want to update between would suggest keeping an exponentially growing number of deltas on servers)
3. Put boundaries on disk space usage on clients
4. Be friendly to Content Delivery Networks (CDNs), i.e. serve neither too many small nor too many overly large files, and only require the most basic form of HTTP. Provide the repository administrator with high-level knobs to tune the average file size delivered.
5. Be simple to use for users, repository administrators and developers

I don't think any of the tools mentioned above are really good on more than a small subset of these points.

Specifically: Docker's layered tarball approach dumps the "delta" question onto the image creators: the best way to make your image downloads minimal is to base your work on an existing image clients might already have, inheriting its resources and maintaining full history. Here, revision control (a tool for the developer) is intermingled with update management (a concept for optimizing production delivery). As container histories grow, individual deltas are likely to stay small, but on the other hand a brand-new deployment usually requires downloading the full history onto the deployment system, even though there's no use for it there, which likely means substantially larger disk usage and download sizes.

OSTree's serving of individual files is unfriendly to CDNs (as many small files in file trees cause an explosion of HTTP GET requests). To counter that, OSTree supports placing pre-calculated delta images between selected revisions on the delivery servers, which means a certain amount of revision management that leaks into the clients.

Delivering direct squashfs (or other file system) images is almost beautifully simple, but of course means every update requires a full download of the newest image, which is bad both for disk usage and for generated traffic. Enhancing it with zsync makes this a much better option, as it can reduce generated traffic substantially at very little cost in history/meta-data (no explicit deltas between a large number of versions need to be prepared server-side). On the other hand, the server requirements in disk space and functionality (HTTP Range requests) are drawbacks for the use-case I am interested in.

(Note: all the mentioned systems have great properties, and it's not my intention to badmouth them. The only point I am trying to make is that for the use-case I care about, file system image delivery with high-frequency update cycles, each system comes with certain drawbacks.)

# Security & Reproducibility

Besides the issues pointed out above I wasn't happy with the security and reproducibility properties of these systems. In today's world where security breaches involving hacking and breaking into connected systems happen every day, an image delivery system that cannot make strong guarantees regarding data integrity is out of date. Specifically, the tarball format is famously nondeterministic: the very same file tree can result in any number of different valid serializations depending on the tool used, its version and the underlying OS and file system. Some tar implementations attempt to correct that by guaranteeing that each file tree maps to exactly one valid serialization, but such a property is always only specific to the tool used. I strongly believe that any good update system must guarantee on every single link of the chain that there's only one valid representation of the data to deliver, that can easily be verified.
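To make that property concrete, here's a minimal sketch of a canonical tree digest (my own illustration, not casync's actual serialization format): by always walking directory entries in sorted order and hashing relative paths together with contents, the same file tree maps to exactly one digest, regardless of the order in which the file system happens to return directory entries.

```python
import hashlib
import os

def tree_digest(root: str) -> str:
    """Deterministic digest of a file tree: entries are visited in
    sorted order, so the result does not depend on readdir() order."""
    h = hashlib.sha256()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()  # force a canonical traversal order
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            h.update(os.path.relpath(path, root).encode())
            with open(path, "rb") as f:
                h.update(f.read())
    return h.hexdigest()
```

A real implementation would also hash the selected meta-data fields; the point here is only that a canonical ordering yields exactly one valid representation per tree.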

# What casync Is

So much about the background why I created casync. Now, let's have a look what casync actually is like, and what it does. Here's the brief technical overview:

Encoding: Let's take a large linear data stream, split it into variable-sized chunks (the size of each being a function of the chunk's contents), and store these chunks in individual, compressed files in some directory, each file named after a strong hash value of its contents, so that the hash value may be used as a key for retrieving the full chunk data. Let's call this directory a "chunk store". At the same time, generate a "chunk index" file that lists these chunk hash values plus their respective chunk sizes in a simple linear array. The chunking algorithm is supposed to create variable, but similarly sized chunks from the data stream, and to do so in a way that the same data results in the same chunks even if placed at varying offsets. For more information see this blog story.

Decoding: Let's take the chunk index file, and reassemble the large linear data stream by concatenating the uncompressed chunks retrieved from the chunk store, keyed by the listed chunk hash values.
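As a toy model of this encode/decode cycle (my own sketch, not casync's on-disk format: fixed-size chunks stand in for the content-defined ones, zlib for xz, and a Python dict for the chunk store directory):

```python
import hashlib
import zlib

def encode(data: bytes, store: dict, chunk_size: int = 4096) -> list:
    """Split data into chunks, store each compressed under its SHA256
    digest, and return the chunk index as (digest, size) pairs."""
    index = []
    for off in range(0, len(data), chunk_size):
        chunk = data[off:off + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store[digest] = zlib.compress(chunk)  # same chunk, same key: dedup
        index.append((digest, len(chunk)))
    return index

def decode(index: list, store: dict) -> bytes:
    """Reassemble the stream by concatenating the uncompressed chunks
    keyed by the digests listed in the index."""
    return b"".join(zlib.decompress(store[d]) for d, _ in index)
```

Note how storing a second, similar stream into the same store only adds the chunks that actually differ; that is the whole space-saving trick.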

As an extra twist, we introduce a well-defined, reproducible, random-access serialization format for file trees (think: a more modern tar), to permit efficient, stable storage of complete file trees in the system, simply by serializing them and then passing them into the encoding step explained above.

Finally, let's put all this on the network: for each image you want to deliver, generate a chunk index file and place it on an HTTP server. Do the same with the chunk store, and share it between the various index files you intend to deliver.

Why bother with all of this? Streams with similar contents will result in mostly the same chunk files in the chunk store. This means it is very efficient to store many related versions of a data stream in the same chunk store, thus minimizing disk usage. Moreover, when transferring linear data streams chunks already known on the receiving side can be made use of, thus minimizing network traffic.

Why is this different from rsync or OSTree, or similar tools? Well, one major difference between casync and those tools is that we remove file boundaries before chunking things up. This means that small files are lumped together with their siblings and large files are chopped into pieces, which permits us to recognize similarities in files and directories beyond file boundaries, and makes sure our chunk sizes are pretty evenly distributed, without the file boundaries affecting them.

The "chunking" algorithm is based on the buzhash rolling hash function. SHA256 is used as the strong hash function to generate digests of the chunks. xz is used to compress the individual chunks.
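The shift-invariance this buys can be demonstrated with a toy content-defined chunker (a stand-in for the real buzhash-based cutter; the hash, window size and cut mask here are made up for illustration): prepending a few bytes to a stream changes at most the first chunk, and every later cut point lands on the same content as before.

```python
import hashlib

def chunks(data: bytes, window: int = 48, mask: int = 0xFF) -> list:
    """Toy content-defined chunker: cut whenever a hash of the last
    `window` bytes matches `mask` (average chunk size ~ mask+1 bytes).
    Cut points depend only on local content, not absolute offsets."""
    out, start = [], 0
    for i in range(window, len(data)):
        h = int.from_bytes(hashlib.sha256(data[i - window:i]).digest()[:4], "big")
        if (h & mask) == mask:
            out.append(data[start:i])
            start = i
    out.append(data[start:])
    return out
```

Chunking a stream and the same stream with a few bytes prepended yields chunk lists that differ only at the very beginning, so a chunk store holding the one already holds almost all of the other.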

Here's a diagram, hopefully explaining a bit how the encoding process works, despite my crappy drawing skills:

The diagram shows the encoding process from top to bottom. It starts with a block device or a file tree, which is then serialized and chunked up into variable sized blocks. The compressed chunks are then placed in the chunk store, while a chunk index file is written listing the chunk hashes in order. (The original SVG of this graphic may be found here.)

# Details

Note that casync operates on two different layers, depending on the use-case of the user:

1. You may use it on the block layer. In this case the raw block data on disk is taken as-is, read directly from the block device, split into chunks as described above, compressed, stored and delivered.

2. You may use it on the file system layer. In this case, the file tree serialization format mentioned above comes into play: the file tree is serialized depth-first (much like tar would do it) and then split into chunks, compressed, stored and delivered.

The fact that it may be used on both the block and file system layer opens it up for a variety of different use-cases. In the VM and IoT ecosystems shipping images as block-level serializations is more common, while in the container and application world file-system-level serializations are more typically used.

Chunk index files referring to block-layer serializations carry the .caibx suffix, while chunk index files referring to file system serializations carry the .caidx suffix. Note that you may also use casync as a direct tar replacement, i.e. without the chunking, just generating the plain linear file tree serialization. Such files carry the .catar suffix. Internally, .caibx and .caidx files are identical; the only difference is semantic: .caidx files describe a .catar file, while .caibx files may describe any other blob. Finally, chunk stores are directories carrying the .castr suffix.

# Features

Here are a couple of other features casync has:

1. When downloading a new image you may use casync's --seed= feature: each block device, file, or directory specified is processed using the same chunking logic described above, and is used as a preferred source when putting together the downloaded image locally, avoiding network transfer of it. This of course is useful whenever updating an image: simply specify one or more old versions as seed and only download the chunks that truly changed since then. Note that using seeds requires no history relationship between seed and the new image to download. This has major benefits: you can even use it to speed up downloads of relatively foreign and unrelated data. For example, when downloading a container image built using Ubuntu you can use your Fedora host OS tree in /usr as seed, and casync will automatically use whatever it can from that tree, for example timezone and locale data that tends to be identical between distributions. Example: casync extract http://example.com/myimage.caibx --seed=/dev/sda1 /dev/sda2. This will place the block-layer image described by the indicated URL in the /dev/sda2 partition, using the existing /dev/sda1 data as seeding source. An invocation like this could typically be used by IoT systems with an A/B partition setup. Example 2: casync extract http://example.com/mycontainer-v3.caidx --seed=/srv/container-v1 --seed=/srv/container-v2 /srv/container-v3 is very similar but operates on the file system layer, and uses two old container versions to seed the new version.

2. When operating on the file system level, the user has fine-grained control on the meta-data included in the serialization. This is relevant since different use-cases tend to require a different set of saved/restored meta-data. For example, when shipping OS images, file access bits/ACLs and ownership matter, while file modification times hurt. When doing personal backups OTOH file ownership matters little but file modification times are important. Moreover different backing file systems support different feature sets, and storing more information than necessary might make it impossible to validate a tree against an image if the meta-data cannot be replayed in full. Due to this, casync provides a set of --with= and --without= parameters that allow fine-grained control of the data stored in the file tree serialization, including the granularity of modification times and more. The precise set of selected meta-data features is also always part of the serialization, so that seeding can work correctly and automatically.

3. casync tries to be as accurate as possible when storing file system meta-data. This means that besides the usual baseline of file meta-data (file ownership and access bits) and more advanced features (extended attributes, ACLs, file capabilities), a number of more exotic kinds of data are stored as well, including Linux chattr(1) file attributes, as well as FAT file attributes (you may wonder why the latter? — EFI is FAT, and /efi is part of the comprehensive serialization of any host). In the future I intend to extend this further, for example storing btrfs sub-volume information where available. Note that as described above every single type of meta-data may be turned off and on individually, hence if you don't need FAT file bits (and I figure it's pretty likely you don't), then they won't be stored.

4. The user creating .caidx or .caibx files may control the desired average chunk length (before compression) freely, using the --chunk-size= parameter. Smaller chunks increase the number of generated files in the chunk store and increase HTTP GET load on the server, but also ensure that sharing between similar images is improved, as identical patterns in the images stored are more likely to be recognized. By default casync will use a 64K average chunk size. Tweaking this can be particularly useful when adapting the system to specific CDNs, or when delivering compressed disk images such as squashfs (see below).

5. Emphasis is placed on making all invocations reproducible, well-defined and strictly deterministic. As mentioned above this is a requirement to reach the intended security guarantees, but is also useful for many other use-cases. For example, the casync digest command may be used to calculate a hash value identifying a specific directory in all desired detail (use --with= and --without to pick the desired detail). Moreover the casync mtree command may be used to generate a BSD mtree(5) compatible manifest of a directory tree, .caidx or .catar file.

6. The file system serialization format is nicely composable. By this I mean that the serialization of a file tree is the concatenation of the serializations of all files and file sub-trees located at the top of the tree, with zero meta-data references from any of these serializations into the others. This property is essential to ensure maximum reuse of chunks when similar trees are serialized.

7. When extracting file trees or disk image files, casync will automatically create reflinks from any specified seeds if the underlying file system supports it (such as btrfs, ocfs, and future xfs). After all, instead of copying the desired data from the seed, we can just tell the file system to link up the relevant blocks. This works both when extracting .caidx and .caibx files — the latter of course only when the extracted disk image is placed in a regular raw image file on disk, rather than directly on a plain block device, as plain block devices do not know the concept of reflinks.

8. Optionally, when extracting file trees, casync can create traditional UNIX hard-links for identical files in specified seeds (--hardlink=yes). This works on all UNIX file systems, and can save substantial amounts of disk space. However, this only works for very specific use-cases where disk images are considered read-only after extraction, as any changes made to one tree will propagate to all other trees sharing the same hard-linked files, as that's the nature of hard-links. In this mode, casync exposes OSTree-like behavior, which is built heavily around read-only hard-link trees.

9. casync tries to be smart when choosing what to include in file system images. Implicitly, file systems such as procfs and sysfs are excluded from serialization, as they expose API objects, not real files. Moreover, the "nodump" (+d) chattr(1) flag is honored by default, permitting users to mark files to exclude from serialization.

10. When creating and extracting file trees casync may apply an automatic or explicit UID/GID shift. This is particularly useful when transferring container images for use with Linux user namespacing.

11. In addition to local operation, casync currently supports HTTP, HTTPS, FTP and ssh natively for downloading chunk index files and chunks (the ssh mode requires casync to be installed on the remote host, but an sftp mode not requiring that should be easy to add). When creating index files or chunks, only ssh is supported as remote back-end.

12. When operating on block-layer images, you may expose locally or remotely stored images as local block devices. Example: casync mkdev http://example.com/myimage.caibx exposes the disk image described by the indicated URL as a local block device in /dev, which you then may use the usual block device tools on, such as mount or fdisk (only read-only though). Chunks are downloaded on access with high priority, and at low priority when idle in the background. Note that in this mode, casync also plays a role similar to "dm-verity", as all blocks are validated against the strong digests in the chunk index file before passing them on to the kernel's block layer. This feature is implemented through Linux' NBD kernel facility.

13. Similarly, when operating on file-system-layer images, you may mount locally or remotely stored images as regular file systems. Example: casync mount http://example.com/mytree.caidx /srv/mytree mounts the file tree image described by the indicated URL as a local directory /srv/mytree. This feature is implemented through Linux' FUSE kernel facility. Note that special care is taken that the images exposed this way can be packed up again with casync make and are guaranteed to return the bit-by-bit exact same serialization they were mounted from. No data is lost or changed while passing things through FUSE (OK, strictly speaking this is a lie, we do lose ACLs, but that's hopefully just a temporary gap to be fixed soon).

14. In IoT A/B fixed-size partition setups the file systems placed in the two partitions are usually much smaller than the partition size, in order to keep some room for later, larger updates. casync is able to analyze the super-block of a number of common file systems in order to determine the actual size of a file system stored on a block device, so that writing a file system to such a partition and reading it back again will result in reproducible data. Moreover this speeds up the seeding process, as there's little point in seeding the empty space after the file system within the partition.

# Example Command Lines

Here's how to use casync, explained with a few examples:

$ casync make foobar.caidx /some/directory

This will create a chunk index file foobar.caidx in the local directory, and populate the chunk store directory default.castr located next to it with the chunks of the serialization (you can change the name of the store directory with --store= if you like). This command operates on the file system level. A similar command operating on the block level:

$ casync make foobar.caibx /dev/sda1

This command creates a chunk index file foobar.caibx in the local directory describing the current contents of the /dev/sda1 block device, and populates default.castr in the same way as above. Note that you may as well read a raw disk image from a file instead of a block device:

$ casync make foobar.caibx myimage.raw

To reconstruct the original file tree from the .caidx file and the chunk store of the first command, use:

$ casync extract foobar.caidx /some/other/directory
And similarly for the block-layer version:

$ casync extract foobar.caibx /dev/sdb1

or, to extract the block-layer version into a raw disk image:

$ casync extract foobar.caibx myotherimage.raw
The above are the most basic commands, operating on local data only. Now let's make this more interesting, and reference remote resources:

$ casync extract http://example.com/images/foobar.caidx /some/other/directory

This extracts the specified .caidx onto a local directory. This of course assumes that foobar.caidx was uploaded to the HTTP server in the first place, along with the chunk store. You can use any command you like to accomplish that, for example scp or rsync. Alternatively, you can let casync do this directly when generating the chunk index:

$ casync make ssh.example.com:images/foobar.caidx /some/directory
This will use ssh to connect to the ssh.example.com server, and then place the .caidx file and the chunks on it. Note that this mode of operation is "smart": it will only upload chunks currently missing on the server side, and not re-transmit what is already available.

Note that you can always configure the precise path or URL of the chunk store via the --store= option. If you do not do that, then the store path is automatically derived from the path or URL: the last component of the path or URL is replaced by default.castr.
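That derivation rule can be sketched as follows (an illustration of the rule described above, not casync's actual code):

```python
import posixpath
from urllib.parse import urlsplit, urlunsplit

def derive_store(index_url: str) -> str:
    """Replace the last component of an index file's path or URL
    with the default chunk store name, default.castr."""
    scheme, netloc, path, query, frag = urlsplit(index_url)
    path = posixpath.join(posixpath.dirname(path), "default.castr")
    return urlunsplit((scheme, netloc, path, query, frag))
```

For example, the index URL http://example.com/images/foobar.caidx would resolve to the store URL http://example.com/images/default.castr.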

Of course, when extracting .caidx or .caibx files from remote sources, using a local seed is advisable:

$ casync extract http://example.com/images/foobar.caidx --seed=/some/existing/directory /some/other/directory

Or on the block layer:

$ casync extract http://example.com/images/foobar.caibx --seed=/dev/sda1 /dev/sdb2
When creating chunk indexes on the file system layer casync will by default store meta-data as accurately as possible. Let's create a chunk index with reduced meta-data:

$ casync make foobar.caidx --with=sec-time --with=symlinks --with=read-only /some/dir

This command will create a chunk index for a file tree serialization that has three features above the absolute baseline supported: 1s granularity time-stamps, symbolic links and a single read-only bit. In this mode, all the other meta-data bits are not stored, including nanosecond time-stamps, full UNIX permission bits, file ownership or even ACLs or extended attributes. Now let's make a .caidx file available locally as a mounted file system, without extracting it:

$ casync mount http://example.com/images/foobar.caidx /mnt/foobar
And similarly, let's make a .caibx file available locally as a block device:

$ casync mkdev http://example.com/images/foobar.caibx

This will create a block device in /dev and print the used device node path to STDOUT. As mentioned, casync is big on reproducibility. Let's make use of that to calculate a digest identifying a very specific version of a file tree:

$ casync digest .
This digest will include all meta-data bits casync and the underlying file system know about. Usually, to make this useful you want to configure exactly what meta-data to include:

$ casync digest --with=unix .

This makes use of the --with=unix shortcut for selecting meta-data fields. Specifying --with=unix selects all meta-data that traditional UNIX file systems support. It is a shortcut for writing out: --with=16bit-uids --with=permissions --with=sec-time --with=symlinks --with=device-nodes --with=fifos --with=sockets.

Note that when calculating digests or creating chunk indexes you may also use the negative --without= option to remove specific features, starting from the most precise set:

$ casync digest --without=flag-immutable
This generates a digest with the most accurate meta-data, but leaves one feature out: chattr(1)'s immutable (+i) file flag.

To list the contents of a .caidx file use a command like the following:

$ casync list http://example.com/images/foobar.caidx

or

$ casync mtree http://example.com/images/foobar.caidx
The former command will generate a brief list of files and directories, not too different from tar t or ls -al in its output. The latter command will generate a BSD mtree(5) compatible manifest. Note that casync actually stores substantially more file meta-data than mtree files can express, though.

# What casync isn't

1. casync is not an attempt to minimize serialization and downloaded deltas to the extreme. Instead, the tool is supposed to find a good middle ground that is good on traffic and disk space, but does not sacrifice convenience or require explicit revision control. If you care about updates that are absolutely minimal, there are binary delta systems around that might be an option for you, such as Google's Courgette.

2. casync is not a replacement for rsync, or git or zsync or anything like that. They have very different use-cases and semantics. For example, rsync permits you to directly synchronize two file trees remotely. casync just cannot do that, and it is unlikely it ever will.

# Where next?

casync is supposed to be a generic synchronization tool. Its primary focus for now is delivery of OS images, but I'd like to make it useful for a couple other use-cases, too. Specifically:

1. To make the tool useful for backups, encryption is missing. I have pretty concrete plans how to add that. When implemented, the tool might become an alternative to restic, BorgBackup or tarsnap.

2. Right now, if you want to deploy casync in real-life, you still need to validate the downloaded .caidx or .caibx file yourself, for example with some gpg signature. It is my intention to integrate with gpg in a minimal way so that signing and verifying chunk index files is done automatically.

3. In the longer run, I'd like to build an automatic synchronizer for $HOME between systems from this. Each $HOME instance would be stored automatically in regular intervals in the cloud using casync, and conflicts would be resolved locally.

4. casync is written in a shared library style, but it is not yet built as one. Specifically this means that almost all of casync's functionality is supposed to be available as C API soon, and applications can process casync files on every level. It is my intention to make this library useful enough so that it will be easy to write a module for GNOME's gvfs subsystem in order to make remote or local .caidx files directly available to applications (as an alternative to casync mount). In fact the idea is to make this all flexible enough that even the remoting back-ends can be replaced easily, for example to replace casync's default HTTP/HTTPS back-ends built on CURL with GNOME's own HTTP implementation, in order to share cookies, certificates, … There's also an alternative method to integrate with casync in place already: simply invoke casync as a sub-process. casync will inform you about a certain set of state changes using a mechanism compatible with sd_notify(3). In future it will also propagate progress data this way and more.

5. I intend to add a new seeding back-end that sources chunks from the local network. After downloading the new .caidx file off the Internet casync would then search for the listed chunks on the local network first before retrieving them from the Internet. This should speed things up on all installations that have multiple similar systems deployed in the same network.

Further plans are listed tersely in the TODO file.

# FAQ:

1. Is this a systemd project? — casync is hosted under the GitHub systemd umbrella, and the projects share the same coding style. However, the code-bases are distinct and without interdependencies, and casync works fine both on systemd systems and on systems without it.

2. Is casync portable? — At the moment: no. I only run Linux and that's what I code for. That said, I am open to accepting portability patches (unlike for systemd, which doesn't really make sense on non-Linux systems), as long as they don't interfere too much with the way casync works. Specifically this means that I am not too enthusiastic about merging portability patches for OSes lacking the openat(2) family of APIs.

3. Does casync require reflink-capable file systems to work, such as btrfs? — No, it doesn't. The reflink magic in casync is employed when the file system permits it, and it's good to have it, but it's not a requirement, and casync will implicitly fall back to copying when it isn't available. Note that casync supports a number of file system features on a variety of file systems that aren't available everywhere, for example FAT's system/hidden file flags or XFS's projinherit file flag.
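
The clone-with-fallback behaviour is easy to emulate: attempt the `FICLONE` ioctl (the Linux reflink primitive, supported on btrfs and XFS) and fall back to a plain copy when the file system refuses. This is a Linux-only illustrative sketch, not casync's actual code path.

```python
import fcntl
import shutil

FICLONE = 0x40049409  # _IOW(0x94, 9, int): the Linux reflink ioctl

def clone_or_copy(src: str, dst: str) -> str:
    """Reflink src to dst when the file system supports it, else copy.
    Returns which strategy was used."""
    with open(src, "rb") as s, open(dst, "wb") as d:
        try:
            # Share the underlying extents; no data is duplicated.
            fcntl.ioctl(d.fileno(), FICLONE, s.fileno())
            return "reflinked"
        except OSError:
            # EOPNOTSUPP, EXDEV, ...: no reflink here, copy the bytes.
            shutil.copyfileobj(s, d)
            return "copied"
```

Either way the destination ends up with identical content; the reflink path just avoids duplicating the blocks on disk.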

4. Is casync stable? — I just tagged the first, initial release. While I have been working on it for quite some time and it is quite featureful, this is the first time I advertise it publicly, and it has hence received very little testing outside of its own test suite. I am also not fully ready to commit to the stability of the current serialization or chunk index format. I don't see any breakages coming for them though. casync is also pretty light on documentation right now, and does not even have a man page; I intend to correct that soon.

5. Are the .caidx/.caibx and .catar file formats open and documented? — casync is Open Source, so if you want to know the precise format, have a look at the sources for now. It's definitely my intention to add comprehensive docs for both formats, however. Don't forget this is just the initial version right now.

6. casync is just like $SOMEOTHERTOOL! Why are you reinventing the wheel (again)? — Well, because casync isn't "just like" some other tool. I am pretty sure I did my homework, and that there is no tool just like casync right now. The tools coming closest are probably rsync, zsync, tarsnap and restic, but they are quite different beasts, each.

7. Why did you invent your own serialization format for file trees? Why don't you just use tar? — That's a good question, and other systems — most prominently tarsnap — do that. However, as mentioned above, tar doesn't enforce reproducibility. It also doesn't really do random access: if you want to access some specific file you need to read every single byte stored before it in the tar archive to find it, which is of course very expensive. The serialization casync implements places a focus on reproducibility, random access and meta-data control. Much like traditional tar it can still be generated and extracted in a streaming fashion though.

8. Does casync save/restore SELinux/SMACK file labels? — Not at the moment. That's not because I wouldn't want it to, but simply because I am not a guru of either of these systems, and didn't want to implement something I do not fully grok nor can test. If you look at the sources you'll find that there are already some definitions in place that keep room for them though. I'd be delighted to accept a patch implementing this fully.

9. What about delivering squashfs images? How well does chunking work on compressed serializations? — That's a very good point! Usually, if you apply a chunking algorithm to a compressed data stream (let's say a tar.gz file), then changing a single bit at the front will propagate into the entire remainder of the file, so that minimal changes will explode into major changes. Thankfully this doesn't apply that strictly to squashfs images, as squashfs provides random access to files and directories and thus breaks up its compression streams at regular intervals to make seeking easy. This is beneficial for systems employing chunking, such as casync, as it means single-bit changes might affect their vicinity but will not explode in an unbounded fashion. In order to achieve the best results when delivering squashfs images through casync, the block size of squashfs and the chunk size of casync should be matched up (using casync's --chunk-size= option). How precisely to choose both values is left as a research subject for the user, for now.

10. What does the name casync mean? — It's a synchronizing tool, hence the -sync suffix, following rsync's naming. It makes use of the content-addressable concept of git, hence the ca- prefix.

11. Where can I get this stuff? Is it already packaged? — Check out the sources on GitHub. I just tagged the first version. Martin Pitt has packaged casync for Ubuntu. There is also an ArchLinux package. Zbigniew Jędrzejewski-Szmek has prepared a Fedora RPM that hopefully will soon be included in the distribution.

# Should you care? Is this a tool for you?

Well, that's up to you really. If you are involved with projects that need to deliver IoT, VM, container, application or OS images, then maybe this is a great tool for you — but other options exist, some of which are linked above. Note that casync is an Open Source project: if it doesn't do exactly what you need, prepare a patch that adds what you need, and we'll consider it.

If you are interested in the project and would like to talk about this in person, I'll be presenting casync soon at Kinvolk's Linux Technologies Meetup in Berlin, Germany. You are invited. I also intend to talk about it at All Systems Go!, also in Berlin.
## June 18, 2017

### GStreamer News

#### GStreamer 1.10.5 stable release (binaries)

Pre-built binary images of the 1.10.5 stable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android. The builds are available for download from: Android, iOS, Mac OS X and Windows.

## June 17, 2017

### KXStudio News

#### DPF-Plugins v1.1 released

With some minor things finally done and all reported bugs squashed, it's time to tag a new release of DPF-Plugins. The initial 1.0 version was not really advertised/publicized before, as there were still a few things I wanted done first - but they were already usable as-is. The base framework used by these plugins (DPF) will get some deep changes soon, so better to have this release out now.

I will not write a changelog here; it was just many small changes here and there for all the plugins since v1.0. Just think of this release as the initial one. :P

The source code plus Linux, macOS and Windows binaries can be downloaded at https://github.com/DISTRHO/DPF-Plugins/releases/tag/v1.1. The plugins are released as LADSPA, DSSI, LV2, VST2 and JACK standalone.

As this is the first time I show off the plugins like this, let's go through them a little bit... The order shown is more or less the order in which they were made. Note that most plugins here were made/ported as a learning exercise, so not everything is new. Many thanks to António Saraiva for the design of some of these interfaces!

### Mini-Series

This is a collection of small but useful plugins, based on the good old LOSER-Dev Plugins. This collection currently includes 3 Band EQ, 3 Band Splitter and Ping Pong Pan.

### MVerb

Studio quality, open-source reverb. Its release was intended to provide a practical demonstration of Dattorro’s figure-of-eight reverb structure and provide the open source community with a high quality reverb. This is a DPF'ied build of the original MVerb plugin, allowing a proper Linux version with UI.
### Nekobi

Simple single-oscillator synth based on the Roland TB-303. This is a DPF'ied build of the nekobee project, allowing LV2 and VST builds of the plugin, plus a nicer UI with a simple cat animation. ;)

### Kars

Simple karplus-strong plucked string synth. This is a DPF'ied build of the karplong DSSI example synth, written by Chris Cannam. It implements the basic Karplus-Strong plucked-string synthesis algorithm (Kevin Karplus & Alex Strong, "Digital Synthesis of Plucked-String and Drum Timbres", Computer Music Journal 1983).

### ndc-Plugs

DPF'ied ports of some plugins from Niall Moody. See http://www.niallmoody.com/ndcplugs/plugins.htm for the original author's page. This collection currently includes the Amplitude Imposer, Cycle Shifter and Soul Force plugins.

### ProM

projectM is an awesome music visualizer. This plugin makes it work as an audio plugin (LV2 and VST).

### glBars

This is an OpenGL bars visualization plugin (as seen in XMMS and XBMC/Kodi). Adapted from the jack_glbars project by Nedko Arnaudov.

## June 15, 2017

### ardour

#### Ardour 5.10 released

We are pleased to announce the availability of Ardour 5.10. This is primarily a bug-fix release, with several important fixes for recent selection/cut/copy/paste regressions along with fixes for many long standing issues large and small. This release also sees the arrival of VCA slave automation, along with improvements in overall VCA master/slave behaviour. There are also significant extensions to Ardour's OSC support. Read more below for the full list of features, improvements and fixes.

### GStreamer News

#### GStreamer 1.10.5 stable release

The GStreamer team is pleased to announce the fifth bugfix release in the stable 1.10 release series of your favourite cross-platform multimedia framework! This release only contains bugfixes and it should be safe to update from 1.10.0. It is most likely the last release in the stable 1.10 release series. See /releases/1.10/ for the full release notes.
Binaries for Android, iOS, Mac OS X and Windows will be available shortly. Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

## June 14, 2017

### blog4

#### new sound installation by Tina Madsen opens in Liebig12, Berlin

Block 4 artist Tina Mariane Krogh Madsen presents her new sound installation in Berlin in Liebig12 this Thursday, June 15: http://www.liebig12.net/15-06-tina-mariane-krogh-madsenbody-resonance-sound-installation/

## June 10, 2017

### KXStudio News

#### KXStudio 14.04.5 release and future plans

Hello there, it's time for another KXStudio ISO release! KXStudio 14.04.5 is here!

Lots have changed in the applications and plugins for Linux Audio (even in KXStudio itself), so it was about time to see those ISO images updated. Behind the scenes, from what the user can see, it might appear as if nothing has truly changed. After all, this is an updated image still based on Ubuntu 14.04, like those from 2 years ago. But we had a really big number of releases for our beloved software, enough to deserve this small ISO update.

There is no list of changes this time, sorry. The main thing worth mentioning is that the base system is exactly the same, with only applications and plugins updated. You know the saying: if it ain't broken, don't fix it!

Before you ask... no, there won't be a 16.04 based ISO release. When 2016 started, KDE5 was not in good enough shape, and it would have needed a lot of work (and time) to port all the changes made for KDE4 into KDE5. KDE5 is a lot better now than it used to be, but we missed the opportunity there.
The current plan is to slowly migrate everything we have into KDE5 (meta-packages, scripts, tweaks, artwork, etc.) and do a new ISO release in May 2018. (Yes, this means using Ubuntu 18.04 as base.) The choice of KDE Plasma as desktop environment is not set in stone; other (lighter) desktops have appeared recently that will be considered. In the end it depends on whether it will be stable and good enough for audio production.

You can download the new ISOs on the KXStudio website, at http://kxstudio.linuxaudio.org/Downloads#LiveDVD.

And that's it for now. We hope you enjoy KXStudio, be it the ISO "distribution" release or the repositories.

## June 09, 2017

### Linux – CDM Create Digital Music

#### Ableton have now made it easy for any developer to work with Push 2

You know Ableton Push 2 will work when it’s plugged into a computer and you’re running Ableton Live. You get bi-directional feedback on the lit pads and on the screen. But Ableton have also quietly made it possible for any developer to make Push 2 work – without even requiring drivers – on any software, on virtually any platform. And a new library is the final piece in making that easy.

Even if you’re not a developer, that’s big news – because it means that you’ll likely see solutions for using Push 2 with more than just Ableton Live. That not only improves Push as an investment, but ensures that it doesn’t collect dust or turn into a paperweight when you’re using other software – now or down the road.

And it could also mean you don’t always need a computer handy. Push 2 uses standards supported on every operating system, so this could mean operation with an iPad or a Raspberry Pi. That’s really what this post-PC thing is all about. The laptop still might be the best bang-for-your-buck equation in the studio, but maybe live you want something in the form of a stompbox, or something that goes on a music stand while you sing or play.

If you are a developer, there are two basic pieces.
First, there’s the Push Interface Description. This bit tells you how to take control of the hardware’s various interactions. https://github.com/Ableton/push-interface

Now, it was already possible to write to the display, but it was a bit of work. Out this week is a simple C++ code library you can bootstrap, with example code to get you up and running. It’s built in JUCE, the tool of choice for a whole lot of developers, mobile and desktop alike. (Thanks, ROLI!) https://github.com/Ableton/push2-display-with-juce

Marc Resibois created this example, but credit to Ableton for making this public. Here’s an example of what you can do, with Marc demonstrating on the Raspberry Pi:

This kind of openness is still very much unusual in the hardware/software industry. (Novation’s open source Launchpad Pro firmware API is another example; it takes a different angle, in that you’re actually rewriting the interactions on the device. I’ll cover that soon.) But I think this is very much needed. Having hardware/software integration is great. Now it’s time to take the next step and make that interaction more accessible to users.

Open ecosystems in music are unique in that they tend to encourage, rather than discourage, sales. They increase the value of the gear we buy, and deepen the relationships makers have with users (manufacturers and independent makers alike). And these sorts of APIs also, ironically, force hardware developers to make their own iteration and revision easier.

It’s also a great step in a series of steps forward on openness and interoperability from Ableton. Whereas the company started with relatively closed hardware APIs built around proprietary manufacturer relationships, Ableton Link and the Push API and other initiatives are making it easier for Live and Push users to make these tools their own.

The post Ableton have now made it easy for any developer to work with Push 2 appeared first on CDM Create Digital Music.
## June 08, 2017

### Linux – CDM Create Digital Music

#### ROLI now make a $299, ultra-compact expressive keyboard

ROLI are filling out their mobile line of controllers, Blocks, with a two-octave keyboard – and that could change a lot. In addition to the wireless Bluetooth, battery-powered light-up X/Y pad and touch shortcuts, now you get something that looks like an instrument. The Seaboard Block is an ultra-mobile, expressive keyboard for your iOS gadget or computer, and it’s available for $299, including in Apple Stores. If you wanted a new-fangled “expressive” keyboard – a controller on which you can move your fingers into and around the keys for extra expression – ROLI already had one strong candidate. The Seaboard RISE is a beautiful, futuristic, slim device with a familiar key layout and a price of US$799. It’ll feel a bit weird playing a piano sound on it if you’re a keyboardist, since the soft, spongy keys will be new to you. But you’ll know where the notes are, and it’ll be responsive. Then, switch to any more unusual sound – synths, physical modeled instruments, and the like – and it becomes simply magical. Finally, you have a new physical interface for your new, unheard sounds.

For me, the RISE was already a sweet spot. But I’ll be honest, I can still imagine holding back because of the price. And it doesn’t fit in my backpack, or my easyJet-friendly rollaway.

Size and price matter. So the Seaboard Block, if it feels good, could really be the winner. And even if you passed up that X/Y pad and touch controller, you might take a second look at this one. (Plus, it makes those Blocks make way more sense.)

We’ll get one in to test when they ship later this month. But ROLI also promise a touch and feel similar to the RISE (if not quite as deep, since the Block is slimmer). I found the previous Blocks to be responsive, but not as expressive as the RISE – so that’s good news.

What you get is a two-octave keyboard in a small-but-playable minikey form factor, USB-C for charging and MIDI out, and connectors for snap-and-play use with other Blocks.

For those of you not familiar, the Seaboard line also include what ROLI somewhat confusingly call “5D Touch.” (“Help! I’m trapped in a tesseract and wound up in a wormhole to an evil dimension and now there’s a version of me with an agonizer telling me to pledge allegiance to the Terran Empire!”)

What this means in practical terms is, you can push your fingers into the keys and make something happen, or slide them up and down the surface of the keys and make something happen, or wiggle and bend between notes, or run your finger along a continuous touch strip below the keys and get glissandi. And that turns out to be really, really useful. Also, I can’t overstate this enough – if you have even basic keyboard skills, having a piano-style layout is enormously intuitive. (By the same token, the Linnstrument seems to make sense to people used to frets.)

Add an iPhone or iPad running iOS 9 or later, and you can instantly turn this into an instrument – no wires required. The free Noise app gives you tons of sounds to start with. That means this is probably the smallest, most satisfying jam-on-the-go instrument I can imagine – something you could fit into a purse, let alone a backpack, and use in a hotel room or on a bus without so much as a wire or power connection. (With ten hours of battery life, I’m fairly certain the Seaboard Block will run out of battery later than my iPhone does.)

Regular CDM readers probably will want it to do more than that for three hundred bucks. So, you do get compatibility with various other tools. Ableton Live, FXpansion Strobe2, Native Instruments Kontakt and Massive, Bitwig Studio, Apple Logic Pro (including the amazing Sculpture), Garageband, SampleModeling SWAM, and the crazy-rich Spectrasonics Omnisphere all work out of the box.

You can also develop your own tools with a rich open SDK and API. That includes some beautiful tools for Max/MSP. Not a Max owner? There’s even a free 3-month license included. (Dedicated tools for integrating the Seaboard Block are coming soon.)

To me, the SDK actually makes this worth the investment – and worth the wait to see what people come up with. I’ll have a full story on the SDK soon, as I think this summer is the perfect time for it.

The Touch block, which previously seemed a bit superfluous, also now looks useful, as it gives you additional hands-on control of how the keyboard responds. That X/Y pad makes a nice combo, too. But my guess is, for most of us, you may drop those and just use the keyboard – and of course modularity allows you to do that.

ROLI aren’t without competition (somewhat amazingly, given these devices were once limited to experimental one-offs). The forthcoming JOUE, from the creator of the JazzMutant Lemur, is an inbound Kickstarter-backed product. And I have to say, it’s truly extraordinary – the touch sensitivity and precision is unmatched on the market. But there isn’t an obvious controller template or app combo to begin with, so it’s more a specialist device. The ROLI instrument works out of the box with an app, and will be in physical Apple Stores. And the ROLI has a specific, fixed playing style the JOUE doesn’t quite match. My guess is the two will be complementary, and there’s even reason for JOUE lovers to root for ROLI – because ROLI are developing the SDK, tools, instrument integration, and user base that could help other devices to succeed. (Think JOUE, Linnstrument, Madrona Labs Soundplane, not to mention the additions to the MIDI spec.)

Anyway, this is all big news – and coming on the heels of news of Ableton’s acquisition of Max/MSP, this week may prove a historic one. What was once the fringe experimentation of the academic community is making a real concerted entry into the musical mainstream. Now the only remaining question, and it’s a major one, is whether the weirdo stuff catches on. Well, you have a hand in that, too – weirdos, assemble!

https://roli.com/products/blocks/seaboard-block

The post ROLI now make a $299, ultra-compact expressive keyboard appeared first on CDM Create Digital Music.

#### Arturia AudioFuse: all the connections, none of the hidden settings

After a long wait, Arturia’s AudioFuse interface has arrived. And on paper, at least, it’s like audio interface wish fulfillment.

What do you want in an interface? You want really reliable, low-latency audio. You want all the connections you need. (Emphasis on what you need, because that’s tricky – not everyone needs the same thing.) And you want to be able to access the settings without having to dive through menus or load an application.

That last one has often been a sticking point. Even when you do find an interface with the right connections and solid driver reliability and performance, a lot of the time the stuff you change every day is buried in some hard-to-access menus, or even more likely, on some application you have to load on your computer and futz around with.

And oh yeah — it’s €/$599. That’s aggressively competitive when you read the specs.

I requested one of these for review when I met with Arturia at Musikmesse in Frankfurt some weeks ago, so this isn’t a review – that’s coming. But here are some important specs.

### Connections

Basically, you get everything you need as a solo musician/producer – 4 outs (so you can do front/rear sound live, for instance), 4 ins, plus phono pre’s for turntables, two mic pres (not just one, as some boxes annoyingly have), and MIDI.

Plus, there’s direct monitoring, separate master / monitor mix channels (which is great for click tracks, cueing for DJs or live, and anything that requires a separate monitor mix, as well as tracking), and a lot of sync and digital options.

It’s funny, this is definitely on my must-have list, but it’s hard to find a box that does this without getting an expansive (and expensive) interface that may have more I/O than one person really needs.

This is enough for pretty much all the tracking applications one or two people recording will need, plus the monitoring options you need for various live, DJ, and studio needs, and A/B monitor switching you need in the studio. It also means as a soloist, you can eliminate a lot of gear – also important when you’re on the go.

Their full specs:

2 DiscretePRO microphone preamps
2 RIAA phono preamps
2x Mic/Instrument/Line (XLR / 1/4″ TRS)
2x Phono/Line (RCA / 1/4″ TRS)
4 analog outputs (1/4″ TRS)
S/PDIF in/out
Word clock in/out
MIDI in/out
24-bit next-generation A-D/D-A converters at up to 192kHz sampling rate
Talkback with dedicated built-in microphone (up to 96 kHz Sample Rate)
A/B speaker switching
Direct monitoring
Separate master and monitor mix channels
USB interface with PC, Mac, iOS, Android and Linux compatibility
3-port USB hub
3 models: Classic Silver, Space Grey, Deep Black
Aluminum chassis, hard leather-covered top cover

Arturia also promise high-end audio performance, to the tune of “dual state-of-the-art mic preamps with a class-leading >131dB A-weighted EIN rating.” I’ll try to test that with some people who are better engineers than I am when we get one in.

Also cute – a 3-port USB hub. So this could really cut down the amount of gear I pack.

Now, my only real gripe is, while USB improves compatibility, I’d love a Thunderbolt 3/USB-C version of this interface, especially as that becomes the norm on Mac and PC. Maybe that will come in the future; it’s not hard to imagine Arturia making two offerings if this box is a success. USB remains the lowest common denominator, and this is not a whole lot of simultaneous I/O, so USB makes some sense. (Thunderbolt should theoretically offer stable lower latency performance by allowing smaller buffer sizes.)

### And dedicated controls

This is a big one. You’ll read a lot of the above on specs, but then discover that audio interfaces make you launch a clumsy app on your PC or Mac and/or dive into menus to get into settings.

That’s doubly annoying in studio use where you don’t want to break flow. How many times have you been in the middle of a session and lost time and concentration because some setting somewhere wasn’t set the way you intended, and you couldn’t see it? (“Hey, why isn’t this recording?” “Why is this level wrong?” “Why can’t I hear anything?” “Ugh, where’s the setting on this app?” … are … things you may hear if you’re near me in a studio, sometimes peppered with less-than-family-friendly bonus words.)

So Arturia have made an interface that has loads of dedicated controls. Maybe it doesn’t have a sleek, scifi minimalist aesthetic as a result, but … who cares?

Onboard dedicated controls that don’t require menu diving include: talking mic, dedicated input controls, A/B monitor switching, and a dedicated level knob for headphones.

### And OS compatibility

This is the other thing – there are some great interfaces that lack support for Linux and mobile. So, for instance, if you want to rig up a custom Raspberry Pi for live use or something like that, this can double as the interface. Or you can use it with Android and iOS, which with increasingly powerful tablets starts to look viable, especially for mobile recording or stage use.

Arturia tell us performance, depending on your system, should be reliably in the territory of 4.5ms – well within what you’re likely to need, even for live (and you can still monitor direct). Some tests indicate performance as low as 3.5ms.

### Plus a nice case and cover

Here’s an idea that’s obviously a long time coming. The AudioFuse not only has an adorable small form factor and aluminum chassis, but there’s a cover for it. So no more damage and scratches or even breaking off knobs when you tote this thing around – that to me is an oddly huge “why doesn’t everyone do this” moment.

The lid has a doubly useful feature – it disables the controls when it’s on, so you can avoid bumping something onstage.

Dimensions:
69 × 126 × 126 mm

Weight:
950 g

I’m very eager to get this in my hands. Stay tuned.

The post Arturia AudioFuse: all the connections, none of the hidden settings appeared first on CDM Create Digital Music.

## June 07, 2017

### MOD Devices Blog

#### MOD travels around the world – Part 2

Last year, Gianfranco wrote a post about the international events MOD Devices has attended and because there’s been a lot of activity recently and a lot more to come in the near future, we’re doing a Part Deux, with all the latest events recaps and news. Enjoy!

Ok, so we’re a music technology startup, and these are three of the greatest words you can say whenever someone asks you “and what do YOU do?” at an event. But we’re also part of the free/libre/open source software community, which is what makes us a bit of an exotic fish in certain environments. Yet this is what gives us our edge and the ability to try to change the game and provide a creative platform that empowers its users.

At every event we go to, we’re constantly pitching and demonstrating the Duo (and, as of April, its new peripherals) to everyone we meet, and it’s interesting to see that each event has its own specificity, each crowd its expectations, each musician his or her own particular needs. As we have these conversations, we get some wonderful feedback, broaden the community and make some friends in the process. It’s both exhausting and really fascinating!

## Musikmesse 2017

Last April, we went back to Frankfurt and took part in the Musikmesse again. This time, we weren’t accompanied by the musical mastermind who thought of a world without musical instruments, but we had a great team composed of Pjotr, Jesse, Gian and myself. We were located in the electric guitar hall and relied on our beautiful Pedalboard Builder interface to lure the attendants to our booth. Also, Pjotr and Jesse’s trumpet and Circuit MOD jams were bound to get us some attention. At one point, they caught the eye of a French podcast crew and I ended up being interviewed for the great Les Sondiers channel (you can check it out below).

We made friends all around us, but a special nod must go to luthier Jean-Luc Moscato and bass virtuoso Jeff Corallini, who were right next to us. With his 7-string bass, Jeff was always impressing everyone who walked by. Someone filmed a nice impromptu jam that happened at some point. Our own Pjotr Lasschuit got some trumpet action there as well:

With music booming everywhere, we were happy to explore some of the other (quieter) halls and check out the latest gear. I was particularly impressed by this super versatile MIDI wind instrument.

All in all, we got another great feeling of our place in this impressive and innovative industry and, like during NAMM earlier this year, we took another step forward in gathering momentum, creating some buzz and starting collaborations.

## LAC 2017

The Linux Audio Conference has been THE community event for us since our first time there in 2013. This year, it was held in Saint Etienne, co-organized by the GRAME from Lyon and the CIEREC from Saint Etienne’s Jean Monnet University. It’s always a great opportunity to meet, chat and have a drink or two with our community’s developers, enthusiasts and supporters.

This year, we held a workshop on the “Origins, features and roadmap of the MOD Duo” and were really thrilled with the dialogue it sparked.

There was also a very insightful keynote speech by Paul Davis, developer of JACK and Ardour among some other great achievements. He presented his view on the state of Linux Audio, open-source development in general and he even mentioned MOD Devices as an example of an open-source-based company striving to get proper marketing promotion (indeed we are!). I was also super excited about the music tutor developed by Marc Groenewegen from the Utrecht School of Music and Technology. We talked a little bit after his session and along with Robin Gareus we imagined how we could soon have a music tutor plugin for the Duo. You can check out these (and others’) talks on the Youtube channel of Université Jean Monnet here.

The evenings were filled with musical performances and our own Jeremy Jongepier, AKA AutoStatic, closed off the second night with a MOD-fueled concert. He totally owned the stage with his Duo, guitar and MIDI controllers, all the while downing a nice cold beer: very RocknRoll! The video for that is here and starts at around 2:40:00.

## Upcoming events

From attending these events we’ve come to realize that we’re really reconciling these two aspects, the investor-friendly and the idealistic FLOSS developer, which isn’t always easy, but they’re actually two sides of the same coin. We’re looking to take the best from both worlds: bring some much-needed investment and new business models to the FLOSS world and provide evolving and innovative devices based on FLOSS to the music market.

The next events we’ll attend are a perfect place to continue to position ourselves as a company with a different outlook and mindset on the musical effects game.

### Sónar+D MarketLab

Next week, we will be in Barcelona for a very exciting event. It will be our second participation at the Sónar+D after being selected as a finalist for the 2015 Startup Competition. We will have a booth at the MarketLab this time, which is, as the organizers put it, “a space where the creators of the year’s most outstanding technology initiatives present the projects that they have developed in creative labs, media labs, universities and businesses. A place for trying out innovations that explore new forms of creation, production and marketing, and which in turn fosters relationships between professionals in the creative industries and the general public”. Who knows, maybe Björk will come and test the Duo out…

### Les Ardentes Start-up Garden

In early July, we are headed to Liège, in Belgium, to be one of 30 startups at the Living Lab of Wallifornia MusicTech, held during the Les Ardentes music festival. This will be another great opportunity to show the Duo to a broad audience, from musicians to investors. Good music and great conversations on the horizon, what more could we ask for?

That’s it for now, but there’ll be more next semester, for sure! And if any of you will be around in Spain or Belgium for our next two rendezvous, we’d love to see you, so drop us a line.

## May 30, 2017

### blog4

#### Notstandskomitee concert video

The Notstandskomitee concert at Fraction Bruit #17, Loophole Berlin, 27.5.2017 with tracks from the new album The Golden Times:

## May 26, 2017

### blog4

#### The Golden Times are here

Block 4 released the new album by Notstandskomitee: The Golden Times

The Golden Times by Notstandskomitee

## May 23, 2017

### blog4

#### new Notstandskomitee album and Berlin concert

Block4 set the release date of the new Notstandskomitee album The Golden Times to this Friday, the 26. May 2017! It will be released exclusively on Bandcamp at the usual address https://notstandskomitee.bandcamp.com
On Saturday the 27.May 2017 Malte Steiner will play a new Notstandskomitee set with new realtime visuals at the final Fraction Bruit event in Loophole, Berlin.

The Golden Times Are About To Come

#### Body Interfaces: 10.1.1 100 Continue

The performance Body Interfaces: 10.1.1 100 Continue by my better half Tina Mariane Krogh Madsen at the Sofia Underground Performance Art Festival, 28.4.2017.

## May 19, 2017

### Libre Music Production - Articles, Tutorials and News

#### Paul Davis, Ardour and JACK creator/developer, talks at Linux Audio Conference 2017

The Linux Audio Conference 2017 is under way and this year, Ardour and JACK creator/developer, Paul Davis talked about Linux audio and his thoughts on where things currently stand with his presentation "20 years of open source audio: Success, Failure and the In-between".

#### LSP plugins 1.0.24 released

Vladimir Sadovnikov has just released version 1.0.24 of his audio plugin suite, LSP plugins. All LSP plugins are available in LADSPA, LV2, LinuxVST and standalone JACK formats.

#### Ardour 5.9 is released

Ardour 5.9 has recently been released, bringing new features along with many improvements and fixes.

#### Drumgizmo 0.9.14 is released

The Drumgizmo team have officially announced version 0.9.14 of their drum sampling plugin.

DrumGizmo is an open source, multichannel, multilayered, cross-platform drum plugin and stand-alone application. It enables you to compose drums in MIDI and mix them with a multichannel approach, comparable to mixing a real drumkit that has been recorded with a multimic setup.

## May 15, 2017

### ardour

#### Ardour 5.9 released

Ardour 5.9 is now available, representing several months of development that spans some new features and many improvements and fixes.

Among other things, some significant optimizations were made to redraw performance on OS X/macOS that may be apparent if you are using Ardour on that platform. There were further improvements to tempo and MIDI related features and lots of small improvements to state serialization. Support for the Presonus Faderport 8 control surface was added (see the manual for some quite thorough documentation).

As usual, there are also dozens or hundreds of other fixes based on continuing feedback from wonderful Ardour users worldwide.

Read more below for the full list of features, improvements and fixes.

## May 10, 2017

### rncbc.org

#### Qtractor 0.8.2 - A Stickier Tauon release

And now for something ultimately pretty much expected: the Qstuff* pre-LAC2017 release frenzy wrap up!

Qtractor 0.8.2 (a stickier tauon) is released!

Change-log:

• Track-name uniqueness is now being enforced, by adding an auto-incremental number suffix whenever necessary.
• Attempt to raise an internal transient file-name registry to prevent automation/curve files from proliferating across several session load/save (re)cycles.
• Track-height resizing now meets immediate visual feedback.
• A brand new user preference global option is now available: View/Options.../Plugins/Editor/Select plug-in's editor (GUI) if more than one is available.
• More gradient eye-candy on main track-view and piano-roll canvases, now showing left and right edge fake-shadows.
• Fixed the time entry spin-boxes when changing time offset or length fields in BBT time format that goes across any tempo/time-signature change nodes.
• French (fr) translation update (by Olivier Humbert, thanks).

Description:

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. The target platform is Linux, where the JACK Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.

Website:

http://qtractor.org
http://qtractor.sourceforge.net

Project page:

http://sourceforge.net/projects/qtractor

http://sourceforge.net/projects/qtractor/files

Git repos:

http://git.code.sf.net/p/qtractor/code
https://github.com/rncbc/qtractor.git
https://gitlab.com/rncbc/qtractor.git
https://bitbucket.org/rncbc/qtractor.git

Wiki (help still wanted!):

http://sourceforge.net/p/qtractor/wiki/

Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Enjoy && Have fun, always.

## May 06, 2017

### Libre Music Production - Articles, Tutorials and News

#### ZARAZA releases new album entirely recorded with Libre Music tools

Ecuadorian / Canadian experimental veterans ZARAZA have just released their 3rd album Spasms of Rebirth.

It was entirely recorded using Libre Music tools:

• Fedora 25
• Ardour (all mixing)
• Guitarix (all guitars and bass)
• Calf plugins (for mixing in Ardour)
• Audacity (mastering)

#### DrumGizmo version 0.9.13 now available

DrumGizmo is an open source, multichannel, multilayered, cross-platform drum plugin and stand-alone application. It enables you to compose drums in MIDI and mix them with a multichannel approach, comparable to mixing a real drumkit that has been recorded with a multimic setup.


#### New Drumgizmo version released with major new feature, diskstreaming!

Version 0.9.13 of drum sampling plugin, Drumgizmo has recently been released with the much anticipated diskstreaming feature.

DrumGizmo is an open source, multichannel, multilayered, cross-platform drum plugin and stand-alone application. It enables you to compose drums in MIDI and mix them with a multichannel approach, comparable to mixing a real drumkit that has been recorded with a multimic setup.

### MOD Devices Blog

#### Tutorial: Arduino & Control Chain

Hi there once again fellow MOD-monsters! As some of you might know, we are currently in the beta testing phase for our new Control Chain footswitch extension. At the same time, we have also released the brand new Arduino Control Chain shield, allowing you to build your own awesome controllers.

If you’re thinking, hey Jesse, what is all that Control Chain talk about?

Control Chain is an open standard, including hardware, communication protocol, cables and connectors, developed to connect external controllers to the MOD. For example, footswitch extensions, expression pedals and so on.
Compared to MIDI, Control Chain is way more powerful. For example, instead of using hard-coded values as MIDI does, Control Chain has what is called a device descriptor, and its assignment (or mapping) message contains the full information about the parameter being assigned, such as parameter name, absolute value, range and any other data. Having all that information on the device side allows developers to create powerful peripherals that can, for example, show the absolute parameter value on a display, use different LED colors to indicate a specific state, etc. Pretty neat, right?

Until now, you could find two examples, for a simple momentary button and a potentiometer, on our GitHub page, but today we will add a new example: we will build a Control Chain device with expression pedal inputs.

# What do I need?

1. One Arduino Uno or Due
2. One Arduino Control Chain shield
3. One stereo (TRS) jack for every expression pedal input that you want (max: 4 on the Uno, 8 on the Due)
4. A soldering iron, some wire and some soldering tin
5. (Optional) Something to put your final build in

# The schematic

Because the Arduino has very high impedance analog inputs, there is no need for any current limiting resistor. We can simply hook up the TRS jacks as follows: tip to 5 V, ring to signal and sleeve to ground.

# The code

The Arduino code is quite simple: it reads the ADC values using the analogRead() function and stores them in a variable. The Control Chain library takes care of the rest.

The code is written in such a way that you can change the define at the top of the code to the number of ports that you want, without having to rewrite any code. Do you want 3 expression pedal ports?

#define amountOfPorts 3

The maximum number of ports for an Arduino Uno is 4. The Arduino Due can provide a maximum of 8 ports.

# The build

1. Solder wires to your TRS jack inputs
2. Twist the wires together
3. Solder the sleeves to the ground strip on the CC shield
4. Solder the tips to the 5v strip on the CC shield
5. Solder the rings to the corresponding analog inputs on the CC shield

Attach the CC shield to the Arduino, now your device should look a little like this:

1. Follow the instructions on our Github Page and install the dependencies
2. Change the define in the code to the number of ports connected
4. Time for a test drive!
   1. Connect the MOD Duo to the “main” Control Chain port on your new device
   2. Connect your expression pedals and try them out with your MOD Duo!
5. (Optional) Create an enclosure for (semi-)permanent installation; I used an old smartphone box that I had laying around somewhere

# The end result

You just built your own Control Chain device, hopefully the first of many more to come. We are looking forward to seeing what all you wonderful people come up with! Don’t hesitate to come and talk to us on the forums if you have any questions about Control Chain devices, the Arduino shield or our favourite musicians.

Talk to you later!

P.S. Vulfpeck is great

### GStreamer News

#### GStreamer 1.12.0 stable release (binaries)

Pre-built binary images of the 1.12.0 stable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

## May 04, 2017

### GStreamer News

#### GStreamer 1.12.0 stable release

The GStreamer team is pleased to announce the first release in the stable 1.12 release series. The 1.12 release series is adding new features on top of the 1.0, 1.2, 1.4, 1.6, 1.8 and 1.10 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

Full release notes can be found here.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the next days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

## May 02, 2017

### digital audio hacks – Hackaday

#### Robotic Glockenspiel and Hacked HDD’s Make Music

[bd594] likes to make strange objects. This time it’s a robotic glockenspiel and hacked HDD‘s. [bd594] is no stranger to Hackaday either, as we have featured many of his past projects before including the useless candle or recreating the song Funky town from Old Junk.

His latest project is quite exciting. He has incorporated his robotic glockenspiel with a hacked hard drive rhythm section to play audio controlled via a PIC 16F84A microcontroller. The song choice is Axel-F. If you had a cell phone around the early 2000’s you were almost guaranteed to have used this song as a ringtone at some point or another. This is where music is headed these days anyway; the sooner we can replace the likes of Justin Bieber with a robot the better. Or maybe we already have?

Filed under: digital audio hacks, robots hacks

### rncbc.org

#### Vee One Suite 0.8.2 - The Pre-LAC2017 Release frenzy continues...

The Qstuff* pre-LAC2017 release frenzy continues...

The Vee One Suite of old-school software instruments (respectively synthv1, a polyphonic subtractive synthesizer; samplv1, a polyphonic sampler synthesizer; and drumkv1, yet another drum-kit sampler) are all joining the traditional pre-LAC release frenzy!

All available in dual form:

• a pure stand-alone JACK client with JACK-session, NSM (Non Session Management) and both JACK MIDI and ALSA MIDI input support;
• a LV2 instrument plug-in.

The common change-log for this second batch release goes as follows:

• A custom knob/spin-box behavioral option has been added: Configure/Knob edit mode, to avoid abrupt changes upon editing values (still the default behavior) and only take effect (Deferred) when enter is pressed or the spin-box loses focus.
• The main GUI has been partially revamped, after replacing some rotary knob/dial combos with kinda more skeuomorphic fake-LED radio-buttons or check-boxes.
• A MIDI In(put) status fake-LED is now featured on the bottom-left status bar, adding up to eye-candy as usual (applies to all); also, each drum element key/sample now has its own fake-LED flashing on respective MIDI note-on/off events (applies to drumkv1 only).
• Alias-free/band-limited wavetable oscillators have been fixed to prevent cross-octave, polyphonic interference. (applies to synthv1 only).
• A brand new and specific user preference option is now available as Help/Configure.../Options/Use GM standard drum names (default being yes/true/on; applies to drumkv1 only).

The Vee One Suite are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

So here they are again!

## synthv1 - an old-school polyphonic synthesizer

synthv1 0.8.2 (pre-lac2017) is out!

synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.

LV2 URI: http://synthv1.sourceforge.net/lv2

website:
http://synthv1.sourceforge.net

http://sourceforge.net/projects/synthv1/files

git repos:
http://git.code.sf.net/p/synthv1/code
https://github.com/rncbc/synthv1.git
https://gitlab.com/rncbc/synthv1.git
https://bitbucket.org/rncbc/synthv1.git

## samplv1 - an old-school polyphonic sampler

samplv1 0.8.2 (pre-lac2017) is out!

samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.

LV2 URI: http://samplv1.sourceforge.net/lv2

website:
http://samplv1.sourceforge.net

http://sourceforge.net/projects/samplv1/files

git repos:
http://git.code.sf.net/p/samplv1/code
https://github.com/rncbc/samplv1.git
https://gitlab.com/rncbc/samplv1.git
https://bitbucket.org/rncbc/samplv1.git

## drumkv1 - an old-school drum-kit sampler

drumkv1 0.8.2 (pre-lac2017) is out!

drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.

LV2 URI: http://drumkv1.sourceforge.net/lv2

website:
http://drumkv1.sourceforge.net

http://sourceforge.net/projects/drumkv1/files

git repos:
http://git.code.sf.net/p/drumkv1/code
https://github.com/rncbc/drumkv1.git
https://gitlab.com/rncbc/drumkv1.git
https://bitbucket.org/rncbc/drumkv1.git

Enjoy && share the fun ;)

## May 01, 2017

### GStreamer News

#### GStreamer 1.12.0 release candidate 2 (1.11.91, binaries)

Pre-built binary images of the 1.12.0 release candidate 2 of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

## April 30, 2017

### m3ga blog

#### What do you mean ExceptT doesn't Compose?

Disclaimer: I work at Ambiata (our Github presence) probably the biggest Haskell shop in the southern hemisphere. Although I mention some of Ambiata's coding practices, in this blog post I am speaking for myself and not for Ambiata. However, the way I'm using ExceptT and handling exceptions in this post is something I learned from my colleagues at Ambiata.

At work, I've been spending some time tracking down exceptions in some of our Haskell code that have been bubbling up to the top level and killing a complex multi-threaded program. On Friday I posted a somewhat flippant comment to Google Plus:

Using exceptions for control flow is the root of many evils in software.

Lennart Kolmodin, whom I remember from my very earliest days of using Haskell in 2008 and whom I met for the first time at ICFP in Copenhagen in 2011, responded:

Yet what to do if you want composable code? Currently I have
type Rpc a = ExceptT RpcError IO a
which is terrible

But what do we mean by "composable"? I like the Wikipedia definition:

Composability is a system design principle that deals with the inter-relationships of components. A highly composable system provides recombinant components that can be selected and assembled in various combinations to satisfy specific user requirements.

The ensuing discussion, which also included Sean Leather, suggested that these two experienced Haskellers were not aware that with the help of some combinator functions, ExceptT composes very nicely and results in more readable and more reliable code.

At Ambiata, our coding guidelines strongly discourage the use of partial functions. Since the type signature of a function doesn't include information about the exceptions it might throw, the use of exceptions is strongly discouraged. When using library functions that may throw exceptions, we try to catch those exceptions as close as possible to their source and turn them into errors that are explicit in the type signatures of the code we write. Finally, we avoid using String to hold errors. Instead we construct data types to carry error messages and render functions to convert them to Text.

In order to properly demonstrate the ideas, I've written some demo code and made it available in this GitHub repo. It compiles and even runs (provided you give it the required number of command line arguments) and hopefully does a good job demonstrating how the bits fit together.

So let's look at the naive version of a program that doesn't do any exception handling at all.



import Naive.Cat (Cat, parseCat)
import Naive.Db (Result, processWithDb, renderResult, withDatabaseConnection)
import Naive.Dog (Dog, parseDog)

import System.Environment (getArgs)
import System.Exit (exitFailure)

main :: IO ()
main = do
  args <- getArgs
  case args of
    [inFile1, infile2, outFile] -> processFiles inFile1 infile2 outFile
    _ -> putStrLn "Expected three file names." >> exitFailure

readCatFile :: FilePath -> IO Cat
readCatFile fpath = do
  putStrLn "Reading Cat file."
  parseCat <$> readFile fpath

readDogFile :: FilePath -> IO Dog
readDogFile fpath = do
  putStrLn "Reading Dog file."
  parseDog <$> readFile fpath

writeResultFile :: FilePath -> Result -> IO ()
writeResultFile fpath result = do
  putStrLn "Writing Result file."
  writeFile fpath $ renderResult result

processFiles :: FilePath -> FilePath -> FilePath -> IO ()
processFiles infile1 infile2 outfile = do
  cat <- readCatFile infile1
  dog <- readDogFile infile2
  result <- withDatabaseConnection $ \ db ->
    processWithDb db cat dog
  writeResultFile outfile result



Once built as per the instructions in the repo, it can be run with:


dist/build/improved/improved Naive/Cat.hs Naive/Dog.hs /dev/null
Writing Result file '/dev/null'.



The above code is pretty naive and there is zero indication of what can and cannot fail or how it can fail. Here's a list of some of the obvious failures that may result in an exception being thrown:

• Either of the two readFile calls.
• The writeFile call.
• The parsing functions parseCat and parseDog.
• Opening the database connection.
• The database connection could terminate during the processing stage.

So let's see how the use of the standard Either type, ExceptT from the transformers package and combinators from Gabriel Gonzalez's errors package can improve things.

Firstly, the types of parseCat and parseDog were ridiculous. Parsers can fail with parse errors, so these should both return an Either type. Just about everything else should be in the ExceptT e IO monad. Let's see what that looks like:


import           Control.Exception (SomeException)
import           Control.Error (ExceptT, fmapL, fmapLT, handleExceptT
                 , hoistEither, runExceptT)
import           Control.Monad.IO.Class (liftIO)

import           Data.Monoid ((<>))
import           Data.Text (Text)
import qualified Data.Text as T
import qualified Data.Text.IO as T

import           Improved.Cat (Cat, CatParseError, parseCat, renderCatParseError)
import           Improved.Db (DbError, Result, processWithDb, renderDbError
, renderResult, withDatabaseConnection)
import           Improved.Dog (Dog, DogParseError, parseDog, renderDogParseError)

import           System.Environment (getArgs)
import           System.Exit (exitFailure)

data ProcessError
  = ECat CatParseError
  | EDog DogParseError
  | EReadFile FilePath Text
  | EWriteFile FilePath Text
  | EDb DbError

main :: IO ()
main = do
  args <- getArgs
  case args of
    [inFile1, infile2, outFile] ->
      report =<< runExceptT (processFiles inFile1 infile2 outFile)
    _ -> do
      putStrLn "Expected three file names, the first two are input, the last output."
      exitFailure

report :: Either ProcessError () -> IO ()
report (Right _) = pure ()
report (Left e) = T.putStrLn $ renderProcessError e

renderProcessError :: ProcessError -> Text
renderProcessError pe =
  case pe of
    ECat ec -> renderCatParseError ec
    EDog ed -> renderDogParseError ed
    EReadFile fpath msg -> "Error reading '" <> T.pack fpath <> "' : " <> msg
    EWriteFile fpath msg -> "Error writing '" <> T.pack fpath <> "' : " <> msg
    EDb dbe -> renderDbError dbe

readCatFile :: FilePath -> ExceptT ProcessError IO Cat
readCatFile fpath = do
  liftIO $ putStrLn "Reading Cat file."
  bs <- handleExceptT handler $ readFile fpath
  hoistEither . fmapL ECat $ parseCat bs
  where
    handler :: SomeException -> ProcessError
    handler e = EReadFile fpath (T.pack $ show e)

readDogFile :: FilePath -> ExceptT ProcessError IO Dog
readDogFile fpath = do
  liftIO $ putStrLn "Reading Dog file."
  bs <- handleExceptT handler $ readFile fpath
  hoistEither . fmapL EDog $ parseDog bs
  where
    handler :: SomeException -> ProcessError
    handler e = EReadFile fpath (T.pack $ show e)

writeResultFile :: FilePath -> Result -> ExceptT ProcessError IO ()
writeResultFile fpath result = do
  liftIO $ putStrLn "Writing Result file."
  handleExceptT handler . writeFile fpath $ renderResult result
  where
    handler :: SomeException -> ProcessError
    handler e = EWriteFile fpath (T.pack $ show e)

processFiles :: FilePath -> FilePath -> FilePath -> ExceptT ProcessError IO ()
processFiles infile1 infile2 outfile = do
  cat <- readCatFile infile1
  dog <- readDogFile infile2
  result <- fmapLT EDb . withDatabaseConnection $ \ db ->
    processWithDb db cat dog
  writeResultFile outfile result

The first thing to notice is that the changes to the structure of the main processing function processFiles are minor, but all errors are now handled explicitly. In addition, all possible exceptions are caught as close as possible to the source and turned into errors that are explicit in the function return types. Sceptical? Try replacing one of the readFile calls with an error call or a throw and see it get caught and turned into an error as specified by the type of the function.

We also see that despite having many different error types (which happens when code is split up into many packages and modules), a constructor for an error type higher in the stack can encapsulate error types lower in the stack. For example, this value of type ProcessError:

  EDb (DbError3 ResultError1)

contains a DbError which in turn contains a ResultError. Nesting error types like this aids composition, as does the separation of error rendering (turning an error data type into text to be printed) from printing.

We also see that the use of combinators like fmapLT, together with the nested error types of the previous paragraph, means that ExceptT monad transformers do compose.

Using ExceptT with the combinators from the errors package to catch exceptions as close as possible to their source and convert them to errors has numerous benefits, including:

• Errors are explicit in the types of the functions, making the code easier to reason about.
• It's easier to provide better error messages and more context than what is normally provided by the Show instance of most exceptions.
• The programmer spends less time chasing the source of exceptions in large complex code bases.
• More robust code, because the programmer is forced to think about and write code to handle errors instead of error handling being an optional afterthought.

Want to discuss this? Try reddit.
## April 27, 2017

### rncbc.org

#### The QStuff* Pre-LAC2017 Release frenzy started...

Greetings! The Qstuff* pre-LAC2017 release frenzy is getting started... Enjoy the first batch, more to come and have fun!

## QjackCtl - JACK Audio Connection Kit Qt GUI Interface

QjackCtl 0.4.5 (pre-lac2017) is now released!

QjackCtl is a(n ageing but still) simple Qt application to control the JACK sound server, for the Linux Audio infrastructure.

Website:

http://qjackctl.sourceforge.net

Project page:

http://sourceforge.net/projects/qjackctl

Downloads:

http://sourceforge.net/projects/qjackctl/files

Git repos:

http://git.code.sf.net/p/qjackctl/code
https://github.com/rncbc/qjackctl.git
https://gitlab.com/rncbc/qjackctl.git
https://bitbucket.com/rncbc/qjackctl.git

Change-log:

• On some desktop-shells, the system tray icon blinking on XRUN occurrences has been found responsible for excessive CPU usage; this "eye-candy" effect is now optional, as far as Setup/Display/Blink server mode indicator goes.
• Added French man page (by Olivier Humbert, thanks).
• Make builds reproducible byte for byte, by getting rid of the configure build date and time stamps.

## Qsynth - A fluidsynth Qt GUI Interface

Qsynth 0.4.4 (pre-lac2017) is now released!

Qsynth is a FluidSynth GUI front-end application written in C++ around the Qt framework using Qt Designer.

Website:

http://qsynth.sourceforge.net

Project page:

http://sourceforge.net/projects/qsynth

Downloads:

http://sourceforge.net/projects/qsynth/files

Git repos:

http://git.code.sf.net/p/qsynth/code
https://github.com/rncbc/qsynth.git
https://gitlab.com/rncbc/qsynth.git
https://bitbucket.com/rncbc/qsynth.git

Change-log:

• Added French man page (by Olivier Humbert, thanks).
• Make builds reproducible byte for byte, by getting rid of the configure build date and time stamps.

## Qsampler - A LinuxSampler Qt GUI Interface

Qsampler 0.4.3 (pre-lac2017) is now released!

Qsampler is a LinuxSampler GUI front-end application written in C++ around the Qt framework using Qt Designer.

Website:

http://qsampler.sourceforge.net

Project page:

http://sourceforge.net/projects/qsampler

Downloads:

http://sourceforge.net/projects/qsampler/files

Git repos:

http://git.code.sf.net/p/qsampler/code
https://github.com/rncbc/qsampler.git
https://gitlab.com/rncbc/qsampler.git
https://bitbucket.com/rncbc/qsampler.git

Change-log:

• Added French man page (by Olivier Humbert, thanks).
• Make builds reproducible byte for byte, by getting rid of the configure build date and time stamps.

## QXGEdit - A Qt XG Editor

QXGEdit 0.4.3 (pre-lac2017) is now released!

QXGEdit is a live XG instrument editor, specialized on editing MIDI System Exclusive files (.syx) for the Yamaha DB50XG and thus probably a baseline for many other XG devices.

Website:

http://qxgedit.sourceforge.net

Project page:

http://sourceforge.net/projects/qxgedit

Downloads:

http://sourceforge.net/projects/qxgedit/files

Git repos:

http://git.code.sf.net/p/qxgedit/code
https://github.com/rncbc/qxgedit.git
https://gitlab.com/rncbc/qxgedit.git
https://bitbucket.com/rncbc/qxgedit.git

Change-log:

• Added French man page (by Olivier Humbert, thanks).
• Added one decimal digit to the randomize percentage input spin-boxes on the General Options dialog.
• Make builds reproducible byte for byte, by getting rid of the configure build date and time stamps.

## QmidiCtl - A MIDI Remote Controller via UDP/IP Multicast

QmidiCtl 0.4.3 (pre-lac2017) is now released!

QmidiCtl is a MIDI remote controller application that sends MIDI data over the network, using UDP/IP multicast. Inspired by multimidicast (http://llg.cubic.org/tools) and designed to be compatible with ipMIDI for Windows (http://nerds.de). QmidiCtl was primarily designed for Maemo-enabled handheld devices, namely the Nokia N900, and is also promoted to the Maemo Package repositories. Nevertheless, QmidiCtl may still be found effective as a regular desktop application as well.

Website:

http://qmidictl.sourceforge.net

Project page:

http://sourceforge.net/projects/qmidictl

Downloads:

http://sourceforge.net/projects/qmidictl/files

Git repos:

http://git.code.sf.net/p/qmidictl/code
https://github.com/rncbc/qmidictl.git
https://gitlab.com/rncbc/qmidictl.git
https://bitbucket.com/rncbc/qmidictl.git

Change-log:

• Added French man page (by Olivier Humbert, thanks).
• Make builds reproducible byte for byte, by getting rid of the configure build date and time stamps.

## QmidiNet - A MIDI Network Gateway via UDP/IP Multicast

QmidiNet 0.4.3 (pre-lac2017) is now released!

QmidiNet is a MIDI network gateway application that sends and receives MIDI data (ALSA-MIDI and JACK-MIDI) over the network, using UDP/IP multicast. Inspired by multimidicast and designed to be compatible with ipMIDI for Windows.

Website:

http://qmidinet.sourceforge.net

Project page:

http://sourceforge.net/projects/qmidinet

Downloads:

http://sourceforge.net/projects/qmidinet/files

Git repos:

http://git.code.sf.net/p/qmidinet/code
https://github.com/rncbc/qmidinet.git
https://gitlab.com/rncbc/qmidinet.git
https://bitbucket.com/rncbc/qmidinet.git

Change-log:

• Added new and replaced old system-tray menu icons.
• Make builds reproducible byte for byte, by getting rid of the configure build date and time stamps.

License:

All of the Qstuff* are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Enjoy && keep the fun!

### GStreamer News

#### GStreamer 1.12.0 release candidate 2 (1.11.91)

The GStreamer team is pleased to announce the second release candidate of the stable 1.12 release series. The 1.12 release series is adding new features on top of the 1.0, 1.2, 1.4, 1.6, 1.8 and 1.10 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

Full release notes will be provided with the 1.12.0 release, highlighting all the new features, bugfixes, performance optimizations and other important changes. An initial, unfinished version of the release notes can be found here already.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the next days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

## April 26, 2017

### open-source – CDM Create Digital Music

#### ArduTouch is an all-in-one Arduino synthesizer learning kit for $30

This looks like a near-perfect platform for learning synthesis with Arduino – and it’s just US $30 (with an even lower target price of $25).

It’s called ArduTouch, a new Arduino-compatible music synth kit. It’s fully open source – everything you need to put this together is available on GitHub. And it’s the work of Mitch Altman, something of a celebrity in DIY/maker circles.

Mitch is the clever inventor of the TV B-Gone – an IR blaster that lets you terminate TV power in places like airport lounges – plus brainwave-tickling gear like the Neurodreamer and Trip Glasses. (See his Cornfield Electronics manufacturer.) Indeed, some ten years ago when CDM hosted its first MusicMakers / Handmade Music event in New York, Mitch happened to be in town and put us all in a pleasant, totally drug-free trance state with his glasses. He’s also a music fan, though, so it’s great to see him get back into music synthesis.

And ArduTouch is hugely clever. It’s an Arduino clone, but instead of just some headers and pins for connecting wires (boring), it also adds a PCB touch keyboard for playing notes, some extra buttons and pots so you can control sounds, and an all-important amp and speaker, so you can hear the results on just the board. (You’ll obviously want to plug into extra gear for more power and loudness.)

You don’t have to code. Just put this together, and you can start making music.

That’s already pretty cool, but the real magic comes in the form of two additional ingredients:

Software. ArduTouch is a new library that enables the synthesis capabilities of the board. This means you can also customize synth functionality (like adding additional control or modifying the sound), or create your own synths.

Tutorials. When you want to go deeper, the other side of this is a set of documentation to teach you the basics of DSP (digital signal processing) using the board and library.

In other words, what you’ve got is an all-hardware course on DSP coding, on a $30 board. And that’s just fabulous. I’ve always thought working at a low level with hardware is a great way to get into the basics, especially for those with no previous coding background.

Looks like I’ve got a summer project. Stay tuned. And thanks, Mitch! This obviously needs videos and sound samples and the like, so – guess we should get on that!

https://github.com/maltman23/ArduTouch

In the meantime, though, here’s Mitch with some great inspiration on what hacking and making is about. Mitch is uncommonly good at teaching and explaining and generally being a leader for all kinds of people worldwide. Have a look:

He also walks people through the hackerspace movement and where it came from – especially meaningful to us, as the hacklabs and knowledge transfer projects we host are rooted directly in this legacy (including via Mitch’s own contributions). This talk is really a must-watch, as it’s one of the best explanations I’ve seen of what this is about and how to make it work:

Don’t know how to solder? Mitch has you covered:

And for a how-to that’s equally important, Mitch talks about how to do what you love:

The post ArduTouch is an all-in-one Arduino synthesizer learning kit for $30 appeared first on CDM Create Digital Music.

## April 24, 2017

### Audio – Stefan Westerfeld's blog

#### 24.04.2017 spectmorph-0.3.2 and www.spectmorph.org updates

Finally, after taking the time to integrate improvements, spectmorph-0.3.2 was released. The main feature is certainly the new unison effect. By adding up multiple detuned copies of the same sound, it produces the illusion of multiple instruments playing the same notes. Of course this is just a cheap approximation of what it would sound like if you really recorded multiple real instruments playing the same notes, but it at least makes the sound “seem” fatter than without the effect.

At the same time, the website was redesigned and improved. Besides the new look and feel, there is now also a piece of music called “Morphing Motion” which was made with the newest version of the SpectMorph VST plugin.

Visit www.spectmorph.org to get the new version or listen to the audio demos.

## April 20, 2017

### Libre Music Production - Articles, Tutorials and News

#### Unfa : Helm's deep

In this extensive video, Unfa uses Helm, the new light gun of the Linux Audio Synthesis arsenal, to compose a full drums+bass+melody track on the fly!

# Combining text and music

If you want to create a document with lots of text and some small musical snippets, e.g. an exercise sheet or a musical analysis, what software can you use?

Of course it’s possible to do the entire project in LilyPond or another notation program, inserting passages of text between multiple scores – in LilyPond by combining \markup and \score sections:

\markup { "A first motif:" }
\score { \relative c' { c4 d e f  g2 g } }
\markup { "A second motif:" }
\score { \relative c'' { a4 a a a  g1 } }

However, it is clear that notation programs are not originally designed for that task, so many people prefer WYSIWYG word processors like LibreOffice Writer or Microsoft Word that instantly show what the final document will look like. In these text documents music fragments can be inserted as image files that can for example be generated with LilyPond from .ly input files. Of course these images are then static, and to be able to modify the music examples one has to manage the additional files with some care. That’s when things might get a little more complicated…
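As a sketch of that external-file workflow, such a snippet image could be generated from the command line. The helper and the file name below are hypothetical (not part of any tool mentioned here), but `-dpreview` and `-dresolution` are real LilyPond options:

```python
# Illustrative sketch: build the LilyPond command line that renders a .ly
# snippet to a cropped PNG. The helper name and "motif.ly" are hypothetical;
# -dpreview crops the image to the music, -dresolution sets the dpi.
def lilypond_png_command(ly_file, resolution=300):
    """Return the argument list for rendering ly_file to a cropped PNG."""
    return [
        "lilypond",
        "-dpreview",                   # produce a cropped "preview" image
        f"-dresolution={resolution}",  # raster resolution in dpi
        "--png",                       # output format: PNG
        ly_file,
    ]

# With LilyPond installed, the image could then be produced with e.g.:
#   import subprocess
#   subprocess.run(lilypond_png_command("motif.ly"), check=True)
```

The resulting motif.preview.png could then be inserted into the word processor document like any other image.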

Wouldn’t it be a killer feature to be able to edit the scores directly from within the word processor document, without having to keep track of and worry about additional files? Well, you may be surprised to learn that this has already been possible for quite some time, and I take the relaunch of OOoLilyPond as an opportunity to show it to you.

## What is OOoLilyPond?

OOoLilyPond (OLy) is a LibreOffice extension that allows you to insert snippets of LilyPond code into LibreOffice Writer, Draw and Impress documents and transparently handles the rendering through LilyPond. So you can write LilyPond code, have it rendered as a score, and still be able to modify it later.

OOoLilyPond was originally written by Samuel Hartmann and had its first launch in 2006 (hence the name: at that time, the open-source successor of StarOffice was OpenOffice.org).
Samuel continued the development until 2009, when a stable version 0.4.0 with new features was released. In the following years, OLy was occasionally mentioned in LilyPond’s user forums, so there might be several people who use it regularly – including myself. Being a music teacher, I work with it every day. Well, almost…

In 2014, LilyPond’s new 2.19 release showed a different behaviour when invoked by the command used in OLy. This led to a somewhat mysterious error message, and the macro execution was aborted. As a result, it was impossible to use OLy with LilyPond’s new development versions. Of course I googled the problem, but there was no answer.

One day I decided to get to the bottom of it. I’m one of those guys who have to unscrew anything they get their hands on. OLy is open source and published under the GPL, so why hesitate? After studying the code for a while, I finally found that the problem was surprisingly small and easy to fix. I posted my solution on the LilyPond mailing list and also began to experiment with new features.

Urs Liska and Joram Berger had already contacted Samuel in the past. They knew that he did not have the time to further work on OOoLilyPond, but he would be glad if someone else could take over the development of the project.

Urs and Joram also contributed lots of work, knowledge and ideas, so that we were finally able to publish a new release that can be adapted to the slightly different characteristics of LibreOffice and OpenOffice, that can be translated into other languages, that can make use of vector graphics etc. This new take on the project now has its home within openLilyLib.

## How to get and install it

The newest release will always be found at github.com/openlilylib/LO-ly/releases where the project is maintained. Look for an *.oxt file with a name similar to OOoLilyPond-0.X.X.oxt and download it:

For anyone who doesn’t want to read the release notes, there’s a simple Download page as well.

In LibreOffice, open the extension manager (Tools -> Extension Manager), click the “Add” button which will open a file dialog. Select the *.oxt file you’ve just downloaded and confirm with the “Open” button.

When asked for whom you want to install the extension, you can choose “only for me”, which won’t require administrator privileges on your system. After successful installation, close the extension manager; you will probably be asked to restart LibreOffice.

Now LibreOffice will have a new “OLy” toolbar. It contains a single “OLy” button that launches the extension.

## Launching for the first time

Here we go: Create a new Writer document and click the OLy button. (Don’t worry if you get some error messages telling you that LilyPond could not be executed. Just click “OK” to close the message boxes. We’ll fix that in a moment.)

Now you should see the OOoLilyPond Editor window.

First, let’s open the configuration dialog by clicking the “Config” button at the bottom:

A new window will pop up:

Of course, you need to have LilyPond installed on your system. In the “LilyPond Executable” field, you need to specify the executable file for LilyPond. On startup, OLy has tried to guess its correct (default) location. If that didn’t work, you already got some error messages.

For a Windows system, you need to know the program folder (probably C:\Program Files (x86)\LilyPond on 64-bit Windows or C:\Program Files\LilyPond on 32-bit Windows systems).
In the subfolder \usr\bin\ you will find the executable file lilypond.exe.

If you are working with Linux, relax and smile. Usually, you simply need to specify lilypond as the command, without any path settings. As far as I know, that also applies to the Mac OS family, which is based on Unix as well.

On the left side, there are two frames titled “Insert Images”. Depending on the Office version you are using (OpenOffice or LibreOffice), select the appropriate options.

For the moment, all remaining settings can be left at their default values. In case you’ve messed up anything, there’s also a “Reset to Default” button.

At the right bottom, click “OK” to apply the settings and close the dialog. Now you are back in the main Editor window. It contains some sample code, so just click the “LilyPond” button at the bottom right.

In the background, LilyPond is now translating the code into a *.png graphic file which will be inserted into Writer. The code itself is invisibly saved inside the document.

After a few seconds, the editor window should disappear, and a newly created image should show up.

## How to work with it

If you want to modify an existing OLy object, click on it to select it in Writer. Then, hit the “OLy” button.

The Editor window will show the code as it has been entered before. Here you can modify it, e.g. change some pitches (there’s also no need to keep the comments) and click the “LilyPond” button again. OLy will generate a new image and replace the old one.

To insert a new OLy object, just make sure that no existing object is selected when hitting the “OLy” button.

## Templates

In the Editor window, you might have noticed that you were not presented an entire LilyPond file, but only an excerpt of it. This is because OLy always works with a template. It allows you to quickly enter short snippets while not having to care about any other settings for layout etc.

The snippet you just created is based on the template Default.ly which looks (more or less) like this:

\transpose %{OOoLilyPondCustom1%}c c'%{OOoLilyPondEnd%}
{
  %{OOoLilyPondCode%}
  \key e \major
  e8 fis gis e fis8 b,4. |
  e2\fermata \bar "|."
  %{OOoLilyPondEnd%}
}

\include "lilypond-book-preamble.ly"
#(set-global-staff-size %{OOoLilyPondStaffSize%}20%{OOoLilyPondEnd%})

\paper {
  #(define dump-extents #t)
  ragged-right = ##t
  line-width = %{OOoLilyPondLineWidth%}17\cm%{OOoLilyPondEnd%}
}

\layout {
  indent = #0
  \context {
    \Score
    \remove "Bar_number_engraver"
  }
}


In the Editor window, there are five text fields: the big “Code” area on top, and four additional small fields named “Line Width”, “Staff Size”, “Custom 1” and “Custom 2”. They contain the template parts that are enclosed by tags, i.e. preceded by %{OOoLilyPondCode%}, %{OOoLilyPondLineWidth%}, %{OOoLilyPondStaffSize%}, %{OOoLilyPondCustom1%} and %{OOoLilyPondCustom2%} respectively, each terminated by %{OOoLilyPondEnd%}. (The tags themselves are ignored by LilyPond because they are comments.)
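To illustrate the tag mechanism, here is a sketch of how the editable sections could be pulled out of a template with a regular expression. This is illustrative only: OOoLilyPond itself is implemented as an Office macro, not in Python, and the helper below is not part of the extension.

```python
# Illustrative sketch (not OOoLilyPond's actual implementation): extract the
# editable sections of a template by matching the %{OOoLilyPond...%} tags.
import re

TAG_RE = re.compile(r"%\{OOoLilyPond(\w+)%\}(.*?)%\{OOoLilyPondEnd%\}", re.DOTALL)

def editable_sections(template):
    """Map each tagged section name (Code, LineWidth, ...) to its default text."""
    return {name: body for name, body in TAG_RE.findall(template)}

template = (
    "line-width = %{OOoLilyPondLineWidth%}17\\cm%{OOoLilyPondEnd%}\n"
    "{ %{OOoLilyPondCode%}c'4 d' e' f'%{OOoLilyPondEnd%} }"
)
print(editable_sections(template))
```

Everything outside the tags stays fixed, which is exactly why only the tagged parts show up as editable fields.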

All remaining parts of the template stay “invisible” to the user and cannot be changed. Don’t worry, you can modify existing templates and create your own.

A template must at least have a Code section; other sections are optional. There is a template Direct to LilyPond which only consists of a Code section and contains no “invisible” parts at all. You can use it to paste ordinary *.ly files into your document. But please keep in mind that the resulting graphic should be smaller than your paper size.

Most templates (the ones without [SVG] inside the file name) make use of \include "lilypond-book-preamble.ly", which results in a cropped image. Any whitespace around the music is automatically removed.

Below the code view, there is a dropdown field that lets you choose which template to use. Of course, different templates have different default code in their Code sections.

When switching the template, the code field will always update to the corresponding default code as long as you haven’t made any edits yet. However, this will not happen automatically if you have already made changes. To have your current code replaced anyway, tick the “Default Code” checkbox.

The “Edit” button will open a new dialog where you can edit the current template. Optionally, you can save it under a new file name.

## Easier editing

Probably you are used to a particular text editor when working on LilyPond files. Of course you can use it for OLy templates as well. The path to the template files can be found (and changed) in the configuration dialog. Here you can also specify where your text editor’s executable file is located. You can use any text editor like Mousepad, Notepad etc., but if you don’t yet know Frescobaldi, you really should give it a try.

Back in the main OLy window, another button might be useful: “Open as temp. file in Ext. Editor”. It saves the entire snippet into a *.ly file – not only the contents of the “Code” field, but also the other fields and the “invisible” parts between them. This file is opened in the external editor you’ve specified before. If you use an IDE like Frescobaldi, you can instantly preview your changes.

As soon as editing is finished, save your changes (without changing the file name). You can now close your external editor.

Back in OLy, hit the “Import from temp. file” button to load the updated file back into OLy. In the text fields you will recognize the changes you have applied. Hit the “LilyPond” button to insert the graphic into your document.

A word of caution: Only changes to the Code, Line Width, Staff Size, Custom 1 and Custom 2 fields are recognized. Changes to the “invisible” parts of the template are ignored! If you intend to modify those sections as well, you need to create a new template.

A very last word of caution: If you use a template that you have modified or created yourself, and you share your Office document with other collaborators, you have to share your template as well.

## To be continued…

OLy can be configured for using vector graphic formats (*.svg or *.eps) instead of *.png. They offer better quality, especially for printing. However, some additional things will have to be considered. This will soon be covered in a follow-up post.

## April 13, 2017

### News – Ubuntu Studio

#### Ubuntu Studio 17.04 Released

We are happy to announce the release of our latest version, Ubuntu Studio 17.04 Zesty Zapus! As a regular version, it will be supported for 9 months. Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a complete list […]

## April 10, 2017

### GStreamer News

#### GStreamer 1.12.0 release candidate 1 (1.11.90, binaries)

Pre-built binary images of GStreamer 1.12.0 release candidate 1 are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

### aubio

#### 0.4.5 released

A new version of aubio, 0.4.5, is available.

This version features:

• a new aubio python command line tool to extract information from sound files
• improved default parameters for onset detection, using adaptive spectral whitening and compression
• support for libswresample

New options --miditap-note and --miditap-velo have been added to aubioonset and aubiotrack to adjust the note and velocity of the MIDI note emitted by onsets and beats.

0.4.5 also comes with a bunch of fixes, including improved documentation, build system fixes, and platform compatibility.

Many thanks to Martin Hermant (@MartinHM), Sebastian Böck (@superbock), Travis Seaver (@tseaver) and others for their help and contributions.

### digital audio hacks – Hackaday

#### Custom Media Center Maintains Look of 70s Audio Components

Slotting a modern media center into an old stereo usually means adding Bluetooth and a Raspberry Pi to an amp or receiver, and maybe adding a few discrete connectors on the back panel. But this media center for a late-70s Braun hi-fi (translated) goes many steps beyond that — it fabricates a component that never existed.

The article is in German, and the Google translation is a little spotty, but it’s pretty clear what [Sebastian Schwarzmeier] is going for here. The Braun Studio Line of audio components was pretty sleek, and to avoid disturbing the lines of his stack, he decided to create a completely new component and dub it the “M301.”

The gutted chassis of an existing but defunct A301 amplifier became the new home for a Mac Mini, Blu-Ray drive, and external hard drive. An HDMI port added to the back panel blends in with the original connectors seamlessly. But the breathtaking bit is a custom replacement hood that looks like what the Braun designers would have come up with if “media center” had been a term in the 70s.

From the brushed aluminum finish, to the controls, to the logo and lettering, everything about the component that never was shows an attention to detail that really impresses. But if you prefer racks of servers to racks of audio gear, this media center built into a server chassis is sure to please too.

Thanks to [Sascho] and [NoApple4Me] for the nearly simultaneous tips on this one.

Filed under: classic hacks, digital audio hacks

## April 07, 2017

### GStreamer News

#### GStreamer 1.12.0 release candidate 1 (1.11.90)

The GStreamer team is pleased to announce the first release candidate of the stable 1.12 release series. The 1.12 release series is adding new features on top of the 1.0, 1.2, 1.4, 1.6, 1.8 and 1.10 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

Full release notes will be provided with the 1.12.0 release, highlighting all the new features, bugfixes, performance optimizations and other important changes.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the coming days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

## April 04, 2017

### Linux – CDM Create Digital Music

#### Waveform woos DAW switchers with clean UI, features, Raspberry Pi

The struggle to make an all-in-one computer production tool that’s different continues.

Tracktion, a lesser-known “indie” DAW that has seen a rapid resurgence in recent builds, is now back in a new generation version dubbed Waveform. As usual, the challenge is to make something that kind of does everything, and necessarily needs to do all the things the competition does, while still being somehow different from that competition.

Waveform’s answer is to build on Tracktion’s clean UI by making it yet more refined. It builds on its open workflow by adding modular mixing and enhanced “racks” for processing. And it runs on Linux – including Ubuntu 14.04 and 16.04, the Mate desktop environment, and ultra-cheap Raspberry Pi hardware.

For producers, there are also some sweeteners added. There’s an integrated synth/sampler called Collective. And you get a whole bunch of MIDI generators, interestingly – atop a step sequencer already included with Tracktion, there are new pattern and chord generators. That’s interesting, in that it moves the DAW into the territory of things like FL Studio – or at the very least assumes you may want to use this environment to construct your ideas.

Oh yeah, and there’s a cute logo that looks, let’s be honest here, very reminiscent of something that rhymes with Bro Drools. (sorry)

Obligatory promo vid:

It looks attractive, certainly, and seems to go up against the likes of Studio One for clean-and-fresh DAW (plus standbys like Reaper). But this is a crowded field, full of people who don’t necessarily have time to switch from one tool to another. Pricing runs $99–$200 for the full version depending on bundled features, and upgrades are $50 or free for Tracktion users – meaning they’ll be happy, I suspect.

If you’re up for reviewing this, let us know.

https://www.tracktion.com/products/waveform

The post Waveform woos DAW switchers with clean UI, features, Raspberry Pi appeared first on CDM Create Digital Music.

## April 03, 2017

### open-source – CDM Create Digital Music

#### This software is like getting a modular inside your computer, for free

Modular synthesizers present some beautiful possibilities for sound design and composition. For constructing certain kinds of sounds, and certain automated rhythmic and melodic structures, they’re beautiful – and endure for a reason.

Now, that description could fit both software and hardware modulars. And of course, hardware has some inarguable, irreplaceable advantages. But the same things that make it great to work with can also be limiting. You can’t dynamically change patches without some plugging and replugging, you’re limited by what modules you’ve got bolted into a rack, and … oh yeah, apart from size and weight, these things cost money.

So let’s sing the praises of computers for a moment – because it’s great that we can choose either, or both.

Money alone is reason enough. I think anyone with a cheap-ass laptop and absolutely no cash should still get access to the joy of modular. Deeper pockets don’t mean more talent. And beyond that, there are advantages to working with environments that are dynamic, computerized, and even open and open source. That’s true enough whether you use them on their own or in conjunction with hardware.

Enter Automatonism, by Johan Eriksson.

It’s free, it’s open source, it’s a collection of modules built in Pure Data (Pd). That means you can run it on macOS, Windows, and Linux, on a laptop or on a Raspberry Pi, or even build patches you use in games and apps.

And while there are other free modular tools for computers, this one is uniquely hardware modular-like in its design — meaning it’s more approachable, and uses the signal flow and compositional conception from that gear. Commercial software from Native Instruments (REAKTOR Blocks) and Softube (Modular) have done that, and with great sound and prettier front panels, but this may be the most approachable free and open source solution. (And it runs everywhere Pd runs, including mobile platforms.)

Sure, you could build this yourself, but this saves loads of time.

You get 67 modules, covering all the basics (oscillators and filters and clocks and whatnot) and some nice advanced stuff (FM, granular delays, and so on).

The modules are coupled with easy-to-follow documentation for building your basic West Coast and East Coast synth patches, too. And the developer promises more modules are coming – or you can build your own, using Pd.

Crucially, you can also use all of this in real-time — whereas Pd normally is a glitchy mess while you’re patching. Johan proves that by doing weird, wonderful live patching performances:

If you know how to use Pd, this is all instantly useful – and even advanced users I’m sure will welcome it. But you really don’t need to know much about Pd.

The developer claims you don’t need to know anything, and includes easy instructions. But you’ll want to know something, as the first question on the video tells me. Let’s just solve this right now:

Q. I cannot get my cursor to change from the pointer finger to an arrow. I can drag modules and connect them but I can’t change any parameters. What am I missing?

A. That’s because Pure Data has two modes of operation: EDIT mode and PERFORMANCE mode. EDIT mode, the pointer finger, lets you drag stuff around and connect cables, while PERFORMANCE mode, the arrow, lets you interact with sliders and other GUI objects. Swap between the two easily under the EDIT menu in Pure Data or by shortcut cmd+e [ctrl-e Windows/Linux]

This is also a bit like software-with-concept album, as the developer has also created a wild, ear-tickling IDM EP to go with it. This should give you an idea of the range of sounds possible with Automatonism; of course, your own musical idiom can be very different, if you like, using the same tools. I suspect some hardware lovers will listen to this and say “ah, that sounds like a computer, not warm analog gear.” To that, I say… first, I love Pd’s computer-ish character, and second, you can design sounds, process, mix, and master to make the end result sound like anything you want, anyway, if you know what you’re doing.

Johan took a pretty nerdy, Pd purist angle on this, and … I love it for what it is!

AUTOMATONISM #1 by Automatonism

But this is truly one of the best things I’ve seen with Pd in a long time — and perhaps the best-documented project for the platform yet, full stop.

It’s definitely becoming part of my music toolkit. Have a look:

https://www.automatonism.com/

The post This software is like getting a modular inside your computer, for free appeared first on CDM Create Digital Music.

## April 01, 2017

### digital audio hacks – Hackaday

#### Cerebrum: Mobile Passwords Lifted Acoustically with NASB

There are innumerable password hacking methods, but recent advances in acoustic and accelerometer sensing have opened the door to side-channel attacks, where passwords or other sensitive data can be extracted from the acoustic properties of the electronics and human interface to the device. A recent and dramatic example is the hacking of RSA encryption simply by listening to the frequencies of sound a processor puts out when crunching the numbers.

Now there is a new long-distance hack on the scene. The Cerebrum system represents a recent innovation in side-channel password attacks leveraging acoustic signatures of mobile and other electronic devices to extract password data at stand-off distances.

Research scientists at cFREG provide a compelling demonstration of the Cerebrum prototype. It uses Password Frequency Sensing (PFS), where the acoustic signature of a password being entered into an electronic device is acquired, sent up to the cloud, passed through a proprietary deep learning algorithm, and decoded. Demonstrations and technical details are shown in the video below.

Many of these methods have been shown previously, as explained by MIT researcher T. M. Gil in his iconic paper,

“In recent years, much research has been devoted to the exploration of von Neumann machines; however, few have deployed the study of simulated annealing. In fact, few security experts would disagree with the investigation of online algorithms [25]. STEEVE, our new system for game-theoretic modalities, is the solution to all of these challenges.”

To counter this argument, the researchers at cFREG have taken it to a much higher and far more accurate level.

## Measurements

The Cerebrum team began their work by prototyping systems to increase the range of their device. The first step was to characterize the acoustic analog front end and transducers with particular attention paid to the unorthodox acoustic focusing element:

The improvements are based on the ratio of Net Air-Sugar Boundaries (NASB) using off-the-shelf marshmallows. Temperature probing is integral for calibrating this performance, and with this success they moved on to field testing the long-range system.

## Extending the Range

The prototype was tested by interfacing a magnetic loop antenna directly onto the Cerebrum through a coax-to-marshmallow transition. By walking the street with a low-profile loop antenna, numerous passwords were successfully detected and decoded.

## War Driving with PFS

To maximize range, additional antenna apertures were added and mounted onto a mobile platform, including a log periodic, an X-band parabolic dish, and a magnetic loop antenna to capture any and all low-frequency data. In this configuration it was possible to collect vast quantities of passwords upwards of ½ mile from the vehicle, resulting in a treasure trove of passwords.

Without much effort the maximum range and overall performance of the Cerebrum PFS was dramatically increased opening up a vast array of additional applications. This is an existing and troubling vulnerability. But the researchers have a recommended fix which implements meaningless calculations into mobile devices when processing user input. The erroneous sound created will be enough to fool the machine learning algorithms… for now.

Filed under: digital audio hacks, Fiction, security hacks

## March 30, 2017

### Linux – CDM Create Digital Music

#### The music software that’s everywhere is now in the browser too: SunVox Web

Oh, sure, some developers think it’s a big deal if their software runs on Mac and Windows. (Whoo!) SunVox has a different idea of cross-platform – a slightly more complete one.

Alexander Zolotov is a mad genius. His SunVox has all the patchable sound design of a modular synth. But it also has all the obsessive-compulsive pattern editing of a tracker. So on any single platform, it’s already two tools in one.

And it doesn’t run on just one single platform. It’s on Windows (pretty much any version). It’s on macOS – all the way back to 10.6. (Kudos: Alexander is, I think, the only person outside Apple apart from me who correctly types “macOS” according to the recently revised Apple convention.)

It runs on Linux. Oh, does it ever. It runs on 64-bit Linux. It runs on 32-bit Intel Linux. It runs on Raspberry Pi and ARM devices. It runs on 64-bit ARM devices (PINE64). It runs on the PocketCHIP. It runs on Maemo. It runs on MeeGo.

It runs on iOS – all the way back to 7.0.

It runs on Android – back to the truly ancient 2.3.

It runs on Pocket PC and Windows Mobile and Windows CE. (just on ARM, but … who’s counting at this point?)

It runs on PalmOS.

You get the idea.

And now, impossibly, Alexander has gotten the whole thing running in JavaScript. The library is just 1.3 MB – so (shamefully) probably smaller than the page on which you’re reading this. And song files are just a few kilobytes.

That’s a big deal, because not only does this mean you could soon be running SunVox in your browser, but Alexander promises a library other websites could use, too.

This is all experimental, but I think it’s a big sign of things to come. Check it out here:

http://warmplace.ru/soft/sunvox/jsplay/

The other reason to talk about this now is the secret sauce that’s running this. You could turn your own efficient C code into browser form, too. I don’t think that necessarily means you’ll want to release music software in-browser, but it could be a huge boon to educational applications (for example), and certainly to open source projects that run on both hardware and software:

http://kripken.github.io/emscripten-site/

Anyway, while you wait, if you aren’t running SunVox already you have no excuse, unless you’re using a really non-standard platform. (Yes, Jane, I see you with your AMIGA.)

Or, you know, hit play and sing along.

The post The music software that’s everywhere is now in the browser too: SunVox Web appeared first on CDM Create Digital Music.

### Scores of Beauty

#### Supporting Multiple LilyPond Versions

LilyPond’s input language occasionally evolves to accommodate new features or to simplify how things can be expressed. Sometimes these syntax changes can break compatibility of existing documents with newer versions of LilyPond, but upgrading is generally a painless process thanks to the convert-ly script, which in most cases can automatically update input files. However, sometimes it is necessary to write LilyPond files that are compatible with multiple LilyPond versions, for example when creating libraries like openLilyLib. The key to this is writing conditional code that depends on the currently executed LilyPond version, and in this post I will describe how this has just become easier with LilyPond.

#### Breaking Changes

Occasionally, improvements in LilyPond development require changes in the input syntax that don’t allow an input file to be compiled with LilyPond versions both before and after a certain version. If you are working with the development versions this can occur more often, as introducing and revising new syntax is usually the most “unstable” aspect of the so-called unstable version. I will give just one recent example.

For ages you had to provide two obscure arguments, parser and location, when defining music functions:

oldStyleRepeat =
#(define-music-function (parser location my-music)(ly:music?)
   #{
     #my-music #my-music
   #})

This function simply returns the given music expression twice, and the parser and location arguments don’t do anything here. In fact – and this had always confused me – you could switch the two arguments or even use totally different names. This is because LilyPond implicitly passes two arguments to the function, an object representing the current input parser and one representing the input location from which the function is called. These are then bound to the names given in the function definition, which by convention are the two obvious names for these arguments. But as long as the arguments are not actually used in the function body they and their names are basically irrelevant. Two use cases where they are used are shown in the following two functions:

oldStyleMessage =
#(define-void-function (parser location msg)(string?)
   (ly:input-message location msg))

oldStyleInclude =
#(define-void-function (parser location filename)(string?)
   (ly:parser-include-string parser (format "\\include \"~a\"" filename)))

\oldStyleMessage prints the given text and provides a link to the input location, the place in the file where the function is called from; \oldStyleInclude tries to include a file with the given name. Each function makes use of one of the two arguments.
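
This pattern, arguments supplied implicitly by the runtime rather than explicitly by the caller, can be modeled outside LilyPond too. Here is a rough Python analogy using context variables; the names `location` and `input_message` are invented for illustration, and this is only a sketch of the idea, not LilyPond’s actual mechanism:

```python
import contextvars

# A context variable plays the role of an implicitly supplied
# argument: callers never pass it, the surrounding machinery binds it.
location = contextvars.ContextVar("location", default="<unknown location>")

def input_message(msg):
    # Reads the ambient location instead of taking it as a parameter.
    return f"{location.get()}: {msg}"

# The "runtime" sets the binding before user code runs:
location.set("score.ly:12:3")
print(input_message("check this bar"))  # score.ly:12:3: check this bar
```

The point of the analogy is that the binding happens in the environment, which is exactly why the explicit parser/location parameters could later become optional.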

With LilyPond 2.19.22 David Kastrup introduced *location* and *parser*, which allow you to obtain these two objects directly from within any Scheme function. (Technically speaking these are Guile “fluids”.) As a result the corresponding arguments in the function signature are no longer mandatory and don’t have to clutter each and every function definition. The first of the two functions can now be rewritten as

% LilyPond 2.19.22 and later
newStyleMessage =
#(define-void-function (msg)(string?)
   (ly:input-message (*location*) msg))

while for the second one it’s the function ly:parser-include-string that has been changed to not expect a parser argument anymore, leading to the simplified definition

% LilyPond 2.19.22 and later
newStyleInclude =
#(define-void-function (filename)(string?)
   (ly:parser-include-string (format "\\include \"~a\"" filename)))

This is much clearer to read and write, as only the arguments actually needed for the purpose of the function are present. It is noteworthy that for now the old syntax still works with newer LilyPond versions. The parser and location arguments are still passed implicitly to each music-, scheme- or void-function, so \oldStyleMessage will still work (its location argument is still bound). However, \oldStyleInclude will fail because ly:parser-include-string no longer expects the parser argument. This is a typical case where convert-ly will properly handle the breaking syntax change, leaving you with a file that can only be compiled with current LilyPond, or more concretely: with LilyPond >= 2.19.22. So for supporting both stable and development versions an alternative approach is required.
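
To make the idea of version-gated code concrete, here is a minimal Python sketch of a version test that picks between the two call styles. `version_test` is a hypothetical helper invented for this illustration; it is not LilyPond’s or openLilyLib’s actual code, just a model of comparing the running version (possibly against a reference that omits trailing components):

```python
import operator

OPS = {"=": operator.eq, "<": operator.lt, ">": operator.gt,
       "<=": operator.le, ">=": operator.ge}

def version_test(op, current, reference):
    """Compare the running version, e.g. (2, 19, 57), against a
    reference that may omit trailing components: only as many
    components as the reference provides take part in the test."""
    trimmed = tuple(current)[:len(reference)]
    return OPS[op](trimmed, tuple(reference))

# Picking the right include call for the running version:
if version_test(">=", (2, 19, 57), (2, 19, 22)):
    include_call = "(ly:parser-include-string arg)"         # new syntax
else:
    include_call = "(ly:parser-include-string parser arg)"  # old syntax
print(include_call)  # (ly:parser-include-string arg)
```

The same dispatch, written in Scheme against the real version predicates, is what a library has to do to stay compatible with both lines.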

#### Conditional Execution Based On LilyPond Version

When LilyPond is executed it can (of course) tell which version it is, and this information is available not only through the command-line switch

~$ lilypond --version
GNU LilyPond 2.19.57

Copyright (c) 1996--2017 by
  Han-Wen Nienhuys
  Jan Nieuwenhuizen
  and others.

This program is free software.  It is covered by the
GNU General Public License and you are welcome to change it
and/or distribute copies of it under certain conditions.
Invoke as `lilypond --warranty' for more information.

but also from within LilyPond through the ly:version function, which returns a list with three numbers representing the currently executed version:

#(display (ly:version)) => (2 19 57)

From here it is not too difficult to create functionality that tests a given reference version against the currently executed LilyPond version. I will not go into any detail about it (because this is not a Scheme tutorial), but if you want you can inspect for yourself how this is implemented in openLilyLib. It includes lilypond-version-predicates, which are used for example in the code that loads module files (see this piece of code in the module-handling.ily file):

#(if (lilypond-greater-than? "2.19.21")
     (ly:parser-include-string arg)
     (ly:parser-include-string parser arg)))))

This way LilyPond uses the correct syntax to include an external file, depending on the version of LilyPond that is currently running.

#### Version Comparison Now Built Into LilyPond

I have come to love this possibility because it makes it possible to write library functions that support multiple LilyPond versions, a differentiation which is usually between the current stable and the current development version, but sometimes one also has to consider whether to support previous stable versions. So it was natural that I wanted to integrate the functionality into LilyPond itself – to make it independent from openLilyLib – and since LilyPond 2.19.57 the new function ly:version? is available.

(Note that this functionality is only available to test for LilyPond versions starting with 2.19.57, so if you need to support older versions from the 2.18/2.19 line you will have to look into the openLilyLib implementation.)

ly:version? op ver

is the signature of this function, where op is an arithmetic operator (=, <, >, <= or >=) and ver a list of up to three numbers representing the LilyPond version you want to compare to. So assuming you run LilyPond 2.21.13 (which is a future thing while writing this post) the expression (ly:version? > '(2 21 12)) would return #t, as would (ly:version? <= '(2 22 0)).

One interesting thing about the new function is its behaviour when it comes to incomplete reference version lists. As you will know all the following version statements are valid in LilyPond:

\version "2.21.12"
\version "2.21"
\version "2"

each giving less specific information. ly:version? handles this correctly in all cases, so the following cases are properly evaluated:

2.21.12 =  2.21
2.21.12 <  2.22
2.21    >  2.20.2
2       <  3
2.99    <  3
2.19.15 >= 2.19

So the above example could now be rewritten without depending on openLilyLib (but of course this doesn’t work for this example because – as I said – the new function can’t compare LilyPond versions earlier than 2.19.57):

#(if (ly:version? > '(2 19 21))
     (ly:parser-include-string arg)
     (ly:parser-include-string parser arg)))))

## March 29, 2017

### digital audio hacks – Hackaday

#### Friday Hack Chat: Audio Amplifier Design

Join [Jørgen Kragh Jakobsen], analog/digital design engineer at Merus-Audio, for this week’s Hack Chat.

Every week, we find a few interesting people making the things that make the things that make all the things, sit them down in front of a computer, and get them to spill the beans on how modern manufacturing and technology actually happens. This is the Hack Chat, and it’s happening this Friday, March 31, at noon PDT (20:00 UTC).

Jørgen’s company has developed a line of multi-level Class D amplifiers that focus on power reduction to save battery life in mobile applications without losing audio quality. There are a lot of tricks to bring down power consumption: some in core technologies like transistor switching, others based on input level, where modulation type and frequency are dynamically changed to fit everything from background listening levels to party mode.

### Here’s How To Take Part:

Our Hack Chats are live community events on the Hackaday.io Hack Chat group messaging. Log into Hackaday.io, visit that page, and look for the ‘Join this Project’ button. Once you’re part of the project, the button will change to ‘Team Messaging’, which takes you directly to the Hack Chat.

You don’t have to wait until Friday; join whenever you want and you can see what the community is talking about.

### Upcoming Hack Chats

We’ve got a lot on the table when it comes to our Hack Chats. On April 7th, our host will be [Samy Kamkar], hacker extraordinaire, to talk reverse engineering.

Filed under: digital audio hacks, Hackaday Columns

### open-source – CDM Create Digital Music

#### dadamachines is an open toolkit for making robotic musical instruments

There was a time when using controllers to play music was still novel. Building them was a technically complicated task, limited to a handful of individuals – most of whom had to keep solving the same basic problem of how to get started over and over again.

Now, we know, that’s no longer the case. There are controllers everywhere. You can buy a finished one off the shelf. If you want to customize and modify that, it’s easier than ever before. If you want to make your own, that’s easier than before, too. And the result is that musicians separate themselves by making their music special – by practicing and creating something uniquely theirs.

Now, it seems that a friendly little niche of electronic music making is poised to open up for robotic instruments.

(As my friend Donald Bell so nicely put it, quoted on the Kickstarter here, “tinkertechno.”)

I’ve been watching Johannes Lohbihler’s dadamachines project as it’s evolved over a period of years. And yes, the first thing to know is — you can bang stuff with it!

Now, that might alone be enough – banging things is fun for just about all humans. But there’s more here than that. If you think of a hardware controller as a way of turning physical input into digital music, this really is a glimpse of what happens when you make digital music into physical output.

And the cleverest thing Johannes has done is to nicely productize the core of the system. The automat controller box, the brains of the operation, lets you quickly plug in anything 12-volt. That’s nice, in that there hasn’t been any plug-and-play solution for that. So whether it’s a solenoid (those things plunking stuff) or a motor or anything else that runs on 12 volts, connections are easy. There’s a USB connection for a computer/tablet, but you can also unplug the computer and just use MIDI in and out. And it comes in a nice case – which, sorry, actually makes a really big difference for real-world use! The whole box reminds me of the first analog and MIDI connections for studio equipment. It has that same musician-friendly look – and feels like something that could really open up the way you work.

Now, from there, dadamachines bundles various larger kits of stuff. So if you aren’t quite ready to hack together your own solutions, you can start playing right away — just like buying a percussion instrument. These are also really nicely thought out, adding power adapters, the robotic solenoids, and other percussive elements (as seen in the video). Don’t be put off by the pricing of the bigger kits – a basic “(M)”edium-sized kit runs €399.

(And believe me, otherwise add up the amount you could spend on DIY mistakes…)

The different variations (explained on Kickstarter) allow you to do real-world percussion with objects of different sizes, shapes, and orientations. Some produce sound by bouncing materials off a speaker; some sit atop objects and hit them. One is a mallet; a LEGO adapter makes prototyping really easy.

I’m picking up an evaluation kit today, so stay tuned to CDM and we’ll try to do an interesting review for you. Keep in mind that while that may seem to give away the novelty here, what you do with these instruments is up to you. You’ve now left the digital domain and are in the acoustic world — so the creativity really comes from what real-world materials you use and the musical patterns you devise. (Think of how much variety people have squeezed out of the TR-808 over the years – the limits here are much broader.)

But for people who do go deeper, this is open source hardware. Everything is Arduino-based and looks easy to hack. The GitHub site isn’t live until after the campaign (I’ll let you discuss the relative merits of whether or not projects like this should do this), but from what I’ve seen, this looks really promising. And it’s still a lot easier than trying to do this yourself with Arduino – even just solving the case is a big boon.

I imagine that could lead to other parallel projects. In fact, I think this whole area will do better if there are more things like this — looking to the models of controllers, MIDI, Eurorack, and even recent developments like Ableton Link as great examples.

I’ll be at the launch party tonight checking this out.

Tech Specs

automat controller

Connectivity
– USB MIDI
– DIN MIDI-In & Thru (Out option)
– 12 DC Outputs (12-24V, max. 1.3A)
– External power supply 12-24V
– Arduino shields & extension port

Software
– Simple learn mode > 1 button click
– Advanced learn mode

Hardware
– Anodized aluminum panel
– Powder-coated steel shell
– Dimensions: 110 x 110 x 26mm

Additionally, each toolkit comes with adapters & elements helping users to get started easily.

More:

Kickstarter: http://bit.ly/dadakick
Web: http://dadamachines.com

The post dadamachines is an open toolkit for making robotic musical instruments appeared first on CDM Create Digital Music.

## March 27, 2017

### digital audio hacks – Hackaday

#### [Joe Grand’s] Toothbrush Plays Music That Doesn’t Suck

It’s not too exciting that [Joe Grand] has a toothbrush that plays music inside your head. That’s actually a trick that the manufacturer pulled off. It’s that [Joe] gave his toothbrush an SD card slot for music that doesn’t suck.

The victim donor hardware for this project is a toothbrush meant for kids called Tooth Tunes. They’ve been around for years, but unless you’re a kid (or a parent of one) you’ve never heard of them. That’s because they generally play the saccharine sounds of Hannah Montana and the Jonas Brothers, which make adults choose cavities over dental health. However, we’re inclined to brush the enamel right off of our teeth if we can listen to The Amp Hour, Embedded FM, or the Spark Gap while doing so. Yes, we’re advocating for a bone-conducting, podcasting toothbrush.

[Joe’s] hack starts by cracking open the neck of the brush to cut the wires going to a transducer behind the brushes (his first attempt is ugly but the final process is clean and minimal). This allows him to pull out the guts from the sealed battery compartment in the handle. In true [Grand] fashion he rolled a replacement PCB that fits in the original footprint, adding an SD card and replacing the original microcontroller with an ATtiny85.

He goes the extra mile of making this hack a polished work by also designing in an on/off controller (MAX16054), which delivers the tiny standby current needed to prevent the batteries from going flat in the medicine cabinet.

Check out his video showcasing the hack below. You don’t get an audio demo because you have to press the thing against the bones in your skull to hear it. The OEM meant for this to press against your teeth, but now we want to play with them for our own hacks. Baseball cap headphones via bone conduction? Maybe.

Update: [Joe] wrote in to tell us he published a demonstration of the audio. It uses a metal box as a sounding chamber in place of the bones in our head.

Filed under: digital audio hacks, musical hacks

## March 26, 2017

### Scores of Beauty

#### The story of “string bending” in LilyPond

String bending is a playing technique for fretted string instruments — typical of blues and rock music — where fretting fingers pull the strings in a direction perpendicular to their vibrating length. This will increase the pitch of a note by any amount, allowing the exploration of microtonality. The animated image on the left shows how a string bending is performed on a guitar.

It requires a specific notation: the bending is usually represented on tablature by an arrowed line and a number showing the pitch alteration step (1/4 for a quarter, 1/2 for a half, 1 for a full bend, etc.).

It’s a long-standing feature request in LilyPond: issue 1196 was created in July 2010, almost 7 years ago, and collected more than €400 in bounty offers. During these years we’ve been able to draw bendings with the help of an external file written in 2009 by a LilyPond user, Marc Hohl. It worked quite well, but it was a kind of hack and therefore had some important limitations. When I happened to promote LilyPond, especially to tablature users, this missing feature was the “skeleton in the closet”. But something interesting is on the way.

Last September Thomas Morley, aka Harm, a LilyPond user and contributor, announced that he’s working on a new bending engraver, which overcomes those limitations and will hopefully be included in LilyPond in the near future. Let me tell you the whole story…

## bend.ly, a smart hack

I know the story well, because it dates back to when I took my first steps as a LilyPond user. At the beginning of 2009 I was still an (unhappy) user of Tuxguitar. I knew LilyPond, but I totally disliked the default tablature look (back then the default and only tablature output available was what you currently get if you use tabFullNotation). Then I stumbled on an announcement on the Tuxguitar forum saying that LilyPond was going to have support for modern tablature, which became the new default output starting with version 2.13.4. I immediately quit Tuxguitar for LilyPond and never looked back.

The author of these changes was Marc Hohl, a user who stepped into LilyPond development for the first time and managed, thanks to the great support from the lilypond-user mailing list (and his own Scheme skills), to bring modern tablature to LilyPond. It was again Marc who, a few months later, in August 2009, announced his first draft of bend.ly, an approach that built upon the engraving of slurs to print bending signs on staff and tablature. Despite the beauty and professional look of the bends, it was a hack and could not be considered for inclusion in the LilyPond source code. The main limitations of bend.ly were:

• line breaks over bending notes were not supported
• no easy way to let a hammer-on or pull-off follow a bending
• the need to manually adjust the distance between Staff and TabStaff in order to avoid collisions between the bending interval number and the staff

In order to overcome these limitations, we needed a specific bend engraver, that is, a dedicated tool able to engrave bends in their own right and not as modified slurs.

In 2013 I decided to move bend.ly into a repository, so we could track the changes needed to make it work with new versions of LilyPond. We chose to use the openLilyLib snippets repository, and its first location was notation-snippets/guitar-string-bending. Then it moved to the new location ly/tablature/bending.ily. The maintenance of this file passed on from Marc to Harm, who quickly fixed it every time something broke when using a recent development version of LilyPond.

## The new bending engraver

Last September Harm posted the first draft of a new bend engraver, partly based on bend.ly and written entirely in Scheme. Allow me to make a brief “technical” digression. LilyPond input files can contain Scheme code, which can access the internals and change default LilyPond behaviour on the fly, i.e. without having to change the LilyPond binary. That’s why you can find power users of LilyPond who can code in Scheme. But most of the LilyPond engravers, at least so far, have been written in C++, a language used only by developers. The C++ requirement was one of the reasons why Marc could not write a real engraver seven years ago, as he was not able to code in this language. But in recent years the LilyPond internals responsible for building engravers have improved, thanks to the great work of the core developer David Kastrup, and now writing an engraver entirely in Scheme is possible (or at least much easier than in the past).

The new bend spanner engraver, though working well already, has not been proposed for inclusion in the LilyPond codebase, because Harm wants to take some more time to refine it before going through the code review process. The code is currently hosted at this git repository and you can get the compressed archive file of the latest version from the release page. If you prefer downloading it with git, here’s the command:

git clone https://pagure.io/lilypond-bend-spanner.git

Move the downloaded folder into a directory which is include-able by LilyPond.

Let’s see a simple example:

\version "2.19.55"
\include "bending.ily"

myMusic = \relative {
  <>^"Bend&release + pull-off"
  a8 c d\startBend dis\stopBend\startBend d\stopBend( c) a4 |
  <>^"Microtones bending"
  a4\startBend aih\stopBend a\startBend aisih\stopBend |
  <>^"Chord bending"
  <g~ d' g>8\startBend <g e' a>\stopBend
  % do not bend a note in a chord
  <\tweak bend-me ##f f a d>8\startBend <f ais dis>\stopBend\startBend <f a d>2\stopBend |
}

\score {
  \new StaffGroup <<
    \new Staff {
      \clef "treble_8"
      \myMusic
    }
    \new TabStaff {
      \clef "moderntab"
      \new TabVoice \myMusic
    }
  >>
  \layout {
    indent = 0
    ragged-right = ##f
    \context {
      % enable microtones
      \Score
      supportNonIntegerFret = ##t
    }
    \context {
      \Staff
      \omit StringNumber
    }
  }
}

And here’s the output:

Simple example of the main features of the bend spanner.

More features can be seen in action in the test file test-suite/bend-test.ly, which demonstrates all the current capabilities of this engraver.

## Roadmap

What to expect next? There isn’t any public roadmap. It’s up to Harm to decide when his work is ready to be reviewed. In the meantime everybody interested in seeing bending included in LilyPond is encouraged to test it and report any feedback on the lilypond-user mailing list. This is going to be a game changer for LilyPond tablature users and I look forward to seeing this feature in LilyPond!

## March 23, 2017

### open-source – CDM Create Digital Music

#### The great sounds you’re making remind us why we make MeeBlip

Getting in the zone is a beautiful thing – that feeling when music seems to almost play itself, when it really feels new. Just like you do a lot of preparation and practice as a musician to get there, when you make instruments, you’re endlessly learning how to help people find that zone. And that’s ultimately why I feel lucky to be involved in making instruments as well as making music – with CDM generally, and with our own toes in the water, MeeBlip.

Now, as it happens, people are making amazing things with the MeeBlip (alongside the other gear we talk about). Who says there’s too much music or too many musicians – or too many synths? Watching this, we want more of all of it. And so here you go – out of all the many jams, here are a few favorites that surprised us and that might inspire you. Don’t forget to join in.

We ship MeeBlip worldwide direct from the workbench, where they’re tested and assembled by the person who designed them (James Grahame). In addition to our just-announced free editor, we’re offering a deal on everything you need for triode – cables and USB MIDI interface (Mac/Windows/Linux) – in a bundle, now with $40 off.

Get this –
triode starter bundle

– then enter STARTER as the coupon code. (while supplies last)

### Great recent jams

Some selections – and since the triode needs something to send it notes, some of the many sequencers you can use:

Olivier Ozoux reminds you (alongside Bjork) that you shouldn’t let poets lie to you, in this beautiful jam with MeeBlip triode alongside Squarp Pyramid, Waldorf Blofeld, and more:

This one has roller coasters in it. Sequenced with Arturia KeyStep:

Someone got into a trance state on, like, their porch — with just MeeBlip, transforming their backyard into a sort of alien ritual of sound:

Here’s MeeBlip anode being sequenced by Forever Beats – a MIDI sequencer I would otherwise not know about, honestly! (Looks great – buying!)

I hoped someone would use the Millennium Falcon-shaped Casio XW-PD1 as a sequencer, and here’s wonderful, melodic, trippy music doing just that —

Casio XW-PD1 sequencing the Twisted Electron AY3 and the Meeblip Anode. The AY3 gets a Behringer RV600 verb treatment, and the Anode gets a Moozikpro analog delay treatment. Drums coming from the XW-PD1. 8 patterns in all.

MegaMorph is a new prototype project with powerful, musical transformations between scenes. Here, MeeBlip is sounding plenty grimy atop a hypnotic, dreamy synth sea (subscribing and watching for more on this project):

Live demonstration of MegaMorph prototype for controlling and morphing complete setups via midi, here including
triode, volca and mfb sound parameters, XR-18 mixer levels and x0x bio-arpeggiator settings.

Sequencing: x0xb0x (own “bioarp” OS) + volca fm arpeggiator

Sound: meeblip triode, KORG volca fm, MFB tanzbaer lite, x0xb0x (x0xsh0p.de)

Control: MegaMorph (midi fighter twister + matlab scripts on minix mini pc + iConnect mio10), miditech keyboard

I’m going to close again with Olivier, who’s inspired others to jam along by championing Jamuary. And I think that’s the whole point. While the rest of the industry worries about how to produce stars, we can all learn from one another.

Get a MeeBlip now:

http://meeblip.com

MeeBlip triode Starter Bundle + code STARTER for $40 off (like getting a USB MIDI interface, free)

And we’d love to hear from you – what music you’re making, and what the MeeBlip project could do for you, both as open source hardware and as a product line.

The post The great sounds you’re making remind us why we make MeeBlip appeared first on CDM Create Digital Music.

### digital audio hacks – Hackaday

#### The Hard Way of Cassette Tape Auto-Reverse

The audio cassette is a format that presented a variety of engineering challenges during its tenure. One of the biggest at the time was that listeners had to physically remove the cassette and flip it over to listen to the full recording. Over the years, manufacturers developed a variety of “auto-reverse” systems that allowed a cassette deck to play a full tape without user intervention. This video covers how Akai did it – the hard way.

Towards the end of the cassette era, most manufacturers had decided on a relatively simple system of having the head assembly rotate while reversing the motor direction. Many years prior to this, however, Akai’s system involved a shuttle which carried the tape up to a rotating arm that flipped the cassette, before shuttling it back down and reinserting it into the deck.

Even a regular cassette player has an astounding level of complexity using simple electromechanical components — the humble cassette precedes the widespread introduction of integrated circuits, so things were done with motors, cams, levers, and switches instead. This device takes it to another level, and [Techmoan] does a great job of showing it in close-up detail. This is certainly a formidable design from an era that’s beginning to fade into history.

The video (found after the break) also does a great job of showing glimpses of other creative auto-reverse solutions — including one from Philips that appears to rely on bouncing tapes through something vaguely resembling a playground slide. We’d love to see that one in action, too.

One thing you should never do with a cassette deck like this is use it with a cassette audio adapter like this one.

Filed under: digital audio hacks, slider, teardown

## March 21, 2017

### rncbc.org

#### Vee One Suite 0.8.1 - A Spring'17 release

Great news!

The Vee One Suite of old-school software instruments – synthv1, a polyphonic subtractive synthesizer; samplv1, a polyphonic sampler synthesizer; and drumkv1, yet another drum-kit sampler – are once again out in the wild!

Still available in dual form:

• a pure stand-alone JACK client with JACK-session, NSM (Non Session management) and both JACK MIDI and ALSA MIDI input support;
• a LV2 instrument plug-in.

The common change-log for this dot release follows:

• Fixed a probable long-standing oversight where changing a spin-box or drop-down list did not immediately reflect the change in the respective parameter dial knobs.
• Fixed middle-button clicking on the dial-knobs to reset to the current default parameter value.
• Help/Configure.../Options/Use desktop environment native dialogs option is now set initially off by default.
• Added French man page (by Olivier Humbert, thanks).
• Make builds reproducible byte for byte, by getting rid of the configure build date and time stamps.

The Vee One Suite are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

So here they go, thrice again!

## synthv1 - an old-school polyphonic synthesizer

synthv1 0.8.1 (spring'17) released!

synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.

LV2 URI: http://synthv1.sourceforge.net/lv2

website:
http://synthv1.sourceforge.net

http://sourceforge.net/projects/synthv1/files

git repos:
http://git.code.sf.net/p/synthv1/code
https://github.com/rncbc/synthv1.git
https://gitlab.com/rncbc/synthv1.git
https://bitbucket.org/rncbc/synthv1.git

## samplv1 - an old-school polyphonic sampler

samplv1 0.8.1 (spring'17) released!

samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.

LV2 URI: http://samplv1.sourceforge.net/lv2

website:
http://samplv1.sourceforge.net

http://sourceforge.net/projects/samplv1/files

git repos:
http://git.code.sf.net/p/samplv1/code
https://github.com/rncbc/samplv1.git
https://gitlab.com/rncbc/samplv1.git
https://bitbucket.org/rncbc/samplv1.git

## drumkv1 - an old-school drum-kit sampler

drumkv1 0.8.1 (spring'17) released!

drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.

LV2 URI: http://drumkv1.sourceforge.net/lv2

website:
http://drumkv1.sourceforge.net

http://sourceforge.net/projects/drumkv1/files

git repos:
http://git.code.sf.net/p/drumkv1/code
https://github.com/rncbc/drumkv1.git
https://gitlab.com/rncbc/drumkv1.git
https://bitbucket.org/rncbc/drumkv1.git

Enjoy && have fun ;)

## March 17, 2017

### Linux – CDM Create Digital Music

#### Steinberg brings VST to Linux, and does other good things

The days of Linux being a barren plug-in desert may at last be over. And if you’re a developer, there are some other nice things happening to VST development on all platforms.

Steinberg has quietly rolled out the 3.6.7 version of their plug-in SDK for Windows, Mac, iOS, and now Linux. Actually, your plug-ins may be using their SDK even if you’re unaware – because many plug-ins that appear as “AU” use a wrapper from VST to Apple’s Audio Unit. (One is included in the SDK.)

For end users, the important things to know are, you may be getting more VST3 plug-ins (with some fancy new features), and you may at last see more native plug-ins available for Linux. That Linux support comes at just the right time, as Bitwig Studio is maturing as a DAW choice on the platform, and new hardware options like the Raspberry Pi are making embedded solutions start to appeal. (I kind of hesitate to utter these words, as I know that desktop Linux is still very, very niche, but – this doesn’t have to mean people installing Ubuntu on laptops. We’ll see where it goes.)

For developers, there’s a bunch of nice stuff here. My favorites:

• cmake support
• VST3 SDK on GitHub: https://github.com/steinbergmedia/vst3sdk
• a GPL v3 license now offered alongside the proprietary license (necessary for some open projects)

How ’bout them apples? I didn’t expect to be following Steinberg on GitHub.

The open license and Linux support to me suggest that, for instance, finally seeing Pure Data work with plug-ins again could be a possibility. And we’ll see where this goes.

This is one of those that I know is worth putting on CDM, because the handful of people who care about such things and can do something with them are reading along. So let us know.

More:

http://sdk.steinberg.net

Thanks, Spencer Russell!

The post Steinberg brings VST to Linux, and does other good things appeared first on CDM Create Digital Music.

### digital audio hacks – Hackaday

#### Neural Network Composes Music; Says “I’ll be Bach”

[carykh] took a dive into neural networks, training a computer to replicate Baroque music. The results are as interesting as the process he used. Instead of feeding Shakespeare (for example) to a neural network and marveling at how Shakespeare-y the text output looks, the process converts Bach’s music into a text format and feeds that to the neural network. There is one character for each key on the piano, making for an 88 character alphabet used during the training. The neural net then runs wild and the results are turned back to audio to see (or hear as it were) how much the output sounds like Bach.

The video embedded below starts with a bit of a skit but hang in there because once you hit the 90 second mark things get interesting. Those lacking patience can just skip to the demo; hear original Bach followed by early results (4:14) and compare to the results of a full day of training (11:36) on Bach with some Mozart mixed in for variety. For a system completely ignorant of any bigger-picture concepts such as melody, the results are not only recognizable as music but can even be pleasant to listen to.

MIDI describes music in terms of discrete events, and individual note starts and stops are separate events. Part of the reformatting process involved representing each note as a single ASCII character, thereby structuring the music more like text and less like keyboard events.

The core of things is this character-based Recurrent Neural Network which is itself the work of Andrej Karpathy. In his words, “it takes one text file as input and trains a Recurrent Neural Network that learns to predict the next character in a sequence. The RNN can then be used to generate text character by character that will look like the original training data.” How did [carykh] actually use this for music? With the following process:

1. Gather source material (lots and lots of MIDI files of Bach pieces for piano or harpsichord.)
2. Convert those MIDI files to CSV format with a tool.
3. Tokenize and reformat that CSV data with a custom Processing script: one ASCII character now equals one piano key.
4. Feed the RNN with the resulting text.
5. Take the output of the RNN and convert it back to MIDI with the reverse of the process.
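As a rough illustration of the tokenizing step, here is a hypothetical Python simplification ([carykh]'s actual tool is a Processing script, and the key-to-character mapping shown is invented): each of the 88 piano keys gets one ASCII character, and each quantized time step becomes one line of text.

```python
# Hypothetical simplification of the MIDI-to-text idea described
# above (not [carykh]'s actual Processing script): map each of the
# 88 piano keys (MIDI notes 21-108) to one ASCII character and emit
# one line of "sounding key" characters per quantized time step.

ALPHABET = [chr(33 + i) for i in range(88)]  # 88 printable characters

def tokenize(events, step_ms=125, total_ms=1000):
    """events: list of (midi_note, onset_ms) note-on events."""
    lines = []
    for t in range(0, total_ms, step_ms):
        chars = [ALPHABET[note - 21] for note, onset in events
                 if t <= onset < t + step_ms]
        lines.append("".join(sorted(chars)) or "-")  # "-" marks silence
    return "\n".join(lines)

# A C-major chord at t=0, then a single E at t=500 ms:
print(tokenize([(60, 0), (64, 0), (67, 0), (64, 500)]))
```

The RNN then only has to learn to predict the next character, and running the mapping in reverse turns generated text back into notes.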

[carykh] shares an important question that was raised during this whole process: what was he actually after? How did he define what he actually wanted? It’s a bit fuzzy: on one hand he wants the output of the RNN to replicate the input as closely as possible, but he also doesn’t actually want complete replication; he just wants the output to take on enough of the same patterns without actually copying the source material. The processing of the neural network never actually “ends”; [carykh] simply pulls the plug at some point to see what the results are like.

Neural Networks are a process rather than an end result and have varied applications, from processing handwritten equations to helping a legged robot squirm its way to a walking gait.

Thanks to [Keith Olson] for the tip!

Filed under: digital audio hacks, musical hacks

### Audio, Linux and the combination

#### new elektro project 'BhBm' : Hydrogen + analogue synths

It has been a long, long time since I posted anything here!

Let me present you our newest elektro project BhBm (short for "Black hole in a Beautiful mind")

All drums and samples are done with H2. Almost all bass lines and melodies are analogue synths controlled by H2 via MIDI.

Softsynths and FX are done using Carla and LV2 plugins.

I use H2 as a live sequencer in stacked pattern mode, controlled by a BCR2000, so there is no 'song', only patterns that are enabled/disabled live. Great fun!

Check out our demo songs on Soundcloud :

Thijs

## March 16, 2017

### open-source – CDM Create Digital Music

Open a tab, design a new sound. Now you can, with a free Web editor for the MeeBlip. And it shows just how powerful the browser can be for musicians.

Watch:

And if you own a MeeBlip (triode or anode), give it a try yourself (just remember to plug in a MIDI interface and set up the channel and port first):
https://editor.meeblip.com/

Don’t own a MeeBlip? We can fix that for you:
https://meeblip.com/

Why a browser? Well, the software is available instantly, from anywhere with an Internet connection and a copy of Chrome or Opera. It’s also instantly updated, as we add features. And you can share your results with anyone else with a MeeBlip, too.

That means you can follow our new MeeBlip bot account and discover new sounds. It might be overkill with a reasonably simple synth, but it’s a playground for how synths can work in our Internet-connected age. And we think in the coming weeks we can make our bot more fun to follow than, um, some humans on Twitter.

Plus, because this is all built with Web technologies, the code is friendly to a wide variety of people! (That’s something that might be less true of the Assembly code the MeeBlip synth hardware runs.)

You can have a look at it here. Actually, we’re hoping someone out there will learn from this, modify it, ask questions – whatever! So whether you’re advanced or a beginner, do have a look:

https://github.com/MeeBlip/meeblip-web

All the work on the editor comes to us from musician and coder Ben Schmaus, based on an earlier version – totally unsolicited, actually, so we’re amazed and grateful to get this. We asked Ben for some thoughts on the project.

CDM: How did you get into building these Web music tools in the first place?

Ben: I had been reading about the Web MIDI and Audio APIs and thinking about how I might use them. I bought an anode Limited Edition synth and wanted a way to save patches I created. I thought it’d be cool and maybe even useful to be able to store and share patches with URLs, the lingua franca of the web. Being a reasonably capable web developer it seemed pretty approachable and so I started working on Blipweb. [Blipweb was the earlier iteration of the same editor tool. -Ed.]

Why the MeeBlip for this editor?

Well, largely because I had one! And the (admirably concise) quick start guide very clearly outlined all the MIDI CC numbers to control mappings. So it seemed very doable. Plus being already open source I thought it would be nice to contribute something to the user community.

What’s new in the new MeeBlip editors versus the original Blipweb?

The layout and design are tighter in the new versions. I added a very basic sequencer that has eight steps and lets you control pitch and velocity. It’s nice because you can produce sound with just a MeeBlip, MIDI interface, and browser. There’s also a simple patch browser that has some sample patches loaded into it that could be expanded in a few different ways in the future. Aside from the visible changes, the code was restructured quite a bit to enable sharing between the anode and triode editors. The apps are built using jQuery, because I know it and it also had a nice knob UI widget. If I were starting from scratch today, I’d probably build the editors using React (developed by Facebook), which improves upon the jQuery legacy without over-complicating things.

Why do this in a browser rather than another tool?

There’s the practical aspect of me being familiar with web technologies. Combining that with the fact that Chromium-based browsers implement Web MIDI, the browser was a natural target platform. I’m not sure where Web MIDI is going. It’s obviously a very niche piece of functionality, but I also think it’s super useful to be able to pull up a web page and start interacting with hardware gear without having to download a native app. The ease of access is pretty compelling, and the browser is a great way to reach lots of OSes with minimal effort.

You also built this terrific Web MIDI console. How are you using that – or these other tools – in your own work and music?

The Web MIDI console is a tool to inspect MIDI messages sent from devices. I updated it recently after being inspired by Geert Bevin’s sendMIDI command line utility. So now you can send messages to devices in addition to viewing them. I often use it to see what messages are actually coming from my devices. I’ve written a few controller scripts for Bitwig Studio and the MIDI console has come in handy for quickly seeing which messages pads, knobs, sliders, etc. send. There are, of course, native apps that do this sort of thing, but again, it’s nice to just open a web page and have a quick look at a MIDI data stream or send some messages.
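The core of such a MIDI monitor is decoding each raw two- or three-byte message into something readable. Here is a minimal sketch of that decoding logic, in Python purely for illustration (Ben's console is of course JavaScript on top of the Web MIDI API, which delivers the same raw bytes as a Uint8Array):

```python
# Minimal sketch of MIDI channel-voice message decoding, the kind of
# logic a MIDI-monitor tool performs on each incoming message.

STATUS_NAMES = {
    0x80: "note-off",
    0x90: "note-on",
    0xA0: "poly-aftertouch",
    0xB0: "control-change",
    0xC0: "program-change",
    0xD0: "channel-aftertouch",
    0xE0: "pitch-bend",
}

def describe(msg):
    """Turn a raw channel-voice message (list of ints) into text."""
    status, data = msg[0] & 0xF0, msg[1:]
    channel = (msg[0] & 0x0F) + 1            # channels shown as 1-16
    name = STATUS_NAMES.get(status, "unknown")
    return f"ch{channel} {name} {' '.join(map(str, data))}"

# A control-change: CC#7 (volume) set to 100 on channel 1:
print(describe([0xB0, 7, 100]))   # ch1 control-change 7 100
```

Seeing a knob or pad as a decoded line like this is exactly the "quick look at a MIDI data stream" Ben describes.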

What was your background; how did you learn these Web technologies?

I studied music in college and learned just enough web dev skills through some multimedia courses to get a job making web pages back around 2000. It was more enjoyable than the random day jobs/teaching guitar lessons/wedding band gigs I was doing so I decided to pursue it seriously. Despite starting out in web/UI development, I’ve spent more time working on back-end services. I was an engineering director at Netflix and worked there in the Bay Area for five years before moving back to the east coast last summer. I’ve been spending more time working on music software lately and hope to find opportunities to continue it.

Did you learn anything useful about these Web technologies? Where do you think they’ll go next? (and will we ever use a Chromebook for MIDI?)

Well, if you want the broadest compatibility across browsers you need to serve your Web MIDI app over HTTPS. For example, Opera doesn’t allow MIDI access over HTTP. I’m not sure where it’s going, really. It’d be nice to see Web MIDI implemented in more browsers. People spend so much time in their browsers these days, so it seems reasonable for them to become more tightly integrated with the underlying OS. Though it’s a bit hard to find strong incentive for browser vendors to support MIDI. Nonetheless, I’m glad it’s available in Chrome and Opera.

I think Web MIDI apps work quite well as tools in support of other products. Novation’s browser apps for Circuit are really well done and move Web MIDI beyond novelty. I hope the MeeBlip editors do the same. I also like Soundtrap and think Web MIDI/Audio apps work well in educational contexts since browsers are by and large ubiquitously accessible.

Ed.: For more on this topic of SSL and MIDI access, Ben wrote a companion blog post while developing this editor:

Web MIDI Access, Sysex, and SSL

Why make these tools open source? Does it matter that the MeeBlip is open source hardware?

It absolutely matters that MeeBlip is open source. That’s the main reason I bought into it. I really like the idea of open and hackable products that let users participate in their further development. It’s especially cool to see companies that are able to build viable businesses on top of open products.

In the case of the editors, they’re (hopefully!) adding value to the product; there’s no competitive advantage in having a patch editor by itself. It makes sense to open source the tools and let people make and share their own mods. And maybe some of that work feeds back into the main code line to the benefit of the broader user base. I think open source hardware/software products tend to encourage more creative and vibrant user communities.

What other useful browser music stuff do you use? Any tips?

Hmm…the Keith McMillen blog has some good posts on using the Web MIDI API that I’ve referred to a number of times. And there’s a Google music lab site with some interesting projects. Although I don’t have a Circuit or reface synth, it’s nice to see Novation [see our previous story] and Yamaha (Soundmondo) with Web MIDI apps, and they look useful for their users. I’m curious to see what new things pop up!

Thanks, Ben! Yes, we’ll be watching this, too – developers, users, we’d love to hear from you! In the meantime, don’t miss Ben’s site. It’s full of cool stuff, from nerdy Web MIDI discussions to Bitwig and SuperCollider tools for users:

https://factotumo.com/

And see you on the editor!

## March 11, 2017

### MOD Devices Blog

#### MOD at NAMM 2017 – Recap

Greetings, fellow music freaks!

So you might have heard that we went to the NAMM show with MOD Devices. I spent the first few days around LA together with the Modfather, Gianfranco. Later on we met up with the rest of the team for a very busy yet very exciting week!

Early in the morning on the 16th of January we flew from Berlin to LAX. Upon arrival we discovered that our luggage had not been transferred to our connecting flight in Dusseldorf. Ouch: now we had to wait until Thursday (the evening of the first day of the show) before getting the equipment we needed! We decided to take it easy, so we picked up our rental van and drove home, but not before eating an obnoxious amount of hot wings. We’re in America, after all. That evening we simply shopped for groceries and essential supplies.

When I woke up the next morning I went outside and my mood instantly changed: the beautiful California sky, the palm trees, quite the opposite of the cold Berlin I had got used to.

After a nice breakfast in the sun we went out to grab some extra items from the stores because of the luggage issues. Shout out to Davier from Guitar Center Orange County for helping us out with all our PA and cabling needs! That was all for the day.

The next day we continued our quest to make our booth as awesome as possible, and started setting up. Later on, we met up with our ever-happy Adam: good vibrations and laughs all around! That evening we also got together with Derek and Dean (the most helpful NS/Stick player in the universe, who even uses his MOD Duo to charge his phone). To end the day, we had a great time and a lovely meal at a cantina in Fullerton.

Thursday: showtime! We got up early, and went straight to the convention center for the last bits of setup. Today was a very relaxed day, some interesting people stopped by, and the overall response seemed to be very positive about the MOD Duo. For me personally this was the first time meeting Alexandre in real life, since you might know that a lot of work happens on a remote basis inside MOD Devices. During the day there were small jams and improvisations done by our one and only Adam and Dean.

Friday: day two of the show. Besides the load of meetings that Gianfranco and Alexandre had to attend, this was actually a pretty chill day. When Alexandre came back from a meeting, Dean told us that Jordan Rudess, one of Alexandre’s big inspirations, was doing a demo at a booth really close by. Of course he had to go check that out! Most of the day was spent wowing people with the awesome MOD Duo, and having some cool improvisations as the day passed by. Dean had invited us to join him to the Stick Night at Lopez & Lefty’s, a gathering of really interesting musicians playing instruments that baffle a simple minded 6-string player like me. They were accompanied by some truly wonderful electronic percussion, and to top it off, they served a great margarita there!

Saturday: the busiest day of the show. They say that the Saturday always turns out interesting, and it did! We met loads of cool people, had a small jam with the Jamstick MIDI controller (there might be more on that in a later post!), ate a Viking hotdog and were visited by the legendary experimental guitarist Vernon Reid. The keyboard player for The Devin Townsend Project also stopped by our booth for a chat. At the end of a long day, we were pleasantly surprised when Stevie Wonder himself appeared in a booth nearby. The picture below shows me taking a picture of people taking a picture of people taking a picture of the legend.

When we got home from what seemed like the longest but best day yet, we decided that we needed to chill out a little bit. So we threw a small BBQ party in our backyard. Luckily our AirBnb had a big American-style grill!

Sunday: the last day of the show. It was raining like crazy and people were noticeably tired. Some people had even lost their voices completely. That did not hold us back from having the greatest jam session NAMM has ever experienced. Adam’s musical (evil-) genius joined forces with Sascha on the electric Harp and an amazing steampunk guy on the smartphone Ocarina. It was magnificent. If the footage survived you will be sure to see it later on. This day we also met up with Sarah Lipstate (Noveller) to introduce her to the MOD Duo. We’re looking forward to your creations Sarah! Later on in the day Gianfranco was interviewed by Sound on Sound. You can find footage of the interview here.

On Sundays the NAMM show shuts down a bit early: there is a crazy-quick teardown that happens in a matter of minutes from the moment it hits 17:00. We packed up, drove back to our apartment and decided to hit the Two Saucy Broads once again for some lovely pizza. Good night, everybody.

On our last day we visited Hollywood’s Rockwalk at the Guitar Center on Sunset Boulevard. They have a couple of really awesome guitars lying around there! After returning our rental van all that was left to do was to go straight to the airport for our flight back to Berlin.

NAMM, you have been great, until we meet again!

• Jesse @ MOD HQ

PS: Special thanks go to Dean Kobayashi for helping us out tremendously during and before the show!

## March 10, 2017

### MOD Devices Blog

#### MOD Duo 1.3 Update Now Available

Greetings fellow MOD users!

Another software update has popped up, courtesy of our development team, who works tirelessly to bring all the features you have been asking for and then some!

So, the next time you open the MOD web interface you’ll receive an update notification, just click on the tooltip icon in the bottom-right when that happens, then ‘Download’ and finally ‘Upgrade Now’. Wait for a few minutes while the MOD updates itself automatically and enjoy your added features.

Here’s a description of the major improvements:

• Pedalboard Presets

Such an important and awaited feature, pedalboard presets have been a subject on the MOD forum for months. The MOD Duo is a relative revolution in terms of rig portability but several users felt they needed to be able to quickly and seamlessly change multiple plugins at the same time on stage. This was referred to as creating “scenes” inside a pedalboard. Now you can store values of parameters inside your pedalboards (such as the plugins that are on/off, their levels and other configs) and switch them all at once without having to load a new pedalboard. You can address this list of presets to any controller or footswitch!

• Click-less Bypass

Who likes noise when turning a plugin on and off? No one I’d wager. That’s why there’s a new feature in the LV2 plugin world called click-less bypass and we now support this designation on plugins that include it. This means you’ll be able to bypass plugins and avoid that little “click” noise. For now only “Parametric EQ” by x42 includes this feature, but it will soon get picked up more and more by developers.

Also, our True Bypass plugin, aptly called “Hardware Bypass”, is now available if you want to use it on your pedalboard and activate it via footswitch!

• ‘MIDI Utility’ Category

So… how about that ‘Utility’ category on the pedalboard builder and cloud store? Pretty packed, right? Well, since it quickly filled up with MIDI utilities, we decided to keep things nice and tidy and have added a new ‘MIDI Utility’ category. That’s what happens when you’ve got hundreds of plugins!

• Generic USB Joysticks as MIDI Devices

Personally, I’m not really sure I understand why someone would want to use a joystick as a MIDI controller, but hey! A MOD device is about creative freedom, right? And we’re also about not getting held back by proprietary technology. That’s why we couldn’t accept the fact that previously we could only use PS3 and PS4 joysticks over USB. Now, thanks to @Azza (and some little extra integration…), we can use any joystick recognized by the MOD as a MIDI device. Buttons will send MIDI notes and CCs starting at #90, while axes send MIDI CCs starting at #1. We’ll soon do a webinar on the subject of controllers for the stage, so this use case might spring up there and I will learn something!
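Taking the numbering described above at face value, the button and axis mapping can be sketched as follows. This is a hypothetical illustration of the scheme as described, not the actual MOD code, and `button_to_cc` / `axis_to_cc` are invented names:

```python
# Hypothetical sketch of the joystick-to-MIDI mapping described in
# the post: buttons map to MIDI CCs starting at #90, axes to CCs
# starting at #1. Messages are raw [status, controller, value] bytes.

def button_to_cc(button_index, pressed):
    """Map joystick button N to a CC message (127 = pressed, 0 = released)."""
    return [0xB0, 90 + button_index, 127 if pressed else 0]

def axis_to_cc(axis_index, position):
    """Map an axis position in [-1.0, 1.0] to CC #1..#N with value 0-127."""
    value = round((position + 1.0) / 2.0 * 127)
    return [0xB0, 1 + axis_index, value]

print(button_to_cc(0, True))   # first button pressed -> CC #90, value 127
print(axis_to_cc(0, 0.0))      # centered first axis  -> CC #1, value 64
```

Any plugin parameter addressed to one of these CC numbers then follows the stick.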

There’s also quite a few more changes and tweaks. Visit our wiki to see all the changes since v1.2.1.

The next update will focus on the Control Chain controllers that are coming to the Kickstarter backers and that will be available for the community to test very soon. For more information, stay tuned on our forum!

Enjoy your pedalboards and the beautiful sounds that they make, share them, have fun with your added controllability, and keep helping us build the future of musical effects!

• Dwek @ MOD HQ

## March 05, 2017

### autostatic.com

#### Moved to Fuga

Moving my VPS from VMware to Fuga was successful. First I copied the VMDK from the ESXi host to a Fuga instance with enough storage:

scp some.esxi.host:/vmfs/volumes/storage-node/autostatic1.autostatic.cyso.net/autostatic1.autostatic.cyso.net-flat.vmdk ./

And then converted it to QCOW2 with qemu-img:

qemu-img convert -O qcow2 autostatic1.autostatic.cyso.net-flat.vmdk autostatic1.autostatic.cyso.net.qcow2

Next step was mounting it with guestmount:

guestmount -a /var/www/html/images/autostatic1.autostatic.cyso.net.qcow2 -m /dev/sda8 /mnt/tmp/

And changing some settings, i.e. network and resolvconf. When that was done I unmounted the image:

guestunmount /mnt/tmp

And uploaded it to my Fuga tenant:

openstack image create --disk-format qcow2 --container-format bare --file /path/to/images/autostatic1.autostatic.cyso.net.qcow2 --private autostatic1.autostatic.cyso.net.qcow2

Last step was launching an OpenStack image from this image, I used Ansible for this:

- name: Launch OpenStack instance
  hosts: localhost
  connection: local
  gather_facts: no
  vars:
    os_flavor: c1.large
    os_network: int1
    os_image: 5b878fee-7071-4e9c-9d1b-f7b129ba0644
    os_hostname: autostatic1.autostatic.cyso.net
    os_portname: int-port200
    os_fixed_ip: 10.10.10.200
    os_floating_ip: 185.54.112.200

  tasks:
    - name: Create port
      os_port:
        state: present
        name: "{{ os_portname }}"
        network: "{{ os_network }}"
        fixed_ips:
          - ip_address: "{{ os_fixed_ip }}"

    - name: Launch instance
      os_server:
        state: present
        name: "{{ os_hostname }}"
        timeout: 200
        flavor: "{{ os_flavor }}"
        nics:
          - port-name: "{{ os_portname }}"
        security_groups: "{{ os_hostname }}"
        floating_ips:
          - "{{ os_floating_ip }}"
        image: "{{ os_image }}"
        meta:
          hostname: "{{ os_hostname }}"

And a few minutes later I had a working VPS again. While converting and uploading I made the necessary DNS changes and by the time my VPS was running happily on Fuga all DNS entries pointed to the new IP address.

The post Moved to Fuga appeared first on autostatic.com.

## March 02, 2017

### Libre Music Production - Articles, Tutorials and News

#### Open Stage Control, v0.17.0 is released, now with MIDI support!

Open Stage Control has just seen the release of v0.17.0. Open Stage Control is a libre desktop OSC bi-directional control surface application built with HTML, JavaScript & CSS and run as a Node / Electron web server that accepts any number of Chrome / Chromium / Electron clients.

## March 01, 2017

### Scores of Beauty

#### LilyPond at the Google Summer of Code 2017

LilyPond has mentored students’ projects several times in the “Google Summer of Code” program in previous years, and this year we intend to take that to a new level: both the LilyPond and Frescobaldi projects will be able to accept students. (Frescobaldi is one of the two major LilyPond editing environments, the other being Denemo.) Students can now consider suitable projects to apply for; applications are open from March 20 to April 3, 2017.

Google Summer of Code (GSoC) is a grant program funded by Google to support and drive forward (Free) Open Source Software development. Of course it is not a charity and serves an economic purpose for Google in the long run, but as a project itself there is no substantial catch to it, and many respected FLOSS projects such as Mozilla, LibreOffice and The GNU Project are happy to participate regularly.

From the program website:

“Google Summer of Code is a global program focused on bringing more student developers into open source software development. Students work with an open source organization on a 3 month programming project during their break from school.”

The idea is to create additional value by funding students (who have to be enrolled as full-time students at an official university) to work with an Open Source project. It is obviously a nice thing for a student to be able to earn money by programming instead of doing arbitrary temporary work over the summer break. And it is also a nice thing for Open Source projects to have someone get paid to do work that might otherwise not get done. But there’s more to it: the student not only gets paid for some work but also becomes familiar with Open Source development in general and may (hopefully) become a respected member of the community he/she is working with. And the project doesn’t only get some “things done” but hopefully gains a new contributor beyond GSoC.

#### LilyPond @ GSoC 2017

GNU LilyPond has been part of GSoC under the umbrella of The GNU Project, and we accepted students in 2012, 2015 and 2016. For 2017 GNU has been accepted again, so LilyPond is open for students’ applications as well. But in addition the Frescobaldi LilyPond editor has also applied as a mentoring organization for 2017 – and has been accepted too! So this year there is even more room and a wider range of options for students to join GSoC and get involved with improving LilyPond and its ecosystem.

If you think this is an interesting thing but don’t want to apply for it yourself please do us a favor and still share this information (e.g. the link to this post) as widely as possible. If you are not completely sure if you are eligible for participating you may start reading the program’s FAQ page. Otherwise you may directly start looking at the Project Ideas pages where we have put together a number of project suggestions with varying difficulties and required skills (programming languages) that we consider both very important for our development and suitable for a student’s summer project. Please note that students may as well come up with their own project suggestions (which would be welcome because it implies a deep personal interest in the idea). The following bullet list gives an overview of the range of possibilities while the project pages give much more detail.

LilyPond GSoC page

• Internal representation of chord structure
• Adopt the SMuFL font encoding standard
• Adding variants of font glyphs
• Create (the foundation for) a Contemporary Notation library
• (Re-)create a LilyPond extension for LibreOffice
• Testing/documentation infrastructure for openLilyLib
• Work on MusicXML import/export

Frescobaldi GSoC page

• Enhancing the Manuscript Viewer
• Improve MIDI/sound support
• Add support for version control (Git)
• Implement system-by-system handling of scores (partial compilation)
• Add a user interface for openLilyLib packages
• Improving the internal document tree representation
• Improve Frescobaldi’s MusicXML export (possibly in conjunction with the LilyPond project)

See you on the mailing lists!

## February 28, 2017

### Linux – CDM Create Digital Music

#### Someone at Bitwig is working with Ableton Link on GitHub

One postlude to the Bitwig announcement – yes, someone at Bitwig has forked Ableton Link support. Have a look:

The reason is interesting – ALSA clock support on Linux, which would make working with Link on that OS more practical.

Now, Ableton has no obligation to support Bitwig as far as integrating Link into the shipping version of Bitwig Studio. Proprietary applications not wanting to release their own code as GPLv2 need a separate license. On the other hand, this Linux note suggests why it could be useful – Bitwig are one of the few end user-friendly developers working on desktop Linux software. (The makers of Renoise and Ardour / Harrison MixBus are a couple of the others; Renoise would be welcome.) But we’ll see if this actually happens.

In the meantime, Bitwig are contributing back support for Linux to the project:

The post Someone at Bitwig is working with Ableton Link on GitHub appeared first on CDM Create Digital Music.

#### Bitwig Studio 2 is here, and it’s full of modulators and gadgets

Go go gadget DAW. That’s the feeling of Bitwig Studio 2, which is packed with new devices, a new approach to modulation, and hardware integration.

Just a few of these on their own might not really be news, but Bitwig has a lot of them. Put them together, and you’ve got a whole lot of potential machinery to inspire your next musical idea, in the box, with hardware, or with some combination.

And much as I love playing live and improvising with my hands, it’s also nice to have some clever machinery that gets you out of your usual habits – the harmonies that tend to fall under your fingers, the lame rhythms (okay, it’s myself I’m talking about now) that you’re able to play on pads.

Bitwig 2 is full of machinery. It’s not the complete modular environment we might still be dreaming of, but it’s a box full of very powerful, simple toys which can be combined into much more complex stuff, especially once you add hardware to it.

A few features have made it into the final Bitwig Studio 2 that weren’t made public when it was first announced a few weeks ago.

That includes some new devices (Dual Pan!), MIDI Song Select (useful for triggering patterns and songs on external hardware like drum machines), and controller API additions.

The controller API is a dream if you’ve come from (cough) a particular rival tool. Now you can code in JavaScript, with interactive feedback, and performance – already quite nice – has been improved.

I’m just going to paste the whole list of what’s new, because this particular update is best understood as a “whole big bag of new things”:

• A re-conceptualized Modulation System
• Numerous device updates, including dynamic displays and spectrum analyzers
• Remote controls
• VST3 support
• Better hardware integration
• Smart tool switching
• Improved editor workflow
• MIDI timecode support
• Dashboard
• Controller API improvements
• …and much more

25 ALL NEW MODULATORS

4-Stage
AHDSR
Audio Sidechain
Beat LFO
Button
Buttons
Classic LFO
Envelope Follower
Expressions
HW CV In
Keytrack
LFO
Macro-4
Macro
Math
MIDI
Mix
Note Sidechain
Random
Select-4
Steps
Vector-4
Vector-8
XY

17 ENTIRELY NEW DEVICES

Audio FX

Spectrum analyzer
Pitch shifter
Treemonster
Phaser
Dual Pan

Hardware Integration Devices

MIDI CC
MIDI Program Change
MIDI Song Select
HW Clock Out
HW CV Instrument
HW CV Out

Note Effects

Multi-Note
Note Echo
Note Harmonizer
Note Latch
Note Length
Note Velocity

At some point, we imagined what we might get from Bitwig – beneath that Ableton-style arrangement and clip view and devices – was a bare-bones circuit-building modular, something with which you could build anything from scratch. And sure enough, Bitwig were clear that every function we saw in the software was created behind the scenes in just such an environment.

But Bitwig haven’t yet opened up those tools to the general public, even as they use them in their own development workflow. Still, the new set of modulation tools added to version 2 shouldn’t be dismissed – indeed, it could appeal to a wider audience.

Instead of a breadboard and wires and soldering iron, in other words, imagine Bitwig have given us a box of LEGO. These are higher-level, friendlier, simple building blocks that can nonetheless be combined into an array of shapes.

To see what that might look like, we can see what people in the Bitwig community are doing with it. Take producer Polarity, who’s building a free set of presets. That free download already sounds interesting, but maybe just as interesting is the way in which he’s going about it. Via Facebook:

The modulation approach is, I think, best compared to Propellerhead Reason – even though Reason has its own UI paradigm (with virtual patch cords) and a very distinct set of devices. But while I wouldn’t directly compare Reason and Bitwig Studio, I think what each can offer is the ability to create deeply customized performance and production environments with simple tools – Reason’s behaving a bit more like hardware, and Bitwig’s being firmly rooted in software.

There’s also a lot of stuff in Bitwig Studio in the way of modernization that’s sorely missing from other DAWs, and notably Ableton Live. These have accumulated in a series of releases – minor on their own, but starting to paint a picture of some of what other tools should have. Just a few I’d like to see elsewhere:

• Plug-in sandboxing for standard formats that doesn’t bring down the whole DAW.
• Extensive touch support (relevant to a lot of new Windows hardware)
• Support for expressive MIDI control and high-resolution, expressive automation, including devices like the ROLI hardware and Linnstrument (MPE).
• An open controller API – one that anyone can use, and that allows hardware control to be extended easily.
• The ability to open multiple files at once (yeah, kind of silly we have to even say that – and it’s not just Ableton with this limitation).
• All that, and you can install Bitwig on Linux, too, as well as take advantage of what are now some pretty great Windows tablets and devices like the Surface line.

There’s also the sense that Bitwig’s engineering is in order, whereas more ‘legacy’ tools suffer from unpredictable stability or long load times. That stuff is just happiness killing when you’re making music, and it matters.

So, in that regard, I hope Bitwig Studio 2 gets the attention of some of its rivals.

But at the same time, Bitwig is taking on a character on its own. And that’s important, too, because one tool is never going to work for everyone.

Find out more:
https://www.bitwig.com/en/bitwig-studio/bitwig-studio-2

The post Bitwig Studio 2 is here, and it’s full of modulators and gadgets appeared first on CDM Create Digital Music.

### ardour

#### Ardour 5.8 released

Although Ardour 5.6 contained some really great new features and important fixes, it turned out to contain a number of important regressions compared to 5.5. Some were easily noticed and some were more obscure. Nobody is happy when this happens, and we apologize for any frustration or issues that arose from inadequate testing of 5.6.

To address these problems, we are making a quick "hotfix" release of Ardour 5.8, which also brings the usual collection of small features and other bug fixes.

Linux distributions are asked to promptly replace 5.6 with 5.8, to reduce issues for Ardour users who get the program directly via their software management tools.

Read more below for full details ...

## February 27, 2017

### open-source – CDM Create Digital Music

#### Now you can sync up live visuals with Ableton Link

Ableton Link has already proven itself as a way of syncing up Ableton Live, mobile apps (iOS), and various desktop apps (Reason, Traktor, Maschine, and more), in various combinations. Now, we’re seeing support for live visuals and VJing, too. Three major Mac apps have added native Ableton Link support for jamming in the last couple of weeks: CoGe, VDMX, and a new app from Mixvibes called Remixvideo. Each of those is somewhat modular in fashion, too.

Oh, and since the whole point of Ableton Link is adding synchronization over wireless or wired networks with any number of people jamming, you might use several of these apps together.

### CoGe

Here’s a look at CoGe’s Ableton Link support, which shows both how easy configuration is, and how this can be used musically. In this case, the video clip is stretching to the bar — making CoGe’s video clips roughly analogous to Ableton Live’s audio clips and patterns:

CoGe is 126.48€, covering two computers – so you could sync up two instances of CoGe to separate projectors, for instance, using Link. (And as per usual, you might not necessarily even use Ableton Live at all – it might be multiple visual machines, or Reason, or an app, or whatever.)

http://imimot.com/cogevj/

### VDMX

VDMX is perhaps an even bigger deal, just in terms of its significant market share in the VJ world, at least in my experience. This means this whole thing is about to hit prime time in visuals the way it has in music.

VDMX has loads of stuff that is relevant to clock, including LFOs and sequencers. See this screen shot for some of that:

Here are the developer’s thoughts from late last week:

VDMX and Ableton Link integration [Vidvox Blog]

Also, they reflect on the value of open source in this project (the desktop SDK is available on GitHub). They’ve got a complete statement on how open source contributions have helped them make better software:

That could easily be the subject of a separate story on CDM, but open source in visuals has helped make live performance-ready video possible (Vidvox’s own open Hap), has made inter-app visuals a reality (Syphon), and has built a shader format that allows high-performance GPU code to be shared between software.

### Mixvibes

I actually forgot to include this one – I’m working on a separate article on it. Remixvideo, from Mixvibes, is a new app for mixing video and audio samples in sync. It was just introduced for Mac this month, and with sync in mind, included Ableton Link support right out of the gate. (That actually means it beat the other two apps here to market with Link support for visuals.) It runs in VST and AU – where host clock means Link isn’t strictly necessary – but also runs in a standalone mode with Link support.

This is well worth a look, as it stakes out a unique place in the market; I’ll cover it in a separate test.

http://www.mixvibes.com/remixvideo

### Now go jam

So those are three great Mac tools. There’s nothing I can share publicly yet, but other visual software developers have told me they plan to implement Ableton Link, too. That adds to the tool’s momentum as a de facto standard.

Now, getting together visuals and music is easier, as is having jam sessions with multiple visual artists. You can easily tightly clock video clips or generative visuals in these tools to song position in supported music software, too.

I remember attending various music and visual jams in New York years ago; those could easily have benefited from this. It’ll be interesting to see what people do.

Watch CDM for the latest news on other visual software; I expect we’ll have more to share fairly soon.

The post Now you can sync up live visuals with Ableton Link appeared first on CDM Create Digital Music.

### GStreamer News

#### GStreamer 1.11.2 unstable release (binaries)

Pre-built binary images of the 1.11.2 unstable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

## February 24, 2017

### GStreamer News

#### GStreamer 1.11.2 unstable release

The GStreamer team is pleased to announce the second release of the unstable 1.11 release series. The 1.11 release series is adding new features on top of the 1.0, 1.2, 1.4, 1.6, 1.8 and 1.10 series and is part of the API- and ABI-stable 1.x release series of the GStreamer multimedia framework. The unstable 1.11 release series will lead to the stable 1.12 release series in the coming weeks. Any newly added API can still change until that point.

Full release notes will be provided at some point during the 1.11 release cycle, highlighting all the new features, bugfixes, performance optimizations and other important changes.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the next days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

#### GStreamer 1.10.4 stable release (binaries)

Pre-built binary images of the 1.10.4 stable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

See /releases/1.10/ for the full list of changes.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

## February 23, 2017

### autostatic.com

#### Moving to OpenStack

In the coming days I’m going to move the VPS on which this blog resides from VMware to the Fuga OpenStack cloud. Not because I have to but hey, if I can host my stuff on a fully open source based cloud instead of a proprietary solution the decision is simple. And Fuga has been around for a while now, it’s rock solid and as I have a lot of freedom within my OpenStack tenant I can do with my VPS whatever I want when it comes to resources.

Moving the VM will cause some downtime. I’ve opted to shut down the VM, then copy it from the ESXi host on which it lives to a server with enough storage that has the libguestfs-tools package (so that I can do some customization) and the python-openstackclient package (so that I can easily upload the customized image to OpenStack). Then I need to deploy an OpenStack instance from that uploaded image, switch the DNS, and my server should be back online.
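The steps above boil down to a short command sequence. This is only a sketch: the disk, image, flavor and network names here are hypothetical placeholders, and the exact customization you do with libguestfs-tools will depend on your own setup.

```shell
# 1. Customize the copied VMware disk offline with libguestfs-tools,
#    e.g. install cloud-init so the instance picks up OpenStack metadata.
virt-customize -a blog-vps.vmdk --install cloud-init

# 2. Convert the VMware disk to qcow2 and upload it
#    with python-openstackclient.
qemu-img convert -O qcow2 blog-vps.vmdk blog-vps.qcow2
openstack image create --disk-format qcow2 --file blog-vps.qcow2 blog-vps

# 3. Boot an instance from the uploaded image; afterwards,
#    point the DNS records at its new address.
openstack server create --image blog-vps --flavor standard.small \
    --network public blog-vps-instance
```

With credentials for the tenant sourced into the environment (the usual `OS_*` variables), that should be all there is to it.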

The post Moving to OpenStack appeared first on autostatic.com.

### GStreamer News

#### GStreamer 1.10.4 stable release

The GStreamer team is pleased to announce the fourth bugfix release in the stable 1.10 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.10.0. For a full list of bugfixes see Bugzilla.

See /releases/1.10/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.