# planet.linuxaudio.org

## February 16, 2018

### GStreamer News

#### GStreamer 1.13.1 unstable development release

The GStreamer team is pleased to announce the first development release in the unstable 1.13 release series.

The unstable 1.13 release series adds new features on top of the current stable 1.12 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

The unstable 1.13 release series is for testing and development purposes in the lead-up to the stable 1.14 series, which is scheduled for release in a few weeks' time. Any newly-added API can still change until that point, although it is rare for that to happen.

Full release notes will be provided in the near future, highlighting all the new features, bugfixes, performance optimizations and other important changes.

Packagers: please note that quite a few plugins and libraries have moved between modules, so please take extra care and make sure inter-module version dependencies are such that users can only upgrade all modules in one go, instead of seeing a mix of 1.13 and 1.12 on their system.

Binaries for Android, iOS, Mac OS X and Windows will be provided shortly.

## February 15, 2018

### open-source – CDM Create Digital Music

#### Glitch Delay is a DIY module you can make – and it makes lovely music

Glitch Delay is a simple but fluid, musical effect for Eurorack modulars, with all the instructions free online. And it makes beautiful music, too.

Scott Pitkethly performs music as Cutlasses, and took on this project starting with the tiny, cheap, powerful Teensy USB prototyping platform. With that as the base, he created a Eurorack-compatible sound processor for modular systems. And, wow – sometimes simple, digital effects have their own special charm in music. Take a look – and love that lunchbox Euro case!

The firmware, schematics, and bill of materials and instructions are all online free, under an open license – so you can try creating one of these yourself, or even make and share your own modifications. The idea is to build some basic resources for people wanting to make their own audio processing modules.

https://github.com/cutlasses/GlitchDelayV2

You don’t have to restrict yourself to Eurorack hardware, though. Apart from potentially being of interest as desktop hardware or a guitar pedal, you can also run the same code as a plug-in in your software host or on a mobile device, because it’s all built in JUCE. So, if you prefer a VST plug-in to a Eurorack module, you’ll want to check out the cross-platform plug-in:

http://www.cutlasses.co.uk/tech/glitch-delay/

Scott tells us a little bit about his inspiration:

To give you some background, I got into making my own hardware when I ended up building a MIDI pedal so I could control Ableton whilst playing guitar. The ones on the market were either too big or too expensive. I’m a programmer by trade, so I can handle coding, but still consider myself a bit of a novice when it comes to analogue electronics. That’s when I discovered Teensy (similar to Arduino, but smaller and faster). When I found out there was an audio library, I started playing around with that. I made the AudioFreeze first:

I was inspired to make the Glitch Delay (probably needs a less generic name) after seeing some of the demos from the monome aleph [“audio computer”], and it coincided nicely with the release of the Teensy 3.6, which has 4x the memory and a faster processor (than the 3.2). During development, I found debugging on the Teensy quite difficult (no debugger), so I wrote some code in JUCE to allow me to code on the Mac, keeping the interfaces the same, so I could find a lot of the bugs on the Mac, with a proper debugger!

https://github.com/cutlasses/TeensyJuce

All the source code for the Glitch Delay is shared on GitHub, along with the schematics, etc. I’ve learned from lots of schematics that other people have shared, so I like to share in return.

I used the Glitch Delay extensively on my album, and also made this video to demonstrate some of the effects I’ve made processing an old autoharp.

Scott has begun a series explaining how this all works; part 1 is already up with more on software coming soon:

http://www.cutlasses.co.uk/tech/glitch-delay-how-it-works-part-1/

What I love about the end of this story is that all the craft of working with the technology at this low level makes what is, to me, gorgeous, personal music with a raw aesthetic that seems interwoven with how it’s made. That is, the assembly of the code and hardware is tied up with the textures that result, in eerie, meditative digital sonic surfaces against organic acoustic rhythms. Have a listen:

Clutching At Conscious by Cutlasses

https://cutlasses.bandcamp.com/releases

Thanks, Scott! We’ll be watching – and trying this out!

The post Glitch Delay is a DIY module you can make – and it makes lovely music appeared first on CDM Create Digital Music.

## February 12, 2018

### open-source – CDM Create Digital Music

#### Miss Nord Modular? This hack runs your patches as free software

The Nord Modular G2 is one of electronic music’s most beloved departed pieces of gear. Now it gets a second lease on life, for free – with Csound.

You’d be forgiven for not knowing this happened. The work was published as an academic paper in Finland last June, authored by three Russian engineers – one of whom works on nuclear physics research, no less. (It’s not the right image, but if you want to imagine something involving submarines, go for it. That’s where I want my next sound studio, inside a decommissioned nuclear sub from the USSR, sort of Thomas Dolby meets Hunt for Red October. But I digress.)

Anyway, Gleb Rogozinsky, Mihail Chesnokov, and Eugene Cherny, all of St. Petersburg, had a terrific idea. They chose to simulate the behavior of the Nord Modular G2 synth itself, and translate its patch files into use as Csound – the powerful, elegant free software that has a lineage to the first computer synth.

The upshot: patches (including those you found on the Web) now work on any computer – Mac, Windows, Linux, even Linux machines like the Raspberry Pi – for free. And the graphical editor that lets you create Nord Modular patches just became a peculiar Nord-specific editor for Csound. (Okay, there are other visual editors for Csound, but that’s still cool, and the editor is still available for Mac and Windows free from the original manufacturer, Clavia.)

And best of all, if you have patches you created on the Nord Modular, now they’ll work for all eternity – or, rather, at least as long as human civilization lasts and keeps making computers, as I’m pretty sure Csound will remain with us that long. Let’s hope that’s… not a short period of time, of course.

pch2csd: an application for converting Nord Modular G2 patches
into Csound code
[Proceedings of the 14th Sound and Music Computing Conference]

Then give it a go – all you need is a machine that runs Python and a couple of copy-pasted lines of code:

https://github.com/gleb812/pch2csd

Nord say they have no plans to bring back the hardware, but check the updated software on their site.

Thanks for the tip, Ted Pallas!

The post Miss Nord Modular? This hack runs your patches as free software appeared first on CDM Create Digital Music.

### Libre Music Production - Articles, Tutorials and News

#### Libre Music Production is back online!

Two weeks ago the linuxaudio.org server was compromised. As this is where LMP is hosted, our site went down.

Everything should be working again. Please report any issues using the contact form.

### MOD Devices Blog

#### Tutorial: Wireless Control Chain devices

Hi again to all!

Looking at the Control Chain Arduino shield, there seem to be unlimited possibilities.

However, a lot of things require a sensor to be moved or attached to the musician. Having extra wires hanging around makes everything a lot less practical. So, to make even more awesome things possible, let’s see how we can make a wireless Control Chain device with the Arduino shield.

## WHAT DO I NEED?

1. One Arduino Uno or Due
2. One Arduino Control Chain shield
3. Two serial Bluetooth modules
4. One ATTiny 85
5. One DIP8 IC socket
6. A sensor you want to use
7. Some Prototype PCB
8. Some wire
9. (Optional) Something to put your build in.

## SCHEMATIC

The schematic for this build consists of 2 different modules.

The schematic for the receiver is really straightforward. Just connect the Vin and GND of the Bluetooth module to the 5V and GND track on the CC shield. Then, connect the serial wires of the Bluetooth module to the Serial 3 port of the Arduino.

We do this because we need another hardware serial port. The reason we use a hardware serial port is that the interrupts used by the Control Chain library will cause problems with software serial libraries.

Schematic for the transmitter build

The schematic for the transmitter consists of 3 AA batteries, an ATTiny 85, an HM-10 BLE module, and of course a sensor. The ATTiny will communicate with the HM-10, and the HM-10 will act as a wireless serial bridge. To power it all, we used three AA batteries adding up to 4.5V, which is perfect for this application.

This setup leaves us with 3 available pins to use as digital I/O. The reset pin can also be used as I/O as long as you don’t pull it down to ground (or too close to ground). So, for example, you could use it for analog input but you have to keep it in a voltage range that is far enough above 0V so it won’t cause a reset.

## USING BLUETOOTH MODULES

Depending on what Bluetooth modules you’re planning to use, the next few steps may vary.

For this prototype, I’m using some HM-10 modules, which can be programmed with AT commands. This can be done with any serial interface connected to the Rx and Tx of the HM-10. To keep things simple, I used the same Arduino Due that we are going to use later on.

You can connect the serial 0 port of the Arduino to the serial port on the Bluetooth module and communicate with it through the Arduino IDE’s serial monitor.

First, you will need to query the native MAC address by sending “AT+ADDR?”. The module should return something like: OK+COND43639BBE467. In this case, the MAC address is D43639BBE467. You should first get the MAC address of both modules before continuing.

After that, you can make the modules connect. Let’s say we have module A and B. Send “AT+CON[address]” to module A with the MAC address of module B filled in and vice versa. The last thing is to send AT+ROLE0 to one and AT+ROLE1 to the other (for this application it doesn’t matter which one is master or slave).
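Purely as an illustration (none of this is code from the tutorial), the reply and command formats described above can be handled in a few lines of C++. The "OK+CON" prefix is taken from the example reply quoted earlier; real module firmwares may format replies differently:

```cpp
#include <string>

// Extract the 12-digit MAC address from a module reply such as
// "OK+COND43639BBE467" -> "D43639BBE467". The "OK+CON" prefix matches
// the example reply shown above; other firmware revisions may differ.
std::string extractMac(const std::string& reply) {
    const std::string prefix = "OK+CON";
    if (reply.rfind(prefix, 0) != 0 || reply.size() < prefix.size() + 12)
        return "";  // unrecognized reply
    return reply.substr(prefix.size(), 12);
}

// Build the pairing command for the peer module, e.g. "AT+COND43639BBE467".
std::string makeConnectCommand(const std::string& peerMac) {
    return "AT+CON" + peerMac;
}
```

You would send the string returned by makeConnectCommand() to module A with module B’s address, and vice versa, exactly as described above.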

## THE ATTINY 85 CODE

The ATtiny code consists of nothing more than an analogRead() call for the sensor value and a Serial.write() call to send the data to the Arduino.

To upload the code to the ATTiny you can use your Arduino as done in this tutorial.

The code of the Arduino is pretty straightforward and consists of the needed Control Chain configurations, a Serial read function to read the given values and a little bit of scaling so our values fit nicely in our set range.
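As a rough sketch of that last scaling step (the 0..127 output range here is a made-up placeholder, not the range used in the actual firmware), the usual Arduino map() arithmetic looks like this in plain C++:

```cpp
// Linear rescaling, equivalent to Arduino's map(): shift x into the
// output range proportionally to its position in the input range.
long rescale(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// Clamp a 10-bit ADC reading (0..1023) and scale it into a hypothetical
// 0..127 actuator range, so the value fits nicely in the set range.
int scaleSensor(int adcReading) {
    if (adcReading < 0) adcReading = 0;
    if (adcReading > 1023) adcReading = 1023;
    return static_cast<int>(rescale(adcReading, 0, 1023, 0, 127));
}
```

Clamping before scaling keeps noisy or out-of-range sensor readings from overshooting the actuator range.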

## THE BUILD

1. Solder the DIP8 IC socket onto the prototype PCB
2. Solder the pin-header for the Bluetooth module on the prototype PCB
3. Wire your sensor to the ATTiny and wire the modules as shown in the schematic
4. Place the ATTiny and the Bluetooth module
5. Solder the pin-header for the other Bluetooth module on the Control Chain Arduino shield
6. Place the other Bluetooth module on the Arduino.

1. Follow the instructions on our Github page and install the dependencies
2. Program your Bluetooth modules as explained above.

All done, time to test!

Connect the CC shield to your MOD DUO. If everything went well you should see a new CC device popping up.

Success!

Turn on the ATTiny 85 and HM-10, wait for the modules to connect (the LED on both modules should stop blinking) and assign the CC device to the actuator of your choice.

Address it like any actuator on the GUI

All done! You should have a working wireless Control Chain setup.

Now you may ask yourself: why would I even need a wireless setup for Control Chain? Well, we found it extremely useful with a wide variety of sensors. For instance, we made one with an accelerometer like this:

Another example of where we used this build is a stretchable guitar strap, made with this setup and a piece of conductive fabric like the one seen here.

Our wireless guitar strap stretch sensor

Hopefully, this is helpful for all of you who wanted to see how to make a wireless setup with Control Chain and the Arduino Shield. Also don’t hesitate to come and talk to us on the forum if you have any questions about Control Chain devices, the Arduino or anything else!

## February 09, 2018

### rncbc.org

#### rtirq update - 2018 edition

Hi there!

Almost 3 years after the last time, rtirq has been updated!

No big deal whatsoever – just that scheduling policies and priorities are now set back to SCHED_FIFO and 50, respectively, on "reset" and/or "stop". In fact there's no urge to update at all; the previous versions are still good to have around ;) It's just that things get a bit more politely correct in "reset" and/or "stop" modes.

The original packages available here:

rtirq-20180209.tar.gz
rtirq-20180209-36.src.rpm
rtirq-20180209-36.noarch.rpm

nb. the rtirq init-script/systemd-service only makes sense on real-time preemptive (PREEMPT_RT) or threadirqs enabled GNU/Linux kernels.

Cheers && Enjoy!

## February 06, 2018

### linux-audio « WordPress.com Tag Feed

#### Setting up analog surround sound on Ubuntu Linux with a 3x 3.5mm capable sound card

A while back, I received the Logitech Z506 speaker system, and on Windows, setting it up was a pretty plug-and-play experience. On Linux, however, it’s a wholly different ballgame. For one, there’s no Realtek HD Audio control panel here, so what gives? How do you get around this problem?

Introducing the tools of the trade:

You’ll want a tool such as hdajackretask for the pin re-assignments, plus pavucontrol and pavumeter for audio output monitoring afterwards. The tools are installed by running:

sudo apt-get install alsa-tools-gui pavumeter pavucontrol


When done, launch the tool with administrative privileges as shown:

gksudo hdajackretask


From here, you’ll then need to re-assign each required pin. Note that this tool, depending on your sound card, will most likely detect the pins by the color panel layout (see the back of your card and confirm whether its pins are color coded) or by the jack designator.

Either way, when you’re done and you select “Apply”, you’ll need to reboot and the settings will apply on the next startup.

Before you reboot, confirm that pulseaudio is configured to utilize the channel layout as desired.

Of note is that in /etc/pulse/daemon.conf, the following changes must be made (with your preferred text editor):

(a). For 5.1 channel sound, set: default-sample-channels = 6

(b). Ensure that enable-lfe-remixing is set to yes.

(c). The default channel map option for 5.1 audio should be set as:

front-left,front-right,lfe,front-center,rear-left,rear-right
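Taken together, the three changes above amount to a /etc/pulse/daemon.conf fragment like this (remove any leading ';' comment markers on these lines if present in your file):

```ini
default-sample-channels = 6
enable-lfe-remixing = yes
default-channel-map = front-left,front-right,lfe,front-center,rear-left,rear-right
```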


How the tool works:

The tool generates a firmware patch entry (under /lib/firmware/hda-jack-retask.fw) that's called up by a module configuration file (under /etc/modprobe.d/hda-jack-retask.conf or similar), whose settings are applied on every boot. That's what the "boot override" option does: overriding the sound card's pin assignments on every boot. To undo this once the configuration is no longer needed, just delete both files after purging hdajackretask.

An example:

To get the Clevo P751DM2-G’s audio jacks to work with the Logitech Z506 surround sound speaker system, which uses three 3.5mm jacks as input for 5.1 surround sound audio, I had to override the pins as shown in the generated configuration file below (confirm with the screenshots attached at the bottom for my use case; your mileage may vary depending on your exact sound card):

(a). Contents of /lib/firmware/hda-jack-retask.fw after setup:

[codec]
0x10ec0899 0x15587504 0

[pincfg]
0x11 0x4004d000
0x12 0x90a60140
0x14 0x90170110
0x15 0x411111f0
0x16 0x411111f0
0x17 0x01014010
0x18 0x01014011
0x19 0x411111f0
0x1a 0x01014012
0x1b 0x411111f0
0x1c 0x411111f0
0x1d 0x40350d29
0x1e 0x01441120
0x1f 0x411111f0


(b). Contents of the /etc/modprobe.d/hda-jack-retask.conf file after setup:

# This file was added by the program 'hda-jack-retask'.
# If you want to revert the changes made by this program, you can simply erase this file and reboot your computer.


Then I rebooted the system and confirmed the successful override by grepping dmesg after boot:

dmesg | grep hda-jack-retask


Output:

[    5.183912] snd_hda_intel 0000:00:1f.3: Applying patch firmware 'hda-jack-retask.fw'
[    5.184524] snd_hda_intel 0000:01:00.1: Applying patch firmware 'hda-jack-retask.fw'


Confirming the 3.5mm audio jack connections to the sound card on the laptop/motherboard setup:

On the rear of the Logitech system, all the I/Os are color coded. In my case, I swapped the GREEN line with the YELLOW line such that the GREEN line feed would correspond to the Center/LFE feed, as it does on Windows under the Realtek HD Audio manager panel. Then, on the computer, I connected the feeds in the order, top to bottom: Yellow, Green then Black at the very end.

Final step after reboot to use the new setup:

Use pavucontrol (search for it in the app launcher or launch from terminal) and under the configuration tab, select the "Analog Surround 5.1 Output" profile. This is important, because apps won’t use your speaker layout UNTIL this is selected.

When done, you can verify your setup (as shown below) with the sound settings applet on Ubuntu by running the audio tests. Confirm that audio is routed correctly to each speaker. If not, remap the pin layout with hdajackretask and retest.

Screen shots of success:


Now enjoy great surround sound from your sound card.

## February 03, 2018

### blog4

#### Tina Mariane Krogh Madsen noise concert Berlin

Tina Mariane Krogh Madsen is playing this Sunday, 4 February, in Berlin at Noiseberg:

http://noiseberg.org

Atelier Äuglein
Oppelner Strasse 12, 10997 Berlin

Sonntag 4. Feb 17:00

Tina MK Madsen — wearable sounds /denmark
http://tms.tmkm.dk/

Giovanni Verga — recycled electroacoustics /italy
https://fieldoscope.bandcamp.com/

Pollution — intangible blanket /italy & netherlands
https://thepollution.bandcamp.com/

Tonesucker — grave digging drone /united kingdom
https://onomaresearch.bandcamp.com/album/memento-mori

Eric Wong & Mark Alban Lotz — vibrating molecules / hong-kong & germany
https://soundcloud.com/ericszehonwong
http://www.lotzofmusic.com/

--

Danish visual artist and researcher Tina Mariane Krogh Madsen primarily works with performance art, sound and open technology. Her piece Little Fists weaves together motion-triggered sounds, highlighting both patterns and freak occurrences, ultimately generating dark visions populated with slowly evolving textures and anguishing sonic artefacts.

A careful navigation in hazardous waters punctuated by abrupt transformations, the music of British drone band Tonesucker layers electric guitars, modular synthesizers, self-made electronic instruments and field recordings into intense mythogeosonic soundscapes—a map of pulses, neural noises and indistinct chimes. There is no telling whether any of this is real.

Basing their performance on the digital/analog dichotomy, Pollution force opposites to coexist. Natural and artificial, order and chaos contaminate each other in a continuous flow of feedback. The result is a total fusion and mutual contagion – sounds becoming both cause and effect, a vibrating unit contracting in all directions. The duo's compositions are inspired by formal techniques found in century-old cinema and surrealist painting, as well as concrete music, electronic synthesis and free composition.

Electronic musician Giovanni Verga manipulates synthesizers and self-built instruments into organic, repetitive musical images, reflecting like an electroacoustic fata morgana. Faint, subtle and somewhat miraculous, his sounds function as a whole, from sources difficult to individuate, generating new living entities for the duration of a performance.

Composer and flutist Mark Alban Lotz and guitarist Eric Wong join their skills for the duration of an improvised piece, during which subtle melodies made of vibrating molecules are expected to be generated and degenerated.

--

Expected Schedule
5.00 = Doors Open
5.30 = Bar — Giovanni Verga
6.15 = Studio — Tina Madsen
6.45 = Bar — Tonesucker
7.15 = Studio — Pollution
8.00 = Bar — Eric Wong & Mark Alban Lotz

--

Noiseberg is a monthly, non-profit event. You may make a donation to the musicians – it isn't mandatory, but it qualifies you as a patron of the arts, which is nice.

## February 02, 2018

### Scores of Beauty

#### OOoLilyPond: Part 2 – Optimizing

In my previous post I introduced OOoLilyPond (OLy), a relaunched LibreOffice extension to integrate LilyPond music fragments in text documents, drawings and presentations. It covered rather basic topics: downloading, installing and first steps, while today’s post will discuss the more advanced topics of customization for include files and vector graphics.

# Include files

Despite being designed for “small” snippets that don’t exceed one page, OLy can be used for demanding tasks:

Using include files helps keep your main LilyPond file clear and comprehensible. If you have definitions for functions or variables that you use in multiple LilyPond documents, it’s always a good option to have them in a separate include file. For example, openLilyLib entirely relies on include files.

When including a file that is not located in the same folder as the main .ly file (or in one of LilyPond’s default folders), its path must be specified. Apart from providing the absolute path in every include statement, there is a more elegant solution: LilyPond allows you to specify a list of folders where it looks for include files. IDEs like Frescobaldi make use of that option.

In the OLy Config dialog, you can specify a string containing one or more of such paths. That string may contain one or multiple include statements as described in the LilyPond usage manual: -I"path"

For example, if I need to specify two paths C:\Users\Klaus\Documents\MyScores\ and D:\MyLibrary\, the resulting string reads:
-I"C:/Users/Klaus/Documents/MyScores/" -I"D:/MyLibrary/"
Note that LilyPond expects the paths to be written with forward slashes, even in Windows. If you use backward slashes, OLy will replace them for you.
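The slash replacement mentioned above is simple to sketch. This fragment is only an illustration of the idea in C++, not OLy’s actual code (OLy is a LibreOffice Basic extension):

```cpp
#include <algorithm>
#include <string>

// Replace Windows-style backslashes with the forward slashes that
// LilyPond expects in include paths, even on Windows.
std::string normalizeIncludePath(std::string path) {
    std::replace(path.begin(), path.end(), '\\', '/');
    return path;
}
```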

# Graphic file formats

By default, OLy uses .png graphic files to be inserted into LibreOffice. They are easy to handle, but being a bitmap format, they don’t offer the best image quality:

Bitmap images have a certain resolution given in dpi (dots per inch). The above picture is an excerpt of a 150dpi .png file that shows a simple {bes'4} at a staff size of 20 (LilyPond’s default value).
When magnified like this, the quality drawbacks are obvious.

In its config dialog, OLy lets you specify a resolution for .png files.
Increasing the image resolution will also increase its quality. By default, a value of 300dpi is used, which will make the same excerpt look like this:

Modern printers have a resolution of at least 600dpi. Therefore, this would be the minimal value to get usable print results:

But still you can see that it is a rastered image. The transitions from black to white look somewhat blurred.
Better quality can be achieved with vector file formats. They use polygons and curves to describe the outline of every object contained in the file. Therefore they provide perfectly sharp edges at any level of magnification:

To keep up with that high quality, one would have to further increase the png resolution, which, however, would make the file size explode. The tiny examples above were 2248 bytes (150dpi), 4395 bytes (300dpi) and 9076 bytes (600dpi) in size, whereas the .svg version was 3568 bytes: only the worst of the above .png files was smaller.

LilyPond can produce various vector file formats: Apart from .pdf, certainly its most important format, also .ps, .eps and .svg are available. However, LibreOffice cannot handle all of them:
.pdf images can be imported into LibreOffice as of version 5.3, but embedded fonts are not (yet?) recognized, hence all musical glyphs would be missing.
.eps images can only be used in OpenOffice, with the further limitation that they are not visible on screen and in pdf output (at least, for Windows there’s a workaround). In LibreOffice, they cannot be used at all.
Neither LibreOffice nor OpenOffice can import .ps files.

Finally, there are .svg files, which are easily mastered by LibreOffice (not by OpenOffice – another reason to switch to LO).
In OLy’s Config dialog, you will also find a “Format” dropdown box that you can set to “svg” instead of “png”. However, to use this format with OLy, two important things have to be considered:

## 1. No automatic cropping

Many templates in OLy make use of \include "lilypond-book-preamble.ly". This will not work with the .svg backend. Therefore, for every template there is a “[SVG]” version that works without lilypond-book-preamble. That means, however, there’s no automatic cropping.
Those templates use the Line Width field for “paper” width, and a “paper” height must be specified as well:

After compiling, the resulting image will have exactly the intended dimensions:

To get rid of the unnecessary white margins, one could adjust the width and height values by trial-and-error and recompile several times. But I think it’s much easier to manually crop the image by selecting it and hitting the appropriate button (or right-clicking it and choosing the “Crop” command from the context menu):

The green dots turn into red cropping marks that can be dragged to a new position:

When editing an existing OLy object, you won’t always have to repeat the cropping procedure. In the main OLy window, you can choose to keep the object’s actual size and crop settings:

In the Config dialog, you can specify if you want this option to be turned on or off by default. There are independent settings for Writer and Impress/Draw.

## 2. Font replacement

If your musical snippet contains some text markup, it will probably happen to you that a .svg image is not displayed with the original font. Instead, some replacement font will appear:

For the above picture, a simple {c1^"What does this look like?"} has been entered using the Default [SVG] template, first rendered as .png image, second as .svg.

Now what has happened here?

In most cases, LilyPond is used to create .pdf files. Any font that is used will be embedded in the .pdf document. That means, it is displayed correctly, no matter if the font is installed on the target system or not.
This is not the case when creating .svg files: Having them displayed as intended requires to have the fonts installed on the system.

The thing is, even on your computer LilyPond can use its own fonts without them being installed, so they probably are not. You will find those font files in their respective folder
on Linux systems:
/usr/share/lilypond/2.18.2/fonts/otf/ (where 2.18.2 might have to be replaced by your actual version number),
on Windows systems:
inside your LilyPond program folder (typically C:\Program Files (x86)\LilyPond\ or C:\Program Files\LilyPond\), in the subfolder usr\share\lilypond\current\fonts\otf\.

Which font files are we looking for? That depends on your LilyPond version.

Version 2.18.2 and earlier
According to the 2.18 Notation Reference, the roman (serif) font defaults to “New Century Schoolbook”.
Well, to be exact, it’s the four files CenturySchL-Bold.otf, CenturySchL-BoldItal.otf, CenturySchL-Ital.otf and CenturySchL-Roma.otf which will show up as “Century Schoolbook L” font family.

Installing those four font files on your system might do the trick. On Linux (Ubuntu Studio 16.04) it worked well for me, but on Windows 7 only the bold and italic versions were available to the system. If that applies to you as well, just continue reading. We’re not yet at the end…

In addition, the 2.18 Notation Reference notes that there are no default fonts for sans and typewriter, so different computers will produce different output. If you want to take precautions against that, also continue reading.

Version 2.19
In the 2.19 Notation Reference you can see that LilyPond now provides fonts for all three font families: roman, sans and typewriter now default to “TeXGyreSchola”, “TeXGyreHeros” and “TeXGyreCursor”.
The first thing to do is to install them: in the otf folder shown above, locate the twelve files matching texgyre*.otf and install them on your computer. They seem to work without problems, even on Windows.

Users of Version 2.18 can download those font families from the web, e.g.:
www.fontsquirrel.com/fonts/tex-gyre-schola
www.fontsquirrel.com/fonts/tex-gyre-heros
www.fontsquirrel.com/fonts/tex-gyre-cursor

Unfortunately, the developers decided to make another change: .svg graphic files created by LilyPond 2.19 no longer contain font names by default. Instead, they use aliases like “serif”, “sans-serif” and “monospace”. While there might be some reasons for that decision, we are therefore required to take another step (LilyPond 2.18 users who want to use the new fonts should also follow these instructions):

We now have to explicitly specify the font families. To do so, we need to go to the template folder (its location can be viewed in the OLy Config dialog) and manually edit the template files that have [SVG] in their name.
Inside the \paper{...} section, there is a paragraph that has been prepared for that purpose:

% If LilyPond's default fonts are not installed and therefore "invisible" to other applications,
% you can define a replacement font here:

%{
#(define fonts
(make-pango-font-tree
(/ staff-height pt 20)))
%}

The command itself is enclosed in block comment signs %{ and %}. Putting a simple blank space between % and { will turn the first into an ordinary comment, and the code between them will become visible to LilyPond:

% {
#(define fonts
(make-pango-font-tree
(/ staff-height pt 20)))
%}

Now save the template and repeat the procedure with the other [SVG] templates.

If your templates date from OLy 0.5.0 to 0.5.3, the paragraph will look more like this:

% If LilyPond's default fonts are not installed and therefore "invisible" to other applications,
% you can define a replacement font here:

% {
% for LilyPond 2.19.11 and older, it only works like this:
#(define fonts
(make-pango-font-tree
"serif"
"sans-serif"
"monospace"
(/ staff-height pt 20)))
%}

In that case, just replace it by the code given above.

Phew! Now you should be able to use that SVG feature without further issues.

# Translation

One last thing I’d like to mention in this post is OLy’s support for different languages. The language of the user interface can be changed by selecting a language file in the Config dialog.

At the moment, only English and German are available. OLy will offer you all files that it can find in the language folder (its location can be found below in the Config dialog). Those language files contain all the strings that OLy needs for dialogs and messages.

In case you think that your native language is missing here, and you feel like contributing some work, you can translate one of those files into your language. They contain some helpful comments (which don’t need to be translated), so you will easily find your way.
If you are interested in doing such work, you can contact me via lilypond.1069038.n5.nabble.com/user/SendEmail.jtp?type=user&user=4585 or just leave a comment here on this site.

## January 30, 2018

### rncbc.org

#### Qtractor 0.8.6 - A Winter'18 Release!

Hey there!

Qtractor 0.8.6 (winter'18 beta) is out!

Not just special, maybe not even great or awesome, but kind of huge: thanks to Andreas Müller and Holger Dehnhardt for their intrepid code contributions to this one significant notch up towards the mythical v1.0 -- all hell will break loose and freeze over in the process, mind you, and then, who cares? :)

As they say, and without further ado...

Change-log for this epic release:

• Added a brand new option to auto-deactivate plugins while they cannot produce any sound, cf. main menu Track/Auto Deactivate (by Andreas Müller aka. schnitzeltony, thanks).
• Worked around native file dialogs hanging, by setting their parent widget to NULL; note that the dialogs now get their own entry in the task-bar (also by Andreas Müller aka. schnitzeltony, thanks).
• Added ARM NEON acceleration support (by Andreas Müller aka. schnitzeltony, thanks).
• Track count "limit" and a "Delta" mode flag, for momentary and encoded controllers support, have been added to MIDI Controllers generic mapping (cf. View/Controllers...; after an original pull-request by Holger Dehnhardt, thanks).
• A little hardening on the configure (autoconf) macro side.
• Fixed the current/highlighted track being left dangling after removal.
• An anti-flooding timer is now in place in MIDI Controller assignment (aka. MIDI learn) dialog.
• Add MMC Track input monitor support.
• New user preference option: View/Options.../General/Options/Reverse keyboard modifiers role (Shift/Ctrl), applied to main transport re-positioning commands: Transport/Backward, Forward, etc.
• VST Time/Transport information is now also updated as on playing when in audio export aka. freewheeling mode.
• LXVST_PATH environment variable now takes precedence over VST_PATH as Linux-native VST plug-ins search path.
• MIDI Controllers mapped to non-toggling shortcuts now work as one-shot triggers, independent of MIDI event value.

Description:

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. The target platform is Linux, where the JACK Audio Connection Kit for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, especially dedicated to the personal home-studio.

Website:

http://qtractor.org
http://qtractor.sourceforge.net
https://qtractor.sourceforge.io

Project page:

http://sourceforge.net/projects/qtractor

http://sourceforge.net/projects/qtractor/files

Git repos:

http://git.code.sf.net/p/qtractor/code
https://github.com/rncbc/qtractor.git
https://gitlab.com/rncbc/qtractor.git
https://bitbucket.org/rncbc/qtractor.git

Wiki (help wanted, always!):

http://sourceforge.net/p/qtractor/wiki/

Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Enjoy && Keep the fun.

## January 27, 2018

### blog4

#### Behringer

In the 90s, Richard D. James gave one of his infamous answers to the question of which gear Aphex Twin uses: the statement that he only uses Behringer. Obviously a joke back then, because the German company didn't offer any instruments at the time, unless no-input mixers are your thing. Fast forward to 2018, and it's a different picture. With two synthesizers released, two more in the pipeline, and additionally their so-called leak announcing that drum machines are also planned, a whole Behringer-branded studio becomes doable.

http://www.synthtopia.com/content/2018/01/27/behringer-neutron-analog-semi-modular-synthesizer/

## January 25, 2018

### KXStudio News

#### Carla 2.0 beta6 is here!

Hello again everyone, I am glad to bring you the 6th beta of the upcoming Carla 2.0 release.
It has been over one year since the last Carla release, so it was about time. :)
This should be the last beta for the 2.0 series, as the next one is planned to be release candidate 1.

There were quite some changes under the hood, mostly in a good way.
The trade-off for users is that this means losing some features, the biggest ones being VST3 and AU plugin support.
The way audio and MIDI devices are handled on Windows and macOS also changed, no longer having dynamic MIDI ports.
See the previous post about Carla to get more details on the "breaking changes".

But let's move on with the good stuff!
Here are some of the highlights for this release:

### Transport controls and Ableton Link support (experimental)

Previous releases of Carla already had basic time controls, but they lacked options for JACK transport and BPM control.
Now JACK transport is optional, transport works for non-JACK drivers, and the BPM can be adjusted manually.
Ableton Link support was added as well, as another way to sync transport, though it has not been extensively tested.
Also note that, due to missing compiler support, the current Carla macOS builds do not include Link.

Transport can misbehave when rolling back or forwards, so this feature is still classified as experimental.
The plan is to have transport stabilized when the final 2.0 version is released.

### Tweak of settings page

Carla's settings dialog received an overhaul.
Everything that was deemed unstable was moved into a new 'experimental' page, and disabled by default.
So, in order to use plugin bridges for example, you first need to enable experimental features, then the bridges.
The (experimental) features mentioned in this article all have to be enabled in the same way.
Last but not least, a page dedicated to Wine settings (wine-prefix, wine startup binary, RT variables) was added.

### Load of JACK applications as plugins (Linux only, experimental)

This is a big one... :)
It started as just an idea that became an ugly hack/test for private use only, but I soon realized it had great potential.
So I split the code used for plugin bridges and made it more generic so it could be re-used for such features.
And here we have it, JACK applications running as regular plugins inside Carla - including showing/hiding their main interface.
Applications also receive JACK transport as rolling in the host.

In this mode Carla basically becomes a self-contained JACK server, and exposes a special libjack to the client.
The client connects to Carla believing it's actually connecting to "JACK", as Carla implements libjack API through its plugin bridge mechanism.
Within Carla you first define a fixed number of audio and MIDI ports at the start.
Ports are allocated dynamically on the plugin side, but get mixed down in the end to the number of outputs selected.
This is a nice workaround for clients that dynamically register their ports, sometimes with random names too:
with Carla's jack-apps-as-plugins method, the client ports are persistent.
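The mixdown step described above can be sketched in miniature: however many ports the client registers, their buffers end up summed into the fixed set of outputs chosen in Carla. A hedged Python illustration of the idea only (not Carla's actual code; the function name and round-robin assignment are made up for the example):

```python
def mix_down(client_buffers, n_outputs):
    # Sum an arbitrary number of per-port buffers into a fixed number
    # of output buffers, assigning dynamic ports round-robin.
    if not client_buffers:
        return [[] for _ in range(n_outputs)]
    length = len(client_buffers[0])
    outputs = [[0.0] * length for _ in range(n_outputs)]
    for i, buf in enumerate(client_buffers):
        out = outputs[i % n_outputs]  # dynamic port -> fixed output slot
        for j, sample in enumerate(buf):
            out[j] += sample
    return outputs

# Three dynamically-registered client ports, mixed down to two outputs:
left, right = mix_down([[0.25, 0.5], [1.0, 2.0], [0.125, 0.25]], 2)
```

A real host does this summation per processing cycle on float32 buffers; the point is only that the client-side port count can vary while the host-side port count stays fixed.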

The full libjack API is not implemented though, only the important parts, in order to get most applications running.
The most notable missing calls are related to precise timing information and non-callback-based processing.
Also no session management is implemented at the moment.
But even without this, applications like Audacity, LMMS, Hydrogen, Renoise and VLC work.

This is a work in progress, but already working quite well considering how new it is.

### Export any loaded plugin or file as a single LV2 plugin (experimental)

Another big feature of this release is the possibility to export any plugin or sound file loaded in Carla as its own self-contained (LV2) plugin.
This can really be any regular plugin, a sound bank (e.g. an SFZ file), a plugin bridge or even JACK application.
The exported plugin will run with the smallest amount of wrapping possible between the host and the Carla-loaded plugin.
Carla will not appear at all; triggering "show ui" in the host will show the actual plugin UI.
***Note that the exported plugins are not portable! They require Carla to always be installed in the same location.***

Audio, MIDI, transport information, custom UI are fully working already.
The only missing feature at the moment is LV2 state, which needs to map to DSSI configure calls, VST chunks and other mechanisms.
Although it should work on non-Linux systems as well, this was not tested.
Testing of this feature in general is very appreciated.

### FreeBSD and other non-Linux systems

After the removal of the juce library from the code-base (as discussed before), Carla was free to support more than just the big 3 OSes.
With the help of the community, Carla is now available to install on FreeBSD through its ports system.
I was able to build and install it myself as well, and actually make good noise on a BSD system. Neat! :)
It's now possible to build Carla for GNU/Hurd and Haiku as well, and I imagine for even more systems if one so desires.
If this is something you're interested in and need some help, let me know.

### Other changes

There are quite a lot of other smaller changes made in Carla since beta5, these include:

• Added carla-rack no-midi-out mode as plugin
• Allow drag&drop of plugin binaries into Rack view
• Auto-detect wine-prefix for plugin bridges
• Expand usable MIDI keyboard keys a little (Z-M plus Q-P for 2 full octaves and 5 extra keys)
• Implement parameter text for plugin bridges
• Implement "Manage UIs" option for macOS and Windows
• Place more parameters per tab in editor dialog
• Show active peaks and enable keyboard for carla-rack group in canvas
• Knobs are now controlled in a linear way
• Previous experimental plugins removed, and carla-zynaddsubfx no longer exported
• Rack view can handle integer knobs properly
• Save and restore canvas positions (standalone only for now)

### Special Notes

• Carla as plugin and Carla-Control are still not available for Windows, likely won't be done for v2.0.

If you're using the KXStudio repositories, you can simply install "carla-git" (plus "carla-lv2" and "carla-vst" if you're so inclined).
Bug reports and feature requests are welcome! Jump on over to Carla's GitHub project page for those.

## January 18, 2018

### digital audio hacks – Hackaday

#### Recreating the Radio from Portal

If you’ve played Valve’s masterpiece Portal, there are probably plenty of details that stick in your mind even a decade after its release. The song at the end, GLaDOS, “The cake is a lie”, and so on. Part of the reason people are still talking about Portal after all these years is the imaginative world-building that went into it. One of those little nuggets of creativity stuck with [Alexander Isakov] long enough that it became his personal mission to bring it into the real world. No, it wasn’t the iconic “portal gun” or even one of the oft-quoted robotic turrets. It’s that little radio that plays a jingle when you first start the game.

Alright, so perhaps it isn’t the part of the game that we would be obsessed with turning into a real-life object. But for whatever reason, [Alexander] simply had to have that radio. Of course, this being the 21st century and all, his version isn’t actually a radio, it’s a Bluetooth speaker. Though he did go through the trouble of adding a fake display showing the same frequency that the in-game radio was tuned to.

The model he created of the Portal radio in Fusion 360 is very well done, and available on MyMiniFactory for anyone who might wish to create their own Aperture Science-themed home decor. Though fair warning, due to its size it does consume around 1 kg of plastic for all of the printed parts.

For the internal Bluetooth speaker, [Alexander] used a model which he got for free after eating three packages of potato chips. That sounds like just about the best possible way to source your components, and if anyone knows of other ways we can eat snack food and have electronics sent to our door, please let us know. Even if you don’t have the same eat-for-gear promotion running in your neck of the woods, it looks like adapting the model to a different speaker shouldn’t be too difficult. There’s certainly enough space inside, at least.

Over the years we’ve seen some very impressive Portal builds, going all the way back to the infamous levitating portal gun [Caleb Kraft] built in 2012. Yes, we’ve even seen somebody do the radio before. At this point it’s probably safe to say that Valve can add “Create cultural touchstone” to their one-sheet.

## January 16, 2018

### digital audio hacks – Hackaday

#### Fooling Speech Recognition With Hidden Voice Commands

It’s 2018, and while true hoverboards still elude humanity, some future predictions have come true. It’s now possible to talk to computers, and most of the time they might even understand you. Speech recognition is usually achieved through the use of neural networks to process audio, in a way that some suggest mimics the operation of the human brain. However, as it turns out, they can be easily fooled.

The attack begins with an audio sample, generally of a simple spoken phrase, though music can also be used. The desired text that the computer should hear instead is then fed into an algorithm along with the audio sample. This function returns a low value when the output of the speech recognition system matches the desired attack phrase. The input audio file is gradually modified using the mathematics of gradient descent, creating a result that to a human sounds like one thing, and to a machine, something else entirely.
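On a toy scale, the loop described above looks like the sketch below. The "recognizer" here is a single linear feature rather than a neural network, and all names and numbers are invented for illustration; the real attack differentiates through the full speech model:

```python
def recognize(w, x):
    # Stand-in "recognizer": one linear feature of the audio vector.
    return sum(wi * xi for wi, xi in zip(w, x))

def attack(original, w, target, c=0.01, lr=0.05, steps=500):
    # Gradient descent on (recognize(x) - target)^2 + c*||x - original||^2:
    # push the recognizer's output toward the attack target while the
    # second term penalizes how far the audible samples drift.
    x = list(original)
    for _ in range(steps):
        err = recognize(w, x) - target
        for i in range(len(x)):
            grad = 2 * err * w[i] + 2 * c * (x[i] - original[i])
            x[i] -= lr * grad
    return x

original = [0.2, -0.1, 0.4, 0.0]   # the "benign" audio
w = [1.0, 0.5, -0.5, 2.0]
adversarial = attack(original, w, target=3.0)
# recognize(w, adversarial) now sits close to the attack target of 3.0.
```

In the real setting the loss is a distance between the recognizer's transcription and the desired attack phrase, but the structure of the loop is the same.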

The audio files are available on the site for your own experimental purposes. In a noisy environment with poor audio coupling between speakers and a Google Pixel, results were poor – OK Google only heard the human phrase, not the encoded attack phrase. Given that the sound quality was poor, and the files were generated with a different speech model, this is not entirely surprising. We’d love to hear the results of your experiments in the comments.

It’s all a part of [Nicholas]’s PhD studies around the strengths and pitfalls of neural networks. It highlights the fact that neural networks don’t always work in the way we think they do. Google’s Inception is susceptible to similar attacks with images, as we’ve seen recently.

[Thanks to Wolfgang for the tip!]

## January 15, 2018

### MOD Devices Blog

#### Tutorial: LiquidCrystal and Control Chain

Dear MOD users!

Hi to all of you who want to use your Arduino Control Chain shield with an LCD screen.

We got the question on the forum some time ago whether it would be possible to use LCDs on your Control Chain device, and with Control Chain library version 0.5.0 and up, it now is!

To help out, we thought that making an example of how to do this might come in handy. So let’s see!

## WHAT DO I NEED?

1. One Arduino Uno or Due
2. One Arduino Control Chain shield
3. Two 4×20 LCD Screens
4. Four 10K linear potentiometers
5. Some wire
6. Some solder

## SCHEMATIC

Schematic for the UI extension build

The schematic for this build looks a little more complicated than the previous ones, but most of the connections are used for the displays.

Note that the schematic shows two 16×2 displays, while the code is meant for two 20×4 displays. Nonetheless, the pin-out is the same in both cases.

To keep this blog post from getting extremely long, here is a link to a more detailed description of the wiring of the LCD screens, made by Adafruit.

Because we are connecting 2 displays instead of one, we can share almost all pins between the displays. For the contrast pin, however, it is still recommended to use 2 separate potentiometers: no 2 displays are ever quite the same, and this way the contrast can still be set individually.

The big difference between this schematic and the one from the Adafruit tutorial is the ‘enable’ line of the displays (pin 6). In the Adafruit tutorial, this pin is connected to Arduino pin 8. In our case, however, we connect the ‘enable’ pin of the first display to Arduino pin 5 and the ‘enable’ pin of the second display to Arduino pin 6. Also, 4 potentiometers are connected. By default, they are connected to A0, A1, A2, and A3. But this is easily changeable, if desired.

## THE CODE

For ease of use, the LiquidCrystal library was used. This library makes writing to the LCDs a lot easier. There is, however, a downside to this approach: the library is rather slow and uses a lot of delays internally. Because of this, the example code will only work with Control Chain library version 0.5.0 or up. As it stands, the example is only compatible with the Arduino Uno, because the UART interrupt cannot be used on the Arduino Due by default. A while ago, a pull request was opened to change this. If you really want to use an Arduino Due, scroll down a little on the Control Chain library GitHub page, where you can find an explanation of how to do this.

The code is structured in such a way that it mostly depends on the Control Chain Callbacks. Whenever a value or assignment changes, the display will be updated as well. The values of the potentiometers are simply read by an analogRead() function.
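Abstractly, that callback-driven structure means the screens are only redrawn from inside the callbacks, never polled. Here is a hypothetical Python outline of the pattern, not the actual Arduino code; all class and method names are made up for illustration:

```python
class DisplayUpdater:
    """Sketch of the callback-driven pattern: the screen is redrawn
    only when the host pushes an assignment or value change."""
    def __init__(self):
        self.lines = {}      # actuator id -> text currently shown
        self.redraws = 0

    def on_assignment(self, actuator, label):
        # Callback: the host assigned a parameter to this actuator.
        self._draw(actuator, label)

    def on_value_change(self, actuator, value):
        # Callback: the host reports a new value for the parameter.
        self._draw(actuator, "value: %.2f" % value)

    def _draw(self, actuator, text):
        if self.lines.get(actuator) != text:   # skip redundant redraws
            self.lines[actuator] = text
            self.redraws += 1

ui = DisplayUpdater()
ui.on_assignment(0, "Gain")
ui.on_value_change(0, 0.5)
ui.on_value_change(0, 0.5)   # same value again -> no redraw
```

Keeping slow LCD writes out of any hot loop and behind a "did anything change?" check is exactly why the callback approach plays well with a sluggish display library.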

## THE BUILD

1. Wire up the LCD screens according to the schematic (either by soldering them directly or as recommended with the use of pin headers)
2. Connect the potentiometers’ outer pins to the +5V and GND tracks of the CC shield
3. Solder the potentiometers’ inner pins to the corresponding analog inputs of the CC shield (by default A0, A1, A2, A3)

Follow the instructions on our GitHub page to install the dependencies, then upload the code to your Arduino.

All done, time to test! Connect the CC shield to your MOD DUO. If everything went well you should see a new CC device popping up:

Success!

Also, when powered on, the device should display a message like this:

The startup message

Then, assign the CC device to the actuator of your choice:

Address it like any actuator on the GUI

After assigning, the device should look like this:

The screens show the assigned parameters

All done! You should have yourself a working Hardware UI extender.

## THE FINAL PRODUCT

You can also put your build in an enclosure. For me, that meant looking up old prototypes and again I reused one of the XF4 prototype enclosures.

The 4 extra potentiometers controller with the 2 LCD screens looking good!

Now, you have just finished your own Control Chain Hardware UI extender. Hopefully, this is helpful for all of you who wanted to see how to connect a display to the Arduino Control Shield. Don’t hesitate to come and talk to us on the forum if you have any questions about Control Chain devices, the Arduino or anything else!

## January 09, 2018

### Audio, Linux and the combination

#### new elektro project 'BhBm' : Hydrogen + analogue synths

It has been a long, long time since I posted anything here!

Let me present to you our newest elektro project, BhBm (short for "Black hole in a Beautiful mind").

All drums and samples are done with H2. Almost all bass and melody lines are analogue synths controlled by H2 via MIDI.

Softsynths and FX are done using Carla and LV2 plugins.

I use H2 as a live sequencer in stacked pattern mode, controlled by a BCR2000, so there is no 'song', only patterns that are enabled/disabled live > great fun!!

Check out our demo songs on Soundcloud :

Thijs

## January 07, 2018

### Linux Audio Conference 2018

#### Call for Papers starts!

We are happy to announce that the call for papers and works has started!
All relevant information on the submission process can now be found on this website.
Additionally, we're very pleased to have another institution (Spektrum) on board in support of this year's conference.

## January 01, 2018

### ardour

#### Ardour and Money, early 2018 edition

As 2018 gets underway, it seems like the right time for an updated report on the financial state of the Ardour project. I still occasionally see people referencing articles from several years ago that give a misleading idea of how things work these days, and it would be good to put some current and accurate information out there.

## December 29, 2017

### ArchAudio.org

#### Long awaited news

So, we hope you like the new look. This is a long-overdue update to the website to bring a consistent appearance in line with the Arch Linux style. There have been changes in the back-end as well, and that is actually what had held back the update from rolling out sooner.

There is a lot of room for improvement, but a one- or two-man show is not always enough. That is why we have created a separate forum for feedback and suggestions. For obvious issues or feature requests, please use the bugtracker.

Our long-term goal is to eventually have deep integration between the website and external databases, such as the VCS and package repositories. We have a standalone bugtracker in review currently, and we intend to replace our Flyspray installation with it. We also look forward to having an easier VCS contribution process (with regards to setting up permissions and the like).

In other news, work is currently in progress on an automated build system for a nightly repository of development builds. We will try to reflect all of our work in Subversion, including that of the website (under the “projects” directory), so that anyone may join in the fun.

That would be all for this announcement – ’til next time!

The post Long awaited news appeared first on ArchAudio.org.

#### Interview with JazzyEagle

After a time of silence here on ArchAudio.org, we’re back with lots of news about Arch and using Arch for audio purposes. In fact, we’re gonna do a few little features on users, developers and companies interested in Arch Linux for audio projects. Up-to-date packages for a variety of popular audio programs are available in the ArchAudio repositories.

Interview #1, JazzyEagle
Today we talk to JazzyEagle, an Arch user and enthusiast packager who’s been interested in Linux since the late 90’s.

ArchAudio: Tell us a bit about yourself?

JazzyEagle: I’m Jason Harrer (pronounced like Harper without the “p”) and I live in the US, around Denver, CO. I work in the HealthCare industry as a Manager of a Medical Claims processing unit. My hobbies are playing/writing music and programming computers.

I got interested in Linux way back in the late 90’s, but thanks to a modem with no Linux support at all, I couldn’t really do anything with it, so I gave it up. I tried again about 4 – 5 years ago after someone showed me Ubuntu, and, despite accidentally reformatting the hard drive multiple times when the intent was to dual boot, I ran various *buntu distros until I found Arch and fell in love with it.

JazzyEagle: I listen to all genres, but the ones I like the most are in the grunge/hard rock zone. I do play guitar, bass, keyboards and even accordion. Although I don’t claim to be able to play other stringed instruments, I have been known to pick them up and fiddle with them enough to play what I needed to on rare occasion.

ArchAudio: Wow, that’s a lot of instruments! So where does Arch come into all this?

JazzyEagle: I like the ability to build the system the way I want it without installing a bunch of stuff I don’t want, as well as the cutting/bleeding-edge rolling-release philosophy of Arch. I tried Parabola, but I’ve determined that there are some non-free things I like, so it didn’t quite fit my needs.

Outro
That’s all for now – stay tuned for the next update, about the MOD LV2 guitar pedal and how it utilizes Arch Linux on the inside!

The post Interview with JazzyEagle appeared first on ArchAudio.org.

## December 26, 2017

### ardour

#### Almost Ready to Leave Home

Ardour began its life 18 years ago, in late December 1999. The story has been told many times in many different places, but the gist of it is that I wanted a program something like ProTools that would run on Linux, and none existed. I decided to write one. I had little idea what would be involved, of course. Which was probably for the best, otherwise I would likely not have started.

## December 21, 2017

### KXStudio News

#### JACK2 1.9.12 release and future plans

A few days ago a new version of JACK2 was released.
You can grab the latest release source code at https://github.com/jackaudio/jack2/releases.
The official changelog is:

• Fix Windows build issues
• Fix build with gcc 7
• Show hint when DBus device reservation fails
• Add support for internal session files

If you did not know already, I am now maintaining JACK2 (and also JACK1).
So this latest release was brought to you by yours truly. ;)

The release was actually tagged in the git repo quite some time ago, but I was waiting to see if Windows builds were possible.
I got side-tracked with other things, and 1.9.12 ended up not being released for some time, until someone reminded me of it again... :)
There are still no updated macOS or Windows builds, but I did not want to delay the release any further because of that.
The 1.9.11 release (without the RC label) was skipped to avoid confusion between the versions.
So 1.9.12 is the latest release as of today; the macOS and Windows binaries still use an older 1.9.11 version.

Being the maintainer of both JACK1 and JACK2 means I can (more or less) decide the future of JACK.
I believe a lot of people are interested to know the current plan.

First, JACK1 is in bug-fix mode only.
I want to keep it as the go-to reference implementation of JACK, but not add any new features to it.
The reason for this is to try to get JACK1 and JACK2 to share as much code as possible.
Currently JACK2 includes its own copy of JACK headers, examples and utilities, while JACK1 uses sub-repositories.
During the course of next year (that is, 2018) I want to get JACK2 to slowly use the same stuff JACK1 does, and then switch to using the same repositories as submodules, like JACK1 does.
This will reduce the differences between the 2 implementations, and make it a lot easier to contribute to the examples and utilities provided by JACK.
(Not to mention the confusion caused by having utilities that work in similar yet different ways.)
We will keep JACK1 "frozen" until this is all done.

Second, but not least important, is to get the JACK1 specific features into JACK2.
A few things were added to JACK1 after JACK2 was created that never made it into JACK2.
This includes meta-data (JACK2 does have the API, but a non-functional one) and the new internal clients.
The purpose is to reduce the reasons users might have to switch/decide between JACK1 and JACK2.
JACK2 should have all features that JACK1 has, so that most users choose JACK2.

Now, you are probably getting the impression that the focus will be on JACK2, which is correct.
Though I realize some developers might prefer JACK1's design, the long "battle" of JACK1 and JACK2 needs to stop.
Development of new features will happen in the JACK2 codebase, and JACK1 will slowly become legacy.
Well, this is my personal plan at least.

Not sure if this can all be done in 2018, but it is better to take things slowly and get things done than to do nothing at all.
I will keep you updated on the progress throughout the year.
Happy holidays everyone!

### Libre Music Production - Articles, Tutorials and News

#### LSP plugins version 1.1.0 released. Farewell to GTK!

Vladimir Sadovnikov has just released version 1.1.0 of his audio plugin suite, LSP plugins. All LSP plugins are available in LADSPA, LV2, LinuxVST and standalone JACK formats.

## December 18, 2017

### Linux Audio Conference 2018

#### LAC 2018 is happening in Berlin

Hey all, we have some good news!

Linux Audio Conference 2018 will once again be hosted at c-base – in partnership with the Electronic Studio at TU Berlin – and we even have a date for it already!

7th - 10th June 2018

We will have a Call for Papers and a Call for Submissions at the beginning of next year.

## December 11, 2017

### GStreamer News

#### GStreamer 1.12.4 stable release (binaries)

Pre-built binary images of the 1.12.4 stable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

## December 02, 2017

### OpenAV

#### 04: Post Sonoj and Winter Plans

Hey!

It’s been a while since the last update – so what’s new in OpenAV land? Well, the Sonoj event took place, where the OpenAV Ctlra hardware access library was demoed! More details were shared about the intended goal of the Ctlra library, and what obstacles we as a community need to overcome to enable everybody to have better hardware workflows!

#### Winter Plans

OK – Ctlra made some progress, but what is going to happen over the next few weeks and months? More Ctlra library progress is expected, everything from improving the sensitivity of drum pads to adding a 7-segment display widget to the virtual device user interface.

So much for the easy part – the hard part is the mapping infrastructure for hardware and software, and OpenAV is looking at that problem and prototyping various solutions at the moment. No promises – but this is currently the #1 problem keeping hardware-based workflows from integrating well for the majority of musicians in the Linux audio community…

Stay tuned!

## November 29, 2017

### Audio – Stefan Westerfeld's blog

#### gst123-0.3.5 and playback rate adjustment

A new version of gst123, my command line media player – based on gstreamer – is available at http://space.twc.de/~stefan/gst123.php

Thanks to David Fries, this version supports playing media faster or slower than the original speed, using { [ ] } as keyboard commands. This works; however, it also changes the pitch, so for instance speech sounds unnatural if the playback rate is changed.
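The pitch change is inherent to the naive approach: playing samples back faster is just resampling, which multiplies every frequency in the signal by the rate factor. A small pure-Python illustration of the effect (nothing to do with gst123's internals; all names are invented for the example):

```python
import math

def sine(freq, sr, seconds):
    # Generate a test tone at the given frequency and sample rate.
    n = int(sr * seconds)
    return [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

def speed_up(samples, rate):
    # Naive rate change: advance the read position faster than 1 sample
    # per output sample (nearest-neighbour, no pitch correction).
    out = []
    pos = 0.0
    while int(pos) < len(samples):
        out.append(samples[int(pos)])
        pos += rate
    return out

def est_freq(samples, sr):
    # Crude pitch estimate from upward zero crossings.
    crossings = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
    return crossings * sr / len(samples)

sr = 8000
tone = sine(440.0, sr, 1.0)
faster = speed_up(tone, 1.5)
# est_freq(tone, sr) is ~440 Hz; est_freq(faster, sr) is ~660 Hz:
# the pitch went up by the same 1.5x factor as the speed.
```

Pitch-preserving speed change (what YouTube does) instead time-stretches the signal, e.g. with a phase vocoder or the WSOLA-style algorithms in librubberband.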

I’ve played around with YouTube’s speed setting a bit; it preserves pitch while changing playback speed, providing acceptable audio quality. There are open-source solutions for doing this properly: we could get comparable results if we used librubberband (GPL) to correct the pitch in the pipeline after the actual decoding. However, there is no librubberband gstreamer plugin as far as I know.

There is also playitslowly, which does the job with existing gstreamer plugins, but I think the sound quality is not as good as what librubberband would deliver.

Ideally, I think playback pitch correction should not be done in gst123 itself, as other players may want to use the feature. So if anybody feels like working on this, I think it would be a nice project to hack on. Feel free to propose patches to gst123 for pitch-corrected playback rate adjustment, and I would be happy to integrate them; but maybe it should just go into playbin (perhaps as two optional steps: 1. set the playback rate, 2. enable pitch correction), so the code could live in gstreamer.

## October 27, 2017

### Thorwil's

#### Giraffe, Tortoise? Girtoise!

Two Girtoises about to feast on cloud-rooted Bananeries on the plains of the seastern continent. These animals are also known as Toraffes or by their scientific name: Giradinoides. In German, they have the even better name Schiraffen. The Bananeries contain valuable vitamins and minerals which help the animals in maintaining smooth fur and strong shells.

Detail at full resolution:

### Technical notes

This is a completely tablet-drawn work, made with my trusty serial Wacom Intuos, still working as I keep compiling the module after every kernel update. Originally, I wanted to use Krita for its nice paintbrush engine and canvas rotation. I found the latter to be critical in achieving the smoothest curves, which is a lot easier in a horizontal direction. With what ended up being a 10000 x 10200 resolution and only 4 GiB RAM, I ran into performance problems. Where Krita failed, GIMP still worked, though I had to switch to the development version to have canvas rotation. In the end, GIMP’s PNG export failed because it could not fork a process with no memory left! Flattening the few layers to save memory led to GIMP being killed. Luckily, there’s the package xcftools with xcf2png, so I could get my final PNGs via the command line!

Filed under: Illustration, Planet Ubuntu Tagged: Apparel, GIMP, Krita, T-shirt, xcftools

## October 19, 2017

### News – Ubuntu Studio

#### Ubuntu Studio 17.10 Released

We are happy to announce the release of our latest version, Ubuntu Studio 17.10 Artful Aardvark! As a regular version, it will be supported for 9 months. Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a complete list […]

## October 07, 2017

### OpenAV

#### 03: OpenAV @ Sonoj

Hey folks!

Some of you are probably aware of the Sonoj Convention; well, OpenAV is going to be talking about hardware and software there – demonstrating the latest progress in integrating hardware controllers with audio software! Are you in the Cologne area on the 4th or 5th of November? You should attend too : )

Interested in details? We’re gonna talk about what ~2000 lines of code means to the user of Ctlra enabled software, and how 13 lines of code make that useful to a user! It enables integration of hardware in novel ways… even if you don’t have access to the hardware!

Looking forward to seeing you all at Sonoj! -Harry of OpenAV

## October 04, 2017

### aubio

#### 0.4.6 released

A new version of aubio, 0.4.6, is now available.

This version includes:

• yinfast, a new version of the YIN pitch detection algorithm, that uses spectral convolution to compute the same results as the original yin, but with a cost O(N log(N)), making it much faster than the plain implementation (O(N^2))

• Intel IPP optimisations (thanks to Eduard Mueller), available for Linux, MacOS, Windows, and Android

• improved support for emscripten (thanks to Martin Hermant), which compiles the aubio library as a JavaScript module and lets you run aubio's algorithms directly from within a web page.
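
For intuition, yinfast's speed-up comes from an algebraic identity: expanding the squared difference in YIN's difference function turns the O(N^2) sum into prefix sums of energies plus one autocorrelation, which an FFT computes in O(N log N). Below is a small, self-contained Python sketch of that identity; it is not aubio's actual implementation, and all function names are made up for illustration:

```python
import cmath
import random

def fft(a):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2])
    odd = fft(a[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def ifft(a):
    n = len(a)
    y = fft([v.conjugate() for v in a])
    return [v.conjugate() / n for v in y]

def autocorr(x):
    """Linear autocorrelation r[t] = sum_j x[j]*x[j+t] via zero-padded FFT."""
    n = 1
    while n < 2 * len(x):
        n *= 2
    X = fft([complex(v) for v in x] + [0j] * (n - len(x)))
    r = ifft([v * v.conjugate() for v in X])
    return [r[t].real for t in range(len(x))]

def yin_diff_naive(x, max_tau):
    """Plain O(N^2) difference function d(tau) = sum_j (x[j] - x[j+tau])^2."""
    return [sum((x[j] - x[j + t]) ** 2 for j in range(len(x) - t))
            for t in range(max_tau)]

def yin_diff_fast(x, max_tau):
    """Same values in O(N log N): two energy terms from prefix sums,
    plus a cross term computed as an FFT-based autocorrelation."""
    r = autocorr(x)
    cum = [0.0]
    for v in x:
        cum.append(cum[-1] + v * v)
    n = len(x)
    return [(cum[n - t] - cum[0]) + (cum[n] - cum[t]) - 2.0 * r[t]
            for t in range(max_tau)]

random.seed(1)
signal = [random.uniform(-1, 1) for _ in range(256)]
slow = yin_diff_naive(signal, 64)
fast = yin_diff_fast(signal, 64)
assert max(abs(a - b) for a, b in zip(slow, fast)) < 1e-6
```

The final assertion confirms that the fast path reproduces the naive sum to within floating-point tolerance.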

0.4.6 also comes with several bug fixes and improvements.

Many thanks to Eduard Mueller (@emuell), Martin Hermant (@MartinHN), Hannes Fritz (@hztirf), Stuart Axon (@stuaxo), Jörg (@7heW4yne), ssj71 (@ssj71), Andreas Borg (@borg), Rob (@mlrobsmt) and everyone else for their valuable contributions and input.

#### Analyzing songs online

When built with ffmpeg or libav, aubio can read most existing audio and video formats, including compressed and remote video streams. This feature lets you analyze audio streams from the web directly.

A powerful tool to do this is youtube-dl, a Python program which downloads video and audio streams to your hard drive. youtube-dl works not only with youtube, but also with a large number of other sites.

Here is a quick tutorial to use aubio along with youtube-dl.
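
A minimal sketch of that workflow, assuming youtube-dl and the aubio command-line tools are installed (and ffmpeg for the conversion); the URL and file name below are placeholders:

```shell
# Extract just the audio track of a video and convert it to WAV.
youtube-dl --extract-audio --audio-format wav --output 'song.%(ext)s' \
    'https://example.com/watch?v=...'

# Run aubio's pitch tracker over the downloaded file.
aubiopitch -i song.wav
```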

#### Suil 0.10.0

Changes:

• Add support for X11 in Gtk3
• Add support for Qt5 in Gtk2
• Add suil_init() to support early initialization and passing any necessary information that may be needed in the future (thanks Stefan Westerfeld)
• Fix minor memory errors
• Fix building with X11 against custom LV2 install path (thanks Robin Gareus)

## September 29, 2017

### Audio – Stefan Westerfeld's blog

#### SpectMorph 0.3.4 released

A new version of SpectMorph, my audio morphing software is now available on www.spectmorph.org.

The biggest addition is an ADSR envelope, which is optional, but when enabled allows overriding the natural instrument's attack and volume envelope (full list of changes).

I also created a screencast of SpectMorph which gives a quick overview of the possibilities.

## September 15, 2017

### Internet Archive - Collection: osmpodcast

#### OSMP Episode 85 - Louis, Author of the Digits Synthesizer

The Open Source Musician - connecting musicians and producers to open tools and ideas Episode 85 - Louis, Author of the Digits Synthesizer Themesong by: Møllpauken News: http://community.ardour.org/node/14325 - ardour 5.8-5.11 mostly bug fixes, Count-in functionality accessible via Transport menu, ....

This item belongs to: audio/osmpodcast.

This item has files of the following types: Archive BitTorrent, Columbia Peaks, Flac, Metadata, Ogg Vorbis, PNG, Spectrogram, VBR MP3

## September 07, 2017

### News – Ubuntu Studio

#### 17.10 Beta 1 Release

Ubuntu Studio 17.10 Artful Aardvark Beta 1 is released! It’s that time of the release cycle again. The first beta of the upcoming release of Ubuntu Studio 17.10 is here and ready for testing. You may find the images at cdimage.ubuntu.com/ubuntustudio/releases/artful/beta-1/. More information can be found in the Beta 1 Release Notes. Reporting Bugs If […]

## September 02, 2017

### fundamental code

#### Total Variation Denoising

Working with data is an important part of my day-to-day work. No matter if it’s speech, music, images, brain waves, or some other stream of data there’s plenty of it and there’s always some quality issue associated with working with the data. In this post I’m interested in providing an introduction to one technique which can be utilized to reduce the amount of noise present in some of these classes of signals.

Noise might seem abstract at first, but it’s relatively simple to quantify it. If the original signal, $x$, is known, then the noise, $n$, is any deviation in the observation, $y$, from the original signal.

$$y = x + n$$

Typically the deviation is measured via the squared error across all elements in a given signal:

$$\text{error} = ||x-y||^2_2 = \sum_i (x_i-y_i)^2$$

When only the noisy signal, $y$, is observed it is difficult to separate the noise from the signal. There is a wealth of literature on separating noise and many algorithms focus on identifying underlying repeating structures. The algorithm that this post focuses on is one which reduces the total variation over a given signal. One example of a signal with little variation is a step function:

A step function only has one point where a sample of the signal varies from the previous sample. The Total Variation denoising technique focuses on minimizing the number of points where the signal varies and the amount the signal varies at each point. Restricting signal variation works as an effective denoiser as many types of noise (e.g. white noise) contain much more variation than the underlying signal. At a high level Total Variation (TV) denoising works by minimizing the cost of the output $y$ given input signal $x$ as described below:

$$\text{cost} = \text{error}(x, y) + \text{weight}*\text{sparseness}(\text{transform}(y))$$

Mathematically the full cost of TV denoising is:

\begin{aligned} \text{cost} &= \text{error} + \text{TV-cost} \\ \text{cost} &= ||x-y||_2^2 + \lambda ||y||_{TV} \\ ||y||_{TV} &= \sum |y_i-y_{i-1}| \end{aligned}

To see how the above optimization can recover a noisy signal, let's look at a noisy version of the step function:

After using the TV norm to denoise, only a few points of variation are left:

The process of getting the final TV denoised output involves many iterations of updating where variations occur. Over the course of the iterations, opposing variations cancel out and smaller variations are driven to $\Delta y = 0$. As the number of non-zero points decreases, a sparse solution is produced and noise is eliminated. For higher values of the TV weight, $\lambda$, the solution will be more sparse. For the noisy step function, $y$ and $\Delta y$ over several iterations look like:

For piecewise constant signals, the TV norm alone works quite well; however, problems arise with the output when the original signal is not a series of flat steps. To illustrate this, consider a piecewise linear signal. When TV denoising is applied, a stair-stepping effect is created as shown below:

One of the extensions to TV based denoising is to add 'group sparsity' to the cost of variation. Standard TV denoising produces a sparse set of points with non-zero variation, yielding a few piecewise constant regions. With the TV norm, the cost of varying at a point $\Delta y_i$ within the signal does not depend upon which other points, $\Delta y_j,\Delta y_k,\text{etc}$, vary. Group Sparse Total Variation, GSTV, on the other hand, reduces the cost of smaller variations in nearby points. GSTV therefore generally produces smoother results with more gentle curves for higher order group sparsity values, as variation occurs over several nearby points rather than a single one. Applying GSTV to the previous example results in a much smoother representation which more accurately models the underlying data.
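
The plain TV iteration described above can be carried out with a simple dual projected-gradient scheme, sometimes called iterative clipping (following Selesnick's formulation). The sketch below is a minimal, plain-Python illustration rather than production code; the clipping bound of $\lambda/2$ follows from the unscaled squared-error cost used above:

```python
import random

def tv_denoise(y, lam, iterations=500):
    """1D TV denoising via projected gradient on the dual ("iterative
    clipping"): minimizes ||x - y||^2 + lam * sum_i |x[i+1] - x[i]|."""
    n = len(y)
    z = [0.0] * (n - 1)      # one dual variable per first difference
    alpha = 4.0              # >= largest eigenvalue of D D^T
    bound = lam / 2.0
    x = list(y)
    for _ in range(iterations):
        # x = y - D^T z  (adjoint of the first-difference operator D)
        x = [y[i]
             - (z[i - 1] if i > 0 else 0.0)
             + (z[i] if i < n - 1 else 0.0)
             for i in range(n)]
        # gradient step on z, then clip each entry to [-lam/2, lam/2]
        z = [max(-bound, min(bound, z[i] + (x[i + 1] - x[i]) / alpha))
             for i in range(n - 1)]
    return x

# Noisy step function, as in the post: the denoised output should end up
# much closer to the clean signal than the noisy observation is.
random.seed(0)
clean = [0.0] * 32 + [2.0] * 32
noisy = [c + random.uniform(-0.3, 0.3) for c in clean]
denoised = tv_denoise(noisy, lam=1.0)

def err(a):
    return sum((v - c) ** 2 for v, c in zip(a, clean))

assert err(denoised) < err(noisy)
```

Raising lam drives more of the differences to zero, reproducing the sparser, flatter solutions described above.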

Now that some artificial examples have been investigated, let's take a brief look at some real world data. One example of data which is expected to have relatively few points of abrupt change is the price of goods. In this case we're looking at the price of corn in the United States from 2000 to 2017, in USD per bushel, as retrieved from http://www.farmdoc.illinois.edu/manage/uspricehistory/USPrice.asp . With real data it's harder to define noise (or which part of the signal is unwanted); however, by using higher levels of denoising the overall trends can be observed within the time-series data:

If this short intro was interesting, I'd recommend trying out TV/GSTV techniques on your own problems. For more in-depth information there are a good few papers on the topic, with the original GSTV work being:

• I. W. Selesnick and P.-Y. Chen, 'Total Variation Denoising with Overlapping Group Sparsity', IEEE Int. Conf. Acoust., Speech, Signal Processing (ICASSP). May, 2013.

• http://eeweb.poly.edu/iselesni/gstv/ - contains above paper as well as a MATLAB implementation

And if you’re using Julia, feel free to grab my re-implementation of Total Variation and Group Sparse Total Variation at https://github.com/fundamental/TotalVariation.jl

## July 26, 2017

### linux-audio « WordPress.com Tag Feed

#### Wharf Rd home sells for $5.25m

## July 04, 2017

### fundamental code

#### Linux & Multi-Screen Touch Screen Setups

While working on the Zyn-Fusion UI I ended up getting a touch screen to help with the testing process. After getting the screen, buying several incorrect HDMI cables, and setting up the screen, I found that the touch events weren't working as expected. In fact, they were often showing up on the wrong screen. If I disabled my primary monitor and only used the touch screen, then events were spot on, so this was only a multi-monitor setup issue.

So, what caused the problem and how can it be fixed? Well, by default the mouse/touch events emitted by the new screen were scaled to the total available area, treating multiple screens as a single larger screen. Fortunately, X11 provides one solution through xinput. Just running the xinput tool lists out a collection of devices which provide mouse and keyboard events to X11.

mark@cvar:~$ xinput
⎡ Virtual core pointer                          id=2    [master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer                id=4    [slave  pointer  (2)]
⎜   ↳ PixArt USB Optical Mouse                  id=8    [slave  pointer  (2)]
⎜   ↳ ILITEK Multi-Touch-V3004                  id=11   [slave  pointer  (2)]
⎣ Virtual core keyboard                         id=3    [master keyboard (2)]
    ↳ Virtual core XTEST keyboard               id=5    [slave  keyboard (3)]
    ↳ Power Button                              id=6    [slave  keyboard (3)]
    ↳ Power Button                              id=7    [slave  keyboard (3)]
    ↳ AT Translated Set 2 keyboard              id=9    [slave  keyboard (3)]
    ↳ Speakup                                   id=10   [slave  keyboard (3)]

In this case the monitor is device 11, which has its own set of properties.

mark@cvar:~$ xinput list-props 11
Device 'ILITEK Multi-Touch-V3004':
    Device Enabled (152): 1
    Coordinate Transformation Matrix (154): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
    Device Accel Profile (282): 0
    Device Accel Constant Deceleration (283): 1.000000
    Device Accel Adaptive Deceleration (284): 1.000000
    Device Accel Velocity Scaling (285): 10.000000
    Device Product ID (272): 8746, 136
    Device Node (273): "/dev/input/event13"
    Evdev Axis Inversion (286): 0, 0
    Evdev Axis Calibration (287): <no items>
    Evdev Axes Swap (288): 0
    Axis Labels (289): "Abs MT Position X" (689), "Abs MT Position Y" (690), "None" (0), "None" (0)
    Button Labels (290): "Button Unknown" (275), "Button Unknown" (275), "Button Unknown" (275), "Button Wheel Up" (158), "Button Wheel Down" (159)
    Evdev Scrolling Distance (291): 0, 0, 0
    Evdev Middle Button Emulation (292): 0
    Evdev Middle Button Timeout (293): 50
    Evdev Third Button Emulation (294): 0
    Evdev Third Button Emulation Timeout (295): 1000
    Evdev Third Button Emulation Button (296): 3
    Evdev Third Button Emulation Threshold (297): 20
    Evdev Wheel Emulation (298): 0
    Evdev Wheel Emulation Axes (299): 0, 0, 4, 5
    Evdev Wheel Emulation Inertia (300): 10
    Evdev Wheel Emulation Timeout (301): 200
    Evdev Wheel Emulation Button (302): 4
    Evdev Drag Lock Buttons (303): 0

Notably, xinput provides a property describing a coordinate transformation which can be used to remap the x and y values of the cursor events. The transformation matrix here is a 3x3 matrix used to transform 2D coordinates and is a fairly common sight in computer graphics. It translates from $$(x,y)$$ to $$(x',y')$$ as defined by:

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} a & b & c\\ d & e & f\\ h & i & j \end{bmatrix} \times \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

The transformation matrix allows for stretching, shearing, translation, flipping, scaling, etc.
For the sorts of problems introduced by a multi-monitor setup, I would only expect people to care about translating ($$t$$) the events and then re-scaling ($$s$$) them to the offset area. Using these two parameters, the transformation matrix equation simplifies to:

$$\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} s_x & 0 & t_x\\ 0 & s_y & t_y\\ 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} x \\ y \\ 1 \end{bmatrix}$$

Or without the matrix representation:

\begin{aligned} x' &= s_x x + t_x\\ y' &= s_y y + t_y \end{aligned}

With that background out of the way, let's see how this applied to my specific monitor setup. As I mentioned earlier, the touch events were scaled to the dimensions of the larger virtual screen. Since the touch screen is larger, the y axis is mapped correctly and the x axis is mapped for pixels 0..3200 (both screens) instead of pixels 1281..3200 (left screen only). Since xinput scales these parameters based upon the total screen size, we can divide by the total x size (3200) to learn that the x axis maps to 0..1 rather than 0.4..1.0. Solving the above equations, we can remap the touch events using $$s_x=0.6$$ and $$t_x=0.4$$. This results in the transformation matrix:

$$\begin{bmatrix} 0.6 & 0 & 0.4\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix}$$

The last step is to provide the new transformation matrix to xinput:

xinput set-prop 11 'Coordinate Transformation Matrix' 0.6 0 0.4 0 1 0 0 0 1

Now cursor events map onto the correct screen accurately, and the code to change the xinput properties can easily be put into a shell script.

## June 27, 2017

### autostatic.com

#### RPi 3 and the real time kernel

As a beta tester for MOD I thought it would be cool to play around with netJACK, which is supported on the MOD Duo. The MOD Duo can run as a JACK master and you can connect any JACK slave to it, as long as it runs a recent version of JACK2. This opens a plethora of possibilities of course.
I’m thinking about building a kind of sidecar device to offload some stuff to using netJACK; think of synths like ZynAddSubFX or other CPU-greedy plugins like fat1.lv2. But more on that in a later blog post. First I need to set up a sidecar device, and I sacrificed one of my RPi’s for that, an RPi 3. I flashed an SD card with Raspbian Jessie Lite and started to do some research on the status of real time kernels and the Raspberry Pi, because I’d like to use a real time kernel to get sub-5ms system latency. I have compiled real time kernels for the RPi before, but you had to jump through some hoops to get those running, so I hoped things would have improved somewhat. Well, that’s not the case: after having compiled a first real time kernel, the RPi froze as soon as I tried to run apt-get install rt-tests. After having applied a patch to fix how the RPi folks implemented the FIQ system, the kernel compiled without issues:

Linux raspberrypi 4.9.33-rt23-v7+ #2 SMP PREEMPT RT Sun Jun 25 09:45:58 CEST 2017 armv7l GNU/Linux

And the RPi seems to run stable with acceptable latencies. That’s a maximum latency of 75 µs, not bad. I also spotted some higher values around 100 but that’s still okay for this project. The histogram was created with mklatencyplot.bash. I used a different invocation of cyclictest though:

cyclictest -Sm -p 80 -n -i 500 -l 300000

And I ran hackbench in the background to create some load on the RPi:

(while true; do hackbench > /dev/null; done) &

Compiling a real time kernel for the RPi is still not a trivial thing to do, and it doesn’t help that the few howto’s on the interwebs are mostly copy-paste work, incomplete, and contain routines that are unclear or even unnecessary. One thing that struck me too is that the howto’s about building kernels for RPi’s running Raspbian don’t mention the make deb-pkg routine to build a real time kernel.
This will create deb packages that are just so much easier to transfer and install than rsync’ing the kernel image and modules. Let’s break down how I built a real time kernel for the RPi 3.

First you’ll need to git clone the Raspberry Pi kernel repository:

git clone -b 'rpi-4.9.y' --depth 1 https://github.com/raspberrypi/linux.git

This will only clone the rpi-4.9.y branch into a directory called linux, without any history, so you’re not pulling in hundreds of megs of data. You will also need to clone the tools repository, which contains the compiler we need to build a kernel for the Raspberry Pi:

git clone https://github.com/raspberrypi/tools.git

This will end up in the tools directory. The next step is setting some environment variables so subsequent make commands pick those up:

export KERNEL=kernel7
export ARCH=arm
export CROSS_COMPILE=/path/to/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian/bin/arm-linux-gnueabihf-
export CONCURRENCY_LEVEL=$(nproc)

The KERNEL variable is needed to create the initial kernel config. The ARCH variable is to indicate which architecture should be used. The CROSS_COMPILE variable indicates where the compiler can be found. The CONCURRENCY_LEVEL variable is set to the number of cores to speed up certain make routines like cleaning up or installing the modules (not the number of jobs, that is done with the -j option of make).

Now that the environment variables are set we can create the initial kernel config:

cd linux
make bcm2709_defconfig

This will create a .config inside the linux directory that holds the initial kernel configuration. Now download the real time patch set and apply it:

cd ..
wget https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/patch-4.9.33-rt23.patch.xz
cd linux
xzcat ../patch-4.9.33-rt23.patch.xz | patch -p1

Most howto’s now continue with building the kernel, but that would result in a kernel that will freeze your RPi: the FIQ system implementation causes lock-ups of the RPi when using threaded interrupts, which is the case with real time kernels. That part needs to be patched, so download the patch and dry-run it:

cd ..
cd linux
patch -i ../usb-dwc_otg-fix-system-lockup-when-interrupts-are-threaded.patch -p1 --dry-run

You will notice one hunk will fail; you will have to add that stanza manually, so note which hunk it is, for which file, and at which line it should be added. Now apply the patch:

patch -i ../usb-dwc_otg-fix-system-lockup-when-interrupts-are-threaded.patch -p1

And add the failed hunk manually with your favorite editor. With the FIQ patch in place we’re almost set for compiling the kernel, but before we can move on to that step we need to modify the kernel configuration to enable the real time patch set. I prefer doing that with make menuconfig. You will need the libncurses5-dev package to run this command, so install that with apt-get install libncurses5-dev. Then select Kernel Features - Preemption Model - Fully Preemptible Kernel (RT) and select Exit twice. If you’re asked if you want to save your config then confirm. In the Kernel features menu you could also set the timer frequency to 1000 Hz if you wish; apparently this could improve USB throughput on the RPi (unconfirmed, needs reference). For real time audio and MIDI this setting is irrelevant nowadays though, as almost all audio and MIDI applications use the hr-timer module, which has a way higher resolution.

With our configuration saved we can start compiling. Clean up first, then disable some debugging options which could cause some overhead, compile the kernel and finally create ready to install deb packages:

make clean
scripts/config --disable DEBUG_INFO
make -j$(nproc) deb-pkg

Sit back, enjoy a cuppa, and when building has finished without errors, deb packages should have been created in the directory above the linux one. Copy the deb packages to your RPi and install them on the RPi with dpkg -i. Open up /boot/config.txt and add the following line to it:

kernel=vmlinuz-4.9.33-rt23-v7+

Now reboot your RPi and it should boot with the real time kernel. You can check with uname -a:

Linux raspberrypi 4.9.33-rt23-v7+ #2 SMP PREEMPT RT Sun Jun 25 09:45:58 CEST 2017 armv7l GNU/Linux

Since Raspbian uses almost the same kernel source as the one we just built, it is not necessary to copy any dtb files. Also, running mkknlimg is not necessary anymore; the RPi boot process can handle vmlinuz files just fine. The basis of the sidecar unit is now done. Next up is tweaking the OS and setting up netJACK.

Edit: there’s a thread on LinuxMusicians referring to this article which already contains some very useful additional information.

The post RPi 3 and the real time kernel appeared first on autostatic.com.

## April 30, 2017

### m3ga blog

#### What do you mean ExceptT doesn't Compose?

Disclaimer: I work at Ambiata (our Github presence), probably the biggest Haskell shop in the southern hemisphere. Although I mention some of Ambiata's coding practices, in this blog post I am speaking for myself and not for Ambiata. However, the way I'm using ExceptT and handling exceptions in this post is something I learned from my colleagues at Ambiata.

At work, I've been spending some time tracking down exceptions in some of our Haskell code that have been bubbling up to the top level and killing a complex multi-threaded program.
On Friday I posted a somewhat flippant comment to Google Plus:

> Using exceptions for control flow is the root of many evils in software.

Lennart Kolmodin, who I remember from my very earliest days of using Haskell in 2008 and who I met for the first time at ICFP in Copenhagen in 2011, responded:

> Yet what to do if you want composable code? Currently I have type Rpc a = ExceptT RpcError IO a which is terrible.

But what do we mean by "composable"? I like the wikipedia definition:

> Composability is a system design principle that deals with the inter-relationships of components. A highly composable system provides recombinant components that can be selected and assembled in various combinations to satisfy specific user requirements.

The ensuing discussion, which also included Sean Leather, suggested that these two experienced Haskellers were not aware that, with the help of some combinator functions, ExceptT composes very nicely and results in more readable and more reliable code.

At Ambiata, our coding guidelines strongly discourage the use of partial functions. Since the type signature of a function doesn't include information about the exceptions it might throw, the use of exceptions is strongly discouraged. When using library functions that may throw exceptions, we try to catch those exceptions as close as possible to their source and turn them into errors that are explicit in the type signatures of the code we write. Finally, we avoid using String to hold errors. Instead we construct data types to carry error messages and render functions to convert them to Text.

In order to properly demonstrate the ideas, I've written some demo code and made it available in this GitHub repo. It compiles and even runs (providing you give it the required number of command line arguments) and hopefully does a good job demonstrating how the bits fit together.

So let's look at the naive version of a program that doesn't do any exception handling at all.
import Data.ByteString.Char8 (readFile, writeFile)

import Naive.Cat (Cat, parseCat)
import Naive.Db (Result, processWithDb, renderResult, withDatabaseConnection)
import Naive.Dog (Dog, parseDog)

import Prelude hiding (readFile, writeFile)
import System.Environment (getArgs)
import System.Exit (exitFailure)

main :: IO ()
main = do
  args <- getArgs
  case args of
    [inFile1, infile2, outFile] -> processFiles inFile1 infile2 outFile
    _ -> putStrLn "Expected three file names." >> exitFailure

readCatFile :: FilePath -> IO Cat
readCatFile fpath = do
  putStrLn "Reading Cat file."
  parseCat <$> readFile fpath

readDogFile :: FilePath -> IO Dog
readDogFile fpath = do
  putStrLn "Reading Dog file."
  parseDog <$> readFile fpath

writeResultFile :: FilePath -> Result -> IO ()
writeResultFile fpath result = do
  putStrLn "Writing Result file."
  writeFile fpath $ renderResult result

processFiles :: FilePath -> FilePath -> FilePath -> IO ()
processFiles infile1 infile2 outfile = do
  cat <- readCatFile infile1
  dog <- readDogFile infile2
  result <- withDatabaseConnection $ \ db ->
              processWithDb db cat dog
  writeResultFile outfile result

Once built as per the instructions in the repo, it can be run with:

dist/build/improved/improved Naive/Cat.hs Naive/Dog.hs /dev/null
Reading Cat file 'Naive/Cat.hs'
Reading Dog file 'Naive/Dog.hs'.
Writing Result file '/dev/null'.

The above code is pretty naive and there is zero indication of what can and cannot fail or how it can fail. Here's a list of some of the obvious failures that may result in an exception being thrown:

• Either of the two readFile calls.

• The writeFile call.

• The parsing functions parseCat and parseDog.

• Opening the database connection.

• The database connection could terminate during the processing stage.

So let's see how the use of the standard Either type, ExceptT from the transformers package, and combinators from Gabriel Gonzalez' errors package can improve things. Firstly, the types of parseCat and parseDog were ridiculous. Parsers can fail with parse errors, so these should both return an Either type. Just about everything else should be in the ExceptT e IO monad.
Let's see what that looks like:

{-# LANGUAGE OverloadedStrings #-}

import Control.Exception (SomeException)
import Control.Monad.IO.Class (liftIO)
import Control.Error (ExceptT, fmapL, fmapLT, handleExceptT, hoistEither, runExceptT)
import Data.ByteString.Char8 (readFile, writeFile)
import Data.Monoid ((<>))
import Data.Text (Text)
import qualified Data.Text as T
import qualified Data.Text.IO as T

import Improved.Cat (Cat, CatParseError, parseCat, renderCatParseError)
import Improved.Db (DbError, Result, processWithDb, renderDbError, renderResult, withDatabaseConnection)
import Improved.Dog (Dog, DogParseError, parseDog, renderDogParseError)

import Prelude hiding (readFile, writeFile)
import System.Environment (getArgs)
import System.Exit (exitFailure)

data ProcessError
  = ECat CatParseError
  | EDog DogParseError
  | EReadFile FilePath Text
  | EWriteFile FilePath Text
  | EDb DbError

main :: IO ()
main = do
  args <- getArgs
  case args of
    [inFile1, infile2, outFile] ->
        report =<< runExceptT (processFiles inFile1 infile2 outFile)
    _ -> do
        putStrLn "Expected three file names, the first two are input, the last output."
        exitFailure

report :: Either ProcessError () -> IO ()
report (Right _) = pure ()
report (Left e) = T.putStrLn $ renderProcessError e

renderProcessError :: ProcessError -> Text
renderProcessError pe =
case pe of
ECat ec -> renderCatParseError ec
EDog ed -> renderDogParseError ed
EReadFile fpath msg -> "Error reading '" <> T.pack fpath <> "' : " <> msg
EWriteFile fpath msg -> "Error writing '" <> T.pack fpath <> "' : " <> msg
EDb dbe -> renderDbError dbe

readCatFile :: FilePath -> ExceptT ProcessError IO Cat
readCatFile fpath = do
  liftIO $ putStrLn "Reading Cat file."
  bs <- handleExceptT handler $ readFile fpath
  hoistEither . fmapL ECat $ parseCat bs
  where
    handler :: SomeException -> ProcessError
    handler e = EReadFile fpath (T.pack $ show e)

readDogFile :: FilePath -> ExceptT ProcessError IO Dog
readDogFile fpath = do
  liftIO $ putStrLn "Reading Dog file."
  bs <- handleExceptT handler $ readFile fpath
  hoistEither . fmapL EDog $ parseDog bs
  where
    handler :: SomeException -> ProcessError
    handler e = EReadFile fpath (T.pack $ show e)

writeResultFile :: FilePath -> Result -> ExceptT ProcessError IO ()
writeResultFile fpath result = do
  liftIO $ putStrLn "Writing Result file."
  handleExceptT handler . writeFile fpath $ renderResult result
  where
    handler :: SomeException -> ProcessError
    handler e = EWriteFile fpath (T.pack $ show e)

processFiles :: FilePath -> FilePath -> FilePath -> ExceptT ProcessError IO ()
processFiles infile1 infile2 outfile = do
  cat <- readCatFile infile1
  dog <- readDogFile infile2
  result <- fmapLT EDb . withDatabaseConnection $ \ db ->
              processWithDb db cat dog
  writeResultFile outfile result



The first thing to notice is that changes to the structure of the main processing function processFiles are minor but all errors are now handled explicitly. In addition, all possible exceptions are caught as close as possible to the source and turned into errors that are explicit in the function return types. Sceptical? Try replacing one of the readFile calls with an error call or a throw and see it get caught and turned into an error as specified by the type of the function.

We also see that despite having many different error types (which happens when code is split up into many packages and modules), a constructor for an error type higher in the stack can encapsulate error types lower in the stack. For example, this value of type ProcessError:

  EDb (DbError3 ResultError1)



contains a DbError which in turn contains a ResultError. Nesting error types like this aids composition, as does the separation of error rendering (turning an error data type into text to be printed) from printing.

We also see that the use of combinators like fmapLT, together with the nested error types of the previous paragraph, means that ExceptT monad transformers do compose.
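
Concretely, fmapLT (from the errors package) maps a function over the error side of an ExceptT, which is all that is needed to lift a lower-level error type into a higher-level one. A hypothetical helper for the code above (the name withDb is mine, not from the demo repo) is just:

```haskell
-- Hypothetical helper: lift an action whose error type is DbError into
-- the surrounding ProcessError context by wrapping errors in EDb.
withDb :: Functor m => ExceptT DbError m a -> ExceptT ProcessError m a
withDb = fmapLT EDb
```

This is exactly the pattern used inline in processFiles via fmapLT EDb . withDatabaseConnection.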

Using ExceptT with the combinators from the errors package to catch exceptions as close as possible to their source and converting them to errors has numerous benefits including:

• Errors are explicit in the types of the functions, making the code easier to reason about.
• It's easier to provide better error messages and more context than what is normally provided by the Show instance of most exceptions.
• The programmer spends less time chasing the source of exceptions in large complex code bases.
• More robust code, because the programmer is forced to think about and write code to handle errors instead of error handling being an optional afterthought.

Want to discuss this? Try reddit.

#### Combining text and music

If you want to create a document with lots of text and some small musical snippets, e.g. an exercise sheet or a musical analysis, what software can you use?

Of course it’s possible to do the entire project in LilyPond or another notation program, inserting passages of text between multiple scores – in LilyPond by combining \markup and \score sections:

\markup { "A first motif:" }
\score { \relative c' { c4 d e f  g2 g } }
\markup { "A second motif:" }
\score { \relative c'' { a4 a a a  g1 } }

However, it is clear that notation programs are not originally designed for that task, so many people prefer WYSIWYG word processors like LibreOffice Writer or Microsoft Word that instantly show what the final document will look like. In these text documents music fragments can be inserted as image files that can for example be generated with LilyPond from .ly input files. Of course these images are then static, and to be able to modify the music examples one has to manage the additional files with some care. That’s when things might get a little more complicated…
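
For reference, such a static image can be produced from the command line; a sketch (motif.ly is a placeholder file name; -dpreview crops the output to the music itself and -dresolution controls the DPI):

```shell
# Render motif.ly to a PNG cropped to the music, for embedding in a document.
lilypond --png -dpreview -dresolution=300 motif.ly
```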

Wouldn’t it be a killer feature to be able to edit the scores directly from within the word processor document, without having to keep track of and worry about additional files? Well, you may be surprised to learn that this has already been possible for quite some time, and I take the relaunch of OOoLilyPond as an opportunity to show it to you.

## What is OOoLilyPond?

OOoLilyPond (OLy) is a LibreOffice extension that allows you to insert snippets of LilyPond code into LibreOffice Writer, Draw and Impress documents and transparently handles the rendering through LilyPond. So you can write LilyPond code, have it rendered as a score, and modify it again later.

OOoLilyPond was originally written by Samuel Hartmann and had its first launch in 2006 (hence the name, as the only open-source successor of StarOffice was OpenOffice.org).
Samuel continued the development until 2009, when a stable version 0.4.0 with new features was released. In the following years, OLy was occasionally mentioned in LilyPond’s user forums, so there might be several people who use it periodically – including myself. Being a music teacher, I work with it every day. Well, almost…

In 2014 LilyPond had the new 2.19 release, which behaved differently when invoked by the command used in OLy. This led to a somewhat mysterious error message, and the macro execution was aborted. It was therefore impossible to use OLy with LilyPond’s new development versions. Of course I googled the problem, but there was no answer.

One day I wanted to get to the bottom of it. I’m one of those guys who have to unscrew anything they get their hands on. OLy is open source and published under the GPL, so why hesitate? After studying the code for a while, I finally found that the problem was surprisingly small and easy to fix. I posted my solution on the LilyPond mailing list and also began to experiment with new features.

Urs Liska and Joram Berger had already contacted Samuel in the past. They knew that he did not have the time to further work on OOoLilyPond, but he would be glad if someone else could take over the development of the project.

Urs and Joram also contributed lots of work, knowledge and ideas, so that we were finally able to publish a new release that can be adapted to the slightly different characteristics of LibreOffice and OpenOffice, that can be translated into other languages, that can make use of vector graphics etc. This new take on the project now has its home within openLilyLib.

## How to get and install it

The newest release will always be found at github.com/openlilylib/LO-ly/releases where the project is maintained. Look for an *.oxt file with a name similar to OOoLilyPond-0.X.X.oxt and download it:

For anyone who doesn’t want to read the release notes, there’s a simple Download page as well.

In LibreOffice, open the extension manager (Tools -> Extension Manager), click the “Add” button which will open a file dialog. Select the *.oxt file you’ve just downloaded and confirm with the “Open” button.

When asked for whom you want to install the extension, you can choose “only for me”, which won’t require administrator privileges on your system. After successful installation, close the extension manager; you will probably be asked to restart LibreOffice.

Now LibreOffice will have a new “OLy” toolbar. It contains a single “OLy” button that launches the extension.

## Launching for the first time

Here we go: Create a new Writer document and click the OLy button. (Don’t worry if you get some error messages telling you that LilyPond could not be executed. Just click “OK” to close the message boxes. We’ll fix that in a moment.)

Now you should see the OOoLilyPond Editor window.

First, let’s open the configuration dialog by clicking the “Config” button at the bottom:

A new window will pop up:

Of course, you need to have LilyPond installed on your system. In the “LilyPond Executable” field, you need to specify the executable file for LilyPond. On startup, OLy has tried to guess its correct (default) location. If that didn’t work, you already got some error messages.

For a Windows system, you need to know the program folder (probably C:\Program Files (x86)\LilyPond on 64-bit Windows or C:\Program Files\LilyPond on 32-bit Windows systems).
In the subfolder \usr\bin\ you will find the executable file lilypond.exe.

If you are working with Linux, relax and smile. Usually you simply need to specify lilypond as the command, without any path settings. As far as I know, that also applies to the Mac OS family, which is based on Unix as well.

On the left side, there are two frames titled “Insert Images”. Depending on the Office version you are using (OpenOffice or LibreOffice), select the appropriate options.

For the moment, all remaining settings can be left at their default values. In case you’ve messed up anything, there’s also a “Reset to Default” button.

At the right bottom, click “OK” to apply the settings and close the dialog. Now you are back in the main Editor window. It contains some sample code, so just click the “LilyPond” button at the bottom right.

In the background, LilyPond is now translating the code into a *.png graphic file which will be inserted into Writer. The code itself is invisibly saved inside the document.

After a few seconds, the editor window should disappear, and a newly created image should show up.

## How to work with it

If you want to modify an existing OLy object, click on it to select it in Writer. Then, hit the “OLy” button.

The Editor window will show the code as it has been entered before. Here you can modify it, e.g. change some pitches (there’s also no need to keep the comments) and click the “LilyPond” button again. OLy will generate a new image and replace the old one.

To insert a new OLy object, just make sure that no existing object is selected when hitting the “OLy” button.

## Templates

In the Editor window, you might have noticed that you were not presented with an entire LilyPond file, but only an excerpt of one. This is because OLy always works with a template. It allows you to quickly enter short snippets without having to care about other settings for layout etc.

The snippet you just created is based on the template Default.ly which looks (more or less) like this:

\transpose %{OOoLilyPondCustom1%}c c'%{OOoLilyPondEnd%}
{
  %{OOoLilyPondCode%}
  \key e \major
  e8 fis gis e fis8 b,4. |
  e2\fermata \bar "|."
  %{OOoLilyPondEnd%}
}

\include "lilypond-book-preamble.ly"
#(set-global-staff-size %{OOoLilyPondStaffSize%}20%{OOoLilyPondEnd%})

\paper {
  #(define dump-extents #t)
  ragged-right = ##t
  line-width = %{OOoLilyPondLineWidth%}17\cm%{OOoLilyPondEnd%}
}

\layout {
  indent = #0
  \context {
    \Score
    \remove "Bar_number_engraver"
  }
}


In the Editor window, there are five text fields: the big “Code” area on top, and four additional small fields named “Line Width”, “Staff Size”, “Custom 1” and “Custom 2”. They contain the template parts that are enclosed by tags, i.e. preceded by %{OOoLilyPondCode%}, %{OOoLilyPondLineWidth%}, %{OOoLilyPondStaffSize%}, %{OOoLilyPondCustom1%} and %{OOoLilyPondCustom2%} respectively, each terminated by %{OOoLilyPondEnd%}. (The tags themselves are ignored by LilyPond because they are comments.)

All remaining parts of the template stay “invisible” to the user and cannot be changed. Don’t worry, you can modify existing templates and create your own.

A template must at least have a Code section; the other sections are optional. There is a template Direct to LilyPond which consists only of a Code section and contains no “invisible” parts at all. You can use it to paste ordinary *.ly files into your document. But please keep in mind that the resulting graphic should be smaller than your paper size.

Most templates (the ones without [SVG] in the file name) make use of \include "lilypond-book-preamble.ly", which results in a cropped image. Any whitespace around the music is automatically removed.

Below the code view, there is a dropdown field that lets you choose which template to use. Of course, different templates have different default code in their Code sections.

When switching templates, the code field will always update to the corresponding default code as long as you haven’t made any edits yet. However, this will not happen automatically once you have made changes. To have your current code replaced anyway, tick the “Default Code” checkbox.

The “Edit” button will open a new dialog where you can edit the current template. Optionally, you can save it under a new file name.

## Easier editing

You are probably used to a particular text editor when working on LilyPond files. Of course you can use it for OLy templates as well. The path to the template files can be found (and changed) in the configuration dialog. Here you can also specify where your text editor’s executable file is located. You can use any text editor like Mousepad, Notepad etc., but if you don’t yet know Frescobaldi, you really should give it a try.

Back in the main OLy window, another button might be useful: “Open as temp. file in Ext. Editor”. It saves the entire snippet into a *.ly file – not only the contents of the “Code” field, but also the other fields and the “invisible” parts between them. This file is opened in the external editor you’ve specified before. If you use an IDE like Frescobaldi, you can instantly preview your changes.

As soon as editing is finished, save your changes (without changing the file name). You can now close your external editor.

Back in OLy, hit the “Import from temp. file” button to load the updated file back into OLy. In the text fields you will recognize the changes you have applied. Hit the “LilyPond” button to insert the graphic into your document.

A word of caution: Only changes to the Code, Line Width, Staff Size, Custom 1 and Custom 2 fields are recognized. Changes to the “invisible” parts of the template are ignored! If you intend to modify those sections as well, you need to create a new template.

A very last word of caution: If you use a template that is modified or created by yourself, and you share your Office document with other collaborators, you have to share your template as well.

## To be continued…

OLy can be configured to use vector graphic formats (*.svg or *.eps) instead of *.png. They offer better quality, especially for printing. However, some additional things have to be considered. This will be covered in a follow-up post: Part 2 – Optimizing.

## March 20, 2017

### Backstory

In 2016, RC-car company Arrma released the Outcast, calling it a stunt truck. That label led to some joking around in the UltimateRC forum. One member had trouble getting his Outcast to stunt. Utrak said: “The stunt car didn’t stunt? Do hobby to it, it’ll stunt”. frystomer went: “If it still doesn’t stunt, hobby harder.” and finally stewwdog was like: “I now want a shirt that reads ‘Hobby harder, it’ll stunt’.” He wasn’t alone, so I created a first, very rough sketch.

### Process

After a positive response, I decided to make it look like more of a stunt in another sketch:

Meanwhile, talk went to onesies and related practical considerations. Pink was also mentioned, thus I suddenly found myself confronted with a mental image that I just had to get out:

To find the right alignment and perspective, I created a Blender scene with just the text and boxes and cylinders to represent the car. The result served as template for drawing the actual image in Krita, using my trusty Wacom Intuos tablet.

### Result

Filed under: Illustration, Planet Ubuntu Tagged: Apparel, Blender, Krita, RC, T-shirt

## March 05, 2017

### autostatic.com

#### Moved to Fuga

Moving my VPS from VMware to Fuga was successful. First I copied the VMDK from the ESXi host to a Fuga instance with enough storage:

scp some.esxi.host:/vmfs/volumes/storage-node/autostatic1.autostatic.cyso.net/autostatic1.autostatic.cyso.net-flat.vmdk ./

And then converted it to QCOW2 with qemu-img:

qemu-img convert -O qcow2 autostatic1.autostatic.cyso.net-flat.vmdk autostatic1.autostatic.cyso.net.qcow2

Next step was mounting it with guestmount:

guestmount -a /var/www/html/images/autostatic1.autostatic.cyso.net.qcow2 -m /dev/sda8 /mnt/tmp/

And changing some settings, i.e. network and resolvconf. When that was done I unmounted the image:

guestunmount /mnt/tmp

And uploaded it to my Fuga tenant:

openstack image create --disk-format qcow2 --container-format bare --file /path/to/images/autostatic1.autostatic.cyso.net.qcow2 --private autostatic1.autostatic.cyso.net.qcow2

Last step was launching an OpenStack instance from this image; I used Ansible for this:

- name: Launch OpenStack instance
  hosts: localhost
  connection: local
  gather_facts: no
  vars:
    os_flavor: c1.large
    os_network: int1
    os_image: 5b878fee-7071-4e9c-9d1b-f7b129ba0644
    os_hostname: autostatic1.autostatic.cyso.net
    os_portname: int-port200
    os_fixed_ip: 10.10.10.200
    os_floating_ip: 185.54.112.200

  tasks:
    - name: Create port
      os_port:
        network: "{{ os_network }}"
        fixed_ips:
          - ip_address: "{{ os_fixed_ip }}"
        name: "{{ os_portname }}"

    - name: Launch instance
      os_server:
        state: present
        name: "{{ os_hostname }}"
        timeout: 200
        flavor: "{{ os_flavor }}"
        nics:
          - port-name: "{{ os_portname }}"
        security_groups: "{{ os_hostname }}"
        floating_ips: "{{ os_floating_ip }}"
        image: "{{ os_image }}"
        meta:
          hostname: "{{ os_hostname }}"

And a few minutes later I had a working VPS again. While converting and uploading I made the necessary DNS changes and by the time my VPS was running happily on Fuga all DNS entries pointed to the new IP address.

The post Moved to Fuga appeared first on autostatic.com.

## February 27, 2017

### Internet Archive - Collection: osmpodcast

#### OSMP Episode 84 - Tunestorm 17 Reveal!

The Open Source Musician - connecting musicians and producers to open tools and ideas Episode 84 - Tunestorm 17 Reveal Themesong by: guitarman (Contact info at end) News: http://lsp-plug.in/?page=home - 1.0.20 - added impulse response, improved analyzer UI, new limiter modes http://users.notam02.no/....

This item belongs to: audio/osmpodcast.

This item has files of the following types: Archive BitTorrent, Columbia Peaks, Flac, Metadata, Ogg Vorbis, PNG, Spectrogram, VBR MP3

## January 07, 2017

### The Penguin Producer

#### Composition in Storytelling

During the “Blender for the 80s” series, I went into some of the basics of visual composition.  In and of itself, it does well enough to give one a basic glimpse, but it’s really important to understand composition in and of itself. Composition is a key element to any visual …

## January 04, 2017

#### Jalv 1.6.0

jalv 1.6.0 has been released. Jalv is a simple but fully featured LV2 host for Jack which exposes plugin ports to Jack, essentially making any LV2 plugin function as a Jack application. For more information, see http://drobilla.net/software/jalv.

Changes:

• Support CV ports if Jack metadata is enabled (patch from Hanspeter Portner)
• Fix unreliable UI state initialization (patch from Hanspeter Portner)
• Fix memory error on preset save resulting in odd bundle names
• Improve preset support
• Support numeric and string plugin properties (event-based control)
• Update UI when internal plugin state is changed during preset load
• Add PortAudio backend (compile time option, audio only)
• Set Jack port order metadata
• Allow Jack client name to be set from command line (thanks Adam Avramov)
• Add command prompt to console version for changing controls
• Add option to print plugin trace messages
• Print colorful log if output is a terminal
• Exit on Jack shutdown (patch from Robin Gareus)
• Report Jack latency (patch from Robin Gareus)
• Exit GUI versions on interrupt
• Fix semaphore correctness issues
• Use moc-qt4 if present for systems with multiple Qt versions

## December 31, 2016

### The Penguin Producer

#### Blender for the 80s: Outlined Silhouettes

Having a landscape is nice and all, but what’s the point if there isn’t anything on the landscape?  In this article, we will populate the landscape with black objects containing bright neon silhouettes.   For this tutorial, we’ll place some silhouettes in our composition.  I will assume you’ve read the …

## November 29, 2016

### PipeManMusic

#### Stay In Bed For Christmas

So I've recorded a little Christmas tune for those who are over the hype. I hope you like it. Check it out, share it, buy it, I'd really appreciate it.

## June 23, 2016

### Nothing Special

#### Room Treatment and Open Source Room Evaluation

It's hard to improve something you can't measure.

My studio space is much, much too reverberant. This is not surprising, since it's a basement room with laminate flooring and virtually no soft, absorbent surfaces at all. I planned to add acoustic treatment from the get-go, but funding made me wait until now. I've been recording DI guitars, drum samples, and synth programming, but nothing acoustic until the room gets tamed a little bit.

(note: I get pretty explanatory about why bass traps matter in the next several paragraphs. If you only care about the measurement stuff, skip to below the pictures.)

Well, how do we know what needs taming? First there are some rules of thumb. My room is about 13'x11'x7.5', which isn't an especially large space. This means that sound waves bouncing off the walls will have some strong resonances at 13', 11', and 7.5' wavelengths, which equates to about 86Hz, 100Hz, and 150Hz respectively. There will be many more resonances, but these will be the strongest ones. These will become standing waves where the walls just bounce the acoustic energy back and forth and back and forth and back and forth... Not forever, but longer than the other frequencies in my music.
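As a quick sanity check on those numbers (my own sketch, not something from the original post), the resonant frequency of a wave whose wavelength matches a room dimension is just f = c/λ, with the speed of sound around 1125 ft/s:

```python
# Rough room-mode arithmetic: a wave whose wavelength matches a
# room dimension resonates at f = c / L.

SPEED_OF_SOUND_FT = 1125.0  # ft/s, approximate at room temperature

def mode_frequency(dimension_ft):
    """Frequency (Hz) of a wave whose wavelength equals the given dimension."""
    return SPEED_OF_SOUND_FT / dimension_ft

for dim in (13.0, 11.0, 7.5):
    print(f"{dim} ft -> {mode_frequency(dim):.0f} Hz")
```

This reproduces the roughly 86Hz, 100Hz, and 150Hz figures for a 13'x11'x7.5' room.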

For my room, these are very much in the audible spectrum so this acoustic energy hanging around in the room will be covering other stuff I want to hear (for a few hundred extra ms) while mixing. In addition to these primary modes there will also be resonances at 2x, 3x, 4x, etc. of these frequencies. Typically the low end is where it tends to get harder to hear what's going on, but all the reflections add up to the total reverberance which is currently a bit too much for my recording.

Remember acoustic waves are switching (or waving even) between high pressure/low speed and low pressure/high speed. Where the high points lie depends on the wavelength (and the location of the sound source). At the boundaries of the room, the air carrying the primary modes' waves (theoretically) doesn't move at all. That means the pressure is the highest there. At the very middle of the room you have a point where air carrying these waves is moving the fastest. Of course the air is usually carrying lots of waves at the same time, so how it's moving/pressurized in the room is hard to predict exactly.

With large wavelengths like the ones we're most worried about, you aren't going to stop them with a 1" thick piece of foam hung on the wall (no matter how expensive it was). You need a longer space to act on the wave and trap more energy. With small rooms, more or less the only option is porous absorbers, which basically take acoustic energy out of the room when air carrying the waves tries to move through the material of the treatment. Right against the wall the air is not moving at all, so putting material there isn't going to be very effective for the standing waves. And only 1" of material isn't going to act on very much air. So you need volume of material and you need to put it in the right place.

Basically thicker is better to stop these low waves.  If you have sufficient space in your room, put in a floor-to-ceiling 6' deep bass trap. But most of us don't have that kind of space to give up. The thicker the panel, the less dense a material you should use. Thick traps will also stop higher frequencies, so basically, just focus on the low stuff and the higher will be fine. Often if the trap is not at a direct reflecting point from the speaker, it's advised to glue kraft paper to the material, which bounces some of the ambient high end around the room so it's not too dead. How dead is too dead? How much high end does each one bounce? I don't know. It's just a rule of thumb. The rule for depth is quarter wavelength. An 11' wave really will be stopped well by a 2.75' thick trap. This thickness guarantees that there will be some air moving somewhere through the trap even if you put it right in the null. Do you have a couple extra feet of space to give up all around the room? Me neither. But we'll come back to that. Also note that more surface area is more important than thickness. Once you've covered enough wall/floor/ceiling, the next priority is thickness.

Next principle is placement. You can place treatment wherever you want in the room, but some places are better than others. Right against the wall is OK because air is moving right up until the wall, but it will be better if there is a little gap, because the air is moving faster a little further from the wall. So we come back to the quarter wavelength rule. The most effective placement of a panel is spaced equal to its thickness. So a 3" panel is best 3" away from the wall. This effectively doubles the thickness of your panel. Thus we see placement and thickness are related. Now your 3" panel is acting like it's 6", damping pretty effectively down to 24" waves (~563Hz). It also works well on all shorter waves. Bass traps are really broadband absorbers. But... 563Hz is a depressingly high frequency when we're worried about 80Hz. This trap will do SOMETHING to even 40Hz waves, but not a whole lot. What do we do if our 13' room mode is causing a really strong resonance?

You can move your trap further into the room. This creates a gap in the absorption curve, but it makes the absorption go lower. So move the 3" panel to a 6" gap and you won't be as effective at absorbing 563Hz, but now it works much better on 375Hz. You are creating a tuned trap. It still works some on 563Hz, but the absorption curve will have a low point there and a bump at 375Hz. Angling the trap so the gap varies can help smooth this response, making it absorb more frequencies, but less effectively for specific ones. So trade off a smooth curve for really absorbing a lot of energy at a specific frequency if you need to.
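The quarter-wavelength arithmetic in the last two paragraphs is easy to play with in code. A small sketch of my own (same ~1125 ft/s speed of sound as before; not from the original post):

```python
# Quarter-wavelength rule of thumb: a porous panel of a given
# thickness, spaced a given gap off the wall, absorbs effectively
# down to the frequency whose quarter wavelength equals
# thickness + gap.

SPEED_OF_SOUND_IN = 1125.0 * 12  # speed of sound in inches/s

def lowest_effective_freq(thickness_in, gap_in):
    """Lowest frequency (Hz) the panel still absorbs well."""
    quarter_wavelength = thickness_in + gap_in
    return SPEED_OF_SOUND_IN / (4.0 * quarter_wavelength)

print(lowest_effective_freq(3, 3))  # 3" panel, 3" gap
print(lowest_effective_freq(3, 6))  # same panel, tuned lower with a 6" gap
```

With a 3" panel this gives 562.5Hz at a 3" gap and 375Hz at a 6" gap, matching the figures above.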

The numbers here are pretty theoretical. Even though the trap is tuned to a certain frequency, a lot of other frequencies will get absorbed. Some waves will enter at angles, which makes the trap seem thicker. Some waves will bounce off. Some waves will diffract (bend) around the trap somewhat. There are so many variables that it's very difficult to predict acoustics precisely. But these rules of thumb are applicable in most cases.

The final thing to discuss is material. It's best to find one that has been tested, with published numbers, because then you have a good idea if and how it will work. Mineral wool is a fibrous material that resists air passing through. Fiberglass insulation can work too. Rigid fiberglass Owens Corning 703 is the standard choice, but mineral wool is cheaper and just as effective, so it's becoming more popular. Both materials (and there are others) come in various densities, and here the idea that thicker means less dense comes into play. This is because if it's too dense, acoustic waves could bounce back out on their way through rather than be absorbed.

Man. I didn't set out to give a lecture on acoustics, but it's there and I'm not deleting it. I do put the bla in blog, remember? There's a lot more (and better) reading you can do at an acoustic expert's site.

For me and my room (and my budget) I started out building two 9" deep, 23" wide floor-to-ceiling traps for the two corners I have access to (the other 2 corners are blocked by the door and my wife's sewing table). These will be stuffed with Roxul Safe and Sound (SnS), which is a lower density mineral wool. It's available from Lowes online, but it was cheaper to find a local supplier to special order it for me.

 Roxul compresses it in the packaging nicely

I will build a 6"x23" panel using whatever's left and will place it behind the listening position. I also ordered a bag of the denser Roxul Rockboard 60 (RB60). I'm still waiting for it to come in (rare stuff to find in little Logan UT, but I found a supplier kind enough to order it and let me piggy back on their shipping container so I'm not paying any shipping, thanks Building Specialties!). I will also build four 4"x24"x48" panels out of Roxul Rockboard 60 (when it finally arrives) which is a density that more or less matches the performance of OC703.  These will be hung on the walls at the first reflecting points and ceiling corners. Next year or so when I have some more money I plan to buy a second bag of the rockboard which will hopefully be enough treatment to feel pretty well done. I considered using the 2" RB60 panels individually so I can cover more surface (which is the better thing acoustically), but in the end I want 4" panels and I don't know if it will be feasible to rebuild these later to add thickness.
 my stack of flashing

I more or less followed Steven Helm's method with some variations. The stuff he used isn't very available, so I bought some 20 gauge 1.5" galvanized L-framing or angle flashing from the same local supply shop that got me the Roxul. They had 25ga., but I was worried it would be too flimsy, considering that even on the rack a lot of it got bent. I just keep envisioning my kids leaning against them or something and putting a big dent in the side. After buying it I worried it would be too heavy, but now, after the build, I think the thicker material was a good choice for my towering 7.5' bass traps. For the smaller 2'x4' panels that are going to be hung up, I'm not sure yet.

I chose not to do a wood trap because I thought riveting would be much faster than nailing, since I don't have a compressor yet. Unfortunately I didn't foresee how long it can take to drill through 20ga steel. I found after the first trap that it's much faster to punch a hole with a nail, then drill it out to the rivet size. It's nice when you have something to push against (a board underneath), but where I was limited on workspace I sometimes had to drill sideways. A set of vice-grip pliers really made that much easier.

Steven's advice about keeping it square is very good, something I didn't do the best at on the first trap, but not too far off either. The key is using the square to keep your snips cutting squarely. Also, since my frame material is so thick it doesn't bend very tightly, I found it useful to take some pliers and twist the corner a bit to square it up.
 Corner is a bit round

 a bit tighter corner now
Since my traps are taller than a single SnS panel, I had to stack them and cut 6" off the top. A serrated knife works best for cutting this stuff, but I didn't have an old one around, so I improvised one from some scrap sheet metal.

I staggered the seams to try to make a more homogeneous material.

With all the interior assembled I think the frames actually look good enough you could keep them on the outside, but my wife preferred the whole thing be wrapped in fabric. I don't care either way.

Before covering though I glued on some kraft paper using spray adhesive. I worked from top to bottom, but some of them got a bit wrinkled.

The paper was a bit wider than the frame, so I cut around the frame and stuffed it behind a bit, so it has a tidier look.

I'd say they look pretty darn good even without fabric!

Anyway, all that acoustic blabber above boils down to the fact that even when following rules of thumb, the best thing to do is measure the room before and after treatment to see what needs to be treated and how well your treatment did. If it's good, leave it; if it's bad, you can add more or try to move it around to address where it's performing poorly.

So, since measuring is important, and I'm kinda a stickler for open source software, I will show you today how to do it. The de-facto standard for measurement is the Room EQ Wizard (REW) freeware program. It's free but not libre, so I decided to use what was libre. Full disclosure: I installed REW and tried it but couldn't ever get sound to come out of it, so that helped motivate the switch. I was impressed REW had a Linux installer, but I couldn't find any answers on getting sound out. It's Java-based and not JACK-capable, so it couldn't talk to my FireWire soundcard. REW is very good, but for the freedom idealists out there we can use Aliki.

The method is the same in both: generate a sweep of sine tones with your speakers, record the room's response with your mic, and do some processing that creates an impulse response for your room. An impulse is a broadband signal that contains all frequencies equally for a very, very (infinitely) short amount of time. True impulses are difficult to generate, so it's easier to just send the frequencies one at a time and then combine them with some math. I've talked a little about measuring impulse responses before. The program I used back then (qloud) isn't compiling easily for me these days because it hasn't been updated for modern Qt libraries, and Aliki is more tuned for room measurement vs. loudspeaker measurement.
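That sweep-and-math procedure can be sketched in a few lines of NumPy. This is just my illustration of the general technique (exponential sweep plus regularized spectral division), not Aliki's actual implementation:

```python
import numpy as np

def log_sweep(f0, f1, duration, fs):
    # Exponential sine sweep from f0 to f1 Hz -- the kind of signal
    # the measurement program plays through the speakers.
    t = np.arange(int(duration * fs)) / fs
    k = np.log(f1 / f0)
    return np.sin(2 * np.pi * f0 * duration / k * np.expm1(t * k / duration))

def impulse_response(sweep, recording):
    # Deconvolve the recorded room response by spectral division,
    # zero-padding so circular convolution doesn't wrap around.
    n = len(sweep) + len(recording)
    S = np.fft.rfft(sweep, n)
    R = np.fft.rfft(recording, n)
    H = R * np.conj(S) / (np.abs(S) ** 2 + 1e-12)  # regularized division
    return np.fft.irfft(H, n)
```

Feeding impulse_response a recording that is just the sweep delayed and attenuated returns a spike at the delay, which is exactly what each reflection in a real room measurement looks like.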

I am most interested in 2 impulse responses: 1. the room response between my monitors and my ears while mixing, and 2. the room response between my instruments and the mic. Unfortunately I can't take my monitors or my mic out of the measurement because I don't have anything else to generate or record the sine sweeps with. So each measurement will have these parts of my signal chain's frequency response convolved in too, but I think they are flat enough to get an idea and they'll be consistent for before and after treatment comparisons. I don't have a planned position for where I will be recording in this room but the listening position won't be moving so I'm focused on response 1.

The Aliki manual linked above is pretty good. For the most part I'm not going to repeat it here. You first select a project location, and I found that anywhere but your home directory didn't work. Aliki makes 4 folders in that location to store the different audio files: sweep, capture, impulse, and edited files.

We must first make a sweep, so click the sweep button. I'm going from 20Hz to 22000Hz. May as well see the full range, no? A longer sweep can actually reduce the noise of the measurement, so I went a full 15 seconds. This generates an audio file with the sweep in it in the sweep folder. Aliki stores everything as .ald files, basically a wav with a simpler header I think.

Next step: capture. Set up your audio input and output ports, and pick your sweep file for it to play. Use the test to get your levels. I found that even with my preamps cranked the levels were low coming in from my mic. It was night so I didn't want to play it much louder. You can edit the captures if you need. Each capture makes a new file or files in the capture directory.

I did this over several days because I measured before treatment, then with the traps in place before the paper was added and again after the paper was glued on. Use the load function to get your files and it will show them in the main window. Since my levels were low I went ahead and misused the edit functions to add gain to the capture files so they were somewhat near full swing.

Next step is the convolution to remove the sweep and calculate the impulse response. Select the sweep file you used, set the end time to be longer than your sweep was and click apply and it should give you the impulse response. Be aware that if your levels are low like mine were, you'll only get the tiniest blip of waveform near zero. Save that as a new file and then go to edit.

In edit, you'll likely need to adjust the gain, but you can also adjust the length, and in the end you have a lovely impulse response that you can export to a .wav file. You can listen to it (though it's not much to listen to) or, more practically, use it in your favorite convolution reverb plugin like IR or KlangFalter.

But we don't want to use this impulse response for convolving signals. We can already get that reverb by just playing an instrument in our room! We want to analyze the impulse response to see if there's improvement or if something still needs to be changed. So this is where I imported the IR wav files into GNU Octave.

I wrote a few scripts to help out, namely plotIREQ and plotIRwaterfall. They can be found in their git repository. I also made fftdecimate, which smooths out the raw plotIREQ plot:

to this:

I won't go through the code in too much detail. If you'd like me to, leave a comment and I'll do another post. But look at plotMyIRs.m for usage examples of how I generated these plots.
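fftdecimate's exact method is in the repo, but conceptually this kind of smoothing is fractional-octave averaging: each point of the magnitude response is replaced by the mean over a window whose width is proportional to frequency. A minimal Python sketch (names and window shape are mine, not the Octave script's):

```python
def smooth_spectrum(freqs, mags, frac=6.0):
    """Smooth a magnitude response with a sliding 1/frac-octave window."""
    out = []
    for f in freqs:
        lo = f * 2 ** (-0.5 / frac)  # half the window below f...
        hi = f * 2 ** (0.5 / frac)   # ...and half above, measured in octaves
        win = [m for fk, m in zip(freqs, mags) if lo <= fk <= hi]
        out.append(sum(win) / len(win))
    return out
```

Because the window widens with frequency, the wild comb-like detail in the treble averages away while the broad room modes in the bass stay visible.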

You can see the big bump from around 150 Hz to 2 kHz, and a couple of big valleys at 75 Hz, 90 Hz, 110 Hz, etc. One thing I decided from looking at these is that the subwoofer should be turned up a bit, since my Blue Sky EXO2s cross over at around 150 Hz, and everything below that measured rather low.

I was hoping for a smoother result, especially in the low end, but I plan to build more broadband absorbers for the first reflection points. While a 4" thick panel doesn't target the really low end like these bass traps, they do have some effect, even on the very low frequencies. So I hope they'll have a cumulative effect down on that lower part of the graph.

The other point I'd like to comment on is that the paper didn't seem to make much of a difference. It's possible that since it wasn't factory-glued onto the rockwool, it lacks a sufficient bond to transfer the energy properly. It doesn't seem to hurt the results much either; in fact, around 90 Hz it seems to actually make the response smoother, so I don't plan to remove it (yet, at least).

The last plots I want to look at are the waterfall plots. These show how the frequencies decay in time, so you can see whether any frequencies are ringing/resonating and need better treatment.

Here we see some anomalies. Just comparing the first and final plots, it's easy to see that nearly every frequency decays much more quickly (we're focused on the lower region, 400 Hz and below, since that's where the room's primary modes lie). You also see a long resonance somewhere around 110 Hz that still isn't addressed, which is probably the next target. I can try to move the current traps out from the wall and see if that helps, or make a new panel and try to tune it.

Really though, I'm probably going to wait until I've built the next set of panels.
Hope this was informative and useful. Try out those Octave scripts. And please comment!

## May 28, 2016

### A touch of music

#### Modeling rhythms using numbers - part 2

This is a continuation of my previous post on modeling rhythms using numbers.

## Euclidean rhythms

The Euclidean rhythm in music was discovered by Godfried Toussaint in 2004 and is described in his 2005 paper "The Euclidean Algorithm Generates Traditional Musical Rhythms". The greatest common divisor of two numbers is used rhythmically, giving the number of beats and silences and generating a majority of the important world music rhythms.

## Do it yourself

You can play with a slightly generalized version of Euclidean rhythms in your browser using a p5js-based sketch I made to test my understanding of the algorithms involved. If it doesn't work in your preferred browser, retry with Google Chrome.

## The code

The code may still evolve in the future. There are some possibilities not explored yet (e.g. using ternary number systems instead of binary to drive 3 sounds per circle). You can download the full code for the p5js sketch on GitHub.

Screenshot of the p5js sketch running.

## The theory

So what does it do and how does it work? Each wheel contains a number of smaller circles. Each small circle represents a beat. With the length slider you decide how many beats are present on a wheel.

Some beats are colored dark gray (these can be seen as strong beats), whereas other beats are colored white (weak beats). To strong and weak beats one can assign a different instrument. The target pattern length decides how many weak beats exist between the strong beats. Of course it's not always possible to honor this request: in a cycle with a length of 5 beats and a target pattern length of 3 beats (left wheel in the screenshot) we will have a phrase of 3 beats that conforms to the target pattern length, and a phrase consisting of the 2 remaining beats that make a "best effort" to comply to the target pattern length.

Technically this is accomplished by running Euclid's algorithm. This algorithm is normally used to calculate the greatest common divisor of two numbers, but here we are mostly interested in the intermediate results of the algorithm. In Euclid's algorithm, to calculate the greatest common divisor of an integer m and a smaller integer n, the smaller number n is repeatedly subtracted from the greater number m until what is left is zero or smaller than n, in which case it is called the remainder. This remainder is then repeatedly subtracted from the smaller number n to obtain a new remainder. The process continues until the remainder is zero; when that happens, the corresponding smaller number is the greatest common divisor of the original two numbers n and m.

Let's try it out on the situation of the left wheel in the screenshot. The greater number m is 5 (length) and the smaller number n is 3 (target pattern length). Now the recipe says to repeatedly subtract 3 from 5 until you get something smaller than 3. We can do this exactly once:

5 - (1).3 = 2

We can rewrite this as:

5 = (1).3 + 2

This we can interpret as: the cycle of 5 beats is to be decomposed as 1 phrase with 3 beats, followed by a phrase with 2 beats (the remainder). Each phrase consists of a single strong beat followed by all weak beats. In a symbolic representation more easily read by musicians one might write: x..x. (In the notation of the previous part of this article one could also write 10010.)

Euclid's algorithm doesn't stop here. Now we have to repeatedly subtract the remainder 2 from the smaller number 3:

3 = (1).2 + 1

This in turn can be read as: the phrase of 3 beats can be further decomposed as 1 phrase of 2 beats followed by a phrase consisting of 1 beat. In a symbolic representation: x.x. Euclid continues:

2 = (2).1 + 0

The phrase of two beats can be represented symbolically as: xx. We've reached remainder 0 and Euclid stops: apparently the greatest common divisor between 5 and 3 is 1.
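The whole chain of divisions above can be computed in a few lines. This sketch (names are mine) records each step as m = q·n + r so the phrase decompositions can be read off the intermediate results:

```python
def euclid_steps(m, n):
    """Run Euclid's algorithm on (m, n), recording each step as (m, q, n, r)
    where m = q*n + r. The last nonzero n is the greatest common divisor."""
    steps = []
    while n:
        q, r = divmod(m, n)
        steps.append((m, q, n, r))
        m, n = n, r
    return steps
```

For the left wheel, euclid_steps(5, 3) returns [(5, 1, 3, 2), (3, 1, 2, 1), (2, 2, 1, 0)], which is exactly the chain 5 = (1).3 + 2, 3 = (1).2 + 1, 2 = (2).1 + 0 worked through above.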

Now it's time to realize what we really did:
• We decomposed a phrase of 5 beats in a phrase of 3 beats and a phrase of 2 beats making a rhythm x..x.
• Then we further decomposed the phrase of 3 beats into a phrase of 2 beats followed by a phrase of 1 beat.
• We can substitute this refined 3 beat phrase in our original rhythm of 5 = 3+2 beats to get a rhythm consisting of 5 = (2 + 1) + 2 beats: x.xx.
• I hope it's clear by now that by choosing how long to continue using Euclid's algorithm, we can decide how fine-grained we want our rhythms to become.
• This is where the max pattern length slider comes into play.
The length slider and the target pattern slider will determine a rough division between strong and weak beats by running Euclid's algorithm just once, whereas the max pattern length slider helps you decide how long to carry on Euclid's algorithm to further refine the generated rhythm.
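Putting the sliders together, here is a hedged sketch of the rhythm construction as I read the description above (the p5js sketch's actual source is on GitHub; the names and the exact splitting rule here are my interpretation): phrases start as target-length chunks plus a remainder, and any phrase longer than the max pattern length is split again using the next Euclid step.

```python
def phrase(n):
    """One strong beat followed by n - 1 weak beats."""
    return "x" + "." * (n - 1)

def euclidean_rhythm(length, target, max_len):
    """Decompose `length` beats into phrases of `target` beats plus a remainder,
    then keep splitting any phrase longer than `max_len` (one Euclid step per pass)."""
    phrases = [target] * (length // target)
    if length % target:
        phrases.append(length % target)
    while max(phrases) > max_len:
        m = max(phrases)
        smaller = [p for p in phrases if p < m]
        t = max(smaller) if smaller else 1  # next Euclid "smaller number"
        refined = []
        for p in phrases:
            if p == m:
                refined.extend([t] * (p // t))
                if p % t:
                    refined.append(p % t)
            else:
                refined.append(p)
        phrases = refined
    return "".join(phrase(p) for p in phrases)
```

With length 5 and target 3, a generous max pattern length gives x..x., and tightening the max pattern length to 2 refines it to x.xx., matching the worked example.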

## April 03, 2016

### Midichlorians in the blood

#### Taking Back From Android

Android is an operating system developed by Google around the Linux kernel. It is not like any other Linux distribution: not only have many common subsystems been replaced by other components, but the user interface is also radically different, based on the Java language running in a virtual machine called Dalvik.

An example of a subsystem removed from the Linux kernel is the ALSA Sequencer, which is a key piece for MIDI input/output with routing and scheduling that makes Linux comparable in capabilities to Mac OS X for musical applications (for musicians, not whistlers) and years ahead of Microsoft Windows in terms of infrastructure. Android did not offer anything comparable until Android 6 (Marshmallow).

Another subsystem from userspace Linux not included in Android is PulseAudio. Instead, OpenSL ES is what can be found on Android for digital audio output and input.

But Android also has some shining components. One of them is Sonivox EAS (originally created by Sonic Network, Inc.) released under the Apache 2 license, and the MIDI Synthesizer used by my VMPK for Android application to produce noise. Funnily enough, it provided some legal fuel to Oracle in its battle against Google, because of some Java binding sources that were included in the AOSP repositories. It is not particularly outstanding in terms of audio quality, but has the ability of providing real time wavetable GM synthesis without using external soundfont files, and consumes very little resources so it may be indicated for Linux projects on small embedded devices. Let's take it to Linux, then!

So the plan is: for the next Drumstick release, there will be a Drumstick-RT backend using Sonivox EAS. The audio output part is yet undecided, but for Linux will probably be PulseAudio. In the same spirit, for Mac OSX there will be a backend leveraging the internal Apple DLS synth. These backends will be available in addition to the current FluidSynth one, which provides very good quality, but uses expensive floating point DSP calculations and requires external soundfont files.

Meanwhile, I've published on GitHub this repository including a port of Sonivox EAS for Linux with ALSA Sequencer MIDI input and PulseAudio output. It also depends on Qt5 and Drumstick. Enjoy!

Sonivox EAS for Linux and Qt:
https://github.com/pedrolcl/Linux-SonivoxEas

Related Android project:
https://github.com/pedrolcl/android/tree/master/NativeGMSynth

## March 16, 2016

### Talk Unafraid

#### The Investigatory Powers Bill for architects and administrators

OK, it’s not the end of the world. But it does change things radically, should it pass third reading in its current form. There is, right now, an opportunity to effect some change to the bill in committee stage, and I urge you to read it and the excellent briefings from Liberty and the Open Rights Group and others and to write to your MP.

Anyway. What does this change in our threat models and security assessments? What aspects of security validation and testing do we need to take more seriously? I’m writing this from my own perspective, that of systems work at a small ISP, but this contains my personal views, not those of my employer, yada yada.

## The threats

First up let’s look at what the government can actually do under this bill. I’m going to try and abstract things a little from the text in the bill, but essentially they can:

• Issue a technical capability notice, which can compel the organization to make technical changes in order to provide a capability or service to government
• Compel an individual (not necessarily within your organization) to access data
• Issue a retention notice, which can compel the organization to store data and make it available through some mechanism
• Covertly undertake equipment interference (potentially with the covert, compelled assistance of someone in the organization, potentially in bulk)

Assuming we’re handling some users’ data, and want to protect their privacy and security as their bits transit the network we operate, what do we now need to consider?

• We can’t trust any individual internally
• We can’t trust any small group of individuals fully
• We can’t trust the entire organization not to become compromised
• We must assume that we are subject to attempts at equipment interference
• We must assume that we may be required to retain more data than we need to

So we’re going to end up with a bigger threat surface and more attractors for external threats (all that lovely data). We’ve got to assume individuals may be legally compelled to act against the best interests of the company’s users – this is something any organization has to consider a little bit, but we’ve always been viewing this from a perspective of angry employees the day they quit and so on. We can’t even trust that small groups are not compromised, and they may either transfer sensitive data or assist in the compromise of equipment.

Beyond that, we have to consider what happens if an organizational notice is made – what if we’re compelled to retain non-sampled flow data, or perform deep packet inspection and retain things like HTTP headers? How should we defend against all of this, from the perspective of keeping our users safe?

### Motivation

To be clear – I am all for targeted surveillance. I believe strongly we should have well funded, smart people in our intelligence services, and that they should operate in a legal fashion, with clear laws that are up to date and maintained. I accept that no society with functional security services will have perfect privacy.

I don’t think the IPB is the right solution, mind you, but this is all to say that there will always be some need for targeted surveillance and equipment interference. These should be conducted only when a warrant is issued (preferably by a judge and cabinet minister), and ISPs should indeed be legally able to assist in these cases, which requires some loss of security and privacy for those targeted users – and it should be only those users.

I am a paid-up member of the Open Rights Group, Liberty and the Electronic Frontier Foundation. I also regularly attend industry events in the tech sector, and the ISP sector in particular. Nobody wants to stop our spies from spying where there’s a clear need for them to do so.

However, as with all engineering, it’s a matter of tradeoffs. Bulk equipment interference or bulk data retention is complete overkill and helps nobody. Covert attacks on UK infrastructure actively weaken our security. So how do we go about building a framework that permits targeted data retention and equipment interference in a secure manner?  Indeed, encourages it at an organizational level rather than forcing it to occur in a covert manner?

## Equipment Interference

This is the big one, really. It doesn’t matter how it happens – internally compelled employee, cleaner plugging a USB stick from a vague but menacing government agency into a server and rebooting it, switches having their bootloaders flashed with new firmware as they’re shipped to you, or covert network intrusion. Either way you end up in a situation where your routers, switches, servers etc. are doing things you did not expect, almost certainly without your knowledge.

This makes it practically impossible to ensure they are secure, against any threats. Sure, your Catalyst claims to be running IOS 13.2.1. Your MX-series claims to be running JunOS 15.1. Can we verify this? Maybe. We can use host-based intrusion detection systems to monitor integrity and raise alarms.
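At its simplest, host-based integrity monitoring means comparing file digests against a known-good baseline (tools like AIDE or Tripwire do this far more thoroughly; this sketch and its names are illustrative only):

```python
import hashlib

def snapshot(paths):
    """Record a SHA-256 digest for each file: a minimal integrity baseline."""
    digests = {}
    for p in paths:
        with open(p, "rb") as f:
            digests[p] = hashlib.sha256(f.read()).hexdigest()
    return digests

def changed(baseline, paths):
    """Return the files whose current digest no longer matches the baseline."""
    current = snapshot(paths)
    return [p for p in paths if current.get(p) != baseline.get(p)]
```

Of course this only helps if the baseline itself lives somewhere the attacker can’t rewrite, which is exactly the problem once the firmware underneath you is suspect.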

Now, proper auditing and logging and monitoring of all these devices, coupled with change management etc will catch most of the mundane approaches – that’s just good infosec, and we have to do that to catch all the criminals, script kiddies and random bots trawling the net for vulnerable hosts. Where it gets interesting is how you protect against the sysadmin themselves.

It feels like we need to start implementing m-in-n authorization to perform tasks around sensitive hosts and services. Some stuff we should be able to lock down quite firmly. Reconfiguring firewalls outside of the managed, audited process for doing so using a configuration management (CM) tool? Clearly no need for this, so why should anyone ever be able to do it? All services in CM, be it Puppet/Salt/Chef, with strongly guarded git and puppet repositories and strong authentication everywhere (keys, proper CA w/ signing for CM client/server auth, etc)? Then why would admins ever need to log into machines? Except inevitably someone does  need to, and they’ll need root to diagnose whatever’s gone wrong, even if the fix is in CM eventually.

We can implement 2-person or even 3-person authentication quite easily, even at small scales, using physical tools – hardware security modules locked in 2-key safes, or similar. But it’s cumbersome and complicated, and doesn’t work for small scales where availability is a concern – is the on-call team now 3 people, and are they all in the office all the time with their keys?

There’s a lot that could be done to improve that situation in low to medium security environments, to stop the simple attacks, to improve the baseline for operational security, and crucially to surface any covert attempts at EI conducted by an individual or from outside, covertly. Organizationally, it’d be best for everyone if the organization were aware of modifications that were required to their equipment.

From a security perspective, a technical capability notice or data retention notice of some sort issued to the company or group of people at least means that a discussion can be had internally. The organization may well be able to assist in minimising collateral damage. Imagine: “GCHQ needs to look at detailed traffic for this list of 10 IPs in an investigation? Okay, stick those subscribers in a separate VLAN once they hit the edge switches, route that through the core here and perform the extra logging here for just that VLAN and they’ve got it! Nobody else gets logged!” rather than “hey, why is this Juniper box suddenly sending a few Mbps from its management interface to some IP in Gloucestershire? And does anyone know why the routing engines both restarted lately?”

## Data Retention

This one’s actually pretty easy to think about. If it’s legally compelled by a retention or technical capability notice, you must retain as required, and store it as you would your own browser history – in a write-only secure enclave, with vetted staff, ISO27K1 compliant processes (plus whatever CESG requires), complete and utter segmentation from the rest of the business, and whatever “request filter” the government requires stays in there with dedicated, highly monitored and audited connectivity.

What’s that, you say? The government is not willing to pay for all that? The overhead of such a store for most small ISPs (<100,000 customers) would be huge. We’re talking millions if not more per small ISP (ispreview.co.uk lists 237 ISPs in the UK). Substantial office space, probably 5 non-technical and 5 technical staff at minimum, a completely separate network, data diodes from the collection systems, collection systems themselves, redundant storage hardware, development and test environments, backups (offsite, of course – to your second highly secure storage facility), processing hardware for the request filter, and so on. Just the collection hardware might be half a million pounds of equipment for a small ISP. If the government start requiring CESG IL3 or higher, the costs keep going up. The code of practice suggests bulk data might just be held at OFFICIAL – SENSITIVE, though, so IL2 might be enough.

The biggest risk to organizations when it comes to data retention is that the government might not cover your costs – they’re certainly not required to. And of course the fact that you’re the one to blame if you don’t secure it properly and it gets leaked. And the fact that every hacker with dreams of identity theft in the universe now wants to hack you so bad, because you’ve just become a wonderfully juicy repository of information. If this info got out, even for a small ISP, and we’re talking personally-identifiable flow information/IP logs – which is what “Internet Connection Records” look/sound like, though they’re still not defined – then Ashley Madison, TalkTalk and every other “big data breach” would look hilariously irrelevant by comparison. Imagine what personal data you could extract from those 10,000 users at that small ISP! Imagine how many people’s personal lives you could utterly destroy, by outing them as gay, trans or HIV positive, or a thousand other things. All it would take is one tiny leak.

You can’t do anything to improve the security/privacy of your end users – at this point, you’re legally not allowed to stop collecting the data. Secure it properly. And did I mention you should write to your MP while the IPB is at committee stage?

If you’ve not been served with a notice: carry on, business as usual, retain as little as possible to cover operational needs and secure it well.

## Auditing

Auditing isn’t a thing that happens enough.

I always think that auditing is a huge missed opportunity. We do pair programming and code review in the software world, so why not do terminal session reviews? If X logs into a router, makes 10 changes and logs out, yes we can audit the config changes and do other stateful analysis, but we can audit those commands as a single session. It feels like there’s a tool missing to collate logs from something like syslog and bring them together as a session, and then expose that as a thing people can look over, review, and approve or flag for discussion.
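The missing tool might look something like this sketch, assuming a deliberately simplified log format (real syslog parsing, and being able to trust the logs at all, is the hard part; the format and names here are hypothetical):

```python
import re

# Hypothetical simplified format: "<timestamp> <host> <user> <event>",
# where <event> is LOGIN, LOGOUT, or "CMD <command line>".
LINE = re.compile(r"^(?P<ts>\S+) (?P<host>\S+) (?P<user>\S+) (?P<event>.*)$")

def sessions(lines):
    """Collate log lines into per-(host, user) sessions bounded by LOGIN/LOGOUT."""
    open_sessions = {}
    finished = []
    for line in lines:
        m = LINE.match(line)
        if not m:
            continue
        key = (m["host"], m["user"])
        event = m["event"]
        if event == "LOGIN":
            open_sessions[key] = []
        elif event == "LOGOUT" and key in open_sessions:
            finished.append((key, open_sessions.pop(key)))
        elif event.startswith("CMD ") and key in open_sessions:
            open_sessions[key].append(event[4:])
    return finished
```

Each finished session then becomes one reviewable unit, ready to be approved or flagged for discussion much like a pull request.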

It’s a nice way for people to learn, too – I’ve discovered so many useful tools from watching my colleagues hack away at a server, and about the only way I can make people feel comfortable working with SELinux is to walk them through the quite friendly tools.

Auditing in any case should become a matter of course. Tools like graylog2, ELK and a smorgasbord of others allow you to set up alerts or streams on log lines – start surfacing things like root logins, su/sudo usage, and “high risk” commands like firmware updates, logging configuration, and so on. Stick a display on your dashboards.

Auditing things that don’t produce nice auditable logs is of course more difficult – some firewalls don’t, some appliances don’t. Those just need to be replaced or wrapped in a layer that can be audited. Web interface with no login or command audit trail? Stick it behind an HTTPS proxy that does log, and pull out the POSTs. Firewall with no logging capability? Bin it and put in something that does. Come on, it’s 2016.

## Technical capability notices and the rest

This is the unfixable. If you get handed a TCN, you basically get to do what it says. You can appeal on the grounds of technical infeasibility, but not proportionality or anything like that. So short of radically decentralizing your infrastructure to make it technically too expensive for the government, you’re kind of stuck with doing what they say.

The law is written well enough to prevent obvious loopholes. If you’re an ISP, you might consider encryption – you could encrypt data at your CPEs, and decrypt it on your edge. You could go a step further and not decrypt it at all, but pass it to some other company you notionally operate extraterritorially, who decrypt it and then send it on its way from there. But these come with potentially huge cost, and in any case the TCN can require you to remove any protection you applied or are in a position to remove if practical.

We can harden infrastructure a little – things like using m-in-n authorization models, DNSCrypt for DNS lookups from CPEs, securely authenticating provisioning servers and so on. But there is no technical solution for a policy problem – absolutely any ISP, CSP, or 1-man startup in the UK is as powerless as the next if the government rocks up with a TCN requiring you to store all your customers’ data or to install black boxes everywhere your aggregation layer connects to the core or whatever.

Effectively, then, the UK industry is powerless to prevent the government from doing whatever the hell it likes, regardless of security or privacy implications, to our networks, hardware and software. We can take some steps to mitigate covert threats or at least give us a better chance of finding them, and we can make some changes which attempt to mitigate against compelled (or hostile) actors internally – there’s an argument that says we should be doing this anyway.

And we can cooperate with properly-scoped targeted warrants. Law enforcement is full of good people, trying to do the right thing. But their views on what the right thing to do is must not dictate political direction and legal implementation while ignoring the technical realities. To do so is to doom the UK to many more years with a legal framework which does not reflect reality, and actively harms the security of millions of end users.

## February 29, 2016

### Audio, Linux and the combination

#### MOD DUO has arrived !

Hi all !

I know that it has been a loooong time since I posted anything, but I do have a life you know ;-)

Anyway, I just wanted to share that my MOD DUO arrived and my son and I made a little MOD DUO unboxing video about it!

Great device, really nice build and so far the interface just blew me away !
I plan on doing some more vids on the MOD, but no promises !

Enjoy !

## November 16, 2015

### m3ga blog

#### Forgive me Curry and Howard for I have Sinned.

Forgive me Curry and Howard for I have sinned.

For the last several weeks, I have been writing C++ code. I've been doing some experimentation in the area of real-time audio Digital Signal Processing, and for this kind of work C++ actually is a better fit than Haskell.

Haskell is simply not a good fit here because I need:

• To be able to guarantee (by inspection) that there is zero memory allocation/de-allocation in the real-time inner processing loop.
• To handle things like IIR filters, which are inherently stateful, with their internal state being updated on every input sample.

There is however one good thing about coding in C++; I am constantly reminded of all the sage advice about C++ I got from my friend Peter Miller, who passed away a bit over a year ago.

Here is an example of the code I'm writing:

// An abstract base class for 2nd order IIR filters.
class iir2_base
{
public :
    iir2_base () ;

    // Virtual destructor does nothing.
    virtual ~iir2_base () { }

    inline double process (double in)
    {
        unsigned minus2 = (minus1 + 1) & 1 ;
        double out = b0 * in + b1 * x [minus1] + b2 * x [minus2]
                        - a1 * y [minus1] - a2 * y [minus2] ;
        minus1 = minus2 ;
        x [minus1] = in ;
        y [minus1] = out ;
        return out ;
    }

protected :
    // iir2_base internal state (all statically allocated).
    double b0, b1, b2 ;
    double a1, a2 ;
    double x [2], y [2] ;
    unsigned minus1 ;

private :
    // Disable copy constructor and assignment operator.
    iir2_base (const iir2_base &) ;
    iir2_base & operator = (const iir2_base &) ;
} ;



## November 06, 2015

### Nothing Special

#### Easy(ish) Triple Boot on 2014 Macbook Pro

UPDATE: Feel free to read, but I've since renounced this process and made a new one here.
Nothing is easy. Or perhaps everything is. Regardless, here is how I did it, but first a little backstory:

I got a MacBook Pro 11,3 from work. I wanted a Lenovo, but the boss wants me to do some iOS stuff eventually. That's fine, 'cause I can install Linux just as easily on whatever. Oh wait... there are some caveats. Boot Camp seems to be a little picky. Just as well. MIS clowns set up Boot Camp so I had Windows 7 and Yosemite working, but they told me I'm on my own for Linux. It seems from the posts I've read that triple booting is something you have to plan out from the get-go of partitioning, not just add in as an afterthought. But I also found suggestions about wubi.

I've used wubi before without really understanding what it did, but it's actually perfect for setting up a triple boot system in my situation (where it's already dual boot and I want to tack on Linux and ignore the other two). There is a lot of misunderstanding that wubi is abandoned and no longer supported, bla bla. The real story is that the way wubi works doesn't play nicely with Windows 8. Since it doesn't work for everybody, Ubuntu doesn't want to advertise it as an option. It's there, but they'd rather have everyone use the most robust method known: a full install from the live CD/USB. Not that wubi is rickety or anything, but it only works in certain situations (Windows 7 or earlier). The reality is it's on every desktop ISO downloaded, including the latest versions (more on that later).

The way wubi works is important to note too (and it's the reason it's perfect for this situation). Wubi creates a virtual disk inside the NTFS (Windows) partition of the disk. So instead of dividing the hard drive space into two sections (one for Linux, one for Windows, and/or a third for OSX if triple booting), it doesn't create disk partitions at all, just a disk file inside the existing Windows partition. The Windows bootloader is configured to open the Windows partition and then mount this file as another disk in what's called loopback mode. This is distinctly contrasted with a virtualized environment, where often a virtual disk is running on virtual hardware. You are using your actual machine; just your disk is kinda configured in a unique but clever way.

The main downside, it sounds like, is that you could have poor disk performance; in extreme cases, VERY poor performance. Since this machine was intended for development it's maxed out with 16GB RAM, so I'm not even worrying about swap, and the 1TB HDD has plenty of space for all 3 OSes. It's a fresh install, so it shouldn't be too fragmented. These are the best conditions for wubi. So far it seems to be working great. Install took a little trial and error though.

So I had to at least TRY to teach you something before giving you the recipe, but here goes:

1. I had to install bootcamp drivers in windows. MIS should have done that but they're clowns. You'll have to learn that on your own. There are plenty of resources for those poor mac users. This required a boot into OSX.
2. Boot into windows.
3. Use the on-screen keyboard in the accessibility options of Windows to be able to hit ctrl+alt+delete, making up for the flaw that MacBooks have no delete key (SERIOUSLY?). Also don't get me started on how I miss my Lenovo TrackPoints.
4. I installed sharpkeys to remap the right alt to be a delete key so I could get around this in the future. I know sooner or later Cypress will make me boot into windoze.
5. Download the Ubuntu desktop live CD ISO (I did the most recent LTS. I'm not in school any more, gone are the days where I had time to change everything every 6 months).
6. In windows install something that will let you mount the ISO in a virtual cd drive. You could burn it to CD or make a live USB, but this was the quickest. I used WinCDEmu as it's open source.
7. Mount the ISO and copy wubi.exe off of the ISO's contents and into whatever directory the ISO is actually in (i.e. Downloads).
8. Unmount the ISO. This was not obvious to me and caused an error in my first attempt.
9. Disable your wifi. This was not obvious to me and caused an error in my second attempt. This forces wubi to look around and find the ISO that is in the same folder rather than try to re-download another ISO.
10. Run wubi.exe.
11. Pick your install size, user name, all that. Not that it matters, but I just did vanilla Ubuntu since I was going to install i3 over the Unity DE anyway. Historically I always liked to do it with Xubuntu, but I digress.
12. Hopefully I haven't forgotten any steps, but that should run and ask you to reboot. (I'd re-enable the wifi before you reboot, or else you'll forget like I did and wonder why it's broken next Windows boot.)
13. The reboot should complete the install and get you into ubuntu.
14. I believe the next time you reboot it will not work. For me it did not. It's due to a grub2 bug, I understand. Follow the solutions in these two threads:
15. To roughly summarise the process, hit the e key to edit the grub config that will try to load ubuntu. Edit the line
linux /boot/vmlinuz-3.13.0-24-generic root=UUID=bunchofhexidec loop=/ubuntu/disks/root.disk ro

ro should be changed to rw. This will allow you to boot. The first post tells you to edit an auto-generated file. That's silly: what happens when it gets auto-generated again and overwrites your fix? It even says not to edit it in the header. Instead you need to make a similar change to the file that causes the error and then generate those files again as described in the second link.
16. Once that is sorted out you'll probably notice that the wifi is not working. You can either use an ethernet port adapter or a USB wifi card (or figure out another way), but get internet somehow and install bcmwl-kernel-source and it should start working (maybe after a logout; I don't remember).
17. Another tweak you will need is for this screen's ridiculously high DPI, which makes the default fonts all teensy-tiny. The easiest workaround is just to lower the screen resolution in the Displays settings of unity-control-center, but you can also edit the font sizes in that dialog and/or using unity-tweak-tool. I'm still ironing that out, especially since my secondary monitors are still standard definition. xrandr --scale is my only hope. Or just lower the resolution.
18. You might find that the touchpad click doesn't work as you expect. Try running the command:
and see if you like it better. I sure do. Also enable two finger scrolling in the unity-control-center.
19. Also, importantly, wubi only allows up to 30GB of virtual disk to be created. I wanted a lot more than that. So I booted off a USB live stick I had lying around and followed the instructions here to make it a more reasonable 200GB.
20. Finally install i3-wm, vifm, eclipse, the kxstudio repos and everything else you love about linux.
So I love my MacBook. Because it's a Linux box.

## September 06, 2015

### A touch of music

#### Algorithmic composition: generating tonal canons with Python and music21

 1x spiced up chord progression (intermediate step in canon generation)

## What?

According to wikipedia:
Algorithmic composition is the technique of using algorithms to create music.

Some algorithms that have no immediate musical relevance are used by composers as creative inspiration for their music. Algorithms such as fractals, L-systems, statistical models, and even arbitrary data (e.g. census figures, GIS coordinates, or magnetic field measurements) have been used as source materials.
And:
In music, a canon is a contrapuntal compositional technique that employs a melody with one or more imitations of the melody played after a given duration (e.g., quarter rest, one measure, etc.).

## How?

Last year I wrote a series of articles on an easy method for writing some types of canons:
In at least one of those articles I claimed that the methods described there should be easy to use as basis for automation. Here I automate the method for writing a simple canon as explained in the first of these articles.

## Code?

The code discussed below is available under GPLv3 license on github: https://github.com/shimpe/canon-generator. Similar to the gcc compiler, the music you generate with this program is yours. The GPLv3 license only applies to the code itself.

It depends on free software only: python 2.7, music21 and MuseScore.

## Explanation?

This program generates canons from a given chord progression in a fixed key (no modulations for now, sorry!).

It does NOT generate atonal or experimental music; that is, provided you're willing to accept the limitations of the program. It can occasionally generate "grave" errors against common practice rules (e.g. parallel fifths/octaves); see the further explanation below.

It closely follows the method as explained in the Tutorial on my technique for writing a Canon article referenced above. If you want to understand in detail how it all works, please read that article first, and then come back here. If you just want to experiment with the different program settings, continue :)

One thing is worth explaining in more detail: the article mentions in one of the first steps of the recipe that the composer can start from a chord progression and "spice it up" to get a chorale. So: how do we get a computer to spice up a chord progression without building in thousands of composition rules?

I introduced some note transformations that:
• introduce small steps between notes so as to generate something that could be interpreted as a melody
• do not fundamentally alter the harmonic function in the musical context
To accomplish this I replace a note with a sequence of notes, without altering the total duration of the fragment, e.g.
• original note (half note) -> original note (quarter note), neighbouring note (8th note), original note (8th note)
Other transformations look at the current note and the next note, and interpolate a note in between (again, without changing the total duration):
• original note (half note), next note -> original note (quarter note), note between original note and next note (quarter note), next note
A property of generating melodies by spicing up lists of notes using this method is that after spicing up a list of notes, you can spice the spiced up list again to get an even spicier list (more complex melody, both in pitch and in rhythm).
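The two transformation families can be sketched in a few lines. This is a hypothetical, minimal stand-in, not the actual canon-gen.py code (which works on music21 streams); here a note is just a (scale_degree, quarter_length) pair:

```python
def neighbour(notes, i):
    """Replace note i: original (half) -> original (quarter) + upper neighbour (8th) + original (8th)."""
    deg, dur = notes[i]
    spiced = [(deg, dur / 2), (deg + 1, dur / 4), (deg, dur / 4)]
    return notes[:i] + spiced + notes[i + 1:]

def passing(notes, i):
    """Interpolate a passing note between notes[i] and notes[i + 1]."""
    (d1, dur), (d2, _) = notes[i], notes[i + 1]
    mid = (d1 + d2) // 2                      # a scale degree in between
    return notes[:i] + [(d1, dur / 2), (mid, dur / 2)] + notes[i + 1:]

melody = [(0, 2.0), (4, 2.0)]                 # two half notes: C and G in C major
spiced = passing(neighbour(melody, 0), 2)     # spice, then spice the spiced list again
total = sum(dur for _, dur in spiced)         # spicing never changes the total duration
print(spiced, total)
```

Note how the second pass operates on the output of the first, which is exactly the recursive "spicier list" behaviour described above.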

Finally a warning for the sensitive ears: for a composer to write a chorale that obeys all the rules of the "common practice" takes years of study and lots of practice. Given the extreme simplicity of the program, the computer doesn't have any of this knowledge and it will happily generate errors against the common practice rules (e.g. parallel fifths and octaves). Not always, but sometimes, as dictated by randomness. Note that this is an area in which the program could be improved, by checking for errors while spicing up and skipping proposed spicings that introduce errors against the common practice rules.

Yet, despite the extreme simplicity of the method, the results can be surprisingly complex and in some cases sound interesting.

## How can I use it?

In its current form, the program is not really easy to install and use, at least if you have no computer experience:
• You need to install the free Python programming language from http://www.python.org/download/releases. I recommend using version 2.7; Python 3 and later won't work!
• You also need to install the free music21 toolkit for computer-aided musicology. Follow the instructions on their website. Music21 provides vast amounts of music knowledge which would take a long time to write ourselves. I'm using only a fraction of its possibilities in the canon generator.
• Then you need something to visualize and audition the MusicXml that is generated by the program. For our purposes, the free MuseScore program works perfectly.
• Finally you need to get the free canon-gen.py program from the github repository https://github.com/shimpe/canon-generator.
The main function defined near the bottom of the canon-gen.py file contains some parameters you can edit to experiment with the generator:
• chords = "C F Am Dm G C"
#You can insert a new chord progression here.
• scale = music21.scale.MajorScale("C")
#You can define a new scale in which the notes of the chords should be interpreted here
• voices = 5
#Define the number of voices in your canon here
• quarterLength = 2
#Define the length of the notes used to realize the chord progression
#(don't choose them too short, since the automatic spicing-up will make them shorter)
• spice_depth = 1
# Define how many times a stream (recursively) should be spiced up
# e.g. setting 2 will first spice up the chords, then again spice up the already spiced chords.
# scores very quickly become rhythmically very complex for settings > 2
• stacking = 1
# the code can generate multiple versions of the spiced up chord progression, and use those
# versions to create extra voices
# e.g. setting stacking = 2 will turn a 3-voice canon into a 3*2 = 6-voice canon
• voice_transpositions = { VOICE1 : +12, VOICE2 : 0, VOICE3 : -12, VOICE4: -24, VOICE5: 0 }
# allow extra octave jumps between voices
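To illustrate how the voices, entry delay and voice_transpositions settings interact, here is a hypothetical sketch (not the actual canon-gen.py code): each voice plays the same spiced melody, entering one delay after the previous voice, optionally transposed by octaves:

```python
def make_canon(melody, voices, entry_delay, transpositions=None):
    """melody: list of (midi_pitch, quarter_length); returns per-voice (start, pitch, dur) events."""
    transpositions = transpositions or {}
    canon = []
    for v in range(voices):
        start = v * entry_delay                 # each voice enters one delay later
        shift = transpositions.get(v, 0)        # optional octave jumps, in semitones
        events, t = [], start
        for pitch, dur in melody:
            events.append((t, pitch + shift, dur))
            t += dur
        canon.append(events)
    return canon

melody = [(60, 2.0), (65, 2.0), (69, 2.0)]      # C F A as MIDI pitches, one note per chord
voices = make_canon(melody, voices=3, entry_delay=2.0, transpositions={2: -12})
print(voices[1][0], voices[2][0])
```

Because every voice is the same line against the same chord progression, whatever harmony the spiced line implies still works when the delayed copies overlap; that is the core of the method from the original article.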

## What does it sound like?

This is a simple example generated with the program with settings
• chords = "C F Am Dm G C"
• scale = music21.scale.MajorScale("C")
• voices = 5
• quarterLength = 2
• spice_depth = 1
• stacking = 1
• voice_transpositions = { VOICE1 : 0, VOICE2 : 0, VOICE3 : -12, VOICE4: -24, VOICE5: 0 }

## Ideas for future improvements

I see many possible improvements, most of which are low-hanging fruits. Feel free to jump in and improve the code :D
• fix a known bug related to octaviations in keys other than C
• support modulations, i.e. keep musical key per measure/beat instead of over the complete chord progression
• extend the code to cover the things explained in the later articles: crab and table canons
• see if the method/code can be extended to generate canons at the third, fifth, ...
• smarter spicing up of chord progressions to avoid parallel fifths/octaves (e.g. rejecting a proposed spice if it introduces an error in the overall stream); or use Dmitry Tymoczko's voice leading spaces to ensure better voice leading by construction.
• protect the end chord from getting spiced up
• implement more note transformations, e.g. appoggiatura
• experiment with more rhythms
• how can we better spice up the chord progressions without messing up too much of the original harmonies?
• ...

## August 21, 2015

### Talk Unafraid

#### The Dark Web: Guidance for journalists

We had a lot of coverage of “the dark web” with the latest Ashley Madison leak coverage. Because a link to a torrent was being shared via a Tor page (well, nearly – actually most people were passing around the Tor2Web link), journalists were falling over themselves to highlight the connection to the “dark web”, that murky and shady part of the internet that probably adds another few % to your click-through ratios.

So many outlets and journalists – even big outfits like BBC News and The Guardian – got their terminology terribly wrong on this stuff, so I thought I’d slap together some guidance, being somewhat au fait with the technology involved. Journalists are actually most of the reason why these sorts of tools exist in the first place, in fact – if that surprises you, read on…

## The Dark, Deep Internet

What the hell is “the dark web” anyway? Why is it different from the “deep web”? Why, for that matter, does it differ from the “web”?

First up, to clarify: “the dark web” and “darknets” are practically the same thing, and the terms are used interchangeably.

So: The Deep Web and The Web are technically the same. People often refer to the deep web when they are referring to websites (that is, sites on the internet) that are hard to find with normal search engines because they are not linked to in public. Tools like Google depend on being able to follow a chain of links to find a website – if there are no links that Google can see, it's not going to get into the Google index, and so will not be searchable. These sites are still on the internet, though, and anyone who is given the link can put that in a perfectly normal browser and reach that site.

The Dark Web, however, refers to a different technical domain. Dark web or “darknet” sites are only reachable using a tool that encrypts and re-routes your traffic, providing a degree of anonymity. These tools we typically call “anonymity networks”, or “overlay networks”, as they run on top of the internet’s infrastructure. You need to be a part of this network to be able to reach content in the “dark web”. The dark web refers to lots of different tools – Tor is the most widely known, but isn’t all about the dark web, as we’ll learn shortly. I2P and Freenet are two other well-known examples of overlay networks. It’s worth noting that these networks don’t interoperate – the Tor darknet can’t talk to the I2P darknet, as they use radically different technical approaches to achieve similar results.

## The Onion Router, Clearnet and Darknet

Tor (The Onion Router) is a peer to peer, distributed anonymization network that uses strong cryptography and many layers of indirection to route traffic anonymously and securely around the world. Most people using Tor are using it as a proxy for “clearnet” sites; others use it to access hidden services. It’s by far the most popular darknet.

From a darknet perspective, clearnet is the real internet, the world wide web we all know and love. The name refers to the fact that information on the clearnet is sent “in the clear”, without any encryption built into the network protocols (unlike darknets, where encryption is built into the underlying network).

Tor is a technical tool, and is used primarily as a network proxy. To use Tor a client is installed, which will connect to the network. This same client can optionally relay traffic from other clients, expanding the network. As of this post there are about 6500 relays in the Tor network, and 3000 bridges – these bridges are not publicly listed, making it hard for hostile governments to block them, and so allowing users in hostile jurisdictions to connect to the network.

The Tor project also provides the Tor Browser Bundle, which is a modified version of Firefox ESR (Extended Support Release) that contains a Tor client and is configured to prevent many de-anonymization attacks that focus on exploiting the client (for instance, forcing non-Tor connections to occur to a site under the attacker’s control using plugins like Flash or WebRTC, allowing correlation between Tor and clearnet traffic to identify users). This is the recommended way to use Tor for browsing if you’re not using TAILS.

TAILS is a project related to Tor that provides a “live system” – a complete operating system that can be started and run from a USB stick. TAILS stands for The Amnesic Incognito Live System – as the name suggests, it remembers nothing, and does all it can to hide you and your activity. This is by far the most robust tool if you’re aiming to protect your activity online, and is used widely by journalists across the world, as it’s easy to take with you and hide – even in very hostile environments.

## Hiding from the censors

On the internet it’s reasonably easy to find out where a website is hosted, who’s responsible for it, and from there it’s easy for law enforcement to shut it down by contacting the hosts with the right paperwork. It’s also normally quite easy from that point to find out who was running a website and go after them, though there’s plenty of zero-knowledge hosts out there who will accept payment in cash or Bitcoin, ask no questions and so on.

There’s another facet to this – if you’re a government trying to block websites, it’s very easy to look at traffic and spot traffic destined for somewhere you don’t like, and either block it or modify the contents (or simply observe it). This is common practice in countries like Iran, China, Syria and Israel, and it is quite a lot of the reason why Tor exists – the adoption of this filtering technology by countries like the UK, ostensibly to prevent piracy, limit hate speech or “radical/extremist views”, or to protect children, is driving Tor adoption in the west, too.

Hidden services (and while Tor is the most commonly cited example, other networks support similar functionality) effectively use the same approach they use to hide the origin of traffic destined for the clearnet to hide both the origin and source of traffic between a user and a hidden service. Unless the hidden service itself offers a clue as to its owners or location, then users of that service can’t identify where that hidden service is operated from. Likewise, the operators of the hidden service can’t see where their users come from. Traffic between the two ends meets in the middle at a randomly picked rendezvous point, which also has no knowledge of what’s being transferred or where it’s come from or going to.

This allows for the provision of services within the darknet entirely, removing the need for the clearnet. This has many advantages – mainly, if your Tor exit node for a session happens to be in Russia, you’re likely to see Russian censorship as your traffic leaves Tor and enters the clearnet. If your traffic never reaches the clearnet, government censorship is unable to view and censor that traffic. It’s also very hard for governments monitoring darknets to reach out and shut down sites that are hosted in their jurisdiction – because they don’t know which sites are in their jurisdiction.

Increasingly, legitimate sites have started to offer hidden service mirrors or proxies, allowing Tor users to browse their content without leaving the network. Facebook, ironically, was one of the first major sites to offer this, targeting users in jurisdictions where network tampering is common. The popular search engine DuckDuckGo is another example.

## Designed for criminals, or just coincidentally useful?

Of course, there are some criminal users of these networks – just as there are criminal users of the internet, and criminal users of the postal service, and criminal users of road networks. But was Tor made for criminal purposes?

Short answer, no. The long answer is still no – Tor was originally developed by the United States Naval Research Laboratory, and development has been subsequently funded by a multitude of sources, mostly related to human rights and civil liberty movements, including the US State Department’s human rights arm. Broadcasters increasingly fund Tor’s development as they try and find new ways to reach markets traditionally covered by border-spanning shortwave broadcasts. You can read up on Tor’s sponsors here.

The point is, Tor and other networks like I2P and Freenet were never designed with criminals in mind, but rather with strong anonymity and privacy in mind. These properties are technical, and define how the tool is designed and developed. These properties are vital for the primary users of these tools, and are intrinsically all-or-nothing.

This is an important point, and one that crops up again and again in both discussions of Tor and when discussing things like government interception of encryption, or “banning” encryption unless it’s possible for the government to subvert it “in extremis”, as has been called for numerous times by the UK government, to give one example.

On a technical level, and a very fundamental one at that, one cannot make a tool that is simultaneously resistant to government censorship and traffic manipulation/interception and also permits lawful intercept by law enforcement authorities, because these networks span borders, and one person’s lawful intercept is another person’s repressive government. There is a lot of technical literature out there on why this is an exceptionally hard problem and practically infeasible, so I won’t go into detail on this. However, key escrow (the widely accepted “best” approach – though still highly problematic) was attempted in the past by the NSA with the Clipper chip – and it failed spectacularly.

These properties of anonymity and security also make the services attractive to certain types of criminals, of course, but in recent reports such as this one from RAND on DRL (US State Dept) funded Tor development, the general conclusion is that Tor doesn’t help criminals that much, because there are better tools out there for criminal use than Tor:

There is little reported evidence that the Internet freedom tools funded by DRL [ie: Tor] assist illicit activities in a material way, vis-à-vis tools that predated or were developed without DRL funding…

… given the wealth and diversity of other privacy, security, and social media tools and technologies, there exist numerous alternatives that would likely be more suitable for criminal activity, either because of reduced surveillance and law enforcement capabilities, fewer restrictions on their availability, or because they are custom built by criminals to suit their own needs – RAND Corporation report

Law enforcement efforts to shut down darknet sites like Silk Road (and its many impersonators – there are by some estimates now several hundred sites like it that sprang up in the aftermath of its shutdown) tend to focus on technical vulnerabilities in the hidden service itself – effectively breaking into the service and forcing it to provide more information that can be used to identify it. Historically, however, most darknet site takedowns have been social engineering victories – where the people running a site are attacked, rather than the site itself.

## Resources

I hope the above is useful for journalists and others trying to get a basic understanding of these tools beyond using scary terms like “the dark web” in reports without really knowing what that means. If you want to find out more then the links below are a good starting point.

## April 03, 2015

### PipeManMusic

#### Do Something!

I'm a very opinionated person and I don't think that there is much I can do about that. Most of the time I try not to force my personal opinions on people; some of my friends and family might disagree, but I do honestly try. I like to think most people arrive at their opinions honestly and that they represent a perspective, however different from mine, that is informed by things I might not be able to understand. I do know that my opinions on things have changed, or maybe even evolved, with time, and I'd like to think we are all on a path headed towards our dreams. Maybe at different points on the path, but still on a path. If I can help someone down the path with me, I try to do it. What I won't do is push someone to make ground on something by force.

In my own head I don't think I have a single personal philosophy that guides my life. Most of the time I feel like I'm drowning in my own self doubt. However, I do get put into the position of offering advice on people's lives more than I'm comfortable with. Most of the time I just try my best to nudge people in a positive direction.

Lately however, I've been giving more and more thought to what I would call my personal brand of guiding wisdom. Now I obviously don't have the answer to eternal happiness, world peace or even how to not annoy the crap out of everyone by accident. The reality is, I'm pretty useless at making other people's lives better most of the time, despite my grand ideas for changing the world.

What I do know is that when I'm at my most depressed or discouraged, I can always dig myself out, even if it feels at the time like I never will. I don't have a magic silver bullet, but I do know that every day I can choose to do at least one thing that makes my life or the life of those around me better, and I think that mostly sums up my approach. As I've thought about it, I've boiled it down to something fairly concise.

"Do Something"

What I mean by that is you might not be able to control everything that happens to you and you also might not be able to control the way you feel about it. What you can do is move yourself down the path. Sometimes it's a moon surface leap and sometimes it's crawling through glass, but progress is progress. No, this won't guarantee your bills will get paid, you will save your marriage or heal a childhood pain. It might not even make you feel better. What it will do is put you a little closer, bit by bit.

If you are like me, most things feel overwhelming. I can be pretty hard on myself. I once told someone, "You can't say anything to me more hurtful than what I've said to myself." I think it might be one of the most honest things I've ever said. What I have found, though, that helps me more than anything is doing something. Anything. As long as it's a positive step in the right direction. Even if it's just one small step with a million more to go, it's one step closer to my final destination.

No matter how small the gesture, it can at least help you get into a better head space. It could be something for yourself, like knocking out chores you've been avoiding, or something huge, like finally telling someone how much you care about them. You don't even have to do it for yourself. Sometimes when I'm at my lowest it helps to think about the things I wish others were doing for me at that moment and do them for someone else. One example: for my own narcissistic reasons, I really like things I post to social media to get liked by my friends and family. Sometimes a post that I feel really strongly about or connected to will get almost completely ignored, and it will send me into a tailspin of self doubt. In all likelihood there are multitudes of reasons people didn't take the time to click "like", and most are probably not related to me or my personal feelings. So, even in this silliest of first-world-problem situations, I try to reach out to others, click like on things my friends post, or leave a positive comment. I would never do this disingenuously; I only click like or give a positive comment on something I actually like. I'm just trying to go a little more out of my way to make someone else feel good.

Now, does this achieve anything measurable? Most of the time, no. Most of my friends are likely unaware I do this. Does it suddenly make all my neurotic obsession over whether people like me go away? Not at all. What it does, though, is put me at least half a step closer to feeling better, and more often than not that's enough to give me a clear head to see the next step I need to take. Sometimes that next step is one of those moon-surface leaps that I can't believe I didn't take before.

Don't get me wrong, I don't hinge my day to day feelings on these silly little acts. Mostly I've learned about myself that I really like the feeling of creating something so I try to focus on those kinds of activities. I have loads of hobbies and things that I do that keep me moving forward. I think those count too. What I try not to do is sit around and think of all the things I should be doing and know for sure I won't do. I'd rather focus on the things I can do than the things I can't.

So now I think I can feel a tiny bit more comfortable in offering someone advice. Just "Do Something." As long as it's positive progress, it's worth it. No matter your situation, you can at least do something to make it better. No matter how insignificant it might seem at the time. I even keep a small daily journal where I try to write down the positive things I did that day. I also write some of the negatives but as long as there is at least one positive, it helps.

So?!?!

Do Something!

That's the best I've got.

## February 21, 2015

### Objective Wave

#### blablack

Version 1.1.0 of ams-lv2 is now available:

The two main “features” of this version are:

• Ported tons of additional plugins from AMS (VCEnv & VCEnv II, Multiphase LFO, VC Organ, etc.)
• Ported the changes from version AMS 2.1.1 (Bit Grinder, Hysteresis, the bug fixes, etc.)

In addition, this release sees a lot of bug fixes and optimizations.

As a reminder, ams-lv2 is a set of plugins for creating modular synthesizers, ported from Alsa Modular Synth.

Here is a demo of an older version of these plugins.

## January 10, 2015

### Objective Wave

#### progress on ams-lv2

Lots of work being done on ams-lv2 these days.

ams-lv2 code is being synced with the latest changes in ams 2.1.1. I am adding the new plugins (FFT Vocoder, Analog Memory, Bit Grinder, etc.) and syncing the bug fixes and improvements on the existing plugins, like the pulsetrain noise type in the noise2 plugin and the frequency update in the LFO.

In addition, I am porting additional plugins from ams I didn’t port in the first phase: VC Organ, Dynamic Waves, Sequencers, etc.

I am hoping for a release of ams-lv2 1.1.0 very soon, but feel free to test the code on github https://github.com/blablack/ams-lv2

# ICSC 2015

### 2-4 OCTOBER 2015, ST.PETERSBURG, RUSSIA

#### Join us for the 3rd International Csound Conference! Three days of concerts, papers, workshops and meetings. Get in touch with the global community of composers, sound designers and programmers.

The Conference Will Be Held At

## The Bonch-Bruevich St.Petersburg State University of Telecommunications

The region's biggest training and scientific complex, specialized in information technologies and communications

## Conference Chair

Gleb Rogozinsky


## December 03, 2014

### cSounds.com

#### CsoundQt 0.9.0 Released

This version includes:

• A new virtual MIDI keyboard
• Visual display of matching (or un-matching) parenthesis
• Correct highlighting of type marks for functional opcodes (e.g. oscil:a)
• Put back status bar
• Added template list in file menu. Template directory can be set from the environment options
• Added home and opcode buttons to help dock widget
• Removed dependency on libsndfile (now using Csound’s perfThread record facilities)
• Fixed tab behavior
• Updated version of Stria Synth (thanks to Emilio Giordani)
• Dock help now searches as user types (not only when return is pressed)


## November 02, 2014

### Midichlorians in the blood

#### Drumstick Metronome (kmetronome 1.0.0) and Drumstick 1.0.0 Libraries in the Whole Picture

I've released in the past weeks some things labeled "Drumstick" and also labeled "1.0.0". What is all this about?

Drumstick is the name of a set of Qt based libraries for MIDI processing. The current major version of the Qt frameworks is 5, which is binary incompatible with the older Qt4 libraries. The latest Qt4 based drumstick release was 0.5.0, published in 2010. The newest Qt5 based release is 1.0.0, published on August 30 2014.

Drumstick 1.0.0 is not binary compatible with the older one, nor even fully source compatible. In addition, it contains a new "drumstick-rt" library which is a cross-platform MIDI input-output abstraction. Based on Drumstick 1.0.0 I've released two more applications: vmpk 0.6.0 and kmetronome 1.0.0 (now renamed as "Drumstick Metronome").

There are other applications based on the old drumstick 0.5.0 libraries out there: kmid2 and kmidimon. I'm no longer the kmid2 maintainer, but I will release (time permitting) a "Drumstick Karaoke" application replacing kmid2, and of course also a new kmidimon (naming it as Drumstick-Whatever). Meanwhile, Linux distributions may have a problem here shipping the old and new programs together. Not a big problem, though, because the runtime libraries are intended to co-exist together on the same system. The runtime dependencies are:
• vmpk-0.6.0 and kmetronome-1.0.0 depend on drumstick-1.0.0
• kmidimon-0.7.5 and kmid2-2.4.0 depend on drumstick-0.5.0
If you want to distribute the latest releases of kmidimon, kmid2, vmpk and kmetronome for the same system, you need to distribute two sets of drumstick runtime libraries as well. This is possible because the old and new drumstick libraries have a different SONAME. What is needed is to also rename the packages accordingly.

$ objdump -p /usr/lib64/libdrumstick-alsa.so.0.5.0 | grep SONAME
  SONAME               libdrumstick-alsa.so.0
$ objdump -p /usr/local/lib64/libdrumstick-alsa.so.1.0.0 | grep SONAME
  SONAME               libdrumstick-alsa.so.1

For instance, you may name your old drumstick package as "drumstick0" and the new one "drumstick1", or append the Qt name like in "drumstick-qt4" and "drumstick-qt5", or keep the old one as plain "drumstick" and rename only  the new one. Whatever makes you happier. These suggestions are for people packaging drumstick for Linux distributions. If you are compiling drumstick yourself and installing from sources, then you don't need to worry. You can use the same prefix (usually /usr/local/) without conflicts, except only one set of headers (usually the latest) can be available at the same time in your system. This also applies to the "-devel" packages from distributions.

There is only one thing left now. The whole picture :-)

## October 12, 2014

### harryhaaren

#### Blog Status : Moved, not dead!

Hi all,

Once upon a time I posted on this blog (somewhat) regularly. Currently I don't. Why? I'm running OpenAV Productions, and that is where the updates are!

If you're still interested in Linux audio, C++ programming or software in general, check out the site:
www.openavproductions.com

Developers may have particular interest in the developer topics like implementing NSM or dealing with memory in real-time.

Audio programming folks, check out some articles that I've written on the topics of real-time programming, memory management, implementing NSM and more:
http://openavproductions.com/conferences.

I probably won't post here for another long time, so bye for now! -Harry

## June 29, 2014

### harryhaaren

#### LV2 and Atom communication

EDIT: There are now better resources to learn LV2 Atom programming: please use them!
www.lv2plug.in/book
http://lac.linuxaudio.org/2014/video.php?id=24
/EDIT

Situation: You're trying to write a synth or effect, and you need to communicate between your UI and the DSP parts of the plugin, and MIDI doesn't cut it: enter Atom events. I found them difficult to get to grips with, and hope that this guide eases the process of using them to achieve communication.

## Starting out

The official documentation on the Atom spec is the place to start: just read the
description. It gives a good general overview of these things called Atoms.

This is "message passing": we send an Atom event from the UI to the DSP part of the plugin. This message needs to be safe to use in a real-time context.

(Note it is assumed that the concept of URIDs is familiar to you. If it's not, go back and read this article: http://harryhaaren.blogspot.ie/2012/06/writing-lv2-plugins-lv2-overview.html )

Step 1: Set up an LV2_Atom_Forge. The lv2_atom_forge_* functions are how you build these events.

LV2_Atom_Forge forge;
lv2_atom_forge_init( &forge, map ); // map = LV2_URID_Map feature

### Atoms

Atoms are "plain old data" or POD. They're a sequence of bytes written in a contiguous part of memory. Moving them around is possible with a single memcpy() call.

## Writing Atoms

### Understanding the URID naming convention

// We need URIDs to represent functionality. There's a naming scheme here, and it's *essential* to understand it. Say the functionality we want to represent is the name of a Cat (similar to the official atom example). Here eg_Cat represents the "noun" or "item" we are sending an Atom about, and eg_name represents something about the eg_Cat.

something_Something represents a noun or item, while something_something (note the missing capital letter) represents an aspect of that noun.

LV2_URID eg_Cat;
LV2_URID eg_name;

In short: classes and types are Capitalized, and nothing else is.

### Code to write messages

// A frame is essentially a "holder" for data, so we put our event into an LV2_Atom_Forge_Frame. These frames allow data to be appended.
LV2_Atom_Forge_Frame frame;

// Here we write a "blank" atom, which contains nothing (yet). We're going to fill that blank in with some data in a minute. A blank is a dictionary of key:value pairs: the property_head is the key, and the value comes after it.
// Note that the last parameter to this function is the noun, or type of item, the Atom is about.
LV2_Atom* msg = (LV2_Atom*)lv2_atom_forge_blank(
    &forge, &frame, 1, uris.eg_Cat );

// Then we write a "property_head": this uses a URID to describe the next bit of data coming up, which will form the value of the key:value dictionary pair.
lv2_atom_forge_property_head( &forge, uris.eg_name, 0 );

// Now we write the data. Note the call to forge_string(): we're writing string data here! There's a forge_int(), forge_float() etc. too!
lv2_atom_forge_string( &forge, "nameOfCat", strlen("nameOfCat") );

// Popping the frame is like the closing } of a function: the event is finished, and there's nothing more to write into it.

lv2_atom_forge_pop( &forge, &frame);

### From the UI

// To write messages, we set up a buffer:
uint8_t obj_buf[1024];

// Then we tell the forge to use that buffer

lv2_atom_forge_set_buffer(&forge, obj_buf, 1024);

// Now check the "Code to write messages" heading above; that code goes here, where you write the event.

// We have a write_function (from the instantiate() call) and a controller. These are used to send Atoms back. Note that the type of event is atom_eventTransfer: this tells the host to pass it directly to the input port of the plugin, and not to interpret it.
write_function( controller, CONTROL_PORT_NUMBER,
                lv2_atom_total_size( msg ),
                uris.atom_eventTransfer, msg );

### From the DSP

// Set up forge to write directly to notify output port. This means that when we create an Atom in the DSP part, we don't allocate memory, we write the Atom directly into the notify port.

const uint32_t notify_capacity = self->notify_port->atom.size;
lv2_atom_forge_set_buffer( &self->forge,
                           (uint8_t*)self->notify_port,
                           notify_capacity );

// Start a sequence in the notify output port
lv2_atom_forge_sequence_head( &self->forge,
                              &self->notify_frame, 0 );

Now look back at the "Code to write messages" section: that's it. Write the event into the notify Atom port, and you're done.

// Read incoming events directly from control_port, the Atom input port
LV2_ATOM_SEQUENCE_FOREACH( self->control_port, ev )
{
    // check that the Atom is an object (our "blank" eg_Cat atom is one)
    if ( lv2_atom_forge_is_object_type( &self->forge, ev->body.type ) )
    {
        // get the object representing the rest of the data
        const LV2_Atom_Object* obj = (const LV2_Atom_Object*)&ev->body;

        // check if the object describes an eg_Cat
        if ( obj->body.otype == self->uris.eg_Cat )
        {
            // get the eg_name property from the object's body
            const LV2_Atom* name = NULL;
            lv2_atom_object_get( obj, self->uris.eg_name, &name, 0 );

            // convert it to the type it is, and use it
            if ( name )
            {
                std::string s = (const char*)LV2_ATOM_BODY( name );
                std::cout << "Cat's name property is " << s << std::endl;
            }
        }
    }
}

## Conclusion

That's it. It's not hard; it just takes getting used to. It's actually a very powerful and easy way of designing a program / plugin, as it *demands* separation between the threads, which is a really good thing.

Questions or comments, let me know :) -Harry

## April 29, 2014

### Music – woo, tangent

#### ludum dare 29: underground city defender

This weekend was Ludum Dare again, and again Switchbreak asked me to write some music for his entry. It’s called Underground City Defender, and it’s a HTML5/Javascript game, so you can play it in your browser here!

The original idea for the game was to make it Night Vale-themed, so I started the music with a Disparition vibe in mind. The game didn’t turn out that way in the end, but that’s okay, since the music didn’t either! It’s suitably dark and has a driving beat to it, so I think fits the game pretty well.

The Songs of Switchbreak by pneuman

My move to San Francisco is just a few weeks away, so I’ve sold most of my studio gear, including my audio interface and keyboard. That left me using my on-board sound card to run JACK and Ardour, but that turned out just fine — with no hardware synths to record from, not having a proper audio interface didn’t slow me down.

As some of you guessed, the toy in the mystery box in my last post was indeed a Teenage Engineering OP-1. It filled in as my MIDI controller here, and while it’s no substitute for a full-sized, velocity-sensitive keyboard, it did a surprisingly good job.

Software-wise, I used Rui’s samplv1 for the kick and snare drums, which worked brilliantly. I created separate tracks for the kick and snare, and added samplv1 to each, loading appropriate samples and then tweaking samplv1’s filters and envelopes to get the sound I was after. In the past I’ve used Hydrogen and created custom drum kits when I needed to make these sorts of tweaks, but having the same features (and more!) in a plugin within Ardour is definitely more convenient.

The other plugins probably aren’t surprising — Pianoteq for the pianos, Loomer Aspect for everything else — and of course, it was sequenced in Ardour 3. Ardour was a bit crashy for me in this session; I don’t know if it was because of my hasty JACK setup, or some issues in Ardour’s current Git code, but I’ll see if I can narrow it down.

## March 03, 2014

### Linux – woo, tangent

#### studio slimdown

Last weekend, almost exactly five years after I bought my Blofeld synth, I sold it. With plans to move to the US well underway, I’ve been thinking about the things I use often enough to warrant dragging them along with me, and the Blofeld just didn’t make the cut. At first, the Blofeld was the heart of my studio — in fact, if I hadn’t bought the Blofeld, I may well have given up on trying to make music under Linux — but lately, it’s spent a lot more time powered off than powered up.

Why? Well, the music I’m interested in making has changed somewhat — it’s become more sample driven and less about purely synthetic sounds — but the biggest reason is that the tools available on Linux have improved immensely in the last five years.

Bye bye Blofeld — I guess I’ll have to change my Bandcamp bio photo now

Back in 2009, sequencers like Qtractor and Rosegarden had no plugin automation support, and even if they had, there were few synths available as plugins that were worth using. Standalone JACK synths were more widespread, and those could at least be automated (in a fashion) via MIDI CCs, but they were often complicated and had limited CC support. With the Blofeld, I could create high-quality sounds using an intuitive interface, and then control every aspect of those sounds via MIDI.

Today, we have full plugin automation in both Ardour 3 and Qtractor, and we also have many more plugin synths to play with. LV2 has come into its own for native Linux developers, and native VST support has become more widespread, paving the way for ports of open-source and commercial Windows VSTs. My 2012 RPM Challenge album, far side of the mün, has the TAL NoiseMaker VST all over it; if you’re recording today, you also have Sorcer, Fabla, Rogue, the greatly-improved amsynth, Rui’s synthv1/samplv1/drumkv1 triumvirate, and more, alongside commercial plugins like Discovery, Aspect, and the not-quite-so-synthy-but-still-great Pianoteq.

I bought the Blofeld specifically to use it with a DAW, but I think that became its undoing. Hardware synths are great when you can fire them up and start making sounds straight away, but the Blofeld is a desktop module, so before I could play anything I had to open a DAW (or QJackCtl, at the very least) and do some MIDI and audio routing. In the end, it was easier to use a plugin synth than to set up the Blofeld.

You can probably guess what’s in the box, but if not, all will be revealed soon

So, what else might not make the cut? I only use my CS2X as a keyboard, so I’ll sell that and buy a new controller keyboard after moving, and now that VST plugins are widely supported, I can replace my Behringer VM-1 analog delay with a copy of Loomer Resound. I might also downsize my audio interface — I don’t need all the inputs on my Saffire PRO40, and now that Linux supports a bunch of USB 2.0 audio devices, there are several smaller options that’ll work without needing Firewire.

I’m not getting rid of all of my hardware, though; I’ll definitely keep my KORG nanoKONTROL, which is still a great, small MIDI controller. In fact, I also have two new toys that I’ll be writing about very soon. Both are about as different from one another as you could get, but they do share one thing — they’re both standalone devices that let you make music without going anywhere near a computer.

## February 16, 2014

#### 16 Feb 2014

gtk-doc

Just released gtk-doc 1.20 with lots of bugfixes and feature requests implemented (32 tickets closed). If you are a developer, fetch it and rebuild your docs - I hope you like the new look!

The biggest feature is a large improvement over the rudimentary markdown support gtk-doc had previously; thanks to William Jon McCann for the contributions. Take a look at the manual to learn about the new syntax. We chose markdown so that the syntax looks good in the sources, and you can already go ahead and use it. If someone builds with an older gtk-doc, things might show up as-is, but that's not the end of the world.

## January 31, 2014

### Joanillo Blog. Maker, Open source and hardware » linuxaudio

#### diatonicbaixos v1.0: accordion play-along software

Maria is studying the diatonic button accordion (and me too!). One of the things I’ve always found difficult is the coordination between the right hand (with which we play the melody) and the left hand (bass buttons). The same happens with the piano: I’ve always found it difficult to play with both hands at the same time.

Here is a small program that helps you put your left-hand fingers on the bass buttons. While the melody plays back (we need the MIDI file of the tune), the screen shows the positions we are going to press (on the diatonic accordion we have 8 buttons for bass/chords that sound different depending on whether you open or close the bellows). When a button is shown in green you have to close the bellows; when it is red, the bellows must be opened.

For the graphic interface we used the ncurses library, which was my first time programming with it (and I’m very happy with the result). The program is a JACK client (JACK is the audio server for Linux, www.jackaudio.org), and it is in fact the time master, so it controls the transport. To use the application it is best to launch the provided bash script, diatonicbaixos.sh, where you can see the tool chain of utilities used.

First of all, the desired tempo of the MIDI file is set by the change_tempo_smf script (source code provided). Then the JACK server starts, followed by fluidsynth, the synth. Next, jack-smf-player is launched to play back the MIDI file, and then klick, a JACK metronome client. Finally it is time to launch my application, diatonicbaixos. The result, as you can see in the video, is that while the tune plays along you can see the left-hand finger positions. You can change the tempo, you can enable/disable the metronome, and there is another parameter that displays the next button you have to press.

The best is to see the video:

## December 21, 2013

#### 21 Dec 2013

sndfile plugin for gstreamer 1.X

A merry Christmas to everybody. As a little present for musicians, I started a rewrite of the sndfile plugin, which had not been ported from gstreamer-0.10. The new version will not consist of sfsrc and sfsink, but instead of sfdec and sfenc. For now sfdec exists; it can read a bunch of formats that gstreamer did not support so far - xi, sds, 8svx/16sv, w64, ... - and it works with playbin and co. I'll need to collect more test files to test all features. I'll also write the encoder/muxing part next.

libsndfile has good support for audio files that are used as instruments. There are quite a few things that I can't map to gstreamer yet:

• base note - could be a tag

• loops - could be a special toc edition, but that sounds like a little abuse, some form of edit-list support would probably be better

• envelopes (e.g. a volume envelope)

## December 07, 2013

### Joanillo Blog. Maker, Open source and hardware » linuxaudio

#### 50 ways… #6. Score Editors: Lilypond

In the previous post we created a soundfont for the diatonic button accordion, castagnari.sf2, and with its sounds we played our preferred song, Una Plata d’Enciam, within the project 50 ways to play Una Plata d’Enciam. Let’s now see how the lilypond score editor can create a music sheet that responds to the challenges posed by notation for the accordion. What we will do in this post is nothing special. On the friendly blog Scores of Beauty, http://lilypondblog.org/, you can see really complex notation made with lilypond.

Basically we want to correctly display the melody, the chords, and how the left-hand bass/chords (accompaniment) have to be played. The idea is to create didactic music sheets that help my daughter put her fingers on the left keyboard.

The first thing is to create the lead part. Above the notes we put the chords as usual, and we show when it is necessary to play a bass (b) or a chord (a) with the left hand. We do the latter with \lyrics, just as we would with the lyrics of a song.

\version "2.12.1"
\header {
title = "Una Plata d'Enciam"
}

baixosnotacio = \chordmode {
c4 c f c f g g c c c f c f g g c
c4 c f c f g g c c c f c f g g c
}

partitura = \relative c'
<<
\new ChordNames {
\set chordChanges = ##t
\baixosnotacio
}

\new Staff = "veu 1"
{
\clef treble
\key c \major
\time 2/4
\tempo 4=80

e8 f g g a a g4 \times 2/3 { a8 a g } f f \times 2/3 { f f f } g e e f g g a a g4 \times 2/3 { a8 a g } f f f g e4
e8 f g g a a g4 \times 2/3 { a8 a g } f f \times 2/3 { f f f } g e e f g g a a g4 \times 2/3 { a8 a g } f e f g e4
}
\new Lyrics \lyricmode {
b _ a _ b _ a b _ _ a _ b _ _ a _ b _ a _
b _ a _ b _ a b _ _ a _ b _ _ a _ b _ a _
b _ a _ b _ a b _ _ a _ b _ _ a _ b _ a _
b _ a _ b _ a b _ _ a _ b _ _ a _ b _ a _
}

>>

\score {
\partitura
\layout { }
\midi { }
}

Now it is time to write the bass part. If we write the bass/chord notes as usual in lilypond, it works, but the output is not useful for reading the music. In fact, this music sheet is not correct: as explained in the previous post, the association between the chords CM, DM, … and the MIDI notes 24, 26, … was a matter of convention. Here is the incorrect music sheet:

\version "2.12.1"
\header {
title = "Una Plata d'Enciam"
}

baixosnotacio = \chordmode {
c4 c f c f g g c c c f c f g g c
c4 c f c f g g c c c f c f g g c
}

partitura =
<<
\new ChordNames {
\set chordChanges = ##t
\baixosnotacio
}

\relative c'
\new Staff = "veu 1"
{
\clef treble
\key c \major
\time 2/4
\tempo 4=80
%notes amb valors relatius
e8 f g g a a g4 \times 2/3 { a8 a g } f f \times 2/3 { f f f } g e e f g g a a g4 \times 2/3 { a8 a g } f f f g e4
e8 f g g a a g4 \times 2/3 { a8 a g } f f \times 2/3 { f f f } g e e f g g a a g4 \times 2/3 { a8 a g } f e f g e4
}
\new Lyrics \lyricmode {
b _ a _ b _ a b _ _ a _ b _ _ a _ b _ a _
b _ a _ b _ a b _ _ a _ b _ _ a _ b _ a _
b _ a _ b _ a b _ _ a _ b _ _ a _ b _ a _
b _ a _ b _ a b _ _ a _ b _ _ a _ b _ a _
}

\new Staff = "baixosnotes"
{
\key c \major
%notes amb valors absoluts
c,8. r16 c,,8 r f,8. r16 c,,8 r f,8. r16 g,,8 r g,8. r16 c,,8 r
c,8. r16 c,,8 r f'8. r16 c,,8 r f,8. r16 g,,8 r g,8. r16 c,,8 r
c,8. r16 c,,8 r f,8. r16 c,,8 r f,8. r16 g,,8 r g,8. r16 c,,8 r
c,8. r16 c,,8 r f,8. r16 c,,8 r f,8. r16 g,,8 r g,8. r16 c,,8 r
}
>>

\score {
\partitura
\layout { }
\midi { }
}

Now it is important to note that we used relative notation for the lead part and absolute notation for the accompaniment part, which simplifies editing the bass line a lot, since it normally alternates bass buttons with chord buttons. This music sheet is not correct for reading or printing, but the output MIDI file is perfect for listening. Is there a way to fix it? Definitely yes. We define one \score for printing, and another \score for the MIDI part. The result: the music score is perfect for printing (there are no bass/chord notes, just the positions for the fingers), and the generated MIDI file is also correct for playing:

\version "2.12.1"
\header {
title = "Una Plata d'Enciam"
}

baixosnotacio = \chordmode {
c4 c f c f g g c c c f c f g g c
c4 c f c f g g c c c f c f g g c
}

partitura = \relative c'
<<
\new ChordNames {
\set chordChanges = ##t
\baixosnotacio
}

\new Staff = "veu 1"
{
\clef treble
\key c \major
\time 2/4
\tempo 4=80
e8 f g g a a g4 \times 2/3 { a8 a g } f f \times 2/3 { f f f } g e e f g g a a g4 \times 2/3 { a8 a g } f f f g e4
e8 f g g a a g4 \times 2/3 { a8 a g } f f \times 2/3 { f f f } g e e f g g a a g4 \times 2/3 { a8 a g } f e f g e4
}
\new Lyrics \lyricmode {
b _ a _ b _ a b _ _ a _ b _ _ a _ b _ a _
b _ a _ b _ a b _ _ a _ b _ _ a _ b _ a _
b _ a _ b _ a b _ _ a _ b _ _ a _ b _ a _
b _ a _ b _ a b _ _ a _ b _ _ a _ b _ a _
}
>>

fitxermidi =
<<
\relative c'
\new Staff = "veu 1"
{
\clef treble
\key c \major
\time 2/4
\tempo 4=80
%notes amb valors relatius
e8 f g g a a g4 \times 2/3 { a8 a g } f f \times 2/3 { f f f } g e e f g g a a g4 \times 2/3 { a8 a g } f f f g e4
e8 f g g a a g4 \times 2/3 { a8 a g } f f \times 2/3 { f f f } g e e f g g a a g4 \times 2/3 { a8 a g } f e f g e4
}

\new Staff = "baixosnotes"
{
\key c \major
%notes amb valors absoluts
c,8. r16 c,,8 r f,8. r16 c,,8 r f,8. r16 g,,8 r g,8. r16 c,,8 r
c,8. r16 c,,8 r f'8. r16 c,,8 r f,8. r16 g,,8 r g,8. r16 c,,8 r
c,8. r16 c,,8 r f,8. r16 c,,8 r f,8. r16 g,,8 r g,8. r16 c,,8 r
c,8. r16 c,,8 r f,8. r16 c,,8 r f,8. r16 g,,8 r g,8. r16 c,,8 r
}
>>

\score {
\partitura
\layout { }
}

\score {
\fitxermidi
\midi { }
}

Finally we can listen to our song Una Plata d’Enciam played in the usual way on a button accordion:


## October 14, 2013

### Robin Gareus

#### meters.lv2 - advanced audio level meters

Coming late to my own party..

In the wake of overhauling Ardour's bar-graph meters, I've taken the time to implement corresponding needle-style meters and more.

meters.lv2 features 10 needle-style meters: Mono and stereo variants of DIN, Nordic, EBU, BBC and VU types (which are based on jmeters by Fons Adriaensen).

Additionally it includes a Stereo Phase Correlation Meter, an EBU-R128 Meter with Histogram and History display, a Digital True-Peak Meter, a Stereo Phase Scope (Goniometer), a 31 Band Spectrum-Analyzer and six K-meters (mono,stereo versions of the K12, K14 and K20 K meter-standard by Bob Katz).

Source-code, information and screenshots can be found at https://github.com/x42/meters.lv2

Alexandre Prokoudine has written a nice blog post over at LGW and gave away a hard copy of the “Audio Metering. Measurements, Standards, Practice” book by Eddy Brixen to celebrate the release.

meters.lv2 would not be what it is without the various contributions from Fons Adriaensen, David Robillard, Chris Goddard and Axel Müller. Thanks to Jaromír Mikeš they are already packaged in debian, part of the x42-plugins package.

## September 26, 2013

### audio

#### Pitch Perfect Penguins

My daughters love the movie Pitch Perfect. I suspect our XBMC has played it more than 100 times, and I'm not exaggerating. Whether or not you enjoy young-adult movies about singing competitions and cartoon-like projectile vomiting, I'll admit it's a pretty fun movie.

## August 09, 2013

### audio

#### Songbird Becomes...Nightingale!

Several years back, Songbird was going to be the newest, coolest, most-awesome music player ever to grace the Linux desktop. Then things happened, as they often do, and Linux support for Songbird was discontinued.

## June 15, 2013

### Robin Gareus

#### Ardour 3.2

Ardour 3.2 has been released, featuring the video timeline and a lot of other things that I've been working on over the past years, months and weeks!

I count myself lucky that, after all that work, I don't need to write the announcement myself. Let the buzz begin.

There's…

Many thanks to everyone who contributed, provided feedback and support. In particular Chris Goddard, Thomas Vecchione and Paul Davis. Not to mention the projects that made this possible in the first place, most notably ffmpeg.

The source-code is available out there and if you don't want to get the official ready-to-run application, various multimedia GNU/Linux distributions have already picked it up.

## April 15, 2013

### Musings on maintaining Ubuntu

#### Fixing build failures for 13.04

One of the pleasures (and privileges) of Free Software is transparency and consistency: here meaning it's fairly painless to see where stuff fails to build in Ubuntu, and to be able to help fix those failures.

Let's get 13.04 into shape!

## February 22, 2013

### Joe Button's blog

#### Sampler plugin for the baremetal LV2 host

I threw together a simple sampler plugin for kicks. Like the other plugins, it sounds fairly underwhelming. The next challenge will probably be to try plugging in some real LV2 plugins.

## February 21, 2013

### Joe Button's blog

#### Baremetal MIDI machine now talks to hardware MIDI devices

The Baremetal MIDI file player was cool, but not quite as cool as a real instrument.

I wired up a MIDI In port along the lines of this one here, messed with the code a bit and voilà (and potentially viola), I can play LV2 instrument plugins using a MIDI keyboard:

When I say "LV2 instrument plugins", I should clarify that I'm only using the LV2 plugin C API, not the whole .ttl text file shebangle. I hope to get around to that at some point, but it will be a while before you can directly plug LV2s into this and expect them to just work.

## December 19, 2012

### Nedko Arnaudov on Linux Audio

#### no-self-connect patch for jack-1.9.9.5

People are asking for the no-self-connect patch updated for the new 1.9.9.5 jack2 release. While one can always use the git no-self-connect branch to build jack or generate a patch, not everybody knows how to use git. So I’ve made a p1 patch and uploaded it.

The patch itself and a PGP signature for it:

or if you want to use HTTPS and have ipv6: