December 13, 2017

Linux – CDM Create Digital Music

Try a new physical model of a pipe organ for free

Now, all your realistic pipe organ dreams are about to come true in software – without samples.

MODARTT are the French firm behind the terrific Pianoteq physically modeled instrument, which covers various classic keys and acoustic pianos. That mathematical model is good enough to find applications in teaching and training.

Now, they’re turning their attention to the pipe organ – parts of which turn out to be surprisingly hard to model.

For now, we get just a four-octave preview of the organ flue pipe. But that’s free, and fun to play with – and it sounds amazing enough that I spent some part of the afternoon just listening to the demos. (Pair this with a convolution reverb of a church and I think you could be really happy.)

The standalone version is free, and like all their software runs on Linux as well as Mac and Windows. Stay tuned for the full version. Description:

ORGANTEQ Alpha is a new generation physically modeled pipe organ that reproduces the complex behaviour of the organ flue pipe.
It is a small organ with a keyboard range of 4 octaves (from F1 to F5) and with 2 stops: a Flute 8′ and a Flute 4′ (octave).
It is provided in standalone mode only and should be regarded as a foretaste of a more advanced commercial version in development, due to be released during 2018.
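MODARTT haven't published their math, but for intuition, flue-pipe synthesis is usually described in terms of a delay-line waveguide with a lossy reflection at the pipe's end. Here's a deliberately crude Karplus-Strong-style sketch of that delay-line backbone – a toy illustration, emphatically not Organteq's model:

```python
import random

def toy_flue_pipe(freq_hz, sample_rate=44100, duration_s=0.5, damping=0.995):
    """Toy waveguide 'pipe': a delay line one period long, excited by a
    noise burst (the turbulent jet), recirculated through a damped
    two-point average that stands in for the lossy end reflection."""
    n = int(sample_rate / freq_hz)                 # delay length in samples
    line = [random.uniform(-1.0, 1.0) for _ in range(n)]
    out = []
    for i in range(int(sample_rate * duration_s)):
        j = i % n
        # averaging adjacent samples low-passes the loop; damping < 1 decays it
        line[j] = damping * 0.5 * (line[j] + line[(j + 1) % n])
        out.append(line[j])
    return out

samples = toy_flue_pipe(220.0)   # half a second of a decaying 220 Hz "pipe"
```

A real flue-pipe model adds a nonlinear jet/labium interaction and continuous excitation so the tone sustains instead of decaying; this sketch only shows why a delay line pins the pitch.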

The post Try a new physical model of a pipe organ for free appeared first on CDM Create Digital Music.

by Peter Kirn at December 13, 2017 04:01 PM

December 12, 2017

QSampler 0.5.0, liblscp 0.6.0 - An(other) Autumn'17 Release


On the tail of the still-fresh LinuxSampler 2.1.0 release...

Qsampler - A LinuxSampler Qt GUI Interface

Qsampler 0.5.0, liblscp 0.6.0 (autumn'17) released!

Qsampler is a LinuxSampler GUI front-end application written in C++ around the Qt framework using Qt Designer.

Project page:

Git repos:


  • French (fr) translation added by Olivier Humbert (qsampler_fr.ts).
  • Desktop entry specification file is now finally independent from build/configure template chains.
  • Updated target path for the AppStream metainfo file (formerly AppData).


Qsampler is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.


Enjoy && keep the fun!

by rncbc at December 12, 2017 07:00 PM

December 11, 2017

GStreamer News

GStreamer 1.12.4 stable release (binaries)

Pre-built binary images of the 1.12.4 stable release of GStreamer are now available for Windows 32/64-bit, Android, iOS and Mac OS X.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

December 11, 2017 12:00 AM

December 10, 2017

Libre Music Production - Articles, Tutorials and News


Raspberry Pi

Block 4 has been pioneering the use of Raspberry Pis in artistic contexts since 2013, so we decided to put out more documentation about it, including software and schematics. These pages show how to get 8 channels of analog data the easy way, with the MCP3208 A/D converter IC, and even provide an external for Pure Data to access it in your patches:

We use it for our project TMS to create effect processors with the Raspberry Pi which can be controlled by sensors at a higher resolution than MIDI. The first concert with our custom device was in London, and can be seen some posts below...
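Their pages carry the actual code and schematics; as a rough independent sketch, the MCP3208's single-ended read protocol frames each 12-bit conversion in a three-byte SPI exchange – a start bit, a single/differential bit, and the 3-bit channel number, with the result in the last 12 bits of the reply:

```python
def mcp3208_request(channel):
    """Build the 3-byte SPI frame for a single-ended MCP3208 read:
    start bit + SGL/DIFF=1, then the 3-bit channel number."""
    assert 0 <= channel <= 7
    return [0x06 | (channel >> 2), (channel & 0x03) << 6, 0x00]

def mcp3208_value(reply):
    """Extract the 12-bit conversion (0..4095) from the 3-byte reply."""
    return ((reply[1] & 0x0F) << 8) | reply[2]

# On a Raspberry Pi the transfer itself would go through SPI, e.g. with
# the spidev module (an assumption here, not part of the original post):
#   import spidev
#   spi = spidev.SpiDev(); spi.open(0, 0); spi.max_speed_hz = 1_000_000
#   raw = mcp3208_value(spi.xfer2(mcp3208_request(3)))   # read channel 3
```

Twelve bits gives 4096 steps per channel, which is where the "higher resolution than MIDI" (128 steps per controller) comes from.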

by herrsteiner ( at December 10, 2017 12:44 PM

December 07, 2017

Linux – CDM Create Digital Music

A guide to VCV Rack, a software Eurorack modular you can use for free

In a few short weeks since it was released, VCV Rack has transformed how you might start with modular – by making it run in software, for free or cheap.

VCV Rack now lets you run an entire simulated Eurorack on your computer – or interface with hardware modular. And you can get started without spending a cent, with add-on modules available by the day for free or inexpensively. Ted Pallas has been working with VCV since the beginning, and gives us a complete hands-on guide.

There’s always a reason people fall in love with modular music set-ups. For some, it’s having a consistent, tactile interface. For others, it’s about the way open-ended architectures let the user, rather than a manufacturer, determine the system’s limits. For me, the main attraction to modulars is access to tools that can run free from a rigid musical timeline, but still play a sequence. It means they let me dial in interesting poly-rhythmic parts without stress.

An example: I hooked a Mutable Instruments Braids up to a Veils modular, triggered their VCA with an LFO, and ran the resulting pulse through a Befaco Spring Reverb. I used this patch to thicken the stew on a very minimal DJ mix. I also had a simple LFO pointed at a solenoid attached to a small spring reverb tank boinging away in a channel on the master mixer.

This is all pretty standard Eurorack deployment, except for one tiny detail – all of the modules exist in software, contained inside a cross-platform app called VCV Rack.

VCV Rack is an open-source Eurorack emulation environment. Developer Andrew Belt has built a system to simulate interactions between 0-5 volt signals and various circuits. He’s paired this system with a UI that mimics conventions of Eurorack use. Third-party developers are armed with an API and a strong community.

VCV Rack is open-source, and the core software is free to download and use. The VCV Rack website also features several sets of modules as expansions, many of which are free. The most notable cost-free VCV offering is a near complete set of Mutable Instruments modules, under the name Audible. Beyond the modules distributed by developer Andrew Belt, there’s an ecosystem of several dozen developers, all working on building and supporting their own sets of tools – the vast majority of these are free as well, as of the time of this writing.

The result is a wide array of tools, covering both real-world modules (including the notable recent addition of the Turing Machine and a full collection of Audible Instruments emulations) and original circuits made just for Rack. The software runs in Windows, Mac OS and Linux, though the system doesn’t force third-party developers to support all three platforms.

VCV Rack is a young project, with its first public build only having become available September 10th. I became a user the same day, and have been using it several times a week for several months. I don’t usually take to new software so quickly, but in Rack’s case I found myself opening the app first and only moving on to a DAW after I had a good thing going. What continues to keep me engaged is the software’s usability – drop modules into a Rack, connect them with cables, and the patch does what it’s patched to do. Integration with a larger system is simple – I use a MOTU 828 mk2 to send and receive audio and CV through an audio interface module, and MIDI interfacing is handled in a similar fashion through a MIDI module. I can choose to clock the system to my midiclock+, or I can let it run free.

VCV Rack runs great on my late 2014 MacBook Pro – I’ve heard crackling audio just a handful of times, and in those cases only because I was doing dumb things with shared sound cards. To a lesser degree, VCV Rack also runs well on a Microsoft Surface Pro 3, though using the interface via touch input on the Surface is fiddly at best. Knobs tend to run all the way up or all the way down at the slightest nudge, and the hitbox for patch cable insert points is a bit small for your fingers on any touch screens smaller than 15”. Using a stylus is more comfortable.

Stability is impressive overall, even at this early pre-1.0 development stage. Crashes are exceptionally rare, at least on my systems – I can’t specifically remember the last one, though there’s been a few times the aforementioned crackles forced me to restart Rack. Restarting Rack is no big deal, though – on relaunch, it restores the last state of your patch with audio running, and more than likely everything is ok. Rack will mute lines causing feedback loops, a restriction which ultimately serves to keep your ears and your gear safe.
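That feedback-muting behavior can be pictured as graph analysis over the patch: a cable closes a loop exactly when its source module is reachable again from its destination. Rack's real implementation surely differs; this is just a sketch of the idea:

```python
def feedback_cables(cables):
    """Given patch cables as (source_module, dest_module) pairs, return
    the cables that close a feedback loop -- those whose source module
    is reachable from their destination. A host could mute these."""
    adj = {}
    for src, dst in cables:
        adj.setdefault(src, set()).add(dst)

    def reachable(start, target):
        # iterative depth-first search over the cable graph
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node == target:
                return True
            if node in seen:
                continue
            seen.add(node)
            stack.extend(adj.get(node, ()))
        return False

    return {(s, d) for s, d in cables if reachable(d, s)}
```

Muting every cable on a cycle is stricter than strictly necessary (breaking one cable per loop would do), but erring toward silence rather than runaway feedback is the safe choice for ears and gear.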

As part of my field work for this write-up, I decided to run a survey. The VCV Rack community is more approachable, open, and down to get dirty with problem-solving than any other software community I’ve participated in directly. I figured I’d get a handful of responses, with variations of “it’s Eurorack but on my computer and for free” as the most common response.

Instead, I got a peek inside a community excited about the product bringing them all together. Over a third of the respondents have been using VCV since early September, and a quarter have only been using the tool for a few weeks. Across the board, though, there are a few key points I think deserve a highlight.

“Modular is for everybody”, and VCV Rack is modular for everybody.

Almost every single one of our 62 respondents in some way indicated that they love hardware modular for its creative possibilities, but also see cost as a barrier. VCV Rack gets right around the cost issue by being free upfront, with some more exotic modules costing money to access. There’s also a solid chunk of users coming from a university experience with large modular systems, such as Montreal’s SYSTMS, who say what initially appealed to them was “getting to explore modular, whereas before that was just not available to a low income musician. I had been introduced to Doepfer systems in university, and since then I have of course not had access to any very expensive physical Eurorack set ups. Also the idea of introducing and teaching my friends, who I knew would be into this!”

(While Rack is especially hardware-like, I do want to shout out fellow open-source modular solution Automatonism – you won’t find anything like a complete set of Mutable modules, but you will find a healthy Pd-driven open source modular synth with the ability to easily execute away from a computer via the Critter and Guitari Organelle.)

VCV Rack can be used in as many ways as a real Eurorack system.

The Rack Github describes Rack as an “Open-source virtual Eurorack DAW,” and while I wouldn’t use it to edit audio, Rack can handle a wide enough set of roles in a larger system to fairly call the software a workstation. There are several options for recording audio provided by the community, with an equal number of ways to mix and otherwise manipulate sets of signals. It’s possible to create stems of audio data and control data. It’s possible to get multiple channels of audio into another piece of software for further editing, directly via virtual soundcards.

VCV Rack also has a home within hardware modular systems, with users engineering soundcard-driven solutions for getting CV and audio in and out of a modular rack running alongside VCV. User Chris Beckstrom describes a typical broad array of uses: “standalone to make cool sounds (sampling for later), using Tidal Cycles (algorithmic sequencer) to trigger midi, using other midi sources like Bitwig to trigger Rack, and also sending and receiving audio to and from my diy modular.”

8th graders can make M-nus-grade techno with it.

I mean, check it out.

If you build it, they will come.

For having been around only since early September, VCV Rack already has a very healthy ecosystem of third-party modules. Devs universally describe Rack’s source as especially easy to work with – Jeremy Wentworth, maker of the JW-modules series, says “[Andrew Belt’s] code for rack is so easy to follow. There is even a tutorial module. I looked at that and said, hey, maybe I can actually build a module, and then I did.” Jeremy is joined by over 40 other plug-in developers, most of whom are managing to find their own Eurorack recipe. VCV Rack also has a very active Facebook community, with over 100 posts appearing over the three days this article was being written. I’ve been on the Internet for a long time – it’s unusual to find something this cohesive, cool-headed and capable outside of a forum.

The community aren’t just freeloaders.

Almost two thirds of our respondents have already purchased some Rack modules, or are going to be purchasing some soon. Only a handful plan not to purchase any modules. There’s a market here, a path to the market via VCV Rack, and a group of developers already working to keep people interested and engaged with both new modules and recreations of real-world Eurorack hardware. Two thirds of respondents is a big number – if you’re a DSP-savvy developer it’s worth investigating VCV Rack.

DSP is portable.

The portability of signal processing algorithms isn’t a phenomenon unique to VCV Rack, but in my opinion, VCV Rack will be uniquely well-served by the ability to easily port DSP code and concepts from other platforms. Michael Hetrick’s beloved Euro Reakt Blocks are being partially ported from Reaktor Core patches into VCV Rack, for example, and Martin Lueder has ported over Stanford’s FreeVerb as part of his plugin pack. As the community cements itself, we’ll likely only see more and more beloved bits of code find their way into VCV Rack.

A handful of cool, recent VCV developments

VCV Rack is selling commercial modules. Pulse 8 and Pulse 16 are drum-style sequencers, and there’s also an 8-channel mixer with built-in VCA level CV inputs. You’ll find them on the official VCV Rack site. Instead of donations, Andrew prefers that people purchase his modules, or buy the modules of other devs. All the modules are highly usable, with logical front-panel layouts and powerful CV control. Ed.: This in turn is encouraging, as it suggests a business model pathway for the developers of this unexpected runaway (initially) free hit. -PK

An open Music Thing module has come to VCV. The Turing Machine mkII by Music Thing Modular, released for Rack by Stellare Modular – a classic looping random CV generator, typically used for lead melodies or basslines – sees a port into VCV Rack by a third-party dev. Open source hardware is being modeled and deployed in an open source environment.

There’s now Ableton Link support. A module supporting Ableton Link, the live jamming / wireless sync protocol for desktop and mobile software, is available via a module released by Stellare. In addition to letting you join in with any software supporting Link, there’s a very handy clock offset.

Reaktor to VCV. Michael Hetrick is porting over Euro Reakt stuff from Reaktor Blocks, and making new modules in the process. Especially worth pointing out is his Github page, which includes ideas on what to actually do with the modules in the context of a patch:

VCV meets monome. Michael Dewberry’s Monome Modules allow users to connect their monome Grid controllers, or use a virtual monome within Rack itself. He’s currently also got a build of Monome’s White Whale module:

Hora’s upper-class tools and drums. Hora Music is, to my knowledge, the first “premium”-priced module release, at €40 for the package of modules. With a combination of sequencers, mixers, and drums, it could be the basis of whole projects. See:

I’ll be back next week with a few different recipes for ways you can make Rack part of your set-up, as well as a Q&A with the developer.

Ted Pallas is a producer and technologist based out of Chicago, Illinois. Find him at

The post A guide to VCV Rack, a software Eurorack modular you can use for free appeared first on CDM Create Digital Music.

by Ted Pallas at December 07, 2017 10:38 PM

GStreamer News

GStreamer 1.12.4 stable release

The GStreamer team is pleased to announce the fourth bugfix release in the stable 1.12 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.12.x.

See /releases/1.12/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

December 07, 2017 06:30 PM

December 04, 2017

Qtractor 0.8.5 - The Autumn'17 Release


While this Fall still lasts... and before the New Year/Sun cycle comes in and surely will go out...

And then there are a couple of things that most may find, well, mildly interesting: one is primarily and visually evident and adds up as a UI/UX thingy, while the other lands way more behind the scenes, not so obviously perhaps, but with an impact in the short and the not-so-short-but-long run. You tell me.

Truth is:

Qtractor 0.8.5 (autumn'17) is now released!

The short list is, or better yet, there are these:

  • File-system browser/tree-view (NEW)
  • Out-of-process/cache plugin scan (ALL plugin types, not just Linux-VST)

And the not so short list but quite the same information (aka. change-log):

  • Audio clip gain and panning properties are now taken into consideration when hash-linking (aka. ref-counting) their back-end buffers.
  • New out-of-process plug-in inventory scan and cache option, replacing the old (aka. dummy) VST plug-in scan option and extending its function to all other plug-in types: LADSPA, DSSI and also LV2 (cache only).
  • A File System browser and tree-view is finally integrated as a dockable-widget on the main application window (cf. main menu View / Window / File System).
  • Drag-and-dropping of session, audio and MIDI files over the main track-list (left pane) is now possible, allowing for yet another quick means to open a new session or add new tracks to the current session.
  • MIDI input/capture time-stamping has been fixed, so as to avoid missing inbound events when the play-head is near the loop-end point and the loop-start is set below the absolute first half-second.
  • LV2 Time/Transport speed information is now set to rolling when in audio export (aka freewheeling) mode.
  • Added *.SF3 to soundfont instrument files filter, on View > Instruments... > Import... file dialog.
  • A brand new View/Options.../Display/Meters/Show meters on track list/left pane option has been added.
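The out-of-process plug-in scan item above is about crash isolation: if probing a plug-in binary segfaults or hangs during discovery, only a throwaway child process dies, and the host carries on. A sketch of that pattern (the probe here merely stats the file to stay self-contained; a real scanner would dlopen() the binary and query its descriptors):

```python
import json
import subprocess
import sys

def scan_plugin(path, timeout_s=10):
    """Probe one plug-in file in a child process so a crash or hang
    during discovery cannot take down the host application itself."""
    probe = (
        "import json, os, sys\n"
        "p = sys.argv[1]\n"
        "json.dump({'path': p, 'size': os.path.getsize(p)}, sys.stdout)\n"
    )
    try:
        done = subprocess.run([sys.executable, "-c", probe, path],
                              capture_output=True, timeout=timeout_s)
    except subprocess.TimeoutExpired:
        return None   # hung plug-in: give up on it, host keeps running
    if done.returncode != 0:
        return None   # crashed plug-in: skip it, host keeps running
    return json.loads(done.stdout)
```

Caching the results (as Qtractor now does for LADSPA, DSSI, VST and LV2) means each binary is only probed once, so subsequent session loads skip the slow scan entirely.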


Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.


Project page:


Git repos:

Wiki (help wanted!):


Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Enjoy && Keep the fun.

Flattr this

by rncbc at December 04, 2017 07:00 PM


A free Max4Live device from Notstandskomitee

I shared one of my MaxForLive devices, a time-domain-based freezer made of 16 independent delay lines, good for creating drones. It’s a concept I keep implementing on diverse platforms – used for instance on the Notstandskomitee album The Golden Times, but also for TMS in the form of a Pd patch on our Raspberry Pi-based effect unit.
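The core of such a freezer is a delay line whose feedback snaps to unity when frozen, so whatever audio it happens to hold circulates forever as a sustained loop. A minimal sketch of one line of the bank (line lengths and count are up to the patch; the device described uses 16):

```python
class FreezeDelay:
    """One line of the freezer: while frozen, the stored audio is
    recirculated unchanged; otherwise new input is written in."""
    def __init__(self, length):
        self.buf = [0.0] * length
        self.pos = 0
        self.frozen = False

    def process(self, x):
        y = self.buf[self.pos]
        # frozen -> keep the old sample (unity feedback); else record input
        self.buf[self.pos] = y if self.frozen else x
        self.pos = (self.pos + 1) % len(self.buf)
        return y

def freezer_bank(lengths):
    """A bank of independent delay lines, e.g. 16 of differing lengths."""
    return [FreezeDelay(n) for n in lengths]
```

With 16 lines of different, mutually prime lengths, the frozen loops drift against each other, which is what turns a momentary capture into a shifting drone.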

by herrsteiner ( at December 04, 2017 02:09 PM

December 02, 2017


Tina Mariane Krogh Madsen: Body Resonance

The sounds from Tina Mariane Krogh Madsen's installation Body Resonance which was exhibited at Liebig12 in Berlin in June 2017 are now online in their full length for your hard-hitting listening pleasure!

If you wish to purchase the editioned publication, created especially for this piece (includes a full transcription of the performed actions, limited and numbered), send a message to

info AT tmkm DOT dk

by herrsteiner ( at December 02, 2017 10:45 PM


04: Post Sonoj and Winter Plans


It’s been a while since the last update – so what’s new in OpenAV land? Well, the Sonoj event took place, where the OpenAV Ctlra hardware access library was demoed! More details were shared about the intended goal of the Ctlra library, and what obstacles we as a community need to overcome to enable everybody to have better hardware workflows!

Winter Plans

OK – Ctlra made some progress, but what is going to happen over the next few weeks / months? More Ctlra library progress is expected, everything from improving the sensitivity of drum pads to adding a 7-segment display widget to the virtual device user-interface.

So much for the easy part – the hard part is the mapping infrastructure for hardware and software – and OpenAV is looking at that problem, and prototyping various solutions at the moment. No promises – but this is currently the #1 problem causing hardware-based workflows to not integrate well for the majority of musicians in the Linux audio community….

Stay tuned!

by Harry at December 02, 2017 12:44 AM

December 01, 2017

open-source – CDM Create Digital Music

MusicMakers Hacklab Berlin to take on artificial minds as theme

AI is the buzzword on everyone’s lips these days. But how might musicians respond to themes of machine intelligence? That’s our topic in Berlin, 2018.

We’re calling this year’s theme “The Hacked Mind.” Inspired by AI and machine learning, we’re inviting artists to respond in the latest edition of our MusicMakers Hacklab hosted with CTM Festival in Berlin. In that collaborative environment, participants will have a chance to answer these questions however they like. They might harness machine learning to transform sound or create new instruments – or even answer ideas around machines and algorithms in other ways, through performance and composition ideas.

As always, the essential challenge isn’t just hacking code or circuits or art: it’s collaboration. By bringing together teams from diverse backgrounds and skill sets, we hope to exchange ideas and knowledge and build something new, together, on the spot.

The end result: a live performance at HAU2, capping off a dense week-plus festival of adventurous electronic music, art, and new ideas.

Hacklab application deadline: 05.12.2017
Hacklab runs: 29.1 – 4.2.2018 in Berlin (Friday opening, Monday – Saturday lab participation, Sunday presentation)

Apply online:
MusicMakers Hacklab – The Hacked Mind – Call for works

We’re not just looking for coders or hackers. We want artists from a range of backgrounds. We want people to wrestle with machine learning tools – absolutely, and some are specifically designed to be trained to recognize sounds and gestures and to work with musical instruments. But we also hope for unorthodox artistic reactions to the topic and its larger social implications.

To spur you on, we’ll have a packed lineup of guests, including Gene Kogan, who runs the amazing resource ml4a – machine learning for artists – and has done AV works like these:

And there’s Wesley Goatley, whose work delves into the hidden methods and biases behind machine learning techniques and what their implications might be.

Of course, machine learning and training on big data sets opens up new possibilities for musicians, too. Accusonus recently explained that to us in terms of new audio processing techniques. And tools like Wekinator now use training machines as ways of more intelligently recognizing gestures, so you can transform electronic instruments and how they’re played by humans.

Dog training. No, not like that – training your computer on dogs. From ml4a.

Meet Ioann Maria

We have as always a special guest facilitator joining me. This time, it’s Ioann Maria (pictured, at top/below), whose AV / visual background will be familiar to CDM readers, but who has since entered a realm of specialization that fits perfectly with this year’s theme.

Ioann wrote a personal statement about her involvement, so you can get to know where she’s come from:

My trip into the digital started with real-time audiovisual performance. From there, I went on to study Computer Science and AI, and quickly got into fundamentals of Robotics. The main interest and focus of my studies was all that concerns human-machine interaction.

While I was learning about CS and AI, I was co-directing LPM [Live Performers Meeting], the world’s largest annual meeting dedicated to live video performance and new creative technologies. In that time I started attending Dorkbot Alba meet-ups – “people doing strange things with electricity.” From our regular gatherings arose an idea of opening the first Scottish hackerspace, Edinburgh Hacklab (in 2010 – still prospering today).

I grew up in the spirit of the open source.

For the past couple of years, I’ve been working at the Sussex Humanities Lab at the University of Sussex, England, as a Research Technician, Programmer, and Technologist in Digital Humanities. SHL is dedicated to developing and expanding research into how digital technologies are shaping our culture and society.

I provide technical expertise to researchers at the Lab and University.

At the SHL, I do software and hardware development for content-specific events and projects. I’ve been working on long-term jobs involving big data analysis and visualization, where my main focus was to develop data visualization tools — for example, looking for speech patterns and analyzing anomalies in criminal proceedings in the UK over the centuries.

I also touched on the technical possibilities and limitations of today’s conversational interfaces, learning more about natural language processing, speech recognition and machine learning.

There’s a lot going on in our Digital Humanities Lab at Sussex and I’m feeling lucky to have a chance to work with super brains I got to meet there.

In the past years, I dedicated my time speaking about the issues of digital privacy, computer security and promoting hacktivism. That too found its way to exist within the academic environment – in 2016 we started the Sussex Surveillance Group, a cross-university network that explores critical approaches to understanding the role and impact of surveillance techniques, their legislative oversight and systems of accountability in the countries that make up what are known as the ‘Five Eyes’ intelligence alliance.

With my background in new media arts and performance, and some knowledge in computing, I’m awfully curious about what will happen during the MusicMakers Hacklab 2018.

What fascinating and sorrowful times we happen to live in. How will AI manifest and substantiate our potential, and how will we translate this whole weight and meaning into music, into performing art? Is it going to be us for, or against, the machine? I can’t wait to meet our to-be-chosen Hacklab participants, link our brains and forces into something creative-tech-new – entirely IRL!

MusicMakers Hacklab – The Hacked Mind – Call for works

In collaboration with CTM Festival, CDM, and the SHAPE Platform.
With support from Native Instruments.

The post MusicMakers Hacklab Berlin to take on artificial minds as theme appeared first on CDM Create Digital Music.

by Peter Kirn at December 01, 2017 05:42 PM


new Notstandskomitee track now and album in 2018

Notstandskomitee is working on a new album, due for 2018. Here is a first demo which can be downloaded for a while, grab it while you can.

by herrsteiner ( at December 01, 2017 01:54 PM

November 29, 2017

Audio – Stefan Westerfeld's blog

gst123-0.3.5 and playback rate adjustment

A new version of gst123, my command line media player – based on gstreamer – is available at

Thanks to David Fries, this version supports playing media faster or slower than the original speed, using { [ ] } as keyboard commands. This works; however, it also changes the pitch, so for instance speech sounds unnatural if the playback rate is changed.

I’ve played around with YouTube’s speed setting a bit; it preserves pitch while changing playback speed, providing acceptable audio quality. There are open source solutions for doing this properly: we could get comparable results if we used librubberband (GPL) to correct the pitch in the pipeline after the actual decoding. However, there is no librubberband GStreamer plugin as far as I know.

There is also playitslowly, which does the job with existing GStreamer plugins, but I think the sound quality is not as good as what librubberband would produce.

Ideally, I think playback pitch correction should not be done in gst123 itself (as other players may want to use the feature). So if anybody feels like working on this, it would be a nice project to hack on. Feel free to propose patches to gst123 for pitch-correct playback rate adjustment – I would be happy to integrate them – but maybe it should just go into playbin (perhaps as separate options: 1. set playback rate, 2. enable pitch correction), so the code could live in GStreamer.
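For context on the "existing plugins" route: GStreamer ships a scaletempo element (in gst-plugins-good) that time-stretches without shifting pitch when the player changes the playback rate via a seek. On the player side, the { [ ] } key handling is just multiplicative rate stepping – a sketch of that logic, with step factors that are illustrative guesses rather than gst123's actual values:

```python
def apply_rate_key(rate, key, lo=0.25, hi=4.0):
    """Step the current playback rate from a keypress, clamped to
    [lo, hi]. The factors below are guesses, not gst123's real steps."""
    factors = {"{": 0.5, "[": 1 / 1.1, "]": 1.1, "}": 2.0}
    return max(lo, min(hi, rate * factors[key]))
```

Using reciprocal factors for the paired keys means `]` followed by `[` lands back on the original rate, which is the behavior users expect from such bindings.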

by stw at November 29, 2017 04:43 PM

November 24, 2017

open-source – CDM Create Digital Music

Watch a completely mental set of MeeBlip synth stop motion animations

You’ve got your acid basslines. Then, you’ve got your acid trips involving a bass synth. Roikat takes us in the direction of the latter.

Creatures dance around urban streets. AI deep dream wildlife stares at you on title cards. Worms amiably amble from car doors and make their way onto the amplitude knobs.

And there are cats. Of course there are cats.

It’s all adorable stop motion with the raw sounds of our MeeBlip synth and no, I really didn’t have any idea this was going to happen until I spotted it on YouTube. Roikat is evidently both animator and MeeBlip composer. The combination is brilliant. I’d go for a whole show.

Your sound demos will never be the same. Behold:

Of course, perhaps the wildest of all is this … ultrasonic demo?! (Watch it drive your cats crazy.)

Plus there was a Halloween jam some time back

Whoever you are, Roikat, you’re crazy and a genius. Looking forward to more synth vids and those promised presets for Dave Smith – we’ll share them here!

The MeeBlip in question here is anode series, but our triode is closely related to the anodes – and it’s on a Black Friday sale now with a lower price and all the cables you need included:

by Peter Kirn at November 24, 2017 10:05 PM

November 21, 2017

KXStudio News

Breaking changes in Carla Plugin Host

Hello everyone, I have some bad and good news about Carla.
If you've been following the development on the git repository you likely know what this is about.
There were some major changes done to Carla's code base in the past few days.

The biggest change is the removal of the Juce library.
The reasons for this are well known by some developers, but I'll not write about them here.
After looking around for alternatives, I decided to fork an older GPLv2 compatible version of Juce and strip it down to the really essential parts needed to get Carla to build and run - even if it meant losing some of the features.
The possibility to change to an entirely different C++ framework crossed my mind, but the amount of effort and breaking changes would be too big.
I called the end result 'water'. You can say Carla doesn't need Juce, water is fine ;)
There's only a few classes and files needed for I/O, XML and AudioGraph handling, everything else is gone. \o/

The implications of this change are not big for Linux users, and it is even a source of good news for users of other open-source operating systems like FreeBSD and HaikuOS.
In short, because Juce is no longer there, we have lost support for VST3 and AudioUnit plugins.
Plus VST2 plugins on Windows and MacOS are now handled by Carla's code instead of relying on Juce.
This heavily reduces the number of compatible plugins handled by Carla, because Juce had a lot of hacks in order to make many commercial plugins run properly.
Also Carla on Windows and MacOS used Juce to handle Audio and MIDI devices, which now has been changed to RtAudio and RtMidi.
RtAudio & RtMidi are not as fully-featured as Juce was (we lose dynamic MIDI ports, for example), but I am glad to have Juce gone from the code-base.
(You can say that parts of it are still there, but my conscience is clear, and Carla remains self-contained which was my main point since v2.0 development started)

The next breaking change relates to the internal plugins used in Carla.
The plugins that already exist as LV2 will stop being exported with the carla.lv2 bundle.
Plus these plugins will soon be removed from the default build.
They quickly bloat the Carla binaries, as they include their artwork, not to mention increasing clone and build times.
The plan is to have them disabled by default and moved into a new repository as a submodule.
Oh and the "experimental" plugins are going away soon. It was a mistake to make them Carla-specific in the first place, they should be regular audio plugins instead.

Another breaking change is the removal of modgui support.
The code only worked for PyQt4, which is no longer the default for Carla source-based builds.
Plus it required webkit, which brings a big list of dependencies. I would have to port the code to webengine/chromium to make it work with PyQt5... no thanks.

The final breaking change is the introduction of the Experimental option in Carla's settings.
Everything that is not stable at the moment went there as an option, and got disabled by default. This includes:

  • Plugin bridges
  • Wine options
  • Force-stereo mode
  • Canvas eye-candy
  • Canvas with OpenGL

All new in-development / testing features will get introduced as experimental first.
This will speed up the release of 2.0, since not everything needs to be finished for it.
For example, plugin bridges can still be there and not fully implemented, and we still have 2.0-stable out!

That's it! Thanks for reading so far.
In other news, I gave a small presentation about Carla in this year's Sonoj Conference.
You can check it out here:

Carla 2.0-beta6 will be out soon :)

by falkTX at November 21, 2017 10:19 PM

open-source – CDM Create Digital Music

$30 programmable, open Arduino ArduTouch synth is here

It’s $30. It can teach you how to code – or it can just be a fun, open synth. The ArduTouch by Mitch Altman is now shipping.

I wrote about ArduTouch earlier, with loads more on the instrument’s creator:
ArduTouch is an all-in-one Arduino synthesizer learning kit for $30

It’s a simple digital instrument based on the open source Arduino prototyping and coding platform, meaning it connects to an environment widely used by artists, hobbyists, and educators. Now Mitch shares that the product is available and shipping – and because this is an open source project, there’s a dump of new code, too.

And, I just uploaded the latest version of the ArduTouch Arduino sketches, including more way cool synthesizers, and a new Arduino library including more example synths (that also act as tutorials on how to create your own synthesizers).

Arduino-based synth projects have been here and there in some form back to the early days of Arduino. And of course Arduino as a platform is often a starting point into hardware development, even for students who have never written a line of code in their lives.

What’s cool about this is, you get a reliable platform on which to upload that code, and a touch interface and speaker so you can hear results. Plus, one of Mitch’s special superpowers has long been his ability to get others involved and to teach in an accessible way – so working through his code examples is a great experience.

This being Arduino, you can program over USB.

There are some really nice, musical ideas in there – like this is something that will make sense to musicians, not just to people who like mucking about with hardware. And since the code is out there, it could inspire other such projects, even on other platforms.

Proof that it makes noises – though, of course, you’re welcome to try and make noises you like!

I’m hoping to have one for my mini-winter-holiday break (uh, whichever winter holiday I manage to wrap that around… let’s hope not St. Patrick’s Day, but sooner!)

Have at it:

The post $30 programmable, open Arduino ArduTouch synth is here appeared first on CDM Create Digital Music.

by Peter Kirn at November 21, 2017 09:07 PM

November 20, 2017

MOD Devices Blog

Tutorial: Control Chain distance sensor

Hi again to all MOD and Arduino enthusiasts!

We’ve been working on Control Chain devices for quite a while since the last post, and we feel like it’s time to add another example to the Control Chain library. So here’s another blog post to show how simple it is to build controllers for your MOD Duo.

As some of you saw on our Instagram page, we hooked up an ultrasonic distance sensor to the Arduino Control Chain shield and it was really fun to play around with.


  1. One Arduino Uno or Due
  2. One Arduino Control Chain shield
  3. One HC-SR04 Distance Sensor (or any other ultrasonic sensor, they mostly work the same way)
  4. Some soldering tin
  5. Some wire
  6. (Optional) Some 10K linear potentiometers
  7. (Optional) Something to put your build in


schematic arduino shield

Schematic for the Ultrasonic sensor build

The schematic of this build is quite straightforward. The sensor has four pins: VCC, GND, Echo and Trigger. These are connected to the Arduino shield.

For this example, two potentiometers are used to set the minimal and maximal values that the sensor measures. A third potentiometer is used as a ‘sensitivity’ control.

Notice the quotes around ‘sensitivity’: this variable does not change the behavior of the sensor at all. It only controls a weighted average filter function in the code, which smooths out inaccurate measurements that may occur in the sensor.
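
This kind of filter is easy to picture outside of Arduino code. Here is a minimal Python sketch of a weighted (exponential) moving average; the function name, the `alpha` weight, and the sample values are all made up for illustration, not taken from the actual sketch:

```python
def smooth(readings, alpha=0.25):
    """Exponentially weighted average: each new reading only nudges
    the output, so a single bad measurement barely registers."""
    value = readings[0]
    out = []
    for r in readings:
        value = alpha * r + (1.0 - alpha) * value
        out.append(value)
    return out

# A spike of 100 in an otherwise steady stream of 20s is heavily damped
filtered = smooth([20, 20, 100, 20, 20])
```

A smaller `alpha` gives heavier smoothing but a slower response, which is exactly the trade-off the ‘sensitivity’ potentiometer lets you adjust by hand.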


When I started writing this code, I could not delay the main program for longer than a few milliseconds because I was using an older version of the Control Chain library. The libraries that are often used for this sensor delay the main loop for too long. Reading through the datasheet, however, gave a better insight into how the sensor works.

Now the pulseIn() function is used to manually read from and write to the trigger and echo pins of the distance sensor. With Control Chain library version 0.5.0 and up you can simply use the library from the sensor manufacturer, but since we can do it manually, let’s leave it like this. It saves some memory on the Arduino!
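
The underlying arithmetic is simple: the HC-SR04’s echo pulse width is the round-trip time of the ultrasonic ping, so distance is pulse time multiplied by the speed of sound and divided by two. A quick Python illustration of that conversion (the function name is mine; the speed-of-sound constant is the usual room-temperature figure):

```python
SPEED_OF_SOUND_CM_PER_US = 0.0343  # ~343 m/s at room temperature

def pulse_to_cm(pulse_us):
    """Convert an HC-SR04 echo pulse width (microseconds) to distance
    in cm. Divide by two: the pulse covers the trip out and back."""
    return pulse_us * SPEED_OF_SOUND_CM_PER_US / 2.0

# A ~583 us echo pulse corresponds to roughly 10 cm
d = pulse_to_cm(583)
```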

At the beginning of the code, there is a #define that you can use to invert the way the distance corresponds to the actuator. By setting this #define to 0, the smaller the distance, the lower the actuator’s value. When this #define is set to 1, the smaller the distance, the bigger the actuator’s value. When playing around with this in the office, we found that it is really a matter of preference so that’s why we left it easily changeable in the code.
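
A small Python sketch of that invert logic (the `INVERT` flag plays the role of the #define described above; the function name and the default 20–50 cm range are illustrative, not lifted from the actual sketch):

```python
INVERT = 0  # like the #define: 0 = closer means lower, 1 = closer means higher

def distance_to_actuator(distance_cm, min_cm=20, max_cm=50):
    """Clamp the measured distance to [min_cm, max_cm] and
    scale it to a 0..1 actuator value, optionally inverted."""
    clamped = max(min_cm, min(max_cm, distance_cm))
    value = (clamped - min_cm) / float(max_cm - min_cm)
    return 1.0 - value if INVERT else value

# With INVERT = 0, a small distance gives a small actuator value
near, far = distance_to_actuator(20), distance_to_actuator(50)
```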


  1. Solder the Vcc pin of the sensor to the +5 track of the CC shield
  2. Solder the Ground pin of the sensor to the GND track of the CC shield
  3. Solder the echo pin to the corresponding digital pin (by default pin 10) of the CC shield
  4. Solder the trigger pin to the corresponding digital pin (by default pin 11) of the CC shield
  5. (Optional) Solder the potentiometers’ outer pins to +5V and GND of the CC shield
  6. Solder the potentiometers’ inner pins to the corresponding analog inputs of the CC shield (by default A0, A1 & A2)


1. Follow the instructions on our Github page and install the dependencies.

2. Change the defines to your preference and, if you don’t want to use potentiometers, replace the analogRead() calls with constant values like this:

Line 88: PotValue = 0.5; // (analogRead(A0) / 1023.0);
Line 124: MINDISTANCE = 20; // map(analogRead(A1), 0, 1023, 5, 20);
Line 125: MAXDISTANCE = 50; // map(analogRead(A2), 0, 1023, 20, 65);

In line 88 you set the ‘sensitivity’ of the sensor between 0 and 1. In lines 124 and 125 you can set the minimal and maximal value of the sensor in centimeters.

3. Upload the code to your Arduino

4. All done, time to test!

5. Connect the CC shield to your MOD Duo. If everything went well you should see a new CC device popping up

Control Chain device on MOD GUI


6. Assign the plugin parameter of your choice to the CC device actuator.

Ultrasonic sensor addressing

Address it like any actuator on the GUI

7. Voilà! You should have an up and running distance-controlled actuator.


Ultrasonic sensor in a box

Our own custom build with XF4 prototype scrap

(Optional) You can put your build in a cool box. I used some old 3D prints which were used for the XF4 prototypes. It seemed like a nice fit!

Inside Arduino Control Chain Ultrasonic sensor for MOD Duo

Fits nicely!

You just finished building your own Control Chain distance sensor. We hope this is helpful and inspires you guys to also make some crazy controllers!

Don’t hesitate to come and talk to us on the forum if you have any questions about Control Chain devices, the Arduino or anything else! Users have also been busy with their own creations and have been sharing them on the forum. Check them out here or here!

And keep on rocking!



by Jan Janssen at November 20, 2017 06:12 PM

GStreamer News

Orc 0.4.28 bug-fix release

The GStreamer team is pleased to announce another maintenance bug-fix release of liborc, the Optimized Inner Loop Runtime Compiler. Main changes since the previous release:

  • Numerous undefined behaviour fixes
  • Ability to disable tests
  • Fix meson dist behaviour

Direct tarball download: orc-0.4.28.

November 20, 2017 05:00 PM

November 14, 2017

open-source – CDM Create Digital Music

Two new ways to integrate MeeBlip triode synths with Ableton Live, free

Software control means preset recall and easy automation, on top of all that tactile control. Here’s the latest combination of our MeeBlip and Max for Live.

I don’t know exactly what astrological event causes people to decide to create controller layouts in Max for Live for the MeeBlip triode. But whatever it is, two friends wrote me last night from two different hemispheres to say they’d decided that they needed to create a tool for using their MeeBlip monosynths. And, with no contact with one another, they both released their work within a few hours.

Here’s what that means for you.

MeeBlip triode is our affordable, red-colored hardware synth with a friendly, edgy voice and analog filter. And we’re down to the end of this run, but … there are a few left. Plus, nice timing (they really didn’t know this) – we’ve just started our Black Friday sale early, with all the free cables you need and free North American shipping.

Ableton Live, so long as you’ve got Live Suite (that is, Max for Live included), lets you include devices that control hardware synths. Since everything you see on the front panel of triode can be controlled by MIDI – plus a few things that aren’t even there – using these add-ons lets you automate and store and recall presets.

Why would you want to do that, given you’ve already got this box with knobs and switches? Well, you might want to store and recall presets with a particular Live project, so your ‘blip is sounding the same way when you load it up and get back to work, or to save a sound you really like. And you might want to use Live’s automation controls to sculpt your sound as part of a pattern, by drawing it in or using Push hardware.
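
Under the hood, storing and resending a panel setting boils down to replaying MIDI Control Change messages, three bytes each: a status byte (0xB0 plus the channel number), a controller number, and a value. A generic Python sketch of building such a message; the controller number in the example is hypothetical, not an actual triode CC assignment:

```python
def control_change(channel, controller, value):
    """Build a raw 3-byte MIDI Control Change message.
    channel is 0-15; controller and value are 0-127."""
    if not (0 <= channel <= 15 and 0 <= controller <= 127 and 0 <= value <= 127):
        raise ValueError("out of MIDI range")
    return bytes([0xB0 | channel, controller, value])

# CC #74 (a common filter-cutoff assignment, hypothetical here) at full value
msg = control_change(0, 74, 127)
```

A “preset” is then just a list of such messages, one per knob and switch, replayed when the Live set loads.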

And from there, you can add additional features.

Both of these devices are free, so you can grab both and see which you like best. From South American virtuoso hypergeek Gustavo Bravetti comes a cute, color-coordinated design. It looks the nicest, and also includes full resend, a helper for drawing envelopes, and more:

Triode CTRL 1.0

Don’t miss Gustavo’s amazing performances and so on via his Facebook artist page. Check the videos:

And in this corner:

Kent Williams aka Chaircrusher has made something that isn’t quite as pretty. But as it’s based on previous, similar work, it might be a way to learn how to make these for yourself. Kent says it’s “blindingly” simple – which is seriously a good thing when you’re learning. And since the MeeBlip is nice and simple, it makes a great template.

Meeblip Triode Control 0.01

Kent’s also an awesome musician, so check out:

What? You don’t have a triode?

We can help.

Let’s start Black Friday early. Let’s start your holiday shopping season early – by making sure you (or a lucky person who’s getting a triode gift) get all the cables the triode needs.

So now, triode includes our audio & MIDI cable bundle ($24.95 value) until November 30, or while supplies last. Free shipping in the USA. As always, our power adapter is included. And this on top of our new everyday US$119.95 price.

Have at it:

Get a MeeBlip triode synth

by Peter Kirn at November 14, 2017 06:15 PM

November 12, 2017


block 4 newsletter

Finally, we are going to have a monthly Block 4 newsletter. People recommended it to us for years, but we ignored it, concentrating on Facebook after Myspace went down.
In October the so-called organic reach of Facebook declined dramatically once more, meaning that our posts are not reaching you anymore. Facebook altered the mechanism that decides which posts are shown to you in order to sell more ads. It has also become difficult to invite more than a certain number of people to Facebook events, hurting underground artists, independent labels and small spaces.
They need to make ends meet, but so do we. We tried ad campaigns in the last months to get the word out about our activities, but the results are not convincing. Clicks and likes came in, but frankly, these people don’t look like they listen to our music and fancy our art. Don’t get us wrong, Block 4 embraces everyone, and there have been recorded cases of Texas housewives listening to our music in the last three decades of Block 4’s existence, but hundreds of them smell like a click farm.
So we decided to try something new for us, a monthly newsletter, so you don’t miss out on anything. It’s hosted through Mailchimp; be one of the first to sign up here:

by herrsteiner ( at November 12, 2017 03:13 PM

November 07, 2017

open-source – CDM Create Digital Music

Let’s talk about open tools at Ableton Loop and beyond

From libraries to circuits to hacks to instructions, a lot of you are sharing the stuff you make. We’re using Ableton Loop to bring some of you together.

Ableton’s Loop festival/conference/summit is now more than just a get-together for Ableton users. It’s become a kind of international music happening. And so lots of interesting folks are gathering here in Berlin later this week.

That’s just a tiny, tiny fraction of the people reading this, though. Now, if only we could get more of you here, sort of virtually.

With that in mind, I’m going to do an open call for any kind of project you’d like to share. I’ll survey these and keep tabs on them here in CDM. And for those of us who are gathering in Berlin Sunday, we can share in person and get back to all of you through the power of the Internet.

By “open,” I mean anything that has some kind of permissive license for copying and modification, or that’s totally free. It could be a project for making contact mics or documenting how to make field recordings, too – not just software and hardware. And it doesn’t have to be Ableton-related, either – I do expect a good mix of people already at this event.

Of course, with open source tools, this is really important. Just making something open source doesn’t necessarily get people to collaborate on it. So if you want to invite users, testers, collaborators, and other feedback, you need to make connections.

Here’s the notion, as described on the Loop site:

A get-together to exchange, discover, and collaborate on open and handmade hardware and software.

Sometimes, realising the sounds in your imagination means making or modding your own tools and instruments. This meetup is a chance for us to share these inventions, born of necessity, with each other. CDM editor Peter Kirn talks about how to use open licensing to allow collaboration and learning, and takes a look at some of the more interesting creations in today’s global music community. Then, he’ll hand the floor over to you. Pack your own handmade gear, custom code, patches or hacks if you’ve got them, and be ready to play with others.

Open Tools Meetup [Ableton Loop; Sunday, 11-13:00 Maker Zone]

And if you want to submit your project for that get-together (or later coverage on CDM), fire away here! I’m curious what you’re working on.

After all, CDM is what it is – and arguably Ableton Live, too – because of people getting started with creative controllers, hacks, and new ways of making and playing music. It’s time to check in on the state of that landscape, and the stuff you’re most passionate about.

(and yeah, if you sent something lately and I ignored it, please don’t be shy about nagging me now! Only so many hours in the day…)

For added inspiration: Let’s remember those who came before. Grandmaster Flash, pictured here, showing some DIY futurism. Via the wonderful Leah Buechley.

The post Let’s talk about open tools at Ableton Loop and beyond appeared first on CDM Create Digital Music.

by Peter Kirn at November 07, 2017 09:56 PM

November 05, 2017


TMS concert video from the London gig

TMS movement(al) distortion(s) performed at Sounding DIY @ IKLECTIK in London on October 5th 2017.

by herrsteiner ( at November 05, 2017 12:08 PM

November 04, 2017

Libre Music Production - Articles, Tutorials and News

October 29, 2017

Vee One Suite 0.8.5 - An Autumn'17 release


The Vee One Suite of so-called old-school software instruments (synthv1, a polyphonic subtractive synthesizer; samplv1, a polyphonic sampler synthesizer; drumkv1, yet another drum-kit sampler; and padthv1, a polyphonic additive synthesizer) is here released for the seasonal greetings.

All still available in dual form:

  • a pure stand-alone JACK client with JACK-session, NSM (Non Session Management) and both JACK MIDI and ALSA MIDI input support;
  • a LV2 instrument plug-in.

The common change-log for this Fall goes like this:

  • Sample files are now saved as symlinks when saving to JACK and/or NSM session directories/folders (applies to samplv1 and drumkv1 only).
  • Opening multiple preset files is now possible, populating the preset drop-down listing, while only the first one is loaded effectively into the scene as usual.
  • Mono(phonic) "Legato" mode option introduced.
  • Desktop entry specification file is now finally independent from all build/configure template chains, whatever.
  • Updated target path for's AppStream metainfo file (formerly AppData).

The Vee One Suite are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

And now, in reverse order of appearance:


padthv1 - an old-school polyphonic additive synthesizer

padthv1 0.8.5 (autumn'17) released!

padthv1 is an old-school polyphonic additive synthesizer with stereo fx

padthv1 is based on the PADsynth algorithm by Paul Nasca, as a special variant of additive synthesis.




git repos:



drumkv1 - an old-school drum-kit sampler

drumkv1 0.8.5 (autumn'17) released!

drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.




git repos:



samplv1 - an old-school polyphonic sampler

samplv1 0.8.5 (autumn'17) released!

samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.




git repos:



synthv1 - an old-school polyphonic synthesizer

synthv1 0.8.5 (autumn'17) released!

synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.




git repos:



Enjoy && have fun ;)

by rncbc at October 29, 2017 06:00 PM

October 27, 2017

open-source – CDM Create Digital Music

Leave this free software running, and it’ll come up with rhythms for you

Have you ever wanted to enslave your own Aphex Twin, then have him make endless rhythms for you, but worried about care and feeding of a Richard D. James?

Do you want to soak up the glory of the life of an IDM musician (the touring in helicopters, the seven-figure royalties), but want to avoid the actual work of making the music?

Well, then this Csound-based tool is for you. Run it, and it spits out a nice random rhythm or two. Leave it running, and it’ll generate a whole folder full of rhythms at various bpm. Dump those into Ableton Live, pick out the ones you like, and … ah, okay, now you will have to do some work turning this into music. (Effects … maybe. Arrangement … well, or just loop one endlessly and pop off for lunch. Or make them into something new, original, and very much your own. Kind of up to you, really, though soon we should have some machine learning that decides for you what you probably would like to choose.)

It’s all the fault – erm, work – of one Micah Frank, who actually makes his living as a sound designer. (Meaning, of course – Micah what are you doing?!) Switch it on, and wait for hundreds of sounds to come your way.

Right now, it’s pretty simple – and it takes all night because it’s real-time, not offline. (On the other hand, you could output sound and have lovely, very weird and erratic, sonic wallpaper.) But Micah plans lots of additional features here, plus a whole compositional environment.

So there you have it. Skip the all nighter. Catch up on sleep.

You saw it here first.

Nice to see this sketch from when this was conceived.

The post Leave this free software running, and it’ll come up with rhythms for you appeared first on CDM Create Digital Music.

by Peter Kirn at October 27, 2017 03:05 PM

October 26, 2017

digital audio hacks – Hackaday

Raspberry Pi Media Streamer Is Combat Ready

We are truly living in the golden age of media streaming. From the Roku to the Chromecast, there is no shortage of cheap devices to fling your audio and video anywhere you please. Some services and devices may try to get you locked in a bit more than we’d like (Amazon, we’re looking at you), but on the whole if you’ve got media files on your network that you want to enjoy throughout the whole house, there’s a product out there to get it done.

But why buy an easy-to-use and polished commercial product when you can hack together your own for twice the price and labor over it for hours? While you’re at it, why not build the whole thing into a surplus ammo can? This is the line of logic that brought [Zwaffel] to his latest project, and it makes perfect sense to us.

It should come as no surprise that a military ammo can has quite a bit more space inside than is strictly required for the Raspberry Pi 3 [Zwaffel] based his project on. But it does make for a very comfortable wiring arrangement, and offers plenty of breathing room for the monstrous 60 watt power supply he has pumping into his HiFiBerry AMP+ and speakers.

On the software side the Pi is running Max2Play, a Linux distro designed specifically for streaming audio and video remotely. [Zwaffel] says that with this setup he is able to listen to music on his Squeezebox server as well as watch movies via Kodi.

While none are quite as battle-hardened as this, we have seen several other Raspberry Pi Squeezebox clients over the years if you’re looking for more inspiration.

Filed under: digital audio hacks, Raspberry Pi

by Tom Nardi at October 26, 2017 03:30 PM

October 23, 2017

digital audio hacks – Hackaday

The Grafofon: An Optomechanical Sequencer

There are quick hacks, there are weekend projects, and then there are years-long journeys towards completion. [Boris Vitazek]’s grafofon falls into the latter category. His creation can best be described as an electromechanical sequencer synthesizer with a multiplayer mode.

The storage medium and interface for this sequencer is a thirteen-meter loop of paper that is mounted like a conveyor belt. Music is composed by drawing on the paper or placing objects on it. This is usually done by the audience, and the fact that the marker isn’t erased makes the result collaborative and incremental.

These ‘scores’ are read by a camera and interpreted by software. This is a very vague description of this device, for a reason: the build went on over six years, and both hard- and software went through several revisions in that time. It started as a trigger for MIDI notes and evolved from there.

In his write-up [Boris] explains the technical aspects of each iteration. He also tells the stories of the people he met while working on the grafofon and how they influenced the build. If this look into the art world reminds you of your local hackerspace, it is because these worlds aren’t that far apart.

We sure do like large musical machines like this contraption by [Wintergatan] and sequencers made from random stuff also get our love. If this kind of project piques your interest, be sure to check out the ‘musical hacks’ category below.

Filed under: digital audio hacks, Musical Hacks

by Christian Trapp at October 23, 2017 11:00 AM

October 19, 2017

News – Ubuntu Studio

Ubuntu Studio 17.10 Released

We are happy to announce the release of our latest version, Ubuntu Studio 17.10 Artful Aardvark! As a regular version, it will be supported for 9 months. Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a complete list […]

by rosco2 at October 19, 2017 03:56 PM

October 16, 2017

open-source – CDM Create Digital Music

Jazzari lets you sketch musical ideas in your browser, with JavaScript

Open up a browser tab, use code to sketch musical loops and grooves (using trigonometry, even), and play / export – all in this free tool.

Jazzari has been making the rounds among passionate music tech nerds, as a lovely free code toy. There are a bunch of easy-to-modify tutorial examples, so you don’t necessarily have to know any JavaScript or code. But there’s no graphical control at all – that visualization and the cute cartoon characters are just to give you feedback on what the code does.
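
Jazzari itself is JavaScript in the browser, but the “grooves from trigonometry” idea is easy to sketch in any language. A toy Python example of my own (nothing here is Jazzari’s API) that turns one cycle of a sine wave into a 16-step velocity pattern:

```python
import math

def sine_pattern(steps=16, threshold=0.0):
    """One cycle of a sine wave as a step pattern: velocity follows
    the curve, and a step only triggers above the threshold."""
    pattern = []
    for i in range(steps):
        level = math.sin(2 * math.pi * i / steps)
        velocity = int(round(64 + 63 * level))  # scale -1..1 into 1..127
        pattern.append(velocity if level > threshold else 0)
    return pattern

# The first half of the cycle plays, swelling to a peak; the rest rests
groove = sine_pattern()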

So — why?

Developer Jack Schaedler is quick to caution that this is neither intended for teaching code nor teaching music, that better tools exist for each. (Sonic Pi is a particularly accessible entry for learning how to express musical ideas as code, used even by kids!)

Then again, you don’t have to believe him. That same spirit that made him decide to do this for fun seems to be infectious. And this might be an entry into making this stuff.

For coders, it’s yet another chance to discover some code and libraries and perhaps bits and pieces and inspiration for your own next project. For everyone else, well, it’s a terrific distraction.

And you can export MIDI, so this could start a new musical project.

By the way, someone want to join me in building this actual inspiration for Jazzari? It could be killer by next summer, at least.

The name is a riff on the 12th century scholar and inventor Ismail al-Jazari. al-Jazari is thought to have invented one of the first programmable musical machines, a “musical automaton, which was a boat with four automatic musicians that floated on a lake to entertain guests at royal drinking parties.”

Bonus, for my Arabic, Kurdish, and Persian friends in electronic music – no one knows which of those can accurately claim this guy. We clearly need to get something going.

The post Jazzari lets you sketch musical ideas in your browser, with JavaScript appeared first on CDM Create Digital Music.

by Peter Kirn at October 16, 2017 09:41 AM

October 07, 2017


03: OpenAV @ Sonoj

Hey folks!

Some of you are probably aware of the Sonoj Convention; well, OpenAV is going to be talking about hardware and software there – demonstrating the latest progress in integrating hardware controllers with audio software! Are you in the Cologne area on the 4th or 5th of November? You should attend too  : )

Interested in details? We’re gonna talk about what ~2000 lines of code means to the user of Ctlra-enabled software, and how 13 lines of code make that useful to a user! It enables integration of hardware in novel ways… even if you don’t have access to the hardware!

Looking forward to seeing you all at Sonoj! -Harry of OpenAV

by Harry at October 07, 2017 10:48 AM

October 04, 2017


0.4.6 released

A new version of aubio, 0.4.6, is now available.

This version includes:

  • yinfast, a new version of the YIN pitch detection algorithm, that uses spectral convolution to compute the same results as the original yin, but with a cost O(N log(N)), making it much faster than the plain implementation (O(N^2))

  • Intel IPP optimisations (thanks to Eduard Mueller), available for Linux, MacOS, Windows, and Android

  • improved support for emscripten (thanks to Martin Hermant), which compiles the aubio library as a JavaScript module and lets you run aubio's algorithms directly from within a web page.
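
A note on the yinfast entry above: the quantity at the heart of YIN is the difference function d(tau) = sum_j (x[j] - x[j+tau])^2. The plain implementation evaluates it directly in O(N^2), and yinfast computes the same numbers via FFT-based spectral convolution in O(N log N). A naive Python sketch of the direct version, for illustration only (this is not aubio's actual code):

```python
def yin_difference(x, max_lag):
    """Direct O(N^2) YIN difference function: d[lag] is the summed
    squared difference between the signal and a copy of itself
    shifted by lag. A periodic signal dips toward zero at lags
    near its period."""
    n = len(x)
    return [sum((x[j] - x[j + lag]) ** 2 for j in range(n - max_lag))
            for lag in range(max_lag)]

# A signal with period 4 has a deep minimum at lag 4
sig = [0.0, 1.0, 0.0, -1.0] * 8
d = yin_difference(sig, 8)
```

yinfast's speed-up comes from noticing that the cross term of that sum is a correlation, which an FFT can compute for all lags at once.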

0.4.6 also comes with several bug fixes and improvements.

Many thanks to Eduard Mueller (@emuell), Martin Hermant (@MartinHN), Hannes Fritz (@hztirf), Stuart Axon (@stuaxo), Jörg (@7heW4yne), ssj71 (@ssj71), Andreas Borg (@borg), Rob (@mlrobsmt) and everyone else for their valuable contributions and input.

read more after the break...

October 04, 2017 11:45 AM

Analyzing songs online

When built with ffmpeg or libav, aubio can read most existing audio and video formats, including compressed and remote video streams. This feature lets you analyze audio streams from the web directly.

A powerful tool for this is youtube-dl, a Python program which downloads video and audio streams to your hard drive. youtube-dl works not only with YouTube, but also with a large number of other sites.

Here is a quick tutorial to use aubio along with youtube-dl.

read more after the break...

October 04, 2017 10:34 AM

October 03, 2017 - LAD

Suil 0.10.0

suil 0.10.0 has been released. Suil is a library for loading and wrapping LV2 plugin UIs. For more information, see


  • Add support for X11 in Gtk3
  • Add support for Qt5 in Gtk2
  • Add suil_init() to support early initialization and passing any necessary information that may be needed in the future (thanks Stefan Westerfeld)
  • Fix minor memory errors
  • Fix building with X11 against custom LV2 install path (thanks Robin Gareus)

by drobilla at October 03, 2017 09:00 PM

September 29, 2017

Audio – Stefan Westerfeld's blog

SpectMorph 0.3.4 released

A new version of SpectMorph, my audio morphing software is now available on

The biggest addition is an optional ADSR envelope which, when enabled, allows overriding the natural instrument’s attack and volume envelope (full list of changes).

I also created a screencast of SpectMorph which gives a quick overview of the possibilities.

by stw at September 29, 2017 04:08 PM

September 20, 2017

Qtractor 0.8.4 - End of Summer'17 release!

Yes, it's been like clockwork...

Every two months or so, you stumbled upon a brand new dot release, code-named after some of the same adjective-plus-noun (or vice versa) code-names. You knew the thrill, and yet it lands no more.

First, the code-naming joke has been just a parody--or was it the other way around?--of some well-known Linux-distro animalistic code-name series. Then on it went rogue into some directed puns--remember the date when BitWig Studio 1.0 was first released? Yeah, the "Byte Bald" was there for the pun, on the very same day :)

Well, that's all gone by now.

Northern hemisphere seasons are the new norm and that's about two main reasons: first, it's where I live; second, due to an undeniable global warming effect pervading the globe, all geographical temperate zones are simply on the edge of extinction. More or less in a couple of decades or so. For the sake of brevity, I will just leave it like paying homage to those natural concepts that are facing an inexorable fate.

And yet, there's still good news:

Qtractor 0.8.4 (end of summer'17) released!


  • Assigned MIDI Controllers to plug-in's Activate switch are now finally saved and (re)loaded properly across sessions.
  • Audio clip panning option property is now being introduced.
  • Out-of-process (aka. dummy) VST plug-in inventory scanning now restarts automatically and resumes processing in case of a premature exit/crash; VST plug-in inventory scan/cache persistency is now in place.
  • Desktop entry specification file is now finally independent from build/configure template chains.
  • Updated target path for the AppStream metainfo file (formerly AppData).
  • Changing the View/Options.../Display/Custom/Style theme takes effect immediately unless it's back to "(default)".
  • Slightly slower but better approximation to the IEEE 32-bit floating-point cubic root, i.e. cbrtf().


Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.


Project page:


Git repos:

Wiki (help wanted!):


Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Enjoy && Keep having fun.

Flattr this

by rncbc at September 20, 2017 07:00 PM

September 18, 2017

GStreamer News

GStreamer 1.12.3 stable release

The GStreamer team is pleased to announce the third bugfix release in the stable 1.12 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.12.x.

See /releases/1.12/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

September 18, 2017 02:30 PM

September 17, 2017


02: Ctlra Virtual Devices

Virtual Ctlra devices? But why do you need or care about that? Read on – this is going to change how you (and the community) work with hardware and software controllers. To state the problem: we all own hardware controllers – MIDI, USB, or something else. Some DAWs support them – even allowing them to “map” to different functionality – but it is often difficult and error-prone. What's worse is that if you ask the developer of the DAW for help, they can't help you because they don't have access to the hardware… or do they?

Virtual Ctlras!

So this is where virtual devices come in and save the day. The Ctlra library allows any fully supported Ctlra device to be “virtualized”, or simulated, by the developer. If a user has an issue with a particular device, the developer has access to a software version of it! A mock-up created by the Ctlra library can be used instead of the real hardware to test and reproduce the user's issue.

Developers and Musicians?

What else can be done using a virtual Ctlra? Well, say you're a musician, and you want your hardware controller to map to an audio looper in a specific way. It doesn't currently work correctly, and you don't have the time or experience to create the mapping yourself. With virtual devices, any developer can help you, simulating your controller hardware and implementing the mapping for you. Perhaps you're happy with their work, so you buy them a beverage in return. The hardware accessibility problem: solved!

More More Moarrr!

How about creating a prototype controller using the Ctlra library, testing its workflow using a software interface, and later building a physical mockup using an Arduino or Raspberry Pi? What if hardware vendors supplied Ctlra drivers with their newly created hardware? The options to utilize and customize how you use their hardware with your favorite software become amazing.

Think we're biting off more than we can chew? Nope – the 84 commits in the last 2 weeks (in the Ctlra repo alone!!) beg to differ: virtual devices are available! Don't believe we're going to be able to create UIs on various platforms and embed them into host applications? Yes we can – check out the purpose-built AVTKA UI library for creating virtual Ctlra interfaces!

Signoff and Next Up

We hope you're as excited as us about this whole concept – OpenAV has been working towards this for a long time – and it's great to finally push this code out to the community! So what's next? Well, we can take an in-depth look at the integration of the hardware and virtual controller – that might showcase some of the awesomeness to come when real-world audio software gets Ctlra functionality integrated…

-Harry of OpenAV

by Harry at September 17, 2017 09:44 PM

September 16, 2017

Libre Music Production - Articles, Tutorials and News

Ardour 5.12 released


Ardour 5.12 has just been released! The main new features in this release involve session/track template management and improvements to MIDI patch changing, as well as the usual bug fixes.

by Conor at September 16, 2017 08:06 PM

September 15, 2017


Ardour 5.12 released

Ardour 5.12 is now available.

Although when Ardour 5.11 was released we expected a significant gap until 6.0 was announced, enough notable features and fixes have accumulated that it seemed better to push out a 5.12 release before we embark on the major code changes that will mark the real start of the development process for 6.0.

Much of the work in this release was sponsored by Harrison Consoles.

Two of the most notable new features are the improvements to the new-session and new-track/bus dialogs, which now offer much easier and more powerful ways to use templates. These include dynamic "track wizard" templates that allow you to interactively set up sessions and/or groups of new tracks/busses very quickly and easily. This builds on the template manager dialog introduced in 5.11, and on a new, less obvious feature: the ability to create dynamic templates with Lua scripts.

Also notable is the new patch selection dialog for MIDI tracks/instruments, which provides an easy and convenient way to preview patches in software and hardware instruments. Naturally, it integrates fully with Ardour's support for MIDNAM (patch definition files), so you will see named programs/patches both for General MIDI synths and for those with their own MIDNAM files.


Read full details below ...


by paul at September 15, 2017 10:16 PM


September 11, 2017


01: Ctlra

Hey! With this new site online, we'd better post some actual content! So we're going to post articles showing what the summer was spent developing. There's always a range of projects going on, but usually we focus on a particular topic. Right now that's the Ctlra project!


Ctlra is a library that allows software developers to interface with hardware devices. Technically, it “abstracts” the details of the hardware device away and provides the application with “generic events”. Great. But what does it mean to you – the musician on stage? It means any Ctlra-enabled application (more on that in a future post!) will be easy to control from your hardware control surface. More importantly, it's not just “input” that works well – it's also about feedback: lighting up the controller, or displaying useful info on the device's integrated screen!
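As a rough illustration of that abstraction – a hypothetical Python sketch of the idea only; Ctlra itself is a C library, and its real API, names, and types differ – the driver translates device-specific input into generic events, and the application registers a single callback regardless of which hardware is attached:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class GenericEvent:
    """A device-agnostic control event, as a Ctlra-style library might emit."""
    kind: str        # e.g. "button", "slider", "encoder"
    control_id: int  # which control on the device
    value: float     # normalized 0.0 .. 1.0

class FakeDriver:
    """Stands in for a hardware driver: raw device input in, generic events out."""
    def __init__(self, on_event: Callable[[GenericEvent], None]):
        self.on_event = on_event  # the application's single callback

    def raw_input(self, control: int, raw_value: int):
        # A real driver would decode a USB or MIDI packet here; this sketch
        # just normalizes a 7-bit value and forwards a generic event.
        self.on_event(GenericEvent("slider", control, raw_value / 127.0))

received = []
dev = FakeDriver(received.append)
dev.raw_input(3, 127)
print(received[0].value)  # 1.0
```

The application only ever sees `GenericEvent`, so supporting a different controller means swapping the driver, not the application code.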

So what is OpenAV actually doing here? Over the last year (since Nov '16!) we've been writing code, lots of code. Sometimes this code enables your hardware device to actually work on the Linux platform; sometimes it exposes the device in a different way, to let your audio software interact with it easily. Check out the YouTube video of the presentation at the LAC (demos start at 23:30!):


Next UP

In the next posts, OpenAV is going to show you the proof-of-concept work we're doing to demonstrate the value of the Ctlra library. Right now you need hardware to test whether Ctlra support is working as expected… that's about to change!

Stay tuned, -Harry from OpenAV

by Harry at September 11, 2017 06:19 PM

September 07, 2017

News – Ubuntu Studio

17.10 Beta 1 Release

Ubuntu Studio 17.10 Artful Aardvark Beta 1 is released! It’s that time of the release cycle again. The first beta of the upcoming release of Ubuntu Studio 17.10 is here and ready for testing. You may find the images at More information can be found in the Beta 1 Release Notes. Reporting Bugs If […]

by rosco2 at September 07, 2017 12:32 PM

September 06, 2017

digital audio hacks – Hackaday

Hackaday Prize Entry: SNAP Is Almost Geordi La Forge’s Visor

Echolocation projects typically rely on inexpensive distance sensors and the human brain to do most of the processing. The team creating SNAP: Augmented Echolocation are using much stronger computational power to translate robotic vision into a 3D soundscape.

The SNAP team starts with an Intel RealSense R200. The first part of the processing happens here because it outputs a depth map which takes the heavy lifting out of robotic vision. From here, an AAEON Up board, packaged with the RealSense, takes the depth map and associates sound with the objects in the field of view.

Binaural sound generation is a feat in itself, and works on the principle that our brains process incoming sound from both ears to understand where a sound originates. Our eyes do the same thing: we are bilateral creatures, so using two ears or two eyes to understand our environment is already part of the human operating system.
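For a rough feel of the numbers involved in binaural cues, the classic Woodworth approximation relates source azimuth to the interaural time difference (ITD). This is only a back-of-the-envelope sketch with an assumed average head radius, not what the SNAP project actually computes:

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed average adult head radius, in metres
SPEED_OF_SOUND = 343.0   # m/s in air at roughly room temperature

def itd_seconds(azimuth_rad: float) -> float:
    """Woodworth's approximation of the interaural time difference for a
    source at the given azimuth (0 = straight ahead, pi/2 = fully to one side)."""
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (math.sin(azimuth_rad) + azimuth_rad)

print(itd_seconds(0.0))           # 0.0: no delay for a source straight ahead
print(itd_seconds(math.pi / 2))   # roughly 0.66 ms for a source to the side
```

Sub-millisecond delays like these are what the brain resolves to localize sound, which hints at why generating convincing binaural audio is a feat in itself.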

In the video after the break, we see a demonstration where the wearer doesn’t need to move his head to realize what is happening in front of him. Instead of a single distance reading, where the wearer must systematically scan the area, the wearer simply has to be pointed the right way.

Another Assistive Technology entry used a traditional ultrasonic distance sensor instead of robotic vision. There is even a version out there for augmented humans with magnet implants, called Bottlenose, covered in Cyberpunk Yourself.


Filed under: digital audio hacks, Wearable Hacks

by Brian McEvoy at September 06, 2017 08:00 PM

September 02, 2017


00: New OpenAV Website!

Hey Everybody!

The OpenAV website has been quiet for a while – but OpenAV has been as busy as ever! We just haven't been keeping up with posting to social media – that's all 🙂 So what's been going on? Good question! Lots of coding, learning, and re-working of crucial components of the linux-audio world, in order to enable next-gen software. Sounds lame, but building novel software requires well-designed building blocks, and sometimes they're lacking. Stay tuned for future blog posts where we will talk through some of the cool stuff we've been working on.

Of course we attended the Linux Audio Conference (or just LAC) again this year, which was held in France for the first time. OpenAV presented about the Ctlra project – more info available on the Code – Ctlra page!

That's all for now, stay tuned for the next update! -OpenAV

by Harry at September 02, 2017 11:00 AM

fundamental code

Total Variation Denoising

Working with data is an important part of my day-to-day work. No matter whether it's speech, music, images, brain waves, or some other stream of data, there's plenty of it, and there's always some quality issue associated with working with it. In this post I'm interested in providing an introduction to one technique which can be used to reduce the amount of noise present in some of these classes of signals.

Noise might seem abstract at first, but it’s relatively simple to quantify it. If the original signal, $x$, is known, then the noise, $n$, is any deviation in the observation, $y$, from the original signal.

$$y = x + n$$

Typically the deviation is measured via the squared error across all elements in a given signal:

$$\text{error} = ||x-y||^2_2 = \sum_i (x_i-y_i)^2$$
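As a quick concrete check of the error formula, here is a minimal Python sketch (the example signals are made up for illustration):

```python
def squared_error(x, y):
    """Sum of squared differences between two equal-length signals."""
    return sum((xi - yi) ** 2 for xi, yi in zip(x, y))

# A clean step and a noisy observation of it: four deviations of 0.1 each.
clean = [0.0, 0.0, 1.0, 1.0]
noisy = [0.1, -0.1, 1.1, 0.9]
print(squared_error(clean, noisy))  # approximately 0.04
```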

When only the noisy signal, $y$, is observed it is difficult to separate the noise from the signal. There is a wealth of literature on separating noise and many algorithms focus on identifying underlying repeating structures. The algorithm that this post focuses on is one which reduces the total variation over a given signal. One example of a signal with little variation is a step function:

2017 tv clean

A step function only has one point where a sample of the signal varies from the previous sample. The Total Variation denoising technique focuses on minimizing the number of points where the signal varies and the amount the signal varies at each point. Restricting signal variation works as an effective denoiser as many types of noise (e.g. white noise) contain much more variation than the underlying signal. At a high level, Total Variation (TV) denoising works by minimizing the cost of a candidate output $y$ given the noisy input signal $x$, as described below (note the relabeling: from here on, $x$ denotes the observed input and $y$ the denoised output):

$$\text{cost} = \text{error}(x, y) + \text{weight}*\text{sparseness}(\text{transform}(y))$$

Mathematically the full cost of TV denoising is:

$$ \begin{aligned} \text{cost} &= \text{error} + \text{TV-cost} \\ \text{cost} &= ||x-y||_2^2 + \lambda ||y||_{TV} \\ ||y||_{TV} &= \sum |y_i-y_{i-1}| \end{aligned}$$
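As an illustrative sketch of this optimization, here is a deliberately simple smoothed-subgradient descent in pure Python, with ad hoc step size and iteration count; this is not the solver used to produce the figures in this post:

```python
def tv_denoise(x, lam, iters=2000, step=0.05, eps=1e-8):
    """Minimize ||x - y||^2 + lam * sum(|y[i] - y[i-1]|) over y by
    subgradient descent, with eps smoothing |.| near zero.
    Here x is the noisy input and y the denoised output."""
    y = list(x)
    n = len(y)
    for _ in range(iters):
        g = [2.0 * (y[i] - x[i]) for i in range(n)]   # gradient of the error term
        for i in range(1, n):
            d = y[i] - y[i - 1]
            s = lam * d / (abs(d) + eps)              # smoothed subgradient of |d|
            g[i] += s
            g[i - 1] -= s
        y = [yi - step * gi for yi, gi in zip(y, g)]
    return y

# A 0 -> 1 step buried in alternating +/-0.2 noise.
clean = [0.0] * 10 + [1.0] * 10
noisy = [c + (0.2 if i % 2 == 0 else -0.2) for i, c in enumerate(clean)]
denoised = tv_denoise(noisy, lam=0.3)
```

With a moderate weight the alternating noise is flattened while the single jump survives (slightly shrunk), so the squared error against the clean signal drops far below that of the noisy input.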

To see how the above optimization can recover a noisy signal, let's look at a noisy version of the step function:

2017 tv noised

After using the TV norm to denoise, only a few points of variation are left:

2017 tv denoised

The process of getting the final TV-denoised output involves many iterations of updating where the variations occur. Over the course of the iterations, opposing variations cancel out and smaller variations are driven to $\Delta y = 0$. As the number of non-zero points decreases, a sparse solution is produced and noise is eliminated. For higher values of the TV weight, $\lambda$, the solution will be more sparse. For the noisy step function, $y$ and $\Delta y$ over several iterations look like:

2017 tv tv example

For piecewise constant signals the TV norm alone works quite well; however, problems arise when the original signal is not a series of flat steps. To illustrate this, consider a piecewise linear signal. When TV denoising is applied, a stair-stepping effect is created, as shown below:

2017 tv gstv example

One of the extensions to TV-based denoising is to add 'group sparsity' to the cost of variation. Standard TV denoising results in a sparse set of points with non-zero variation, i.e. a few piecewise constant regions. With the TV norm, the cost of varying at a point $\Delta y_i$ within the signal does not depend upon which other points, $\Delta y_j, \Delta y_k, \text{etc.}$, vary. Group Sparse Total Variation (GSTV), on the other hand, reduces the cost of smaller variations in nearby points. GSTV therefore generally produces smoother results with more gentle curves for higher-order group sparsity values, as variation occurs over several nearby points rather than a single one. Applying GSTV to the previous example results in a much smoother representation which more accurately models the underlying data.

2017 tv corn tv
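To make the group penalty concrete, here is a sketch of an overlapping-group TV cost with window size k, which for k = 1 reduces to the plain TV norm. The group placement and boundary handling follow one common convention and are an assumption, not necessarily the exact formulation of the paper cited below:

```python
import math

def gstv_penalty(y, k=1):
    """Overlapping group-sparse TV penalty: slide a window of k successive
    first differences along the signal and sum each window's Euclidean norm."""
    d = [y[i] - y[i - 1] for i in range(1, len(y))]
    return sum(math.sqrt(sum(dj * dj for dj in d[i:i + k]))
               for i in range(len(d) - k + 1))

y = [0.0, 0.0, 1.0, 1.0, 1.0]
print(gstv_penalty(y, k=1))  # 1.0, the plain TV norm |0| + |1| + |0|
print(gstv_penalty(y, k=2))  # 2.0: the lone jump is counted by two windows
```

Because a jump shared across several nearby differences costs less under this penalty than the same jump concentrated at one point, minimizing it favours gradual transitions, which is exactly the smoothing behaviour described above.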

Now that some artificial examples have been investigated, let's take a brief look at some real-world data. One example of data which is expected to have relatively few points of abrupt change is the price of goods. In this case we're looking at the price of corn in the United States from 2000 to 2017, in USD per bushel, as retrieved from . With real data it's harder to define the noise (or which part of the signal is unwanted); however, by using higher levels of denoising, the overall trends can be observed within the time-series data:

2017 tv corn gstv

If this short intro was interesting, I'd recommend trying out the TV/GSTV techniques on your own problems. For more in-depth information there are a good few papers on the topic, with the original GSTV work being:

  • I. W. Selesnick and P.-Y. Chen, 'Total Variation Denoising with Overlapping Group Sparsity', IEEE Int. Conf. Acoust., Speech, Signal Processing (ICASSP). May, 2013.

  • - contains the above paper as well as a MATLAB implementation

And if you’re using Julia, feel free to grab my re-implementation of Total Variation and Group Sparse Total Variation at

September 02, 2017 04:00 AM

August 29, 2017

GStreamer News

GStreamer Conference 2017: Registration now open

About the GStreamer Conference

The GStreamer Conference 2017 will take place on 21-22 October 2017 in Prague (Czech Republic), just before the Embedded Linux Conference Europe.

It is a conference for developers, contributors, decision-makers, students, hobbyists, and anyone else interested in the GStreamer multimedia framework or open source multimedia technologies.

Registration now open

You can now register for the GStreamer Conference 2017 via the conference website.

Early-bird registration for professionals is available until 15th September.

We hope to see you there!

August 29, 2017 12:00 PM

August 28, 2017

digital audio hacks – Hackaday

Turning On Your Amplifier With A Raspberry Pi

Life is good if you are a couch potato music enthusiast. Bluetooth audio allows the playing of all your music from your smartphone, and apps to control your hi-fi give you complete control over your listening experience.

Not quite so for [Daniel Landau] though. His Cambridge Audio amplifier isn’t quite the latest generation, and he didn’t possess a handy way to turn it on and off without resorting to its infrared remote control. It has a proprietary interface of some kind, but nothing wireless to which he could talk from his mobile device.

His solution is fairly straightforward, which in itself says something about the technology available to us in the hardware world these days. He took a Raspberry Pi with the Home Assistant home automation package and the LIRC infrared subsystem installed, and had it drive an infrared LED within range of the amplifier’s receiver. Coupled with the Home Assistant app, he was then able to turn the amplifier on and off as desired. It’s a fairly simple use of the software in question, but this is the type of project upon which so much more can later be built.

Not so many years ago this comparatively easy project would have required a significant amount more hardware and effort. A few weeks ago [John Baichtal] took a look at the evolution of home automation technology, through the lens of the language surrounding the term itself.

Via Hacker News.

Filed under: digital audio hacks, home hacks

by Jenny List at August 28, 2017 05:00 AM

August 27, 2017

Libre Music Production - Articles, Tutorials and News

LMP Asks #24: An interview with Luciano Dato


This time we talk to Luciano Dato, creator of Noise Repellent, a realtime noise reduction plugin.

Hi Luciano, thank you for taking the time to do this interview. Where do you live, and what do you do for a living?

I live in Santa Fe, Argentina and I work as a sysadmin/technician in a small IT company.

by Conor at August 27, 2017 08:48 AM

August 23, 2017

open-source – CDM Create Digital Music

What if you used synthesizers to emulate nature and reality?

Bored with making presets for instruments, one sound designer decides to make presets for ambient reality – and you can learn from the results.

“Scapes” is a multi-year, advanced journey into the idea that the synthesizer could sound like anything you imagine. Once you’ve grabbed this set of Ableton Live projects, you can bliss out to the weirdly natural results. Or you can tear apart the innards, finding everything from tricks on how to make cricket sounds synthetically to a veritable master class in using instruments like Ableton’s built-in FM synthesizer Operator. The results are Creative Commons-licensed (and of course, you can also grab individual presets).

The project is the brainchild of sound designer Francis Preve. Apart from his prolific writing career and Symplesound soundware line, Fran has put his sound design work all over presets for apps, software (including Ableton Live), and hardware.

As a result, no one knows better than Fran how much of the work of making presets focuses on particular, limited needs. And that’s too bad. The thing is, there’s no reason to be restricted to the stuff we normally get in synth presets. (You know the type: “lush, succulent pads” … “crisp leads…” “back-stabbing basslines…” “chocolate-y, creamy nougat horn sections…” “impetuous, slightly condescending 80s police drama keyboard stacks…” or, uh, whatever. Might have made some of those up.)

No, the promise of the synthesizer was supposed to be unlimited sonic possibilities.

If we tend to recreate what we’ve heard, that’s partly because we’re synthesizing something we’ve taken some care in hearing. So, why not go back to the richness and complexity of sound as we hear it in everyday life? Why not combine the active listening of a soundwalk or field recording with the craft of producing something using synthesis, in place of a recording?

Scapes does that, and the results are – striking. There’s not a single sample anywhere in the four ambient environments, which cover a rainy day in the city, a midsummer night, a brook echoing with bird song, and a more fanciful haunted house (with a classic movie origin). Instead, these are multitrack compositions, constructed with a bunch of instances of Operator and some internal effects. Download the Ableton Live project files, and you see a set of MIDI tracks and internal Live devices.

You might not be fooled into thinking the result sounds exactly like a field recording, but you would certainly let it pass for Foley in film. (I think that fits, actually – film uses constructed Foley partly because we expect in that context for the sounds to be constructed, more the way we imagine we hear than what literally passes into our ears.)

You wouldn’t think this was internal Ableton devices – not by a longshot – but of course it is.

And that’s where Scapes is doubly useful. Whether or not you want to create these particular sounds, every layer is a master class in sound design and synthesis. If you can understand a cricket, a bottle rocket, a rainstorm, and a car alarm, then you’re closer not only to emulating reality, but to being able to reconstruct the sounds you hear in your imagination and that you remember from life. That opens up new galaxies of potential to composers and musicians.

It might be just what electronic music needs: to think of sound creatively, rather than trying to regurgitate some instrumentation you’ve heard before. This might be the opposite of how you normally think of presets: here, presets can liberate you from repetitive thought.

I’ve seen this idea before – but just once before, that I can think of. Andy Farnell’s Designing Sound, which began life as a PDF that was floating around in draft form before it matured into a book at MIT Press, took on exactly this idea. Fran’s scapes are “tracks,” collaged compositions that turn into entire environments; Farnell looks only at the component sounds one by one.

Otherwise, the two have the same philosophy: understand the way you hear sound by starting from scratch and building up something that sounds natural. Scapes does it with Ableton Live projects you can easily walk through. Designing Sound demonstrates this on paper with patches in the free and open source environment Pure Data. As Richard Boulanger describes that book, “with hundreds of fully working sound models, this ‘living document’ helps students to learn with both their eyes and their ears, and to explore what they are learning on their own computer.”

But yes – create sounds by really listening, actively. (Pauline Oliveros might have been into this.)

Designing Sound | The MIT Press

Sound examples

A PDF introducing Pure Data (the free software you can use to pull this off)

But grabbing Scapes and a PDF or paper edition of Designing Sound together would give you a pairing you could play with more or less for the rest of your life.

Scapes is free (only Ableton Live required), and available now.

For background on how this came about: THE ORIGIN OF SCAPES [TL;DR EDIT]

The post What if you used synthesizers to emulate nature and reality? appeared first on CDM Create Digital Music.

by Peter Kirn at August 23, 2017 10:10 PM

August 22, 2017

Vee One Suite 0.8.4 - A Late-Summer'17 release


The Vee One Suite of old-school software instruments, respectively synthv1, a polyphonic subtractive synthesizer, samplv1, a polyphonic sampler synthesizer, and drumkv1, yet another drum-kit sampler, welcomes a brand new fourth member: padthv1, a polyphonic additive synthesizer, now joining the late-summer'17 release party.

All available in dual form:

  • a pure stand-alone JACK client with JACK-session, NSM (Non Session management) and both JACK MIDI and ALSA MIDI input support;
  • a LV2 instrument plug-in.

The Vee One Suite are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

And now being the gang-of-four!


synthv1 - an old-school polyphonic synthesizer

synthv1 0.8.4 (late-summer'17) is out!

synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.



  • Disabled "Custom style theme" option on LV2 plug-in form.
  • Brand new LFO Balance parameter introduced.



git repos:



samplv1 - an old-school polyphonic sampler

samplv1 0.8.4 (late-summer'17) is out!

samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.



  • Disabled "Custom style theme" option on LV2 plug-in form.



git repos:



drumkv1 - an old-school drum-kit sampler

drumkv1 0.8.4 (late-summer'17) is out!

drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.



  • Disabled "Custom style theme" option on LV2 plug-in form.



git repos:



padthv1 - an old-school polyphonic additive synthesizer

padthv1 0.8.4 (late-summer'17) is out! (NEW!)

padthv1 is an old-school polyphonic additive synthesizer with stereo fx.

padthv1 is based on the PADsynth algorithm by Paul Nasca, as a special variant of additive synthesis.



  • First public release.



git repos:



Enjoy && have fun ;)

by rncbc at August 22, 2017 05:00 PM

digital audio hacks – Hackaday

ESP8266 Based Internet Radio Receiver is Packed with Features

Have a beautiful antique radio that’s beyond repair? This ESP8266 based Internet radio by [Edzelf] would be an excellent starting point to get it running again, as an alternative to a Raspberry-Pi based design. The basic premise is straightforward: an ESP8266 handles the connection to an Internet radio station of your choice, and a VS1053 codec module decodes the stream to produce an audio signal (which will require some form of amplification afterwards).

Besides the excellent documentation (PDF warning), where this firmware really shines is in the sheer number of features that have been added. It includes a web interface that allows you to select an arbitrary station, as well as cycle through presets and adjust volume, bass, and treble.


If you prefer physical controls, it supports buttons and dials. If you’re in the mood for something more Internet of Things, it can be controlled by the MQTT protocol as well. It even supports a color TFT screen by default, although this reduces the number of pins that can be used for button input.

The firmware also supports playing arbitrary .mp3 files hosted on a server. Given the low parts count and the wealth of options for controlling the device, we could see this device making its way into doorbells, practical jokes, and small museum exhibits.

To see it in action, check out the video below:

[Thanks JeeCee]

Filed under: digital audio hacks, Radio Hacks

by Sean Boyce at August 22, 2017 03:30 PM

August 19, 2017

open-source – CDM Create Digital Music

Here are some of our favorite MeeBlip triode synth jams

We say “play” music for a reason – synths are meant to be fun. So here are our favorite live jams from the MeeBlip community, with our triode synth.

And, of course, whether you’re a beginner or more advanced, this can give you some inspiration for how to set up a live rig – or give you some idea of what triode sounds like if you don’t know already. We picked just a few of our favorites, but if we missed you, let us know! (audio or video welcome!)

First, Olivier Ozoux has churned out some amazing jam sessions with the triode, from unboxing to studio. (He also disassembled our fully-assembled unit to show the innards.)

The amazing Gustavo Bravetti is always full of virtuosity playing live; here, that distinctive triode sound cuts through a table full of gear. Details:

Again ARTURIA's Beat Step Pro in charge of randomness (accessory percussions and subtle TB303). Practically all sounds generated on the black boxes, thanks Elektron, and last but not least MeeBlip's [triode] as supporting melody synth. Advanced controls from Push and Launch Control using Performer, made with Max by Cycling '74.

Here’s a triode with the Elektron Octatrack as sequencer, plus a Moog Minitaur and Elektron Analog RYTM. That user also walks through the wavetable sounds packed into the triode for extra sonic variety.

Novation’s Circuit and MeeBlip triode pair for an incredible, low power, low cost, ultra-portable, all-in-one rig. We get not one but two examples of that combo, thanks to Pete Mitchell Music and Ken Shorley. It’s like peanut butter and chocolate:

One nice thing about triode is that its sub oscillator can fatten up and round out the single oscillator of a 303. We teamed up with Roland's Nick de Friez when the lovely little TB-03 came out to show how these two can work together. Just route the distinctive 303-style sequencer output to triode's MIDI in, and have some fun:

Here’s triode as the heart of a rig with KORG’s volca series (percussion) and Roland’s TB-03 (acid bass) – adding some extra bottom. Thank you, Steven Archer, for your hopeful machines:

Get yours:

The post Here are some of our favorite MeeBlip triode synth jams appeared first on CDM Create Digital Music.

by Peter Kirn at August 19, 2017 02:09 PM

August 16, 2017


Ardour 5.11 released

We are pleased to announce the availability of Ardour 5.11. Like 5.10, this is primarily a bug-fix release, though it also includes VCA automation graphical editing, a new template management dialog and various other useful new features.


Read more below for the full list of features, improvements and fixes.


by paul at August 16, 2017 06:32 PM

digital audio hacks – Hackaday

The Best Stereo Valve Amp In The World

There are few greater follies in the world of electronics than that of an electronic engineering student who has just discovered the world of hi-fi audio. I was once that electronic engineering student and here follows a tale of one of my follies. One that incidentally taught me a lot about my craft, and I am thankful to say at least did not cost me much money.

Construction more suited to 1962 than 1992.

It must have been some time in the winter of 1991/92, and being immersed in student radio and sound-and-light I was party to an intense hi-fi arms race among the similarly afflicted. Some of my friends had rich parents or jobs on the side and could thus afford shiny amplifiers and the like, but I had neither of those and an elderly Mini to support. My only option therefore was to get creative and build my own. And since the ultimate object of audio desire a quarter century ago was a valve (tube) amp, that was what I decided to tackle.

Nowadays, building a valve amp is a surprisingly straightforward process, as there are many online suppliers who will sell you a kit of parts from the other side of the world. Transformer manufacturers produce readily available products for your HT supply and your audio output matching, so to a certain extent your choice of amp is simply a case of picking your preferred circuit and assembling it. Back then, however, the world of electronics had extricated itself from the world of valves a couple of decades earlier, so getting your hands on the components was something of a challenge. I sidestepped the power supply problem by using a scrap Dymar Electronics instrument enclosure which had built-in HT and heater rails ready to go, but sourcing transformers and high-voltage capacitors remained tricky.

Pulling the amplifier out of storage in 2017, I’m going in blind. I remember roughly what I did, but the details have been obscured by decades of other concerns. So in an odd meeting with my barely-adult self, it’s time to take a look at what I made. Where did I get it right, and just how badly did I get it wrong?

Lovingly hand-drawn from life, missing the PSU components.

The amp itself sits in the removable portion of the Dymar chassis; I can’t remember what the dead instrument was, but Dymar produced a range of instruments as modules for a backplane. The front panel is a piece of sheet steel I cut myself, and is still painted in British Leyland Champagne Beige, the colour of that elderly Mini. It has a volume control, a DIN input socket which must have seemed cool only to me in 1992, and a Post Office Telephones terminal block for the speakers. Inside the chassis the amp is mounted on a piece of aluminium sheet: on top, a pair of PCL86 triode/pentode valves, a pair of output transformers and a supply smoothing capacitor; underneath, all the smaller components on tag strips. Though I say it myself, it’s a tidier job than I remember.

1969’s hot new device, already obsolete by 1980.

The circuit is simple enough: a single-ended Class A audio amplifier that I lifted, along with the PCL86 and the original output transformers, from a commonly available (at the time) scrap ITT TV set. These triode/pentodes were the integrated amplifier device of their day, as ubiquitous as an LM386 in later decades, containing a triode as preamplifier and a power output pentode, and capable of delivering a few watts of audio at reasonable quality with very few external components. They were also dirt cheap, the “P” signifying a 300mA series heater chain as used in TV sets, which was considerably less desirable than the “E” versions with their standard 6.3V heaters. Not a problem for me, as the Dymar PSU had a 12V rail that could happily supply nearly 300mA to each of a pair of PCL86s.

My choice of parts must have been limited to those my university’s RS trade counter had in stock with the required working voltage, and they’re a mixed bag that you wouldn’t remotely class as audio grade. There are a couple of enormous 450V 33μF electrolytics, and 250VAC Class Y 0.1μF polymer capacitors intended for use in power supply filters. I seem to have followed the idea of using a small and a large capacitor in parallel, probably for some youthful hi-fi mumbo-jumbo idea about frequency response. Otherwise the resistors look like carbon film components, something that probably made more sense to me in the early 1990s than it does now.

On top of the chassis, the original transformers taken from scrap TV sets turned out to be of such low quality that they tended to “sing” at any kind of volume, so I shelled out on a pair of the only valve audio output transformers I could find at the time, something that must have been a relic of a bygone era in the RS catalogue. The original valves were a pair of PCL86s from old TVs, but I replaced them with a “matched” pair of brand new PCL86s. I remember these cost me 50p (about 90¢ in ’92) each at a radio rally, and were made in Yugoslavia with a date code of January 1980. The new valves didn’t make any difference, but they made me feel better.

How did this amplifier perform, and what did I learn from it?

Under the hood, and it’s all a bit messy.

In the first instance, it performed 110%, because I had a valve amp and nobody else did. The air of mystique surrounding this rarest of audio devices neatly sidestepped the fact that it wasn’t the best of valve amps, but that didn’t matter. Being a class A amplifier with new components, it came to the party with the lowest theoretical distortion it could have had due to its circuit topology. Another area of shameless bragging rights for my younger self, but in reality all it meant was that it got hot.

The sound at first power-on was crisp and sibilant, but with an obvious frequency response problem: it was bass-to-mid heavy, and not in a good way. Here was my first learning opportunity: I had just received an object lesson in real audio transformers not behaving like theoretical audio transformers. It had an impressive impulse response though; square waves came through it beautifully square on my battered old ‘scope.

I could only go so far listening to a hi-fi that might have been a little fi but certainly wasn’t hi. My attention turned to that frequency response problem, and since we’d just been through the series of lectures that dealt with negative feedback, I considered myself an expert in such matters who could fix it with ease. I cured the frequency response hump with a feedback resistor from output to input, playing around with values until I lit upon 330K as about right.

The Best Stereo Valve Amp In The World. Yeah, right.

Here was my second learning experience. I’d made a pretty reasonable amplifier as it happens, and it sounded rather good through my junk-shop Wharfedale Linton speakers with cheap Maplin bass drivers. I could indulge my then-held taste in tedious rock music, and pretend that I’d reached a state of hi-fi Higher Being. But of course, I hadn’t. I’d got my flat frequency response, but I’d shot my phase response to hell, and thus my impulse response had all the timing of a British Rail local stopping service. The ‘scope showed square waves would eventually get there, but oh boy did they take their time. The sound had an indefinable wooliness to it; it was clear as a bell, but the sibilance had gone. I came away knowing more about the complex and unexpected effects of audio circuitry than I ever expected to, and with an amp that still had some bragging rights, but not as the audio genius I had hoped I might be.

The amplifier saw me through my days as a student, and into my first couple of years in the wider world. Eventually the capacitor failed in the Dymar PSU, and I bought a Cambridge Audio amp that has served me ever since. The valve amp has sat forlornly on the shelf, a reminder of a past glory that maybe one day I’ll resuscitate. Perhaps I’ll give it a DSP board programmed to cure its faults. Fortunately I have other projects from my student days that have better stood the test of time.

So. There’s my youthful folly, and what I learned from it. How about you, are there any projects from your past that seemed a much better idea at the time than they do now?

Filed under: classic hacks, digital audio hacks, Hackaday Columns, Interest, Original Art

by Jenny List at August 16, 2017 05:01 PM

August 12, 2017

Libre Music Production - Articles, Tutorials and News

FLOSS music convention in Germany in November

On the 4th and 5th of November 2017 you can attend the Sonoj Convention in Cologne, Germany. Admission is free. You will be able to enjoy demonstrations, talks and workshops about music production through open source software. Hands-on tutorials and workflow presentations can be expected. The Sonoj Convention is a great opportunity to meet like-minded people, maybe even to have engaging discussions! Every man and woman is welcome, no matter your musical or technological background.

by admin at August 12, 2017 08:25 PM

August 11, 2017

digital audio hacks – Hackaday

We Should Stop Here, It’s Bat Country!

[Roland Meertens] has a bat detector, or rather, he has a device that can record ultrasound – the type of sound that bats use to echolocate. What he wants is a bat detector. When he discovered bats living behind his house, he set to work creating a program that would use his recorder to detect when bats were around.

[Roland]’s workflow consists of breaking up a recording from his backyard into one-second clips, loading them into a Python program, and running some machine learning code to decide whether each clip contains a bat call, then using this to estimate the number of bats flying around. He uses several Python libraries to do this, including TensorFlow and LibROSA.

The Python code breaks each one-second clip into twenty-two parts. For each part, he determines the max, min, mean, standard deviation, and max-min of the sample – if multiple parts of the signal have certain features (such as a high standard deviation), then the software has detected a bat call. Armed with this, [Roland] turned to machine learning so that he could offload the work of detecting the bats. Again, he turned to Python and the Keras library.
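
As a rough sketch of that feature-extraction step – the function names, segment count default and thresholds here are illustrative guesses, not [Roland]’s actual code – the idea looks something like this in plain NumPy:

```python
import numpy as np

def clip_features(clip, n_parts=22):
    """Split a one-second clip into n_parts segments and compute
    the per-segment statistics described above."""
    stats = []
    for part in np.array_split(np.asarray(clip, dtype=float), n_parts):
        stats.append({
            "max": part.max(),
            "min": part.min(),
            "mean": part.mean(),
            "std": part.std(),
            "range": part.max() - part.min(),
        })
    return stats

def looks_like_bat(stats, std_threshold=0.2, min_segments=3):
    # Flag a clip when several segments show an unusually high
    # standard deviation; both threshold values are made up here.
    loud = sum(1 for s in stats if s["std"] > std_threshold)
    return loud >= min_segments
```

A clip of silence yields twenty-two low-variance segments and is rejected, while a clip containing a burst of broadband energy trips the detector; the real project then hands features like these to a Keras classifier instead of relying on hand-tuned thresholds.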

With a 95% success rate, [Roland] now has a bat detector! One that works pretty well, too. For more on detecting bats and machine learning, check out the bat detector in this list of ultrasonic projects and check out this IDE for working with Tensorflow and machine learning.

Filed under: digital audio hacks

by Rich Hawkes at August 11, 2017 05:00 AM

August 03, 2017

open-source – CDM Create Digital Music

Export to hardware, virtual pedals – this could be the future of effects

If your computer and a stompbox had a love child, MOD Duo would be it – a virtual effects environment that can load anything. And now, it does Max/MSP, too.

MOD Devices’ MOD Duo began its life as a Kickstarter campaign. The idea – turn computer software into a robust piece of hardware – wasn’t itself so new. Past dedicated audio computer efforts have come and gone. But it is genuinely possible in this industry to succeed where others have failed, by getting your timing right, and executing better. And the MOD Duo is starting to look like it does just that.

What the MOD Duo gives you is essentially a virtualized pedalboard where you can add effects at will. Set up the effects you want on your computer screen (in a Web browser), and even add new ones by shopping for sounds in a store. But then, get the reliability and physical form factor of hardware, by uploading them to the MOD Duo hardware. You can add additional footswitches and pedals if you want additional control.

Watch how that works:

For end users, it can stop there. But DIYers can go deeper with this as an open box. Under the hood, it’s running LV2 plug-ins, an open, Linux-centered plug-in format. If you’re a developer, you can create your own effects. If you like tinkering with hardware, you can build your own controllers, using an Arduino shield they made especially for the job.

And then, this week, the folks at Cycling ’74 take us on a special tour of integration with Max/MSP. It represents something many software patchers have dreamed of for a long time. In short, you can “export” your patches to the hardware, and run them standalone without your computer.

This says a lot about the future, beyond just the MOD Duo. The technology that allows Max/MSP to support the MOD Duo is gen~ code, a more platform-agnostic, portable core inside Max. This hints at a future when Max runs in all sorts of places – not just mobile, but other hardware, too. And that future was of interest both to Cycling ’74 and the CEO of Ableton, as revealed in our interview with the two of them.

Even broader than that, though, this could be a way of looking at what electronic music looks like after the computer. A lot of people assume that ditching laptops means going backwards. And sure enough, there has been a renewed interest in instruments and interfaces that recall tech from the 70s and 80s. That’s great, but – it doesn’t have to stop there.

The truth is, form factors and physical interactions that worked well on dedicated hardware may start to have more of the openness, flexibility, intelligence, and broad sonic canvas that computers did. It means, basically, it’s not that you’re ditching your computer for a modular, a stompbox, or a keyboard. It’s that those things start to act more like your computer.

Anyway, why wait for that to happen? Here’s one way it can happen now.

Darwin Grosse has a great walk-through of the MOD Duo and how it works, followed by how to get started with Max:

The MOD Duo Ecosystem (an introduction to the MOD Duo)

Content You Need: The MOD Duo Package (an intro to working with Max)

An alternative: the very affordable OWL Pedal is similar in function, minus that slick browser interface. It can load Max gen~ code, too:

New Tutorials including Max MSP on the OWL!

Pd users, that works, too – via Heavy (I think on the MOD, as well):

OWL & Heavy – a Pd patch on the OWL

The post Export to hardware, virtual pedals – this could be the future of effects appeared first on CDM Create Digital Music.

by Peter Kirn at August 03, 2017 01:07 PM

August 02, 2017

Libre Music Production - Articles, Tutorials and News

MOD Duo and Max/MSP integration

Max/MSP users can now easily convert their Gen objects into LV2 plugins, add them to the roster of MOD Duo plugins and bring them to the stage!

by yassinphilip at August 02, 2017 10:21 AM

August 01, 2017

MOD Devices Blog

NEW! MOD Duo and Max/MSP integration!

Max/MSP users can now easily convert their gen~ objects into LV2 plugins, add them to the roster of MOD Duo plugins and bring them to the stage!


More power to performing digital musicians

There’s no shortage of signal processing environments available to musicians who want to manipulate digital audio. Their use has spread to homes, studios and even stages everywhere. We’ve all seen this revolution take place, with computers popping up at concerts and the advent of laptop music performance. But a computer is not an instrument and a musician shouldn’t become a mere button pusher or mouse handler.

That’s where we come in. The MOD Duo is a computing platform for performing musicians: a computer in a box, optimised to process audio during live performances. And since our creative platform is based on an open format, it can be useful to scores of artists and developers.

The Max/MSP software is one of the greatest and most powerful tools in this field and it has become one of the most used visual programming languages for music and multimedia since its inception in the 1980s. For months now, we’ve been collaborating with Cycling’74, the developers and maintainers of Max/MSP, in order to provide a new stage experience for their users and encourage developers to port their patches and objects into the MOD Duo plugin store.

We’ve come up with a Max package that takes the code exported from Max/MSP gen~ objects, compiles it into an LV2 plugin, and puts it onto the Duo. The whole idea is to simplify the process of turning Max/MSP patches into plugins that can be used on stage without the burden of a computer, and with the added controllability provided by the Duo.

“Wait a minute… I’m confused. What is Gen?”

If you’re not familiar with Max or have never heard of Gen, here’s an overview, courtesy of our friends over at Cycling’74:

“Gen is a new approach to the relationship between patchers and code. The patcher is the traditional Max environment – a graphical interface for linking bits of functionality together. With embedded scripting such as the js object, text-based coding became an important part of working with Max as it was no longer confined to simply writing Max externals in C. Scripting however still didn’t alter the logic of the Max patcher in any fundamental way because the boundary between patcher and code was still the object box. Gen represents a fundamental change in that relationship. The Gen patcher is a new kind of Max patcher where Gen technology is accessed.”

If you are an aficionado and were just waiting for this kind of solution to appear, we’ve come up with documentation to make the process of getting your Gen-based plugins to the MOD Duo as effortless as possible, with a wiki entry and a tutorial that shows you how to create your own plugins.


You can export your gen~ code straight from Max with the new MOD Duo package.

Why is it cool to have this integration?

This is no small feat.

We’re significantly flattening the learning curve for adding personalized plugins to the Duo, and also allowing digital musicians to take their Max/MSP objects to the stage without a computer. These new plugins will be fully compatible with the 200+ that are already available, allowing the creation of elaborate audio chains.

Right now, after being added to the users’ machines, these new plugins can be posted to the forum and we will publish them manually on the plugin store (we’re working on automating this process). Soon, when our commercial plugin store is set up and ready to go, Max/MSP wizards (and all of the MOD community) will be able to offer their creations for a fee, creating a new business in the process, but also promoting the development of more sophisticated audio apps by programmers. Until the commercial store arrives, demo versions of these plugins can be published anyway.

In the future, we’ll keep adding new integrations and documentation for other languages and protocols such as Pure Data, Faust and OSC. Creating plugins for the Duo will be within everyone’s reach.

What are the current plugins that come from Max/MSP gen~ objects?

It all started a while ago, with the official gen plugin export project that Cycling ’74 created for building audio applications and plugins. Our software developer, the legendary falkTX, then started an implementation focused on LV2 and Linux, which he added to DISTRHO, his own open-source project providing cross-platform audio plugins.

At that time, he and our intern Nino de Wit began to run some tests and develop plugins from gen~ code. From this effort, the initial project was born. Shortly afterwards, Nino began developing his own, more complex plugins. When Cycling ’74 became aware of this, they contacted us and we decided to build a seamless integration between the two platforms.

Here are the plugins derived from Max/MSP gen~ objects, conceived during Nino’s internship at MOD HQ in Berlin. These little gems have been making many MOD users happy since they came around. Here’s a glimpse at the type of plugin this integration will enable users to create:

Shiroverb is a shimmer reverb based on the “Gigaverb” gen~ patch, ported from the implementation by Juhana Sadeharju, and the “Pitch-Shift” gen~ patch, both in Max/MSP.

Modulay is an analog-style delay with variable types of modulation based on the setting of the morph control: all the way counterclockwise is chorus, 12 o’clock is vibrato, and all the way clockwise is flanger, with every setting in between morphing from one effect to the other.

Larynx is a simple sine-modulated vibrato with a tone control.

Harmless is a wave-shapeable harmonic tremolo with a stereo phase control.

Pedalboards section!

You can check out and listen to the Shiro plugins in action in these sweet pedalboards that our community has created and shared (and load them into your Duo at the click of a button):

Swell Boost:


Everything a multi-layered guitarist needs in their arsenal: a succulent and smooth shimmer swell pad on one path, and a shrieking shrill boost on another path that cuts through the mix like a Japanese ginsu knife! The best part is that there’s a 4-way toggle switch at the start of the chain that allows the source signal to constantly flow, while the 1st and 4th switches toggle either the pad or the boost (or BOTH if you want the NUCLEAR option!). This allows the guitarist to cut the pad or boost while letting the pad’s trails remain in the mix, as the source signal never changes.



Shimmer Machine:

Using the Harmless plugin combined with the Larynx on a Novation Circuit.


Harmless JCM:

Guitarix JCM-800 and the Shiro Harmless modulator. Such a beautiful sound! Add a little looper and you’re good to go 🙂


Soap Bubbles:

Psychedelic sound based on Larynx, Chorus and some panning.


Modulay Madness:


Enjoy the Modulay in a simple guitar setup.


201b Pad Shiroverb:

Huge pad sound with a parallel path for melody, played on a bass.



Kalimba Jam Session:

Pedalboard used during the Startup Garden at Wallifornia Music Tech. We wanted to show visitors that you can also use the MOD Duo with acoustic instruments and created this nice pad using a synth, a sequencer and a kalimba with a pickup for some solo play. Listen to that tremolo!


Makeshift Pitchshift:

Using the ‘shimmer’ in the Shiroverb as a pitch-shifter on the bass.


We want to know if you are as thrilled as we are with this integration. Do you look forward to creating your own plugins from Max/MSP? Are you excited about the commercial store? Share your thoughts in the comments below!


PS: Special Offer

If you buy a MOD Duo before September 30th 2017, you get Max7 for 9 months COMPLETELY FREE.

If you are already a MOD user, you can get Max7 for 9 months for free as well by completing the Great Book of Pedalboards form.

by Mauricio Dwek at August 01, 2017 08:49 AM

July 31, 2017

open-source – CDM Create Digital Music

The Viktor NV-1 is a powerful synth running in your browser

Its name is Viktor, and it’s a synth you can play with for free in a browser – with a mouse, or finger, or keyboard, or even MIDI.

Not news, but – heck of a lot of fun to play with.

Now part of a growing number of Web Audio (and even Web MIDI) synths, the Viktor NV-1 is a surprisingly powerful diversion. You get three oscillators, two envelopes (one for amplitude, one for filter), a filter, LFO, reverb, delay, compressor, and loads of controls.

Because it lives in a browser, it’s also easy to save and share presets with others. So, for instance, here you go:

The developer also has a lovely explanation of how this works:

It’s built on top of the Web Audio API (WAA). The WAA is very nicely organized and easy to use. Basically it provides a variety of NodeTypes (responsible for sound generation, editing or analysis) which you combine to your liking, creating a graph through which your sound is shaped.

Also worth noting – how it was built:

Web Audio API, Web MIDI API, Local Storage (through npm module “store”). For the effects section I used Tuna.js.

AngularJS, webaudio-controls (I regret this decision, since these controls are full of bugs and I had to fix several of them before releasing), Bootstrap, Font Awesome, the Orbitron font and Stylus are what I used for the UI.

Instead of using Angular alone, for dependency management, I use Browserify, which provides the nice CommonJS format/style of module creation and requiring.

Angular isn’t very Browserify-friendly so I had to do some stitching in my initial setup (browserify-shim, browserify-ng-html2js etc.), but once the setup was ready, development really felt like a breeze.

Grunt and multiple grunt-contrib-‘s are used for the build (and development rebuild).

I drew the images on Pixelmator.

Try it:

Or grab the code (fully open source):

The browser synth is the work of Nikolay Tsenkov.

The post The Viktor NV-1 is a powerful synth running in your browser appeared first on CDM Create Digital Music.

by Peter Kirn at July 31, 2017 11:47 PM


new exclusive Notstandskomitee track released

The new Notstandskomitee track Ungetuem can be found exclusively on this compilation by Silent Method Records, currently as a download but soon also on vinyl and cassette.

by herrsteiner at July 31, 2017 02:29 PM

July 29, 2017

digital audio hacks – Hackaday

Bessel Filter Design

Once you fall deep enough into the rabbit hole of any project, specific information starts getting harder and harder to find. At some point, trusting experts becomes necessary, even if that information is hard to find, obtuse, or incomplete. [turingbirds] was having this problem with Bessel filters, namely that all of the information about them was scattered around the web and in textbooks. For anyone else who is having trouble with these particular filters, or simply wants to learn more about them, [turingbirds] has put together a guide with all of the information he has about them.

For those who don’t design audio circuits full-time, a Bessel filter is a linear filter designed for a maximally flat group delay, which means it preserves the waveshape of signals within the filter’s passband rather than distorting them. [turingbirds]’s guide goes into the foundations of where the filter coefficients come from, instead of blindly using lookup tables like he had been doing.
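
For readers who’d like to experiment before diving into the coefficient math, here’s a small sketch (not from the guide itself) using SciPy’s `signal.bessel`, which exposes the same design choices the guide discusses; the order, cutoff and sample rate are arbitrary example values:

```python
import numpy as np
from scipy import signal

# Design a 4th-order digital Bessel low-pass at 1 kHz for 48 kHz audio.
# norm='phase' is SciPy's default normalization; 'delay' and 'mag'
# are the alternatives.
fs = 48000
b, a = signal.bessel(4, 1000, btype='low', fs=fs, norm='phase')

# The defining Bessel property: group delay is nearly flat across the
# passband, which is what keeps waveshapes intact.
w, gd = signal.group_delay((b, a), fs=fs)
passband = gd[w < 500]
print("delay spread in passband (samples):",
      round(float(passband.max() - passband.min()), 3))
```

Printing the spread of the group delay below 500 Hz shows it varies by only a small fraction of a sample – compare that with a Butterworth or Chebyshev design of the same order and the difference is immediately visible.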

For anyone else who uses these filters often, this design guide looks to be a helpful tool. Of course, if you’re new to the world of electronic filters there’s no reason to be afraid of them. You can even get started with everyone’s favorite: an Arduino.

Filed under: digital audio hacks

by Bryan Cockfield at July 29, 2017 05:00 PM

July 26, 2017

MOD Devices Blog

Top 5 Greatest Things About Our Time at Wallifornia Music Tech

We were at Wallifornia Music Tech during Les Ardentes festival in Liège and it was a memorable week. Here’s a short account of our adventures.


Greetings MOD Community,


There’s a lot going on and the next weeks will be full of unveilings, but we had to take some time to share with you some of the brilliant moments we had earlier this month at Wallifornia Music Tech, during the Les Ardentes music festival in Liège, Belgium.

These are the 5 greatest things that happened during the Startup Acceleration Program, the Wallifornia Music Tech hackathon and the Startup Program, and some of the concerts we attended.


5 – Spending Time in the Lovely Liège


I had been to Liège once, a couple of years ago, and spent the whole time at the university for a conference. The weather was not good and I didn’t get to see much of the city. This time, however, the weather was surprisingly warm and we went out to see some of the sights and enjoy what the town has to offer. We stayed at a quaint little place at Rue Pierreuse, in an artsy neighbourhood on top of a hill.

In general, Belgian people were just incredibly friendly and thoughtful, making sure that we had everything we needed at all times and always proud to show us the hidden gems in their city. In this sense, a special acknowledgement must go to the team from Leansquare – Alice, Clémentine, Gérôme, Roald and Ben, in particular – who were responsible for the excellent organisation of the Startup Acceleration Program. They have an amazing co-working space in the heart of the city and took care of every little detail like a well-tuned machine and with a constant smile.


Everyone is happy in Belgium.

Also, Les Ardentes music festival in itself was a spectacular event, in a wonderful location by the river, with an awesome lineup mixing nostalgic headliners, up-and-coming favourites and fresh new acts (more on that later!). The logistics and infrastructure were super well handled for such a big festivity and we managed to enjoy some nice concerts along the way.


4 – Seeing Some Sweet Hackathon Action

We were partners and sponsors of the hackathon during the Wallifornia Music Tech Living Lab and provided some MOD Duos and our API for the hackers to use in their projects. The hackathon was masterfully organised and conducted by Luann Williams and Travis Laurendine, who are, among other things, the people responsible for the SXSW hackathon.

They did a great job motivating the teams and guaranteeing smooth sailing for the tens of hard-at-work and exhausted hackers.


Travis and Luann counting the jury’s votes for best hack.

During this hackathon, we met two amazing lords of bits, bytes and bobs, Tom Brückner and Jean-Michel Dewez, who decided to include a little bit of MOD in their hacks. Tom made a web app that provided information on a given song based on Musimap‘s artificial intelligence API. He used data from our pedalboard feed API in order to propose the corresponding pedalboard and ended up as second runner-up.

Jean-Michel, aka Chantal Goret, an 8-bit virtuoso, wanted to use the Duo with Beatmotor, his hand-crafted MIDI controller and instrument. It was built using an old cigar box, an Arduino board, some knobs, buttons and an ultrasound sensor. He used a Teensy board to send MIDI notes to the Duo. For this superb retro hack, he won first prize!


Chatting about 8-bit music hacks and the Duo with Jean-Michel, winner of the hackathon, with his cigar-case 8-bit MIDI instrument/controller.


3 – Sharing an Intense Week With Eight Fantastic Startups

We spent the whole week with an outstanding group of startuppers from all over the world. There was so much creativity flowing in these intensive training sessions that we all came out fueled with ideas and benefitted from our shared experiences.

I’ll try to summarise all their projects because you should definitely keep an eye out for these gals and guys:

  • Beatlinks: A whole living Musiverse – a game and an animation that teach DJ skills to kids.
  • Big Boy Systems: The first recording system that unites binaural sound and a 3D camera in order to create the ultimate immersive experience.
  • Paperchain: They provide data services for the music industry, from the collection and organisation of rights information to the identification of unclaimed royalties.
  • Roadie: An app that uses an AI to help bands with tour schedules, based on data from streaming services and social media.
  • Sofasession: They have developed an app for online music collaboration and another that connects music students with music schools.
  • Soundbops: A toy that teaches the fundamentals of music theory to young children. Their Kickstarter is coming out soon – stay tuned!
  • Warm: A huge real-time radio monitor that allows musicians to find out where their songs are being played.
  • WIP Music: The so-called Tinder for Music. An app that connects musicians to their audience and the venues that can host them.


2 – Meeting Trombone Shorty and His Band Backstage

Thanks to our new friend Travis Laurendine, aka Roi Lion d’Orléans, aka Ideas Gardener, we went backstage to meet Trombone Shorty and his band after their concert.

First, a brief word about their performance. It’s been a while since I’ve seen such energy on stage and there were several mind-blowing moments when I sort of lost it. The whole band is an example of groove, joy and technique.

We met them all: guitarist Pete Murano, drummer Joey Peebles, bass player Mike Bass-Bailey, tenor sax BK Jackson, baritone sax Dan Oestreicher and the man himself, Troy “Trombone Shorty” Andrews. Travis had sent them a video of the previous day at the hackathon with a short demo of the device and they had gone looking on the website.

Suffice to say, they wanted a Duo. Dan even knew about the MOD Duo from before and is now preparing some demos for us. He plays baritone sax but also has a one-man band so we’re very excited.

Magical moment: Dan Oestreicher (baritone sax, centre left) and Mike Bass-Bailey (bass, centre right) from Trombone Shorty’s band after the concert with their new companion Duo and footswitch extension.


1 – Winning the Startup Acceleration Program

We spent the whole week learning and gathering input from a tremendous team of coaches and experts. We were expected to hone our pitches and enthral a jury of investors and influential music business advisors.

We all worked very hard to perfect our presentations and find a way to squeeze every last bit of information in under 7 minutes. Gianfranco was selected as the first speaker and gave it his all.

You can see his pitch for the MOD Duo below:

He was asked questions by industry experts such as Rishi Patel, Virginie Berger and Ted Cohen, and later sat down to meet with them and other investors.

In the end, we were honoured to take home the title of best startup of the Accelerate & Invest program, which crowned a great week and, we hope, presages even greater things to come.

Receiving a sizable check from Armonia CEO Virginie Berger and music industry legend Ted Cohen


Honourable mentions: the Ramen at the restaurant next to Leansquare and the Boulets avec Frites, the jam sessions we held with our Dutch acolytes Pjotr Lasschuit and Jesse Verhage at our booth during the Startup Garden using kalimbas, Novation Circuits, synths and a wide assortment of controllers, meeting Belgian geniuses Hermutt Lobby, La Femme’s retro-punk concert…


by Mauricio Dwek at July 26, 2017 09:33 PM

July 25, 2017

digital audio hacks – Hackaday

Designing the Atom Smasher Guitar Pedal

[Alex Lynham] has been creating digital guitar pedals for a while and, after releasing the Atom Smasher, a glitchy lo-fi digital delay pedal, people started asking him how he designed digital effects pedals rather than analog ones. In fact, there was enough interest that he wrote an article on it.

The article starts with some background on [Alex], the pedals he’s built and why he chose not to work on pedals full-time. Eventually, the article gets to how [Alex] designed the Atom Smasher. He starts by describing the chip he used, the same one that many hobbyists, as well as commercial builders, use for delay-based effects – the SpinSemi FV-1.

The FV-1 is an SMD chip used for digital delays and other effects that require a delay line – reverbs, choruses, flangers, etc. It’s programmed with an assembly-style language called SpinASM. [Alex] goes over some of the tools and references he used when designing the pedal. He also has a list of tips for would-be effect pedal designers which apply whether you’re designing digital or analog effects.

[Alex] ends his article saying that, in the future, he might make the schematic and code available, but for the moment he’s not. The FV-1 is an interesting chip, and [Alex]’s article gives a nice high-level look at its features and how to develop for it. For some interesting guitar pedal related articles, check out this one using effects pedals to get better audio in your car, and here’s one about playing with DSP and designing a pedal with it.

Filed under: digital audio hacks, musical hacks

by Rich Hawkes at July 25, 2017 05:00 AM

July 24, 2017

Libre Music Production - Articles, Tutorials and News

LMP Asks #23: An interview with Jacek from ZARAZA

This time we talk to Jacek from ZARAZA, one of the two members of this experimental/industrial doom/death/sludge metal band.

Where do you live and what do you do for a living?

I (Jacek) currently live in Ecuador, after immigrating here from Canada about 1.5 years ago. Originally I am Polish, immigrated to Canada in 1990 when I was 20.

by admin at July 24, 2017 09:55 PM

July 21, 2017

Linux – CDM Create Digital Music

Aphex Twin gave us a peek inside a 90s classic. Here’s what we learned.

Aphex Twin’s “Vordhosbn” just got a surprising video reveal, showing how the track was made. So let’s revisit trackers and 90s underground music culture.

You’re probably familiar with the term “white label,” but where did that term originate? Back in the early days of DJing, DJs were very territorial about their crate digging. Sometimes, in order to avoid rival DJs looking at their decks to ID their selections (this is way before the days of Shazam, remember), DJs would rip the labels off a particularly rare record, leaving the white label residue with no identifying information.

Similarly, the 90s were an interesting time for music production. With the advent of computer sequencers, music became more complex – and in the wild west days before YouTube tutorials, concert phone vids, and everyone using Ableton Live, there was legitimate mystery behind how some of the most complex electronic music was made. Max? SuperCollider? Some homebrew software unavailable to the plebs?

If mystery in electronic music production was a game in the 90s, then Richard D. James was its undisputed winner. As Aphex Twin and a host of other pseudonyms, he created mind-bending sequences. As an interview subject, he was equal parts playful and cagey. Sure, there was an idea of what the IDM greats were up to – Autechre and Plaid used Max, Squarepusher used Reaktor, Aphex used…something? The mystery has always been part of James’ appeal – here is a man who has claimed to sleep only four hours a night, or to have built or heavily modified all of his hardware, or to be sitting on hundreds if not thousands of unreleased tracks, among other tall tales.

Around 2014, something flipped with Richard D. James. After releasing Syro, his first album in 13 years as Aphex Twin, he opened the floodgates with a massive hard drive dump onto SoundCloud – it seems he wasn’t lying about all those tracks after all. Following up on this, today you can see the debut of a custom Bleep store for Aphex Twin, including loads of unreleased bonus tracks to go with his albums.

Of most interest to the nerds, however, has got to be this seemingly innocuous video, in which we get a trollingly-effected screencast video of Drukqs track “Vordhosbn”, playing out in the vintage tracker PlayerPro. James had previously identified PlayerPro as his main environment for making Drukqs – now we have video of it in action:

So, there we have it. A classic Aphex Twin track with the curtain drawn up. What can we learn from this video? A few things:

  • PlayerPro’s tracks were all monophonic, so the chords in “Vordhosbn” had to be made using multiple tracks
  • As expected with a tracker, it’s largely built from samples – likely from James’ substantial hardware collection
  • Hey, those oscilloscopes and spectral displays are fun

Perhaps what’s best about this video is that it shows an Aphex classic for what it is – a track, composed in much the same way as any other electronic musician might do it. It doesn’t detract from the special qualities of Aphex’s music, but it does show us what was really going on behind all the mystery – music-making.

Keep Track of It

It’s worth spending a moment to celebrate trackers. Long before the days of piano rolls, trackers were the best way to make intricate sequences using a computer. YouTube is riddled with classic jungle tracks from the mid-90s using software like OctaMed:

For a dedicated community, trackers are still the way to go. And there’s no better tracker around now than Renoise – whose developers have done a fantastic job bringing the tracker workflow into the 21st century. Check out this video of Venetian Snares’ “Vache” done in Renoise:

Like most trackers, Renoise has something of a steep learning curve to get all the key commands right; once you’re there, however, you’ll find it to be a very nimble environment for wild micro-edits and crazy sequences. There’s definitely a reason why it remains a tool of choice for breakcore producers!

Do you use a tracker? What do you think of the workflow? What’s the best way for someone to get started with a tracker? Let us know in the comments!

Ed.: PlayerPro is available as free software for Mac, Windows, Linux … and yes, even FreeBSD.

Returning CDM contributor David Abravanel is a marketer, musician, and technologist living in New York. He loves that shiny digital crunch. Follow him at

The post Aphex Twin gave us a peek inside a 90s classic. Here’s what we learned. appeared first on CDM Create Digital Music.

by David Abravanel at July 21, 2017 04:25 PM

July 16, 2017

GStreamer News

Orc 0.4.27 bug-fix release

The GStreamer team is pleased to announce another maintenance bug-fix release of liborc, the Optimized Inner Loop Runtime Compiler. Main changes since the previous release:

  • sse: preserve non volatile sse registers, needed for MSVC
  • x86: don't hard-code register size to zero in orc_x86_emit_*() functions
  • Fix incorrect asm generation on 64-bit Windows when building with MSVC
  • Support build using the Meson build system

Direct tarball download: orc-0.4.27.

July 16, 2017 05:00 PM

July 15, 2017

GStreamer News

GStreamer 1.12.2 stable release (binaries)

Pre-built binary images of the 1.12.2 stable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

July 15, 2017 10:20 AM

July 14, 2017

open-source – CDM Create Digital Music

Here’s how to download your own music from SoundCloud, just in case

SoundCloud’s financial turmoil has prompted users to consider: what would happen if the service were switched off? Would you lose some of your own music?

Frankly, we all should have been thinking about that sooner. To be very clear: there is no reason you should ever have a file that you care about in just one location, no matter how secure and reliable you imagine that location may be. Key files are best kept in at least one online backup and in at least one locally accessible location (so you can get at them even without a fast connection).

There’s also no reason at this point to think SoundCloud is going to disconnect without warning – or indeed any indication from SoundCloud executives, publicly or privately, that they expect the service to go away. While recent staff cuts were painful for the whole organization, both those who remained and those who left, every sign is that the service is going to continue.

SoundCloud publicly has said as much. (Though, sorry – SoundCloud, you really shouldn’t be surprised. Vague messaging, no solid numbers on revenue, and a tendency not to go on record and talk to the press have made apocalyptic leaks the main picture people get of the company. In a week when you cut nearly half your staff and have limited explanation of what your plan is, then yeah, you wind up having to use the Twitter airhorn because people will panic.)

But the question of what’s happening to SoundCloud is immaterial. If you’ve got content that’s on SoundCloud and nowhere else, you’re crazy. This is really more like a wake-up call: always, always have redundancy.

The reality is, with any cloud service, you’re trusting someone else with your data, and your ability to get at that data is dependent on a single login. You might well be the failure point, if you lock yourself out of your own account or if someone else compromises it.

There’s almost never a scenario, then, where it makes sense to have something you care about in just one place, no matter how secure that place is. Redundancy neatly saves you from having to plan for every contingency.

Okay, so … yeah, if you are then nervous about some music you care about being on SoundCloud and aren’t sure if it’s in fact backed up someplace else, you really should go grab it.

Here’s one open source tool (hosted on GitHub, too) that downloads music.

A more generalized tool, for downloading from any site that has links with downloads:

(DownThemAll, the Firefox add-on, also springs to mind.)

Two services offering similar features are hoping they can attract SoundCloud users by helping them migrate their accounts automatically. (I don’t know what the audio fidelity of that copy is, or whether it includes the original file; I have to test this – and test whether these offerings really boast a significant competitive advantage.)

Could someone create a public mirror of the service? Yes, though – it wouldn’t be cheap. Jason Scott (of Internet Archive fame) tweets that it could cost up to $2 million, based on the amount of data:

(Anybody want to call Martin Shkreli? No?)

My hope is that SoundCloud does survive independently. Any acquisition would likewise be crazy not to maintain users and content; that’s the whole unique value proposition of the service, and there’s still nothing else quite like it. (The fact that there’s nothing quite like it, though, may give you pause on a number of levels.)

My guess is that the number of CDM readers and creators is far from enough to overload a service built to stream to millions of users, so I feel reasonably safe endorsing this use. That said, of course, SoundClouders also read CDM, so they might choose to limit or slow API access. Let’s see.

My advice, though: do grab the stuff you hold dear. Put it on an easily accessible drive. And make sure the media folders on that drive also have an automated backup – I really like cloud backup services like CrashPlan and Backblaze (or, if you have a server, your own scripts). But the best backup plan is one that you set and forget, one you only have to think about when you need it, and one that will be there in that instance.

Let us know if you find a better workflow here.

Thanks to Tom Whitwell of Music thing for raising this and for the above open source tip.

I expect … this may generate some comments. Shoot.

The post Here’s how to download your own music from SoundCloud, just in case appeared first on CDM Create Digital Music.

by Peter Kirn at July 14, 2017 04:06 PM

GStreamer News

GStreamer 1.12.2 stable release

The GStreamer team is pleased to announce the second bugfix release in the stable 1.12 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.12.x.

See /releases/1.12/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

July 14, 2017 11:00 AM

July 11, 2017

MOD Devices Blog

MOD Duo 1.4 Update Now Available

Dearest community,


After several weeks of testing, our latest software update is available!

This one took a bit longer since the testing period largely involved the Beta testing of our first peripheral, the footswitch extension (soon to receive its official name – stay tuned!), and also of the Arduino shield.

As usual, you can upgrade your MOD Duo by clicking on the update icon on the bottom right-hand corner, then on ‘Download’ and finally ‘Upgrade Now’. Wait for a few minutes while the MOD updates itself automatically and enjoy your added features.

Here’s the rundown of release 1.4:


Control Chain

Control Chain is MOD’s custom way of connecting external devices. It is an open standard (covering hardware, communication protocol, cables and connectors). Anything the MOD Duo’s built-in hardware actuators can do right now, a Control Chain peripheral can do as well.

Compared to MIDI, Control Chain is way more powerful. For example, instead of using hard-coded values as MIDI does, Control Chain has what is called a device descriptor, and its assignment (or mapping) message contains the full information about the parameter being assigned, such as the parameter name, absolute value, range and any other data. Having all that information on the device side allows developers to create powerful peripherals that can, for example, show the absolute parameter value on a display, use different LED colours to indicate a specific state, etc.

And remember: you can daisy chain up to 4 Control Chain peripherals to your MOD Duo!

You can read more about Control Chain here.

Usability Changes

Some small but very handy usability changes were made, following user requests. These include:

  • It’s now possible to MIDI learn using pitchbend
  • You can change parameter ranges without having to re-learn a MIDI CC
  • You can delete the initial/first pedalboard preset (to better organise your “scenes”)
  • And we’ve also reduced the CPU usage with control-output intensive plugins.

Web Interface

  • Plugins now have an information icon on top of them in the builder, which shows their info when clicked (the icons are hidden when the screen is too small)

  • The Duo’s own actuators now have the “MOD:” prefix to differentiate them from those of Control Chain devices
  • You can now always close addressing and pedalboard presets dialogues with the “ESC” key, independent of focus

There are also quite a few more changes and tweaks. Visit our changelog on the wiki to see all changes since v1.3.2.


That’s it! The next upgrade is already being tested, lots of cool new features on the horizon…

Remember: many of these tweaks and new features were added because of your comments on our forum. So, keep making sweet music with your MOD Duos and let us know of any issues or improvements you’d desire!

by Mauricio Dwek at July 11, 2017 05:25 PM

July 10, 2017

GStreamer News

GStreamer Conference 2017 - Call for Papers

This is a formal call for papers (talks) for the GStreamer Conference 2017, which will take place on 21-22 October 2017 in Prague (Czech Republic), just before the Embedded Linux Conference Europe (ELCE).

The GStreamer Conference is a conference for developers, community members, decision-makers, industry partners, and anyone else interested in the GStreamer multimedia framework and open source multimedia.

The call for papers is now open and talk proposals can be submitted.

You can find more details about the conference on the GStreamer Conference 2017 web page.

Talk slots will be available in varying durations from 20 minutes up to 45 minutes. Whatever you're doing or planning to do with GStreamer, we'd like to hear from you!

We also plan on having another session with short lightning talks / demos / showcase talks for those who just want to show what they've been working on or do a mini-talk instead of a full-length talk.

The deadline for talk submissions is Sunday 13 August 2017, 23:59 UTC.

We hope to see you in Prague!

July 10, 2017 02:00 PM

July 05, 2017


new Notstandskomitee music video

First official video for the new album The Golden Times by Notstandskomitee, made for the track Exhaust. Listen to the album at

by herrsteiner ( at July 05, 2017 03:34 PM

July 04, 2017

fundamental code

Linux & Multi-Screen Touch Screen Setups

While working on the Zyn-Fusion UI I ended up getting a touch screen to help with the testing process. After getting the screen, buying several incorrect HDMI cables, and setting up the screen I found out that the touch events weren’t working as expected. In fact they were often showing up on the wrong screen. If I disabled my primary monitor and only used the touch screen, then events were spot on, so this was only a multi-monitor setup issue.

So, what caused the problem and how can it be fixed?

Well, by default the mouse/touch events emitted by the new screen were scaled to the total available area, treating multiple screens as a single larger screen. Fortunately, X11 provides one solution through xinput. Just running the xinput tool lists a collection of devices which provide mouse and keyboard events to X11.

mark@cvar:~$ xinput
⎡ Virtual core pointer                          id=2    [master pointer  (3)]
⎜   ↳ Virtual core XTEST pointer                id=4    [slave  pointer  (2)]
⎜   ↳ PixArt USB Optical Mouse                  id=8    [slave  pointer  (2)]
⎜   ↳ ILITEK Multi-Touch-V3004                  id=11   [slave  pointer  (2)]
⎣ Virtual core keyboard                         id=3    [master keyboard (2)]
    ↳ Virtual core XTEST keyboard               id=5    [slave  keyboard (3)]
    ↳ Power Button                              id=6    [slave  keyboard (3)]
    ↳ Power Button                              id=7    [slave  keyboard (3)]
    ↳ AT Translated Set 2 keyboard              id=9    [slave  keyboard (3)]
    ↳ Speakup                                   id=10   [slave  keyboard (3)]

In this case the touch screen is device 11, which has its own set of properties.

mark@cvar:~$ xinput list-props 11
Device 'ILITEK Multi-Touch-V3004':
        Device Enabled (152):   1
        Coordinate Transformation Matrix (154): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
        Device Accel Profile (282):     0
        Device Accel Constant Deceleration (283):       1.000000
        Device Accel Adaptive Deceleration (284):       1.000000
        Device Accel Velocity Scaling (285):    10.000000
        Device Product ID (272):        8746, 136
        Device Node (273):      "/dev/input/event13"
        Evdev Axis Inversion (286):     0, 0
        Evdev Axis Calibration (287):   <no items>
        Evdev Axes Swap (288):  0
        Axis Labels (289):      "Abs MT Position X" (689), "Abs MT Position Y" (690), "None" (0), "None" (0)
        Button Labels (290):    "Button Unknown" (275), "Button Unknown" (275), "Button Unknown" (275), "Button Wheel Up" (158), "Button Wheel Down" (159)
        Evdev Scrolling Distance (291): 0, 0, 0
        Evdev Middle Button Emulation (292):    0
        Evdev Middle Button Timeout (293):      50
        Evdev Third Button Emulation (294):     0
        Evdev Third Button Emulation Timeout (295):     1000
        Evdev Third Button Emulation Button (296):      3
        Evdev Third Button Emulation Threshold (297):   20
        Evdev Wheel Emulation (298):    0
        Evdev Wheel Emulation Axes (299):       0, 0, 4, 5
        Evdev Wheel Emulation Inertia (300):    10
        Evdev Wheel Emulation Timeout (301):    200
        Evdev Wheel Emulation Button (302):     4
        Evdev Drag Lock Buttons (303):  0

Notably, xinput provides a property describing a coordinate transformation, which can be used to remap the x and y values of the cursor events. The transformation matrix here is a 3x3 matrix used to transform 2D coordinates and is a fairly common sight in computer graphics. It translates from \((x,y)\) to \((x',y')\) as defined by:

$$ \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} a & b & c\\ d & e & f\\ g & h & i \end{bmatrix} \times \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} $$

The transformation matrix allows for stretching, shearing, translation, flipping, scaling, etc. For the sorts of problems you may see introduced by a multi-monitor setup, I would only expect people to care about translating (\(t\)) the events and then re-scaling (\(s\)) them to the offset area. Using these two parameters, the transformation matrix equation simplifies to:

$$ \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} s_x & 0 & t_x\\ 0 & s_y & t_y\\ 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} $$

Or without the matrix representation:

$$ \begin{aligned} x' &= s_x x + t_x\\ y' &= s_y y + t_y \end{aligned} $$

With that background out of the way, let’s see how this applied to my specific monitor setup:

2017 monitors

As I mentioned earlier, the touch events were scaled to the dimensions of the larger virtual screen. Since the touch screen is the larger of the two, the y axis is mapped correctly, but the x axis is mapped to pixels 0..3200 (both screens) instead of pixels 1280..3200 (the touch screen only). Since xinput scales these parameters based upon the total screen size, we can divide by the total x size (3200) to learn that the x axis maps to 0..1 rather than 0.4..1.0. Solving the above equations, we can remap the touch events using \(s_x=0.6\) and \(t_x=0.4\). This results in the transformation matrix:

$$ \begin{bmatrix} 0.6 & 0 & 0.4\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix} $$

The last step is to provide the new transformation matrix to xinput:

xinput set-prop 11 'Coordinate Transformation Matrix' 0.6 0 0.4 0 1 0 0 0 1

Now cursor events map onto the correct screen accurately and the code to change the xinput properties can be easily put into a shell script.
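Such a script might compute the matrix from the screen geometry instead of hard-coding it. A sketch (the widths, offset and device id 11 come from the setup above; the 1200 px heights are my assumption – substitute your own values from xrandr):

```shell
#!/bin/sh
# Sketch: compute the Coordinate Transformation Matrix for a touch screen
# from its geometry within the combined virtual screen.
TOTAL_W=3200; TOTAL_H=1200   # size of the combined virtual screen
TOUCH_W=1920; TOUCH_H=1200   # size of the touch screen
OFF_X=1280;   OFF_Y=0        # touch screen position within the virtual screen

# scale factors and translation, as in the equations above
SX=$(awk "BEGIN { print $TOUCH_W / $TOTAL_W }")
SY=$(awk "BEGIN { print $TOUCH_H / $TOTAL_H }")
TX=$(awk "BEGIN { print $OFF_X / $TOTAL_W }")
TY=$(awk "BEGIN { print $OFF_Y / $TOTAL_H }")

echo "$SX 0 $TX 0 $SY $TY 0 0 1"
# Apply it to the touch device (id 11 in the listing above):
# xinput set-prop 11 'Coordinate Transformation Matrix' $SX 0 $TX 0 $SY $TY 0 0 1
```

With the numbers above it prints the same matrix derived by hand, so the script can be dropped into a session autostart file.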

July 04, 2017 04:00 AM

June 30, 2017

Qtractor 0.8.3 - The Stickiest Tauon is out!


Qtractor 0.8.3 (stickiest tauon) is out!

Changes for this mostly bug-fix beta release:

  • Make sure any just recorded clip filename is not reused while over the same track and session. (CRITICAL)
  • LV2 Plug-in worker/schedule interface ring-buffer sizes have been increased to 4KB.
  • Fixed track-name auto-incremental numbering suffix when modifying any other track property.
  • WSOLA vs. (lib)Rubberband time-stretching options are now individualized on a per audio clip basis.
  • Long overdue, some brand new and fundamental icons revamp.
  • Fixed a tempo-map node add/update/remove rescaling with regard to clip-lengths and automation/curve undo/redo.
  • Fixed a potential Activate automation/curve index clash, or aliasing, for any plug-ins that change upstream their parameter count or index order, on sessions saved with the old plug-in versions and vice-versa.


Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.


Project page:


Git repos:

Wiki (help still wanted!):


Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Enjoy && Keep the fun, always.

Flattr this

by rncbc at June 30, 2017 07:00 PM

June 28, 2017


TMS concert in Hamburg 1.7.2017

after Saturdays blast of a noise night at XB Liebig, getting ready for the next gig at Primal Uproar in Hamburg, where TMS will perform on Saturday 1.7.2017

we put a recording of the XB Liebig concert on Mixcloud:

by herrsteiner ( at June 28, 2017 03:13 PM

June 27, 2017

Audio – Stefan Westerfeld's blog

27.06.2017 beast-0.11.0 released

Beast is a music composition and modular synthesis application. beast-0.11.0 is now available. Support for SoundFont (.sf2) files has been added. On multicore CPUs, Beast now uses all cores for synthesis, which improves performance. Debian packages have also been added, so installation should be very easy on Debian-like systems. And as always, lots of other improvements and bug fixes went into Beast.

Update: I made a screencast of Beast which shows the basics.

by stw at June 27, 2017 01:17 PM

RPi 3 and the real time kernel

As a beta tester for MOD I thought it would be cool to play around with netJACK, which is supported on the MOD Duo. The MOD Duo can run as a JACK master and you can connect any JACK slave to it as long as it runs a recent version of JACK2. This opens a plethora of possibilities, of course. I’m thinking about building a kind of sidecar device to offload some stuff to using netJACK; think of synths like ZynAddSubFX or other CPU-greedy plugins like fat1.lv2. But more on that in a later blog post.

So first I needed to set up a sidecar device, and I sacrificed one of my RPi’s for that, an RPi 3. I flashed an SD card with Raspbian Jessie Lite and started to do some research on the status of real time kernels and the Raspberry Pi, because I’d like to use a real time kernel to get sub-5ms system latency. I’ve compiled real time kernels for the RPi before, but you had to jump through some hoops to get those running, so I hoped things would have improved somewhat. Well, that’s not the case: after compiling a first real time kernel, the RPi froze as soon as I tried to run apt-get install rt-tests. After applying a patch to fix how the RPi folks implemented the FIQ system, the kernel ran without issues:

Linux raspberrypi 4.9.33-rt23-v7+ #2 SMP PREEMPT RT Sun Jun 25 09:45:58 CEST 2017 armv7l GNU/Linux

And the RPi seems to run stable with acceptable latencies:

Histogram of the latency on the RPi with a real time kernel during 300000 cyclictest loops

So that’s a maximum latency of 75 µs, not bad. I also spotted some higher values around 100 µs, but that’s still okay for this project. The histogram was created with mklatencyplot.bash. I used a different invocation of cyclictest though:

cyclictest -Sm -p 80 -n -i 500 -l 300000

And I ran hackbench in the background to create some load on the RPi:

(while true; do hackbench > /dev/null; done) &
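For unattended runs, the “# Max Latencies:” line that cyclictest emits in histogram mode (-h, as used by mklatencyplot.bash) can be parsed directly; a sketch, where the parse_max_latency helper name is my own:

```shell
# Sketch: extract the worst-case latency from cyclictest histogram output.
# cyclictest -h prints a "# Max Latencies:" line with one value per core (in us).
parse_max_latency() {
  # reads cyclictest -h output on stdin, prints the overall maximum
  grep '^# Max Latencies:' | tr -d '#' | \
    awk '{ max = 0; for (i = 3; i <= NF; i++) if ($i + 0 > max) max = $i + 0; print max }'
}

# e.g.: sudo cyclictest -Sm -p 80 -n -i 500 -l 300000 -h 400 -q | parse_max_latency
```

That makes it easy to fail a scripted stress test automatically whenever the worst case creeps above your latency budget.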

Compiling a real time kernel for the RPi is still not a trivial thing to do, and it doesn’t help that the few howto’s on the interwebs are mostly copy-paste work, incomplete, and contain routines that are unclear or even unnecessary. One thing that struck me too is that the howto’s about building kernels for RPi’s running Raspbian don’t mention the make deb-pkg routine to build a real time kernel. This will create deb packages that are just so much easier to transfer and install than rsync’ing the kernel image and modules. Let’s break down how I built a real time kernel for the RPi 3.

First you’ll need to git clone the Raspberry Pi kernel repository:

git clone -b 'rpi-4.9.y' --depth 1 https://github.com/raspberrypi/linux.git

This will only clone the rpi-4.9.y branch into a directory called linux without any history so you’re not pulling in hundreds of megs of data. You will also need to clone the tools repository which contains the compiler we need to build a kernel for the Raspberry Pi:

git clone https://github.com/raspberrypi/tools.git

This will end up in the tools directory. Next step is setting some environment variables so subsequent make commands pick those up:

export KERNEL=kernel7
export ARCH=arm
export CROSS_COMPILE=/path/to/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian/bin/arm-linux-gnueabihf-
export CONCURRENCY_LEVEL=$(nproc)

The KERNEL variable is needed to create the initial kernel config. The ARCH variable is to indicate which architecture should be used. The CROSS_COMPILE variable indicates where the compiler can be found. The CONCURRENCY_LEVEL variable is set to the number of cores to speed up certain make routines like cleaning up or installing the modules (not the number of jobs, that is done with the -j option of make).

Now that the environment variables are set we can create the initial kernel config:

cd linux
make bcm2709_defconfig

This will create a .config inside the linux directory that holds the initial kernel configuration. Now download the real time patch set and apply it:

cd ..
cd linux
xzcat ../patch-4.9.33-rt23.patch.xz | patch -p1

Most howto’s now continue with building the kernel, but that will result in a kernel that freezes your RPi: the FIQ system implementation causes lock-ups when interrupts are threaded, which is the case with real time kernels. That part needs to be patched, so download the patch and dry-run it:

cd ..
cd linux
patch -i ../usb-dwc_otg-fix-system-lockup-when-interrupts-are-threaded.patch -p1 --dry-run

You will notice that one hunk fails; you will have to add that stanza manually, so note which hunk it is, for which file, and at which line it should be added. Now apply the patch:

patch -i ../usb-dwc_otg-fix-system-lockup-when-interrupts-are-threaded.patch -p1

And add the failed hunk manually with your favorite editor. With the FIQ patch in place we’re almost set for compiling the kernel, but before we can move on to that step we need to modify the kernel configuration to enable the real time patch set. I prefer doing that with make menuconfig. You will need the libncurses5-dev package to run this command, so install it with apt-get install libncurses5-dev. Then select Kernel Features - Preemption Model - Fully Preemptible Kernel (RT) and select Exit twice. If you’re asked if you want to save your config then confirm. In the Kernel Features menu you could also set the timer frequency to 1000 Hz if you wish; apparently this could improve USB throughput on the RPi (unconfirmed, needs reference). For real time audio and MIDI this setting is irrelevant nowadays though, as almost all audio and MIDI applications use the hr-timer module, which has a way higher resolution.
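If you would rather skip menuconfig, the scripts/config helper that ships with the kernel tree can flip the same options non-interactively. The option names below are an assumption based on the 4.9 RT patch set, which adds CONFIG_PREEMPT_RT_FULL:

```shell
# Non-interactive equivalent of the menuconfig steps above
# (assumes the RT patch added CONFIG_PREEMPT_RT_FULL to this tree):
scripts/config --disable PREEMPT
scripts/config --enable PREEMPT_RT_FULL
# Optional: 1000 Hz timer tick instead of the default
scripts/config --disable HZ_100
scripts/config --enable HZ_1000
# Resolve any options the changes newly exposed:
make olddefconfig
```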

With our configuration saved we can start compiling. Clean up first, then disable some debugging options which could cause some overhead, compile the kernel and finally create ready-to-install deb packages:

make clean
scripts/config --disable DEBUG_INFO
make -j$(nproc) deb-pkg

Sit back, enjoy a cuppa, and when building has finished without errors the deb packages should be created in the directory above the linux one. Copy the deb packages to your RPi and install them on the RPi with dpkg -i. Open up /boot/config.txt and add the following line to it:

kernel=vmlinuz-4.9.33-rt23-v7+
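The copy-and-install step can be sketched like this; the hostname and exact package file names are examples only, use whatever deb-pkg actually produced:

```shell
# Copy the freshly built packages from the build host to the Pi
# and install them there (example names and hostname):
scp ../linux-image-4.9.33-rt23-v7+_*.deb pi@raspberrypi:/tmp/
ssh pi@raspberrypi 'sudo dpkg -i /tmp/linux-image-4.9.33-rt23-v7+_*.deb'
```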
Now reboot your RPi and it should boot with the realtime kernel. You can check with uname -a:

Linux raspberrypi 4.9.33-rt23-v7+ #2 SMP PREEMPT RT Sun Jun 25 09:45:58 CEST 2017 armv7l GNU/Linux
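Besides uname, an RT-patched kernel also advertises itself through sysfs, so a quick check you can run on the RPi itself:

```shell
# The RT patch set exposes this flag; it reads 1 on a fully
# preemptible (RT) kernel and is absent on a non-RT kernel.
cat /sys/kernel/realtime
```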

Since Raspbian uses almost the same kernel source as the one we just built, it is not necessary to copy any dtb files. Also, running mkknlimg is not necessary anymore; the RPi boot process can handle vmlinuz files just fine.

The basis of the sidecar unit is now done. Next up is tweaking the OS and setting up netJACK.

Edit: there’s a thread on LinuxMusicians referring to this article which already contains some very useful additional information.

The post RPi 3 and the real time kernel appeared first on

by jeremy at June 27, 2017 09:25 AM

June 22, 2017

GStreamer News

GStreamer 1.12.1 stable release (binaries)

Pre-built binary images of the 1.12.1 stable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

June 22, 2017 01:15 PM