planet.linuxaudio.org

March 23, 2017

digital audio hacks – Hackaday

The Hard Way of Cassette Tape Auto-Reverse

The audio cassette is an audio format that presented a variety of engineering challenges during its tenure. One of the biggest at the time was that listeners had to physically remove the cassette and flip it over to listen to the full recording. Over the years, manufacturers developed a variety of “auto-reverse” systems that allowed a cassette deck to play a full tape without user intervention. This video covers how Akai did it – the hard way.

Towards the end of the cassette era, most manufacturers had decided on a relatively simple system of having the head assembly rotate while reversing the motor direction. Many years prior to this, however, Akai’s system involved a shuttle which carried the tape up to a rotating arm that flipped the cassette, before shuttling it back down and reinserting it into the deck.

Even a regular cassette player has an astounding level of complexity using simple electromechanical components — the humble cassette precedes the widespread introduction of integrated circuits, so things were done with motors, cams, levers, and switches instead. This device takes it to another level, and [Techmoan] does a great job of showing it in close-up detail. This is certainly a formidable design from an era that’s beginning to fade into history.

The video (found after the break) also does a great job of showing glimpses of other creative auto-reverse solutions — including one from Philips that appears to rely on bouncing tapes through something vaguely resembling a playground slide. We’d love to see that one in action, too.

One thing you should never do with a cassette deck like this is use it with a cassette audio adapter like this one.


Filed under: digital audio hacks, teardown

by Lewin Day at March 23, 2017 03:31 PM

March 21, 2017

rncbc.org

Vee One Suite 0.8.1 - A Spring'17 release


Great news!

The Vee One Suite of old-school software instruments, namely synthv1, a polyphonic subtractive synthesizer, samplv1, a polyphonic sampler synthesizer, and drumkv1, yet another drum-kit sampler, are once again out in the wild!

Still available in dual form:

  • a pure stand-alone JACK client with JACK-session, NSM (Non Session Management) and both JACK MIDI and ALSA MIDI input support (see the quick launch sketch after this list);
  • a LV2 instrument plug-in.
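
As a quick illustration of the stand-alone flavor, here is a hedged sketch of firing up synthv1 against a running JACK server and wiring up ALSA MIDI; the synthv1_jack binary name matches the usual packaging, but the MIDI client names below are only examples:

synthv1_jack &                               # stand-alone JACK client (samplv1_jack and drumkv1_jack work the same way)
jack_lsp | grep -i synthv1                   # check that its audio and MIDI ports showed up in JACK
aconnect -l                                  # list ALSA MIDI clients and ports
aconnect 'USB MIDI Keyboard':0 'synthv1':0   # example connection; client names depend on your hardware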

The common change-log for this dot release follows:

  • Fixed a probable long-standing issue where changes to spin-boxes and drop-down lists were not reflected immediately on the respective parameter dial knobs.
  • Fixed middle-button clicking on the dial-knobs to reset to the current default parameter value.
  • Help/Configure.../Options/Use desktop environment native dialogs option is now set initially off by default.
  • Added French man page (by Olivier Humbert, thanks).
  • Make builds reproducible byte for byte, by getting rid of the configure build date and time stamps.

The Vee One Suite are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

So here they go, thrice again!

synthv1 - an old-school polyphonic synthesizer

synthv1 0.8.1 (spring'17) released!

synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.

LV2 URI: http://synthv1.sourceforge.net/lv2

website:
http://synthv1.sourceforge.net

downloads:
http://sourceforge.net/projects/synthv1/files

git repos:
http://git.code.sf.net/p/synthv1/code
https://github.com/rncbc/synthv1.git
https://gitlab.com/rncbc/synthv1.git
https://bitbucket.org/rncbc/synthv1.git

samplv1 - an old-school polyphonic sampler

samplv1 0.8.1 (spring'17) released!

samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.

LV2 URI: http://samplv1.sourceforge.net/lv2

website:
http://samplv1.sourceforge.net

downloads:
http://sourceforge.net/projects/samplv1/files

git repos:
http://git.code.sf.net/p/samplv1/code
https://github.com/rncbc/samplv1.git
https://gitlab.com/rncbc/samplv1.git
https://bitbucket.org/rncbc/samplv1.git

drumkv1 - an old-school drum-kit sampler

drumkv1 0.8.1 (spring'17) released!

drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.

LV2 URI: http://drumkv1.sourceforge.net/lv2

website:
http://drumkv1.sourceforge.net

downloads:
http://sourceforge.net/projects/drumkv1/files

git repos:
http://git.code.sf.net/p/drumkv1/code
https://github.com/rncbc/drumkv1.git
https://gitlab.com/rncbc/drumkv1.git
https://bitbucket.org/rncbc/drumkv1.git

Enjoy && have fun ;)

by rncbc at March 21, 2017 08:00 PM

March 17, 2017

Linux – CDM Create Digital Music

Steinberg brings VST to Linux, and does other good things

The days of Linux being a barren plug-in desert may at last be over. And if you’re a developer, there are some other nice things happening to VST development on all platforms.

Steinberg has quietly rolled out the 3.6.7 version of their plug-in SDK for Windows, Mac, iOS, and now Linux. Actually, your plug-ins may be using their SDK even if you’re unaware – because many plug-ins that appear as “AU” use a wrapper from VST to Apple’s Audio Unit. (One is included in the SDK.)

For end users, the important things to know are, you may be getting more VST3 plug-ins (with some fancy new features), and you may at last see more native plug-ins available for Linux. That Linux support comes at just the right time, as Bitwig Studio is maturing as a DAW choice on the platform, and new hardware options like the Raspberry Pi are making embedded solutions start to appeal. (I kind of hesitate to utter these words, as I know that desktop Linux is still very, very niche, but – this doesn’t have to mean people installing Ubuntu on laptops. We’ll see where it goes.)

For developers, there’s a bunch of nice stuff here. My favorites:

  • cmake support (see the build sketch below)
  • VST3 SDK on GitHub: https://github.com/steinbergmedia/vst3sdk
  • GPL v3 license is now alongside the proprietary license (necessary for some open projects)
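
For the curious, getting the SDK building looks roughly like this (a hedged sketch: it assumes git and a recent CMake, and the recursive clone matters because the SDK pulls in several submodules):

git clone --recursive https://github.com/steinbergmedia/vst3sdk.git
mkdir vst3sdk-build && cd vst3sdk-build
cmake ../vst3sdk                 # pick a generator and build type to taste
cmake --build .                  # builds the SDK libraries and the bundled example plug-ins

On Linux a handful of extra development packages (X11 and friends) are needed for the GUI parts; the SDK's own documentation lists the exact dependencies.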

How ’bout them apples? I didn’t expect to be following Steinberg on GitHub.

The open license and Linux support to me suggest that, for instance, finally seeing Pure Data work with plug-ins again could be a possibility. And we’ll see where this goes.

This is one of those that I know is worth putting on CDM, because the handful of people who care about such things and can do something with them are reading along. So let us know.

More:

http://sdk.steinberg.net

Thanks, Spencer Russell!

The post Steinberg brings VST to Linux, and does other good things appeared first on CDM Create Digital Music.

by Peter Kirn at March 17, 2017 06:41 PM

digital audio hacks – Hackaday

Neural Network Composes Music; Says “I’ll be Bach”

[carykh] took a dive into neural networks, training a computer to replicate Baroque music. The results are as interesting as the process he used. Instead of feeding Shakespeare (for example) to a neural network and marveling at how Shakespeare-y the text output looks, the process converts Bach’s music into a text format and feeds that to the neural network. There is one character for each key on the piano, making for an 88 character alphabet used during the training. The neural net then runs wild and the results are turned back to audio to see (or hear as it were) how much the output sounds like Bach.

The video embedded below starts with a bit of a skit but hang in there because once you hit the 90 second mark things get interesting. Those lacking patience can just skip to the demo; hear original Bach followed by early results (4:14) and compare to the results of a full day of training (11:36) on Bach with some Mozart mixed in for variety. For a system completely ignorant of any bigger-picture concepts such as melody, the results are not only recognizable as music but can even be pleasant to listen to.

MIDI describes music in terms of discrete events, and individual note starts and stops are separate events. Part of the reformatting process involved representing each note as a single ASCII character, thereby structuring the music more like text and less like keyboard events.

The core of things is this character-based Recurrent Neural Network which is itself the work of Andrej Karpathy. In his words, “it takes one text file as input and trains a Recurrent Neural Network that learns to predict the next character in a sequence. The RNN can then be used to generate text character by character that will look like the original training data.” How did [carykh] actually use this for music? With the following process:

  1. Gather source material (lots and lots of MIDI files of Bach pieces for piano or harpsichord.)
  2. Convert those MIDI files to CSV format with a tool.
  3. Tokenize and reformat that CSV data with a custom Processing script: one ASCII character now equals one piano key.
  4. Feed the RNN with the resulting text.
  5. Take the output of the RNN and convert it back to MIDI with the reverse of the process (a rough command-line sketch of the whole pipeline follows this list).
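
To make the shape of that pipeline concrete, here is a rough command-line sketch (not [carykh]'s actual scripts): it assumes the midicsv/csvmidi utilities and Karpathy's char-rnn (Torch) are installed, and the tokenize/detokenize helpers are hypothetical stand-ins for the custom Processing script:

midicsv bach_prelude.mid bach_prelude.csv              # steps 1-2: MIDI to CSV (file names are just examples)
./tokenize.py bach_prelude.csv >> data/bach/input.txt  # step 3: map note events onto the 88-character alphabet (hypothetical helper)
th train.lua -data_dir data/bach                       # step 4: train the character-level RNN on the collected text
th sample.lua cv/some_checkpoint.t7 -length 20000 > generated.txt   # sample new "music text" from a saved checkpoint
./detokenize.py generated.txt > generated.csv          # reverse the tokenization (hypothetical helper)
csvmidi generated.csv generated.mid                    # step 5: back to MIDI for listening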

[carykh] shares an important question that was raised during this whole process: what was he actually after? How did he define what he actually wanted? It’s a bit fuzzy: on one hand he wants the output of the RNN to replicate the input as closely as possible, but he also doesn’t actually want complete replication; he just wants the output to take on enough of the same patterns without actually copying the source material. The processing of the neural network never actually “ends”; [carykh] simply pulls the plug at some point to see what the results are like.

Neural Networks are a process rather than an end result and have varied applications, from processing handwritten equations to helping a legged robot squirm its way to a walking gait.

Thanks to [Keith Olson] for the tip!


Filed under: digital audio hacks, musical hacks

by Donald Papp at March 17, 2017 11:01 AM

Audio, Linux and the combination

new elektro project 'BhBm' : Hydrogen + analogue synths

It has been a long, long time since I posted anything here!


Let me present to you our newest elektro project, BhBm (short for "Black hole in a Beautiful mind").

All drums and samples are done with H2 (Hydrogen). Almost all bass and melody lines are analogue synths controlled by H2 via MIDI.

Softsynths and FX are done using Carla and LV2 plugins.

I use H2 as a live sequencer in stacked pattern mode, controlled by a BCR2000, so there is no 'song', only patterns that are enabled/disabled live > great fun !!

Check out our demo songs on SoundCloud:
Or follow us on Facebook


Enjoy and comment please !
Thijs

by noreply@blogger.com (Thijs Van Severen) at March 17, 2017 08:07 AM

March 16, 2017

open-source – CDM Create Digital Music

Your Web browser now makes your MeeBlip synth more powerful, free

Open a tab, design a new sound. Now you can, with a free Web editor for the MeeBlip. And it shows just how powerful the browser can be for musicians.

Watch:

And if you own a MeeBlip (triode or anode), give it a try yourself (just remember to plug in a MIDI interface and set up the channel and port first):
https://editor.meeblip.com/

Don’t own a MeeBlip? We can fix that for you:
https://meeblip.com/

Why a browser? Well, the software is available instantly, from anywhere with an Internet connection and a copy of Chrome or Opera. It’s also instantly updated, as we add features. And you can share your results with anyone else with a MeeBlip, too.

That means you can follow our new MeeBlip bot account and discover new sounds. It might be overkill with a reasonably simple synth, but it’s a playground for how synths can work in our Internet-connected age. And we think in the coming weeks we can make our bot more fun to follow than, um, some humans on Twitter.

Plus, because this is all built with Web technologies, the code is friendly to a wide variety of people! (That’s something that might be less true of the Assembly code the MeeBlip synth hardware runs.)

You can have a look at it here. Actually, we’re hoping someone out there will learn from this, modify it, ask questions – whatever! So whether you’re advanced or a beginner, do have a look:

https://github.com/MeeBlip/meeblip-web
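
If you would rather poke at the code locally first, here is a minimal sketch, assuming the editors are plain static pages (check the repository README for the actual layout). Chrome treats http://localhost as a secure context, so Web MIDI works there without the HTTPS setup mentioned further down:

git clone https://github.com/MeeBlip/meeblip-web.git
cd meeblip-web
python3 -m http.server 8000        # any static file server will do
# then open http://localhost:8000 in Chrome or Opera and select your MIDI interface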

All the work on the editor comes to us from musician and coder Ben Schmaus, based on an earlier version – totally unsolicited, actually, so we’re amazed and grateful to get this. We asked Ben for some thoughts on the project.

CDM: How did you get into building these Web music tools in the first place?

Ben: I had been reading about the Web MIDI and Audio APIs and thinking about how I might use them. I bought an anode Limited Edition synth and wanted a way to save patches I created. I thought it’d be cool and maybe even useful to be able to store and share patches with URLs, the lingua franca of the web. Being a reasonably capable web developer it seemed pretty approachable and so I started working on Blipweb. [Blipweb was the earlier iteration of the same editor tool. -Ed.]

Why the MeeBlip for this editor?

Well, largely because I had one! And the (admirably concise) quick start guide very clearly outlined all the MIDI CC numbers to control mappings. So it seemed very doable. Plus being already open source I thought it would be nice to contribute something to the user community.

What’s new in the new MeeBlip editors versus the original Blipweb?

The layout and design is tighter in the new versions. I added a very basic sequencer that has eight steps and lets you control pitch and velocity. It’s nice because you can produce sound with just a MeeBlip, MIDI interface, and browser. There’s also a simple patch browser that has some sample patches loaded into it that could be expanded in a few different ways in the future. Aside from the visible changes the code was restructured quite a bit to enable sharing between the anode and triode editors. The apps are built using JQuery, because I know it and it also had a nice knob UI widget. If I were starting from scratch today, I’d probably build the editors using React (developed by Facebook), which improves upon the JQuery legacy without over-complicating things.

Why do this in a browser rather than another tool?

There’s the practical aspect of me being familiar with web technologies. Combining that with the fact that Chromium-based browsers implement Web MIDI, the browser was a natural target platform. I’m not sure where Web MIDI is going. It’s obviously a very niche piece of functionality, but I also think it’s super useful to be able to pull up a web page and start interacting with hardware gear without having to download a native app. The ease of access is pretty compelling, and the browser is a great way to reach lots of OSes with minimal effort.

You also built this terrific Web MIDI console. How are you using that – or these other tools – in your own work and music?

The Web MIDI console is a tool to inspect MIDI messages sent from devices. I updated it recently after being inspired by Geert Bevin’s sendMIDI command line utility. So now you can send messages to devices in addition to viewing them. I often use it to see what messages are actually coming from my devices. I’ve written a few controller scripts for Bitwig Studio and the MIDI console has come in handy for quickly seeing which messages pads, knobs, sliders, etc. send. There are, of course, native apps that do this sort of thing, but again, it’s nice to just open a web page and have a quick look at a MIDI data stream or send some messages.
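
Ed.: for readers who haven't tried it, sendMIDI is a small command-line tool; a hedged example of the kind of thing Ben describes (the device name is whatever your system reports, and the exact argument syntax is documented in the sendMIDI README):

sendmidi list                                      # list the available MIDI output ports
sendmidi dev "MeeBlip anode" cc 48 100 on 60 127   # send a CC and then a note-on to the named port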

What was your background; how did you learn these Web technologies?

I studied music in college and learned just enough web dev skills through some multimedia courses to get a job making web pages back around 2000. It was more enjoyable than the random day jobs/teaching guitar lessons/wedding band gigs I was doing so I decided to pursue it seriously. Despite starting out in web/UI development, I’ve spent more time working on back-end services. I was an engineering director at Netflix and worked there in the Bay Area for five years before moving back to the east coast last summer. I’ve been spending more time working on music software lately and hope to find opportunities to continue it.

Did you learn anything useful about these Web technologies? Where do you think they’ll go next? (and will we ever use a Chromebook for MIDI?)

Well, if you want the broadest compatibility across browsers you need to serve your Web MIDI app over HTTPS. For example, Opera doesn’t allow MIDI access over HTTP. 🙂 I’m not sure where it’s going, really. It’d be nice to see Web MIDI implemented in more browsers. People spend so much time in their browsers these days, so it seems reasonable for them to become more tightly integrated with the underlying OS. Though it’s a bit hard to find strong incentive for browser vendors to support MIDI. Nonetheless, I’m glad it’s available in Chrome and Opera.

I think Web MIDI apps work quite well as tools in support of other products. Novation’s browser apps for Circuit are really well done and move Web MIDI beyond novelty. I hope the MeeBlip editors do the same. I also like Soundtrap and think Web MIDI/Audio apps work well in educational contexts since browsers are by and large ubiquitously accessible.

Ed.: For more on this topic of SSL and MIDI access, Ben wrote a companion blog post while developing this editor:

Web MIDI Access, Sysex, and SSL

Why make these tools open source? Does it matter that the MeeBlip is open source hardware?

It absolutely matters that MeeBlip is open source. That’s the main reason I bought into it. I really like the idea of open and hackable products that let users participate in their further development. It’s especially cool to see companies that are able to build viable businesses on top of open products.

In the case of the editors, they’re (hopefully!) adding value to the product; there’s no competitive advantage in having a patch editor by itself. It makes sense to open source the tools and let people make and share their own mods. And maybe some of that work feeds back into the main code line to the benefit of the broader user base. I think open source hardware/software products tend to encourage more creative and vibrant user communities.

What other useful browser music stuff do you use? Any tips?

Hmm…the Keith McMillen blog has some good posts on using the Web MIDI API that I’ve referred to a number of times. And there’s a Google music lab site with some interesting projects. Although I don’t have a Circuit or reface synth, it’s nice to see Novation [see our previous story] and Yamaha (Soundmondo) with Web MIDI apps, and they look useful for their users. I’m curious to see what new things pop up!

Thanks, Ben! Yes, we’ll be watching this, too – developers, users, we’d love to hear from you! In the meantime, don’t miss Ben’s site. It’s full of cool stuff, from nerdy Web MIDI discussions to Bitwig and SuperCollider tools for users:

https://factotumo.com/

And see you on the editor!

https://meeblip.com/
https://editor.meeblip.com/

The post Your Web browser now makes your MeeBlip synth more powerful, free appeared first on CDM Create Digital Music.

by Peter Kirn at March 16, 2017 06:55 PM

March 11, 2017

MOD Devices Blog

MOD at NAMM 2017 - Recap

Greetings, fellow music freaks!

So you might have heard that we went to the NAMM show with MOD Devices. I spent the first few days around LA together with the Modfather, Gianfranco. Later on we met up with the rest of the team for a very busy yet very exciting week!

Early in the morning on the 16th of January we flew from Berlin to LAX. Upon arrival we discovered that our luggage had not been transferred to our connecting flight in Dusseldorf. Ouch: now we had to wait until Thursday (the evening of the first day of the show) before we would get the equipment we needed! We decided to take it easy, so we went and got our rental van and drove home, but not before eating an obnoxious amount of hot wings. We’re in America after all. That evening we simply did some grocery and essential supplies shopping.

When I woke up the next morning I went outside and my mood instantly changed: the beautiful California sky, the palm trees, quite the opposite of the cold Berlin I got used to.

Our lovely backyard in Cali

After a nice breakfast in the sun we went out to grab some extra items from the stores because of the luggage issues. Shout out to Davier from Guitar Center Orange County for helping us out with all our PA and cabling needs! That was all for the day.

The next day we continued our quest of making our booth as awesome as possible and started setting up. Later on, we met up with our ever-happy Adam, good vibrations and laughs all around! That evening we also got together with Derek and Dean (the most helpful NS/Stick player in the universe, who even uses his MOD Duo to charge his phone). To end the day, we had a great time and a lovely meal at a cantina in Fullerton.

Thursday: showtime! We got up early, and went straight to the convention center for the last bits of setup. Today was a very relaxed day, some interesting people stopped by, and the overall response seemed to be very positive about the MOD Duo. For me personally this was the first time meeting Alexandre in real life, since you might know that a lot of work happens on a remote basis inside MOD Devices. During the day there were small jams and improvisations done by our one and only Adam and Dean.

Lots of interesting jams with crazy instruments

Friday: day two of the show. Besides the load of meetings that Gianfranco and Alexandre had to attend, this was actually a pretty chill day. When Alexandre came back from a meeting, Dean told us that Jordan Rudess, one of Alexandre’s big inspirations, was doing a demo at a booth really close by. Of course he had to go check that out! Most of the day was spent wowing people with the awesome MOD Duo, and having some cool improvisations as the day passed by. Dean had invited us to join him at the Stick Night at Lopez & Lefty’s, a gathering of really interesting musicians playing instruments that baffle a simple-minded 6-string player like me. They were accompanied by some truly wonderful electronic percussion, and to top it off, they served a great margarita there!

Saturday: the busiest day of the show. They say that the Saturday always turns out interesting, and it did! We met loads of cool people, had a small jam with the Jamstick MIDI controller (there might be more on that in a later post!), ate a Viking hotdog and were visited by the legendary experimental guitarist Vernon Reid. The keyboard player for The Devin Townsend Project also stopped by our booth for a chat. At the end of a long day, we were pleasantly surprised when Stevie Wonder himself appeared in a booth nearby. The picture below shows me taking a picture of people taking a picture of people taking a picture of the legend.

Behind all these people taking pictures, there really was Stevie Wonder

When we got home from what seemed like the longest but best day yet, we decided that we needed to chill out a little bit. So we threw a small BBQ party in our backyard. Luckily our AirBnb had a big American-style grill!

Sunday: the last day of the show. It was raining like crazy and people were noticeably tired. Some people had even lost their voices completely. That did not hold us back from having the greatest jam session NAMM has ever experienced. Adam’s musical (evil-) genius joined forces with Sascha on the electric Harp and an amazing steampunk guy on the smartphone Ocarina. It was magnificent. If the footage survived you will be sure to see it later on. This day we also met up with Sarah Lipstate (Noveller) to introduce her to the MOD Duo. We’re looking forward to your creations Sarah! Later on in the day Gianfranco was interviewed by Sound on Sound. You can find footage of the interview here.

On Sundays the NAMM show shuts down a bit early; there is a crazy-quick teardown that happens in a matter of minutes from the moment it hits 17:00. We packed up, drove back to our apartment and decided to hit the Two Saucy Broads once again for some lovely pizza. Good night everybody.

On our last day we visited Hollywood’s Rockwalk at the Guitar Center on Sunset Boulevard. They have a couple of really awesome guitars lying around there! After returning our rental van all that was left to do was to go straight to the airport for our flight back to Berlin.

NAMM, you have been great, until we meet again!

  • Jesse @ MOD HQ

PS: Special thanks go to Dean Kobayashi for helping us out tremendously during and before the show!

March 11, 2017 04:20 AM

March 10, 2017

MOD Devices Blog

MOD Duo 1.3 Update Now Available

Greetings fellow MOD users!

Another software update has popped up, courtesy of our development team, who works tirelessly to bring all the features you have been asking for and then some!

So, the next time you open the MOD web interface you’ll receive an update notification, just click on the tooltip icon in the bottom-right when that happens, then ‘Download’ and finally ‘Upgrade Now’. Wait for a few minutes while the MOD updates itself automatically and enjoy your added features.

Here’s a description of the major improvements:

  • Pedalboard Presets

Such an important and awaited feature, pedalboard presets have been a subject on the MOD forum for months. The MOD Duo is a relative revolution in terms of rig portability but several users felt they needed to be able to quickly and seamlessly change multiple plugins at the same time on stage. This was referred to as creating “scenes” inside a pedalboard. Now you can store values of parameters inside your pedalboards (such as the plugins that are on/off, their levels and other configs) and switch them all at once without having to load a new pedalboard. You can address this list of presets to any controller or footswitch!

  • Click-less Bypass

Who likes noise when turning a plugin on and off? No one I’d wager. That’s why there’s a new feature in the LV2 plugin world called click-less bypass and we now support this designation on plugins that include it. This means you’ll be able to bypass plugins and avoid that little “click” noise. For now only “Parametric EQ” by x42 includes this feature, but it will soon get picked up more and more by developers.

Also, our True Bypass plugin, aptly called “Hardware Bypass”, is now available if you want to use it on your pedalboard and activate it via footswitch!

  • ‘MIDI Utility’ Category

So… How about that ‘Utility’ category on the pedalboard builder and cloud store? Pretty packed, right? Well, since it quickly got filled up with MIDI utilities, we decided to keep things nice and tidy and have added the new ‘MIDI Utility’ category. That’s what happens when you’ve got hundreds of plugins ;)

  • Generic USB Joysticks as MIDI Devices

Personally, I’m not really sure I understand why someone would want to use a joystick as a MIDI controller but hey! A MOD device is about creative freedom, right? And we’re also about not getting held back by proprietary technology. That’s why we couldn’t accept the fact that previously we could only use PS3 and PS4 joysticks over USB. Now, thanks to @Azza (and a little extra integration…), we can use any joystick recognized by the MOD as a MIDI device. Buttons will send MIDI notes and CCs starting at #90, while axes send MIDI CCs starting at #1. We’ll soon do a webinar on the subject of controllers for the stage, so this use case might spring up there and I will learn something!

There are also quite a few more changes and tweaks. Visit our wiki to see all the changes since v1.2.1.

The next update will focus on the control chain controllers that are coming to the Kickstarter backers and that will be available for the community to test very soon. For more information, stay tuned to our forum!

Enjoy your pedalboards and the beautiful sounds that they make, share them, have fun with your added controllability, and keep helping us build the future of musical effects!

  • Dwek @ MOD HQ

March 10, 2017 05:10 PM

March 05, 2017

autostatic.com

Moved to Fuga

Moving my VPS from VMware to Fuga was successful. First I copied the VMDK from the ESXi host to a Fuga instance with enough storage:

scp some.esxi.host:/vmfs/volumes/storage-node/autostatic1.autostatic.cyso.net/autostatic1.autostatic.cyso.net-flat.vmdk ./

And then converted it to QCOW2 with qemu-img:

qemu-img convert -O qcow2 autostatic1.autostatic.cyso.net-flat.vmdk autostatic1.autostatic.cyso.net.qcow2

Next step was mounting it with guestmount:

guestmount -a /var/www/html/images/autostatic1.autostatic.cyso.net.qcow2 -m /dev/sda8 /mnt/tmp/

And changing some settings, i.e. network and resolvconf. When that was done I unmounted the image:

guestunmount /mnt/tmp

And uploaded it to my Fuga tenant:

openstack image create --disk-format qcow2 --container-format bare --file /path/to/images/autostatic1.autostatic.cyso.net.qcow2 --private autostatic1.autostatic.cyso.net.qcow2

Last step was launching an OpenStack image from this image, I used Ansible for this:

- name: Launch OpenStack instance
  hosts: localhost
  connection: local
  gather_facts: no
  vars:
    os_flavor: c1.large
    os_network: int1
    os_image: 5b878fee-7071-4e9c-9d1b-f7b129ba0644
    os_hostname: autostatic1.autostatic.cyso.net
    os_portname: int-port200
    os_fixed_ip: 10.10.10.200
    os_floating_ip: 185.54.112.200

  tasks:
    - name: Create port
      os_port:
        network: "{{ os_network }}"
        fixed_ips:
          - ip_address: "{{ os_fixed_ip }}"
        name: "{{ os_portname }}"

    - name: Launch instance
      os_server:
        state: present
        name: "{{ os_hostname }}"
        timeout: 200
        flavor: "{{ os_flavor }}"
        nics:
          - port-name: "{{ os_portname }}"
        security_groups: "{{ os_hostname }}"
        floating_ips: "{{ os_floating_ip }}"
        image: "{{ os_image }}"
        meta:
          hostname: "{{ os_hostname }}"
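
Running the playbook then comes down to something like this (launch-instance.yml is just an illustrative file name, and the OpenStack credentials are assumed to have been sourced from an openrc file beforehand):

ansible-playbook launch-instance.yml
openstack server show autostatic1.autostatic.cyso.net     # verify the instance and its floating IP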

And a few minutes later I had a working VPS again. While converting and uploading I made the necessary DNS changes and by the time my VPS was running happily on Fuga all DNS entries pointed to the new IP address.

The post Moved to Fuga appeared first on autostatic.com.

by jeremy at March 05, 2017 03:53 PM

March 02, 2017

Libre Music Production - Articles, Tutorials and News

Open Stage Control, v0.17.0 is released, now with MIDI support!

Open Stage Control has just seen the release of v0.17.0. Open Stage Control is a libre desktop OSC bi-directional control surface application built with HTML, JavaScript & CSS and run as a Node / Electron web server that accepts any number of Chrome / Chromium / Electron clients.

by Conor at March 02, 2017 07:47 AM

March 01, 2017

Scores of Beauty

LilyPond at the Google Summer of Code 2017

LilyPond has mentored student projects several times in the “Google Summer of Code” program in previous years, and this year we intend to take that to a new level: both the LilyPond and Frescobaldi projects will be able to accept students. (Frescobaldi is one of the two major LilyPond editing environments, the other being Denemo.) Students can now consider suitable projects to apply for; applications are open from March 20 to April 3, 2017.

Google Summer of Code

Google Summer of Code (GSoC) is a grant program funded by Google to support and drive forward (Free) Open Source Software development. Of course it is not a charity and serves an economic purpose for Google in the long run, but as a project itself there is no substantial catch to it, and many respected FLOSS projects such as Mozilla, LibreOffice and The GNU Project are happy to participate regularly.

From the program website:

“Google Summer of Code is a global program focused on bringing more student developers into open source software development. Students work with an open source organization on a 3 month programming project during their break from school.”

The idea is to create additional value by funding students (who have to be enrolled as full-time students in an official university) to work with an Open Source project. It is obviously a nice thing for a student to be able to earn money by programming instead of doing arbitrary temporary work over the summer break. And it is also a nice thing for Open Source projects to have someone get paid to do work that might otherwise not get done. But there’s more to it: the student not only gets paid for some work but also gets familiar with Open Source development in general and may (hopefully) become a respected member of the community he/she is working with. And the project doesn’t only get some “things done” but hopefully gains a new contributor beyond GSoC.

LilyPond @ GSoC 2017

GNU LilyPond has been part of GSoC under the umbrella of The GNU Project, and we accepted students in 2012, 2015 and 2016. For 2017 GNU has been accepted again, so LilyPond is open for students’ applications as well. But in addition the Frescobaldi LilyPond editor has also applied as a mentoring organization for 2017 – and has been accepted too! So this year there is even more room and a wider range of options for students to join GSoC and get involved with improving LilyPond and its ecosystem.

If you think this is an interesting thing but don’t want to apply for it yourself please do us a favor and still share this information (e.g. the link to this post) as widely as possible. If you are not completely sure if you are eligible for participating you may start reading the program’s FAQ page. Otherwise you may directly start looking at the Project Ideas pages where we have put together a number of project suggestions with varying difficulties and required skills (programming languages) that we consider both very important for our development and suitable for a student’s summer project. Please note that students may as well come up with their own project suggestions (which would be welcome because it implies a deep personal interest in the idea). The following bullet list gives an overview of the range of possibilities while the project pages give much more detail.

LilyPond GSoC page

  • Internal representation of chord structure
  • Adopt the SMuFL font encoding standard
  • Adding variants of font glyphs
  • Create (the foundation for) a Contemporary Notation library
  • (Re-)create a LilyPond extension for LibreOffice
  • Testing/documentation infrastructure for openLilyLib
  • Work on MusicXML import/export

Frescobaldi GSoC page

  • Enhancing the Manuscript Viewer
  • Improve MIDI/sound support
  • Add support for version control (Git)
  • Implement system-by-system handling of scores (partial compilation)
  • Add a user interface for openLilyLib packages
  • Improving the internal document tree representation
  • Improve Frescobaldi’s MusicXML export (possibly in conjunction with the LilyPond project)

See you on the mailing lists!

by Urs Liska at March 01, 2017 12:01 AM

February 28, 2017

Linux – CDM Create Digital Music

Someone at Bitwig is working with Ableton Link on GitHub

One postlude to the Bitwig announcement – yes, someone at Bitwig has forked the Ableton Link repository. Have a look:

[screenshot of Bitwig’s fork of the Ableton Link repository]

Thanks to one sharp-eyed Twitter reader for catching this one!

https://github.com/bitwig/link

The reason is interesting – ALSA clock support on Linux, which would make working with Link on that OS more practical.

Now, Ableton has no obligation to support Bitwig as far as integrating Link into the shipping version of Bitwig Studio. Proprietary applications not wanting to release their own code as GPLv2 need a separate license. On the other hand, this Linux note suggests why it could be useful – Bitwig are one of the few end user-friendly developers working on desktop Linux software. (The makers of Renoise and Ardour / Harrison MixBus are a couple of the others; Renoise would be welcome.) But we’ll see if this actually happens.

In the meantime, Bitwig are contributing back support for Linux to the project:

[screenshot of the contribution being merged upstream]

The post Someone at Bitwig is working with Ableton Link on GitHub appeared first on CDM Create Digital Music.

by Peter Kirn at February 28, 2017 10:28 PM

Bitwig Studio 2 is here, and it’s full of modulators and gadgets

Go go gadget DAW. That’s the feeling of Bitwig Studio 2, which is packed with new devices, a new approach to modulation, and hardware integration.

Just a few of these on their own might not really be news, but Bitwig has a lot of them. Put them together, and you’ve got a whole lot of potential machinery to inspire your next musical idea, in the box, with hardware, or with some combination.

And much as I love playing live and improvising with my hands, it’s also nice to have some clever machinery that gets you out of your usual habits – the harmonies that tend to fall under your fingers, the lame rhythms (okay, that’s me I’m talking about now) that you’re able to play on pads.

Bitwig 2 is full of machinery. It’s not the complete modular environment we might still be dreaming of, but it’s a box full of very powerful, simple toys which can be combined into much more complex stuff, especially once you add hardware to it.

A few features have made it into the final Bitwig Studio 2 that weren’t made public when it first was announced a few weeks ago.

That includes some new devices (Dual Pan!), MIDI Song Select (useful for triggering patterns and songs on external hardware like drum machines), and controller API additions.

The controller API is a dream if you’ve come from (cough) a particular rival tool. Now you can code in JavaScript, but with interactive feedback, and performance – already quite nice – has been improved.

I’m just going to paste the whole list of what’s new, because this particular update is best understood as a “whole big bag of new things”:

NEW FEATURES AND UPDATES

A re-conceptualized Modulation System
Numerous device updates, including dynamic displays and spectrum analyzers
Remote controls
Fades and crossfades
VST3 support
Better hardware integration
Smart tool switching
Improved editor workflow
MIDI timecode support
New menu system
Dashboard
Notification system
Adjustable track height in arranger
Controller API improvements
…and much more

25 ALL NEW MODULATORS

4-Stage
ADSR
AHDSR
Audio Sidechain
Beat LFO
Button
Buttons
Classic LFO
Envelope Follower
Expressions
HW CV In
Keytrack
LFO
Macro-4
Macro
Math
MIDI
Mix
Note Sidechain
Random
Select-4
Steps
Vector-4
Vector-8
XY

17 ENTIRELY NEW DEVICES
Audio FX

Spectrum analyzer
Pitch shifter
Treemonster
Phaser
Dual Pan

Hardware Integration Devices

MIDI CC
MIDI Program Change
MIDI Song Select
HW Clock Out
HW CV Instrument
HW CV Out

Note Effects

Multi-Note
Note Echo
Note Harmonizer
Note Latch
Note Length
Note Velocity

At some point, we imagined what we might get from Bitwig – beneath that Ableton-style arrangement and clip view and devices – was a bare-bones circuit-building modular, something with which you could build anything from scratch. And sure enough, Bitwig were clear that every function we saw in the software was created behind the scenes in just such an environment.

But Bitwig haven’t yet opened up those tools to the general public, even as they use them in their own development workflow. Still, the new set of modulation tools added to version 2 shouldn’t be dismissed – indeed, it could appeal to a wider audience.

Instead of a breadboard and wires and soldering iron, in other words, imagine Bitwig have given us a box of LEGO. These are higher-level, friendlier, simple building blocks that can nonetheless be combined into an array of shapes.

To see what that might look like, we can look at what people in the Bitwig community are doing with it. Take producer Polarity, who’s building a free set of presets. That free download already sounds interesting, but maybe just as interesting is the way in which he’s going about it. Via Facebook:

The modulation approach, I think, is best compared to Propellerhead Reason – even though Reason has its own UI paradigm (with virtual patch cords) and a very distinct set of devices. But while I wouldn’t directly compare Reason and Bitwig Studio, I think what each can offer is the ability to create deeply customized performance and production environments with simple tools – Reason’s behaving a bit more like hardware, and Bitwig’s being firmly rooted in software.

There’s also a lot of stuff in Bitwig Studio in the way of modernization that’s sorely missing from other DAWs, and notably Ableton Live. These have accumulated in a series of releases – minor on their own, but starting to paint a picture of some of what other tools should have. Just a few I’d like to see elsewhere:

  • Plug-in sandboxing for standard formats that doesn’t bring down the whole DAW.
  • Extensive touch support (relevant to a lot of new Windows hardware)
  • Support for expressive MIDI control and high-resolution, expressive automation, including devices like the ROLI hardware and Linnstrument (MPE).
  • An open controller API – one that anyone can use, and that allows hardware control to be extended easily.
  • The ability to open multiple files at once (yeah, kind of silly we have to even say that – and it’s not just Ableton with this limitation).
  • All that, and you can install Bitwig on Linux, too, as well as take advantage of what are now some pretty great Windows tablets and devices like the Surface line.

    There’s also the sense that Bitwig’s engineering is in order, whereas more ‘legacy’ tools suffer from unpredictable stability or long load times. That stuff is just happiness killing when you’re making music, and it matters.

    So, in that regard, I hope Bitwig Studio 2 gets the attention of some of its rivals.

    But at the same time, Bitwig is taking on a character on its own. And that’s important, too, because one tool is never going to work for everyone.

    Find out more:
    https://www.bitwig.com/en/bitwig-studio/bitwig-studio-2

    The post Bitwig Studio 2 is here, and it’s full of modulators and gadgets appeared first on CDM Create Digital Music.

    by Peter Kirn at February 28, 2017 07:48 PM

    ardour

    Ardour 5.8 released

    Although Ardour 5.6 contained some really great new features and important fixes, it turned out to contain a number of important regressions compared to 5.5. Some were easily noticed and some were more obscure. Nobody is happy when this happens, and we apologize for any frustration or issues that arose from inadequate testing of 5.6.

    To address these problems, we are making a quick "hotfix" release of Ardour 5.8, which also brings the usual collection of small features and other bug fixes.

    Linux distributions are asked to immediately and promptly replace 5.6 with 5.8 to reduce issues for Ardour users who get the program directly via their software management tools.

    Download  

    Read more below for full details ...


    by paul at February 28, 2017 12:33 PM

    February 27, 2017

    open-source – CDM Create Digital Music

    Now you can sync up live visuals with Ableton Link

    Ableton Link has already proven itself as a way of syncing up Ableton Live, mobile apps (iOS), and various desktop apps (Reason, Traktor, Maschine, and more), in various combinations. Now, we’re seeing support for live visuals and VJing, too. Three major Mac apps have added native Ableton Link support for jamming in the last couple of weeks: CoGe, VDMX, and a new app called Mixvibes. Each of those is somewhat modular in fashion, too.

    Oh, and since the whole point of Ableton Link is adding synchronization over wireless networks or wired networking connections with any number of people jamming, you might use these apps together.

    CoGe

    Here’s a look at CoGe’s Ableton Link support, which shows both how easy configuration is, and how this can be used musically. In this case, the video clip is stretching to the bar — making CoGe’s video clips roughly analogous to Ableton Live’s audio clips and patterns:

    CoGe is 126.48€, covering two computers – so you could sync up two instances of CoGe to separate projectors, for instance, using Link. (And as per usual, you might not necessarily even use Ableton Live at all – it might be multiple visual machines, or Reason, or an app, or whatever.)

    http://imimot.com/cogevj/

    VDMX

    VDMX is perhaps an even bigger deal, just in terms of its significant market share in the VJ world, at least in my experience. This means this whole thing is about to hit prime time in visuals the way it has in music.

    VDMX has loads of stuff that is relevant to clock, including LFOs and sequencers. See this screen shot for some of that:

    [screenshot of VDMX’s clock-related modules]

    Here are the developer’s thoughts from late last week:

    VDMX and Ableton Link integration [Vidvox Blog]

    Also, they reflect on the value of open source in this project (the desktop SDK is available on GitHub). They’ve got a complete statement on how open source contributions have helped them make better software:

    Open Source At VIDVOX

    That could easily be a subject of a separate story on CDM, but open source in visuals has helped make live performance-ready video (Vidvox’s own open Hap), made inter-app visuals a reality (Syphon), and built a shader format that allows high-performance GPU code to be shared between software.

    Mixvibes

    I actually forgot to include this one – I’m working on a separate article on it. Mixvibes is a new app for mixing video and audio samples in sync. It was just introduced for Mac this month, and with sync in mind, included Ableton Link support right out of the gate. (That actually means it beat the other two apps here to market with Link support for visuals.) It runs in VST and AU – where host clock means Link isn’t strictly necessary – but also runs in a standalone mode with Link support.

    This is well worth a look, in that it stakes out a unique place in the market, which I’ll do as a separate test.

    http://www.mixvibes.com/remixvideo

    Now go jam

    So that’s three great Mac tools. There’s nothing I can share publicly yet, but I’ve heard other visual software developers tell me they plan to implement Ableton Link, too. That adds to the tool’s momentum as a de facto standard.

    Now, getting together visuals and music is easier, as is having jam sessions with multiple visual artists. You can easily tightly clock video clips or generative visuals in these tools to song position in supported music software, too.

    I remember attending various music and visual jams in New York years ago; those could easily have benefited from this. It’ll be interesting to see what people do.

    Watch CDM for the latest news on other visual software; I expect we’ll have more to share fairly soon.

    The post Now you can sync up live visuals with Ableton Link appeared first on CDM Create Digital Music.

    by Peter Kirn at February 27, 2017 07:43 PM

    OSM podcast

    GStreamer News

    GStreamer 1.11.2 unstable release (binaries)

    Pre-built binary images of the 1.11.2 unstable release of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

    The builds are available for download from: Android, iOS, Mac OS X and Windows.

    February 27, 2017 08:00 AM

    February 24, 2017

    GStreamer News

    GStreamer 1.11.2 unstable release

    The GStreamer team is pleased to announce the second release of the unstable 1.11 release series. The 1.11 release series is adding new features on top of the 1.0, 1.2, 1.4, 1.6, 1.8 and 1.10 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework. The unstable 1.11 release series will lead to the stable 1.12 release series in the next weeks. Any newly added API can still change until that point.

    Full release notes will be provided at some point during the 1.11 release cycle, highlighting all the new features, bugfixes, performance optimizations and other important changes.

    Binaries for Android, iOS, Mac OS X and Windows will be provided in the next days.

    Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

    February 24, 2017 02:00 PM

    GStreamer 1.10.4 stable release (binaries)

    Pre-built binary images of the 1.10.4 stable release of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

    See /releases/1.10/ for the full list of changes.

    The builds are available for download from: Android, iOS, Mac OS X and Windows.

    February 24, 2017 10:00 AM

    February 23, 2017

    autostatic.com

    Moving to OpenStack

    In the coming days I’m going to move the VPS on which this blog resides from VMware to the Fuga OpenStack cloud. Not because I have to but hey, if I can host my stuff on a fully open-source-based cloud instead of a proprietary solution, the decision is simple. And Fuga has been around for a while now; it’s rock solid, and as I have a lot of freedom within my OpenStack tenant I can do with my VPS whatever I want when it comes to resources.

    Moving the VM will cause some downtime. I’ve opted to shut down the VM and copy it from the ESXi host on which it lives to a server with enough storage, plus the libguestfs-tools package so that I can do some customization and the python-openstackclient package so that I can easily upload the customized image to OpenStack. Then I need to deploy an OpenStack instance from that uploaded image, switch the DNS, and my server should be back online.

    The post Moving to OpenStack appeared first on autostatic.com.

    by jeremy at February 23, 2017 03:24 PM

    GStreamer News

    GStreamer 1.10.4 stable release

    The GStreamer team is pleased to announce the third bugfix release in the stable 1.10 release series of your favourite cross-platform multimedia framework!

    This release only contains bugfixes and it should be safe to update from 1.10.0. For a full list of bugfixes see Bugzilla.

    See /releases/1.10/ for the full release notes.

    Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

    Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.
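
    For anyone building from source rather than waiting for distribution packages, fetching and compiling the core tarball looks roughly like this (the same pattern applies to the other modules listed above; the 1.10 series still ships an autotools build):

    wget https://gstreamer.freedesktop.org/src/gstreamer/gstreamer-1.10.4.tar.xz
    tar xf gstreamer-1.10.4.tar.xz
    cd gstreamer-1.10.4
    ./configure && make && sudo make install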

    February 23, 2017 03:00 PM

    February 21, 2017

    open-source – CDM Create Digital Music

    Now you can sync Ableton Link to your Eurorack with this open gizmo

    Ableton Link has become the de facto, configuration-free, seamless sync and jamming protocol for software – with or without Ableton Live itself. (Even VJ app CoGe just joined the party.) Now, it’s time for hardware to get in on the fun.

    Vincenzo Pacella has been in touch for a while as he hacks away at a solution to connect Ableton Link to analog hardware and Eurorack. Now, it’s ready for prime time, as an inexpensive, easy-to-build, open source project based on Raspberry Pi.

    Jamming with Ableton Link is as easy as this:

    And then, all your analog gear can groove along, like so:

    What Vincenzo has done is to produce a custom shield for the crazy-tiny Raspberry Pi. Pop his custom board on top, add his software/scripts, and you’ve got plug-and-play Ableton Link support for all your hardware. That connects both clock and reset signals to your Eurorack (or other compatible) analog gear, so they can jam along with Ableton Live, Reason, Maschine, Reaktor, Max, Pd, iOS apps, and everything else that’s been adding Link support.

    There’s even a cute display and controls.

    It works with WiFi wireless networks. It works with Ethernet (via adapter). It even works without anything connected at all – then it’s just a clever little clock gadget.


    I imagine this could also be a great starter project for learning a bit about the state of what’s possible with Raspberry Pi (I found some of those links useful).

    You could also adapt this to MIDI – I might have to try that. Vincenzo notes that the Raspberry Pi Zero features a “UART (pin #8 and #10) which could be used for MIDI I/O.” Handy. (I would also have been inclined to go the Teensy route, but this may have changed my mind. Anyone interested in exploring, do get in touch – shout out via Twitter!)

    Thanks, Vincenzo!

    Let’s see those schematics:

    [images: the pink-0 shield mounted on a Raspberry Pi Zero, and the full schematic]

    Blog post:

    pink-0, an Ableton Link to clock/reset hardware converter [shaduzlabs]

    And check out the project on GitHub:

    https://github.com/shaduzlabs/pink-0

    The post Now you can sync Ableton Link to your Eurorack with this open gizmo appeared first on CDM Create Digital Music.

    by Peter Kirn at February 21, 2017 07:19 PM

    February 18, 2017

    Libre Music Production - Articles, Tutorials and News

    Qtractor 0.8.1 released

    Qtractor, the veteran Audio/MIDI multi-track sequencer, is getting dangerously close to the 1.0 roadmark. Release highlights:

    by yassinphilip at February 18, 2017 09:31 PM

    February 17, 2017

    rncbc.org

    Qtractor 0.8.1 - The Sticky Tauon is out!

    Hello everybody!

    Qtractor 0.8.1 (sticky tauon) is out!

    Release highlights:

    • JACK Transport mode switching on main menu/tool-bar (NEW)
    • Main menu Track/Navigate/Next, Previous wrap around (FIX)
    • Auto-backward play-head position flip-flopping (FIX)
    • JACK Transport tempo/time-signature in-flight changes (FIX)
    • Sanitized audio clip zoom-in/out resolution (FIX)

    Obviously, this is one dot bug-fix release and everyone is then compelled to upgrade. On the side, a couple of notes are also worthy of mention...

    Besides some other stray thoughts, you may be asking yourself, after reading those crappy release highlights above, what the heck that "in-flight" tempo / time-signature change-fix is all about?

    No stress. There's always a reason, as if reason won't ever prevail, above all else...

    So, the whole truth and nothing but the truth, should here be told: jack_link is the dang reason. And then, you may now know and play the badass with Ableton Link. Keep in mind that jack_link is kind of a toy, so please, have it under a child's perspective ;)

    You can still play all along with your band fellows, don't get me wrong. You all have to be on the same machine or in the same local network segment (LAN) anyway, just like qmidinet does (and recommends). But that's probably one hell of a disparate story, although sharing the same networking concept... move along!

    Whatever. When in doubt, please ask me. Whenever you find yourself in despair, you can also ask me. But take note that I made no promises nor guarantees that it would ever work for you. And this goes as far as in any formal disclaimer can go.

    The hard truth is: you are on your own. But please, enjoy and have (lots of) fun while you're at it ;)

    As second note, this project has finally called in for its own vanity and internet domain name: qtractor.org. I guess it was about time.

    Nuff said.

    Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.

    Website:

    http://qtractor.sourceforge.net
    http://qtractor.org

    Project page:

    http://sourceforge.net/projects/qtractor

    Downloads:

    http://sourceforge.net/projects/qtractor/files

    Git repos:

    http://git.code.sf.net/p/qtractor/code
    https://github.com/rncbc/qtractor.git
    https://gitlab.com/rncbc/qtractor.git
    https://bitbucket.org/rncbc/qtractor.git

    Wiki (help still wanted!):

    http://sourceforge.net/p/qtractor/wiki/

    License:

    Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.
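
    For those grabbing the source tarball from the downloads page above, the build follows the usual autotools routine; a minimal sketch (the Qt5, JACK and ALSA development packages, among others, are prerequisites, and their exact names vary per distribution):

    tar xf qtractor-0.8.1.tar.gz
    cd qtractor-0.8.1
    ./configure && make
    sudo make install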

    Change-log:

    • The View/Options.../Display/Dialogs/Use native dialogs option is now set initially off by default.
    • All tempo and time-signature labels are now displayed with one decimal digit, as was already the case almost everywhere else but the time ruler/scale headers.
    • JACK transport tempo and time-signature changes are now accepted even when playback is not currently rolling; also, changing the (JACK) Timebase master setting (cf. View/Options.../General/Transport/Timebase) now takes effect immediately, with no session restart required (or warned about).
    • Track/Navigate/Next and Previous menu commands, finally fixed to wrap around the current track list.
    • Current session (JACK) transport mode switching is now accessible from the main menu and drop-down toolbar buttons, as well as via user-configurable PC-keyboard and/or MIDI controller shortcuts (cf. Transport/Mode/None, Slave, Master, Full).
    • Fixed (hopefully) some auto-backward play-head position flip-flopping when opening a new session while the previous one was still rolling/playing.
    • Added French man page (by Olivier Humbert, thanks).
    • MIDI clip changes are now saved unconditionally whenever the editor (piano-roll) is closed or not currently visible.
    • Audio clip peak/waveform files re-generation performance, scalability and resilience have been slightly improved.
    • Some sanity checks have been added to the audio clip peak/waveform re-generation routine, so as to avoid empty, blank, zero or negative-width faulty renderings.
    • Do not reset the Files tree-view widgets anymore, when leaving any drag-and-drop operation (annoyingly, all groups and sub-groups were being closed without appeal).
    • Make builds reproducible byte for byte, by getting rid of the configure build date and time stamps.


    Enjoy && Keep the fun, as always.

    by rncbc at February 17, 2017 08:00 PM

    open-source – CDM Create Digital Music

    Here’s a cool handheld drum machine you can build with Arduino

    “I’m the operator with my pocket calculator…” — and now you’re the engineer/builder, too.

    This excellent, copiously documented project by Hamood Nizwan / Gabriel Valencia packs a capable drum machine into a handheld, calculator-like format, complete with LCD display and pad triggers. Assembly above and — here’s the result:

    It’s simple stuff, but really cool. You can load samples onto an SD card, and then trigger them with touch sensors, with visible feedback on the display.

    All of that is possible thanks to the Arduino MEGA doing the heavy lifting.


    The mission:

    The idea is to build a Drum Machine using Arduino that would simulate drum sounds in the 9 keys available in the panel. The Drum machine will also have a display where the user can see the sample name that is being played each time, and a set of menu buttons to go through the list of samples available.

    The Drum Machine will also use an SD Card Reader to make it possible for the user to store the audio samples, and have more playfulness with the equipment.

    https://physicalfinalproject.tumblr.com/

    Check out the whole project – and perhaps build (or modify) it yourself!

    Got a drum machine DIY of your own? Let us know!

    The post Here’s a cool handheld drum machine you can build with Arduino appeared first on CDM Create Digital Music.

    by Peter Kirn at February 17, 2017 03:18 PM

    February 14, 2017

    Scores of Beauty

    Dead notes in tablature now work with any font

    The current stable version of LilyPond (2.18.2) has a pretty annoying limitation for tablature users. If you change the TabStaff font to something different from the default (Feta) and your score contains dead notes, you won’t see any symbol on the TabStaff, because the font you chose probably does not have a cross glyph. So, at least in these scores, you are forced to use Feta (a serif font) also for tablature. This implies that you may not be able to write a tablature book in sans serif font or you’ll have to sacrifice consistency. This was the case for a project of mine, where all the pieces without dead notes used a sans serif font, but I had to use serif in those pieces where dead notes were present. Fortunately this has been fixed in development version 2.19.55, released this week. Now my book project will have consistent font settings! Let’s see a simple example.

    Dead notes are represented by an X glyph printed either on a Staff or a TabStaff. The predefined commands are \xNote, \xNotesOn, \xNotesOff (and their synonyms \deadNote, \deadNotesOn, \deadNotesOff). If we change the tablature font to another font (e.g. Nimbus Sans) and compile the following snippet in version 2.18.2:

    \version "2.18.2"
    %\version "2.19.55"
    
    myMusic = \relative {
      \override TabNoteHead.font-name = #"Nimbus Sans L Bold"
      \xNote { c4 e f a }
      <c g e c>1
    }
    
    \score {
      \new StaffGroup <<
        \new Staff { \clef "treble_8" \myMusic }
        \new TabStaff { \clef "moderntab" \new TabVoice \myMusic }
      >>
      \layout {
        indent = 0
        \context {
          \Staff
          \omit StringNumber
        }
      }
    }
    
    

    we get the following warning for each note within the \xNote block:

    warning: none of note heads `noteheads.s' or `noteheads.u' found

    And the output will be the following (note the empty first measure in the TabStaff):


    Version 2.18: no dead notes on tablature staves (if a custom font for tablature is set)

    If you compile the same snippet with version 2.19.55 or any later version, you’ll see the cross glyphs in the first measure of tablature:


    Version 2.19.55 or later: the X glyphs of dead notes are correctly printed on tablature staves

    For those interested in how this was technically achieved – as explained by Harm on issue 4931 – it is done by temporarily setting font-name to '(), causing the default font (usually Feta) to take over, and then reverting this later.
    This is an important bugfix for all tablature users who want to use a custom font for tablature numbers and care about graphical consistency in their projects. Kudos to Harm!

    by Federico Bruni at February 14, 2017 11:14 AM

    fundamental code

    Profiling MRuby Code

    Profiling MRuby Code

    I loathe inefficient, slow, or bloated software. Unfortunately, I’ve written plenty of code that’s terrible in this sense and if you program frequently, then I’d wager you have written quite a few inefficient lines of code as well. Fortunately code isn’t written in stone after the first pass and we as programmers can optimize it. Most of the time the speed of a single line of code doesn’t matter or contribute to the overall inefficiency in an application. Sometimes, though, a change to a few lines can make all the difference.

    Profilers

    Finding those lines of code, however, is a challenging task unless you have some means of profiling the program to find out which sections of code run the most frequently and which lines use up the most time. Each programming language tends to have its own tools; if you’re tackling a mixed ruby/C codebase you might be familiar with RubyProf, callgrind, gprof, or the poor man’s profiler.

    Based upon the title of this article I’m guessing that you may be interested in profiling the embeddable ruby implementation 'mruby'. While I was developing my own application using mruby, I tended to profile ruby code with timers:

    before = Time.new()
    run_routine()
    after  = Time.new()
    puts("run_routine took #{1000*(after-before)} ms")

    This provided limited information, but it helped direct the development to a few hotspots, and additional profiling of the C portion of the codebase was then done with callgrind. Callgrind for me is a holy grail of profiling CPU-bound programs. Through kcachegrind it provides an easy-to-explore callgraph, relative and absolute timing information, source-level information, and information at the assembly level. Callgrind combined with kcachegrind makes it easy to understand where the hotspots of a program are, which call graph leads to them, and which compiled assembly creates them. If you’re trying to optimize a C/C++ codebase, just use callgrind. It slows down the overall execution by at least an order of magnitude, but the information provided is superb.

    MRuby-Profiler

    Back to mruby: the initial timer-based profiling doesn’t have many of the advantages of callgrind or other profiling tools. Adding timers was a manual process, the information it provided was very coarse, and it covered only a limited slice of the whole program. As such, a full mruby VM profiler should be preferred. The mruby-profiler gem is one such tool.

    MRuby-profiler works by observing the progression of the mruby virtual machine via a callback which is invoked every time the VM executes a new ruby instruction. This allows the profiler to keep an exact count of how many times each instruction is invoked, a reasonably accurate time for the execution of each instruction, information about the call graph, and often detailed line-by-line source-level times.

    Example

    Now that the mruby-profiler gem has been introduced, let’s take a look at a simple example run. Below you’ll see a simple ruby program, followed by mruby-profiler’s source-annotated output, and then its VM-instruction-only output with no source interleaved.

    def test_code
        var1 = []
        100.times do |x|
            var1 << x
        end
    
        var1.each_with_index do |x, ind|
            var1[ind] = 0.1 + 2.0*x + 4.3*x**2.0
        end
    
        var1
    end
    
    test_code
    0000 0.00011 def test_code
            1 0.00002    OP_TCLASS     R1
            1 0.00003    OP_LAMBDA     R2      I(+1)   1
            1 0.00003    OP_METHOD     R1      :test_code
            1 0.00003    OP_ENTER      0:0:0:0:0:0:0
    0001 0.00004     var1 = []
            1 0.00004    OP_ARRAY      R2      R3      0
    0002 0.00186     100.times do |x|
            1 0.00002    OP_LOADI      R3      100
            1 0.00005    OP_LAMBDA     R4      I(+1)   2
            1 0.00007    OP_SENDB      R3      :times  0
          100 0.00172    OP_ENTER      1:0:0:0:0:0:0
    0003 0.00957         var1 << x
          100 0.00142    OP_GETUPVAR   R3      2       0
          100 0.00128    OP_MOVE       R4      R1
          100 0.00479    OP_SEND       R3      :<<     1
          100 0.00208    OP_RETURN     R3      return
    0004 0.00000     end
    0005 0.00000
    0006 0.00186     var1.each_with_index do |x, ind|
            1 0.00002    OP_MOVE       R3      R2
            1 0.00004    OP_LAMBDA     R4      I(+2)   2
            1 0.00006    OP_SENDB      R3      :each_with_index 0
          100 0.00175    OP_ENTER      2:0:0:0:0:0:0
    0007 0.03609         var1[ind] = 0.1 + 2.0*x + 4.3*x**2.0
          100 0.00142    OP_LOADL      R4      L(0)    ; 0.1
          100 0.00139    OP_LOADL      R5      L(1)    ; 2
          100 0.00142    OP_MOVE       R6      R1
          100 0.00167    OP_MUL        R5      :*      1
          100 0.00145    OP_ADD        R4      :+      1
          100 0.00148    OP_LOADL      R5      L(2)    ; 4.3
          100 0.00141    OP_MOVE       R6      R1
          100 0.00143    OP_LOADL      R7      L(1)    ; 2
          100 0.00850    OP_SEND       R6      :**     1
          100 0.00157    OP_MUL        R5      :*      1
          100 0.00152    OP_ADD        R4      :+      1
          100 0.00158    OP_GETUPVAR   R5      2       0
          100 0.00128    OP_MOVE       R6      R2
          100 0.00141    OP_MOVE       R7      R4
          100 0.00647    OP_SEND       R5      :[]=    2
          100 0.00209    OP_RETURN     R4      return
    0008 0.00000     end
    0009 0.00000
    0010 0.00004     var1
            1 0.00004    OP_RETURN     R2      return
    0011 0.00000 end
    0012 0.00000
    0013 0.00225 test_code
            1 0.00001    OP_LOADSELF   R1
            1 0.00005    OP_SEND       R1      :test_code      0
            1 0.00218    OP_STOP
    Fixnum#times 0.01822
         1 0.00002    OP_ENTER      0:0:0:0:0:0:1
         1 0.00001    OP_LOADSELF   R3
         1 0.00011    OP_SEND       R3      :block_given?   0
         1 0.00001    OP_JMPNOT     R3      002
         1 0.00001    OP_JMP                005
         0 0.00000    OP_LOADSELF   R3
         0 0.00000    OP_LOADSYM    R4      :times
         0 0.00000    OP_SEND       R3      :to_enum        1
         0 0.00000    OP_RETURN     R3      return
         1 0.00001    OP_LOADI      R2      0
         1 0.00001    OP_JMP                007
       100 0.00128    OP_MOVE       R3      R1
       100 0.00142    OP_MOVE       R4      R2
       100 0.00546    OP_SEND       R3      :call   1
       100 0.00128    OP_MOVE       R3      R2
       100 0.00164    OP_ADDI       R3      :+      1
       100 0.00128    OP_MOVE       R2      R3
       101 0.00144    OP_MOVE       R3      R2
       101 0.00138    OP_LOADSELF   R4
       101 0.00143    OP_LT R3      :<      1
       101 0.00138    OP_JMPIF      R3      -09
         1 0.00001    OP_LOADSELF   R3
         1 0.00002    OP_RETURN     R3      return

    From these outputs, we can see that the most expensive line in the program is line 7 which takes roughly 36 ms to execute over the course of this entire program. The '**' operator takes a large portion of that time (8.5 ms) and is executed 100 times as expected. The first '100.times' loop takes 1.8 ms + 9.5 ms + 18.2 ms (from the overhead of Fixnum#times itself). Within the output, line numbers, source code, VM instruction call counts, and VM instruction self times can be seen fairly clearly.

    Complications In Profiling

    The mruby-profiler gem makes profiling much easier; however, there are a few limits to its current implementation.

    Entering and Exiting The MRuby VM

    The timers within mruby-profiler are relatively simple, and in most code they tend to work very well. They do fail, however, for programs that do a fair amount of work within C and call ruby methods from that C code. To elaborate on that, let’s first look at how mruby-profiler calculates how much time is spent at a given mruby opcode.

    The mruby-profiler gem uses the code fetch hook within the mruby VM. Every time an opcode is executed by the mruby VM the code fetch hook is called. MRuby-profiler records the time when an instruction is fetched. When the next fetch occurs the difference in time is assumed to be the amount of time spent in the previous instruction.
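
    As a rough mental model of that bookkeeping, here is what it amounts to, written as plain ruby purely for illustration (the gem itself does this in C against the VM’s fetch hook):

    class FetchTimer
      def initialize
        @last_op   = nil
        @last_time = nil
        @self_time = Hash.new(0.0)   # opcode => accumulated seconds
      end

      # called once per fetched opcode
      def on_fetch(op)
        now = Time.new()
        # whatever elapsed since the previous fetch is billed to the
        # previously fetched instruction
        @self_time[@last_op] += now - @last_time if @last_op
        @last_op, @last_time = op, now
      end

      attr_reader :self_time
    end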

    Generally this model works well. For normal ruby code it’s completely accurate, and for ruby calling simple C routines the appropriate amount of time is attributed to the OP_SEND instruction which led to the C call, but it fails with a mixture of C/ruby calls. Consider the below sequence of events:

    1. C code calls one ruby method

    2. C code works on a long running task

    3. C code calls another ruby method

    During step 2 no ruby opcodes will be observed by mruby-profiler. Thus, when step 3 occurs and a new VM opcode is fetched, then all the time that was spent in step 2 is attributed to the last instruction in step 1. Typically the last instruction of a mruby method would be OP_RETURN. So, if you spot an OP_RETURN which is taking much more time than it should, be aware that it may be counting time spent in a calling C function.

    A lack of cumulative child function time

    In general I’d say having method/instruction time presented as 'self-time' is preferable in a reasonably well-architected system. Self-time shows how much time could effectively be saved by optimizing the method by itself, without considering the rest of the code it ends up calling.

    Self-time, however, can create a few problems when interpreting the results of mruby-profiler. If a hotspot function is called a large number of times, it can be tricky to backtrack which functions called it most often, or which callers passed the kind of data that made the hotspot take longer to evaluate. The lack of cumulative times also makes it hard to evaluate whether a particular function is 'expensive to call'. A function can have a small self-time, and each of the functions it calls can have a small self-time as well; yet with a sufficiently deep call stack, such an innocuous-looking function can still become very expensive in the cumulative sense (i.e. "death by a million cuts").
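
    As a contrived illustration (these methods are made up purely for the sake of the example):

    def leaf;   10.times { |i| i * i }; end
    def middle; 1000.times { leaf };    end
    def outer;  1000.times { middle };  end

    outer

    A self-time-only report shows the time spread across leaf and the iteration machinery, while outer barely registers, even though calling outer is what actually makes the program expensive.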

    The last issue is one that’s more ruby-specific. It’s fair to say that if you enjoy ruby you use blocks… you use them a lot. For those of you unfamiliar with what a block is, here’s a simple example:

    object.method do |optional_args|
        work_on_block
    end
    irep 0x9b433e0 nregs=3 nlocals=1 pools=0 syms=2 reps=1
    file: tmp.rb
        1 000 OP_LOADSELF   R1
        1 001 OP_SEND       R1      :object 0
        1 002 OP_LAMBDA     R2      I(+1)   2
        1 003 OP_SENDB      R1      :method 0
        1 004 OP_STOP
    
    irep 0x9b494a8 nregs=5 nlocals=3 pools=0 syms=1 reps=0
    file: tmp.rb
        1 000 OP_ENTER      1:0:0:0:0:0:0
        2 001 OP_LOADSELF   R3
        2 002 OP_SEND       R3      :work_on_block  0
        2 003 OP_RETURN     R3      return

    In the ruby code :method is called with the block (do..end). The block is encoded as a lambda function (the second irep), and :method ends up calling the lambda. Just like in the very first example with Fixnum#times, the cost involved with :method is associated with :method’s implementation and not with the object.method call site. When the block-accepting method adds a significant amount of overhead, it’s very easy to overlook with self-time alone, so be aware of it when profiling. In the case of iteration, we’ll revisit block-method overhead later in this article.

    While mruby-profiler doesn’t currently present cumulative time information, it does appear to capture it from the MRuby VM (or at least it appeared to when I looked at the source). The data isn’t in the most convenient format, but it is there; it’s just the analysis (which is all ruby) that needs to be updated. The mruby-profiler gem is still missing that functionality, but I imagine it could be added reasonably easily in the future.

    Bugs

    Of course another limitation of using both mruby and mruby-profiler is that both of them are going to have more bugs than other tools with more widespread use. When I initially found mruby-profiler it tended to crash in a variety of ways for the codebase I was profiling (I’d recommend using the fixes I’ve proposed to mruby-profiler via a pull request, if they haven’t yet been merged). When I initially began using MRuby I encountered a few bugs, though MRuby has become a fair bit more stable over the past year. Lastly, while mruby-profiler does provide an output for kcachegrind, be aware that it is incomplete and there are some bugs in the output (though it is entirely unclear whether they come from mruby’s debug info or from a bug within mruby-profiler).

    Deciding when to move from ruby to C

    One of the great things about MRuby (and one of the major reasons why I’ve used it for a sizable user interface project) is that it’s extremely easy to move a routine from ruby to C. Of course it’s still easier to leave code as ruby (otherwise I would have just written everything in C), so what tasks does ruby struggle with?

    Heavy numerical tasks

    As in most interpreted languages, heavy mathematical operations aren’t 'fast'. Of course, they may be fast enough, but major gains can be made by using a compiled and heavily optimized language like C.

    Consider the below ruby code:

    def func(array)
      array.map do |x|
        Math.abs(Math.sin(x + 2 + 7 * 8 + 3))
      end
    end
    void func(float *f, int len)
    {
        for(int i=0; i<len; ++i)
            f[i] = fabsf(sinf(f[i] + 2 + 7 * 8 + 3));
    }
    irep 0x8d24b40 nregs=5 nlocals=3 pools=0 syms=1 reps=1
    file: tmp.rb
        1 000 OP_ENTER      1:0:0:0:0:0:0
        2 001 OP_MOVE       R3      R1              ; R1:array
        2 002 OP_LAMBDA     R4      I(+1)   2
        2 003 OP_SENDB      R3      :map    0
        2 004 OP_RETURN     R3      return
    
    irep 0x8d24b90 nregs=9 nlocals=3 pools=0 syms=5 reps=0
    file: tmp.rb
        2 000 OP_ENTER      1:0:0:0:0:0:0
        3 001 OP_GETCONST   R3      :Math
        3 002 OP_GETCONST   R4      :Math
        3 003 OP_MOVE       R5      R1              ; R1:x
        3 004 OP_ADDI       R5      :+      2
        3 005 OP_LOADI      R6      7
        3 006 OP_LOADI      R7      8
        3 007 OP_MUL        R6      :*      1
        3 008 OP_ADD        R5      :+      1
        3 009 OP_ADDI       R5      :+      3
        3 010 OP_SEND       R4      :sin    1
        3 011 OP_SEND       R3      :abs    1
        3 012 OP_RETURN     R3      return
    func:
    .LFB9:
        .cfi_startproc
        movl    4(%esp), %ecx
        movl    8(%esp), %edx
        testl   %edx, %edx
        jle .L1
        movl    %ecx, %eax
        leal    (%ecx,%edx,4), %edx
        flds    .LC0
    .L3:
        fld %st(0)
        fadds   (%eax)
        fsin
        fabs
        fstps   (%eax)
        addl    $4, %eax
        cmpl    %edx, %eax
        jne .L3
        fstp    %st(0)
    .L1:
        ret

    The ruby VM instructions are reasonably quick, but when comparing 12 ruby opcodes to 4 assembly instructions (fld..fstps), it’s rather obvious that there’s going to be a sizable difference in speed. MRuby isn’t going to simplify any of the math that you supply it with, and each opcode (simple or not) is going to take a fair bit longer than a single assembly instruction.

    Heavy Member Access

    Accessing data stored in mruby classes isn’t all that cheap, even for simple attributes. Each member access in idiomatic ruby results in a method call via OP_SEND. Evaluating each method call is relatively expensive compared to other opcodes, and each call tends to involve a setup phase for its arguments. In comparison, accessing a member variable in C is as simple as fetching memory at an offset from the base of the structure.

    class Y
        attr_accessor :a, :b, :c, :d
    end
    
    def func(array, y)
      array.map do |x|
        x + y.a + y.b * y.c + y.d
      end
    end
    struct Y {
        float a, b, c, d;
    };
    
    void func(float *f, int len, struct Y y)
    {
        for(int i=0; i<len; ++i)
            f[i] = f[i] + y.a + y.b * y.c + y.d;
    }
    irep 0x94feb90 nregs=6 nlocals=4 pools=0 syms=1 reps=1
    file: tmp.rb
        5 000 OP_ENTER      2:0:0:0:0:0:0
        6 001 OP_MOVE       R4      R1              ; R1:array
        6 002 OP_LAMBDA     R5      I(+1)   2
        6 003 OP_SENDB      R4      :map    0
        6 004 OP_RETURN     R4      return
    
    irep 0x9510068 nregs=7 nlocals=3 pools=0 syms=6 reps=0
    file: tmp.rb
        6 000 OP_ENTER      1:0:0:0:0:0:0
        7 001 OP_MOVE       R3      R1              ; R1:x
        7 002 OP_GETUPVAR   R4      2       0
        7 003 OP_SEND       R4      :a      0
        7 004 OP_ADD        R3      :+      1
        7 005 OP_GETUPVAR   R4      2       0
        7 006 OP_SEND       R4      :b      0
        7 007 OP_GETUPVAR   R5      2       0
        7 008 OP_SEND       R5      :c      0
        7 009 OP_MUL        R4      :*      1
        7 010 OP_ADD        R3      :+      1
        7 011 OP_GETUPVAR   R4      2       0
        7 012 OP_SEND       R4      :d      0
        7 013 OP_ADD        R3      :+      1
        7 014 OP_RETURN     R3      return
    func:
    .LFB0:
        .cfi_startproc
        movl    4(%esp), %ecx
        movl    8(%esp), %edx
        testl   %edx, %edx
        jle .L1
        flds    24(%esp)
        fadds   12(%esp)
        flds    20(%esp)
        fmuls   16(%esp)
        faddp   %st, %st(1)
        movl    %ecx, %eax
        leal    (%ecx,%edx,4), %edx
    .L3:
        fld %st(0)
        fadds   (%eax)
        fstps   (%eax)
        addl    $4, %eax
        cmpl    %edx, %eax
        jne .L3
        fstp    %st(0)
    .L1:
        ret

    The mruby VM is fast, but when dealing with these member variable references the overhead adds up. The relatively basic loop results in a 15-opcode body. Of those opcodes, 4 are method calls, and 4 involve setting up the method calls. C doesn’t even need a separate instruction to fetch the values due to the addressing modes that x86 provides. Additionally, C can recognize that the member variables are constant and calculate their effect once, outside the loop body. That leaves 3 (fld..fstps) instructions in the loop body for a tight C loop.

    Loops of any sort

    Actually, going one step beyond, loops over large amounts of data are just bad for performance in MRuby due to the overhead introduced by Array#each/Array#map/etc.

    $dummy = 0
    def func(array)
      array.each do |x|
        $dummy = x
      end
    end
    volatile int dummy;
    void func(int *x, int len)
    {
        for(int i=0; i<len; ++i)
            dummy = x[i];
    }
    irep 0x8fa1b40 nregs=5 nlocals=3 pools=0 syms=1 reps=1
    file: tmp.rb
        2 000 OP_ENTER      1:0:0:0:0:0:0
        3 001 OP_MOVE       R3      R1              ; R1:array
        3 002 OP_LAMBDA     R4      I(+1)   2
        3 003 OP_SENDB      R3      :each   0
        3 004 OP_RETURN     R3      return
    
    irep 0x8fa1b90 nregs=4 nlocals=3 pools=0 syms=1 reps=0
    file: tmp.rb
        3 000 OP_ENTER      1:0:0:0:0:0:0
        4 001 OP_SETGLOBAL  :$dummy R1              ; R1:x
        4 002 OP_RETURN     R1      return  ; R1:x
    func:
        .cfi_startproc
        testl   %esi, %esi
        jle .L1
        movl    $0, %eax
    .L3:
        movl    (%rdi,%rax,4), %edx
        movl    %edx, dummy(%rip)
        addq    $1, %rax
        cmpl    %eax, %esi
        jg  .L3
    .L1:
        rep ret

    In previous examples the loop overhead for C was neglected. For this example, three instructions (addq, cmpl, and jg) are the per-iteration overhead. Ruby’s overhead is hidden in the :each method of the container. If you’re in an optimizing mindset, you might think that since :each is the idiomatic way to build loops in ruby, it would be built to limit overhead.

    Nope:

    class Array
      def each(&block)
        return to_enum :each unless block_given?
    
        idx, length = -1, self.length-1
        while idx < length and length <= self.length and length = self.length-1
          elm = self[idx += 1]
          unless elm
            if elm.nil? and length >= self.length
              break
            end
          end
          block.call(elm)
        end
        self
      end
    end

    Translating that to VM instructions results in 25 opcodes of overhead per iteration, with 4 method calls (:[], :call, :length, :length). Ouch…

    So, Array#each/Array#map/etc have a lot of overhead when you get to optimizing. What about other types of loops? A standard for loop is just an alias for :each. A while loop, however, avoids much of the setup and per-iteration cost.

    $dummy = 0
    def func(array)
      for x in 0...array.length do
        $dummy = array[x]
      end
    end
    irep 0x904db40 nregs=7 nlocals=4 pools=0 syms=2 reps=1
    file: tmp.rb
        2 000 OP_ENTER      1:0:0:0:0:0:0
        3 001 OP_LOADI      R4      0
        3 002 OP_MOVE       R5      R1              ; R1:array
        3 003 OP_SEND       R5      :length 0
        3 004 OP_RANGE      R4      R4      1
        3 005 OP_LAMBDA     R5      I(+1)   2
        3 006 OP_SENDB      R4      :each   0
        3 007 OP_RETURN     R4      return
    
    irep 0x904db90 nregs=5 nlocals=1 pools=0 syms=2 reps=0
    file: tmp.rb
        3 000 OP_ENTER      1:0:0:0:0:0:0
        3 001 OP_SETUPVAR   R1      3       0
        4 002 OP_GETUPVAR   R2      1       0
        4 003 OP_GETUPVAR   R3      3       0
        4 004 OP_SEND       R2      :[]     1
        4 005 OP_SETGLOBAL  :$dummy R2
        4 006 OP_RETURN     R2      return
    $dummy = 0
    def func2(array)
      itr = 0
      n   = array.length
      while itr < n
        $dummy = array[itr]
        itr += 1
      end
    end
    irep 0x90fbb40 nregs=8 nlocals=5 pools=0 syms=5 reps=0
    file: tmp.rb
        2 000 OP_ENTER      1:0:0:0:0:0:0
        3 001 OP_LOADI      R3      0               ; R3:itr
        4 002 OP_MOVE       R5      R1              ; R1:array
        4 003 OP_SEND       R5      :length 0
        4 004 OP_MOVE       R4      R5              ; R4:n
        5 005 OP_JMP                013
        6 006 OP_MOVE       R5      R1              ; R1:array
        6 007 OP_MOVE       R6      R3              ; R3:itr
        6 008 OP_SEND       R5      :[]     1
        6 009 OP_SETGLOBAL  :$dummy R5
        7 010 OP_MOVE       R5      R3              ; R3:itr
        7 011 OP_ADDI       R5      :+      1
        7 012 OP_MOVE       R3      R5              ; R3:itr
        5 013 OP_MOVE       R5      R3              ; R3:itr
        5 014 OP_MOVE       R6      R4              ; R4:n
        5 015 OP_LT R5      :<      1
        5 016 OP_JMPIF      R5      006
        5 017 OP_LOADNIL    R5
        5 018 OP_RETURN     R5      return

    There’s still plenty of register shuffling with the while loop, but 6-7 opcodes of overhead per iteration and only one method call (:[]) sure beats 25 extra opcodes per loop. So, if you want to keep a hotspot in pure ruby, you might have more luck with inlining a while loop or using :each_index, which is a middle ground in terms of cost.
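
    As a rough sketch of that middle ground (assuming Array#each_index is available in your mruby build, e.g. via the mruby-array-ext gem):

    $dummy = 0
    # hypothetical rewrite of the earlier func using each_index
    def func3(array)
      array.each_index do |i|
        $dummy = array[i]
      end
    end

    The block call per element is still there, so the fully inlined while loop remains the cheaper option when every opcode counts.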

    Conclusions

    MRuby is reasonably fast, though once you’re ploughing through enough data and code, hotspots slowing things down are basically inevitable. MRuby makes it pretty easy to patch up these hotspots by seamlessly reimplementing the offending methods in C. To keep a codebase quick you still need to find those hotspots, and tools like mruby-profiler make this task a lot easier. I hope this ramble about using mruby-profiler is helpful.

    February 14, 2017 05:00 AM

    February 12, 2017

    Libre Music Production - Articles, Tutorials and News

    Ardour 5.6 released

    Ardour 5.6 released

    Ardour 5.6 has just been released. This latest version brings with it lots of bug fixes, refinements and GUI enhancements.  

    The main visual change you'll notice in this release is the new toolbar layout. This has been rearranged to make better use of vertical space, also adding a new mini-timeline.

    by Conor at February 12, 2017 10:11 AM

    February 11, 2017

    ardour

    Ardour 5.6 released

    Another two months of development has rolled by, involving more than 600 commits by developers, and it's time for us to release Ardour 5.6. Although there are no major new features in this release, there is the usual list of dozens of bug fixes major and minor, plus some workflow and GUI enhancements. There has been a significant rearrangement of the transport bar to try to use space more efficiently and effectively. The new design also permits session navigation while using the Mixer tab, and there are numerous optionally visible elements. Similarly, the Preferences dialog was rearranged to try to make it easier to find and browse the many, many available options. Other interesting new features: session archiving, a new General MIDI default synth for MIDI tracks, and direct and immediate control of routing for heavily multichannel (typically multitimbral) synth plugins.

    Download  

    Read more below for the full details ....


    by paul at February 11, 2017 09:40 PM

    February 08, 2017

    open-source – CDM Create Digital Music

    Turn a terrible toy turntable from a supermarket into a scratch deck

    Well, this is probably the world’s cheapest DVS [digital vinyl system]. The reader here got the deck for £14; retail is just £29.99. Add a Raspberry Pi in place of the computer, a display and some adapters, and you have a full-functioning DJ system. For real.

    Daniel James tells us the full story. My favorite advice – and I agree – don’t buy this record player. It really is that awful. But it does prove how open source tools can save obsolete gear from landfills – and says to me, too, that there’s really no reason digital vinyl systems still need to lean on conventional computer hardware.

    Now – on with the adventures at Aldi. The necessary gear:

    1. A terrible turntable (EnVivo USB Turntable in this case)
    2. PiDeck (see the official project page); that means a recent Raspberry Pi and an SD card.
    3. Control vinyl – Serato here.
    4. Audio interface. Since the USB connection in this case was unusable, the author chose an audioinjector, crowd-funded hardware available now for about £20.

    Daniel (of awesome 64studio Linux audio expertise fame) writes:

    I was looking to find the worst deck in the world, and I think I found it. The EnVivo USB Turntable retails for £29.99 at Aldi, a supermarket. I paid £14 for mine brand-new and boxed, at auction. I wanted to find out for myself just how badly these plastic decks were built, as my neighbours have similar models, and the sound from the analogue line-out is sucktacular. Really, don’t bother if you intended to use this deck for its stated purpose of digitising your vinyl collection.

    There are more expensive versions available under various brand names with deluxe leatherette cases or built-in speakers, but the deck inside looks the same. What would we reasonably expect at this price, given that it shipped all the way from China? Ed.: uh… heh, well, that’s true of pretty much everything else, too; let’s say, more to the point, it’s some of the cheapest turntable hardware to ship from China.

    Inside, there are very few components; these decks appear to be an experiment in just how cheap you can make something and still have people buy it. The straight tonearm has no bearing; it simply pivots loosely in a plastic sleeve. There is no counterweight or anti-skating adjustment, just a spring underneath the deck pulling the stylus towards the record. The platter is undersized for a 12″, and so is the spindle. Records playing off-centre must add extra vintage charm, they figured.

    A 12″ hip-hop tune would not play on the brand-new deck, as the kick drum hits bounced the stylus right out of the groove every other second. The analogue audio output lacked any meaningful bass, too. Then I tried a 12″ Serato CV02 timecode with the PiDeck, and things started to look up. With the control vinyl’s pilot tone containing little or no bass energy, the stylus tracked fine.

    Then, I popped out the three rubber nipples from the platter which are all that serves as isolation from motor vibration, put tape around the spindle to make it regulation diameter, and dropped on a slipmat. With the control vinyl on the deck again, it started working as well as most turntables with little torque, but took scratches and backspins in its stride. The USB interface does not have enough headroom for backspins without distortion of the timecode, so I used the line-out RCA sockets instead. No pre-amp is required to hook up an audioinjector.net stereo card for the Raspberry Pi, and this far superior audio interface created by Matt Flax takes care of the output to the mixer.

    The spring-loaded plastic tonearm will even work with the deck held at an angle, which previously I had only seen achieved with the straight-tonearm Vestax decks. Maybe a 10″ Serato vinyl and slipmat would be a better fit. With a pitch control, these decks would have everything you need to get started DJing. How long they will last in use is anyone’s guess, and if you are heavy-handed on the platter, you will probably burn out the tiny motor. The stylus is at least replaceable.

    Next time you’re at the supermarket, please, do not buy one of these cruddy decks; the world has enough plastic trash already. However if you happen to own one, or found one in a dumpster: one, two, you know what to do!

    Previously: PiDeck makes a USB stick into a free DJ player, with turntables

    More: http://pideck.com/

    The post Turn a terrible toy turntable from a supermarket into a scratch deck appeared first on CDM Create Digital Music.

    by Peter Kirn at February 08, 2017 06:38 PM

    February 07, 2017

    Linux – CDM Create Digital Music

    Get the sound of an abandoned US surveillance tower, free

    Over fifty years ago, it was built in West Berlin atop a mountain of rubble to listen in on the Communists in the East. And now, the infamous Teufelsberg US National Security Agency tower can lend its cavernous sound to your tracks. It’s available as a free plug-in for Mac, Windows, and even Linux, and it’s open source.

    Someone found this idea appealing already, as the impulse samples we wrote about previously became the creators’ most popular download.

    But now, you get a plug-in you can drop in your host. It’s actually a pretty nice array of stuff here:


    • Lush reverbs, accurately captured at the infamous Berlin surveillance tower.
    • 6 different IR reverb sounds.
    • Fast, zero-latency convolution.
    • A/B compare and preset saving functions.
    • Linux, Windows & Mac downloads.
    • Free and open source.

    Oh yeah, and if you happen to be a developer, this is a brilliant example. It shows how to build a simple effect plug-in, how to do convolution, and how to work with the JUCE framework.

    https://github.com/johnflynnjohnflynn/BalanceSPTeufelsbergReverb

    Here’s a look inside the facility (as linked in our previous story):

    Download:

    http://www.balancemastering.com/blog/balance-audio-tools-free-teufelsberg-reverb-plugin/

    The post Get the sound of an abandoned US surveillance tower, free appeared first on CDM Create Digital Music.

    by Peter Kirn at February 07, 2017 10:47 PM

    February 01, 2017

    GStreamer News

    GStreamer 1.10.3 stable release (binaries)

    Pre-built binary images of the 1.10.3 stable release of GStreamer are now available for Android, iOS, Mac OS X, and Windows 32/64-bit.

    See /releases/1.10/ for the full list of changes.

    The builds are available for download from: Android, iOS, Mac OS X and Windows.

    February 01, 2017 08:40 AM

    Libre Music Production - Articles, Tutorials and News

    Open Stage Control v0.16.0 released

    Open Stage Control v0.16.0 released

    Open Stage Control has just seen a new release. Open Stage Control is a libre desktop OSC bi-directional control surface application built with HTML, JavaScript & CSS and run as a Node / Electron web server that accepts any number of Chrome / Chromium / Electron clients.

    Features include -

    by Conor at February 01, 2017 07:37 AM

    January 31, 2017

    Libre Music Production - Articles, Tutorials and News

    New overdrive stompbox plugin, GxSD1 LV2 released

    New overdrive stompbox plugin, GxSD1 LV2 released

    Hermann Meyer has just released another LV2 plugin, GxSD1. As the name suggests, this is based on the Boss SD1 overdrive pedal.

    This is just one of many stompbox emulations that Hermann has been working on lately. He has also set up a github repository with all these plugins, so you no longer have to build them one at a time. Currently this repository contains the following plugins -

    by Conor at January 31, 2017 08:48 PM

    January 30, 2017

    GStreamer News

    GStreamer 1.10.3 stable release

    The GStreamer team is pleased to announce the third bugfix release in the stable 1.10 release series of your favourite cross-platform multimedia framework!

    This release only contains bugfixes and it should be safe to update from 1.10.0. For a full list of bugfixes see Bugzilla.

    See /releases/1.10/ for the full release notes.

    Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

    Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

    January 30, 2017 02:25 PM

    Libre Music Production - Articles, Tutorials and News

    LSP Plugins 1.0.20 released

    LSP Plugins 1.0.20 released

    Vladimir Sadovnikov has just released version 1.0.20 of his audio plugin suite, LSP plugins. All LSP plugins are available in LADSPA, LV2, LinuxVST and standalone JACK formats.

    This release includes the following new plugins and changes -

    by Conor at January 30, 2017 10:09 AM

    LMP Asks #22: An interview with Gianfranco Ceccolini

     LMP Asks #22: An interview with Gianfranco Ceccolini

    This time we talk to Gianfranco Ceccolini, the brains behind the multi-effects pedal MOD, which runs on Linux and other FLOSS software.

    Hi Gianfranco, thank you for taking the time to do this interview. Where do you live, and what do you do for a living?

    I currently live in Berlin, Germany but I am originally from São Paulo, Brazil.

    I am the founder and current CEO of MOD Devices.

    by Conor at January 30, 2017 09:01 AM

    January 25, 2017

    Libre Music Production - Articles, Tutorials and News

    Synthesizing a drumkit with ZynAddSubFX

    Synthesizing a drumkit with ZynAddSubFX

    unfa has published an extensive video tutorial on how to synthesize a kick drum, a snare drum, a hihat and a crash cymbal using the softsynth ZynAddSubFX.

    You can also download the ZynAddSubFX patches and the Ardour 5 session.

    by admin at January 25, 2017 04:32 PM

    January 24, 2017

    OSM podcast

    January 21, 2017

    digital audio hacks – Hackaday

    DreamBlaster X2: A Modern MIDI Synth for Your Sound Blaster Card

    Back in the 90s, gamers loaded out their PCs with Creative’s Sound Blaster family of sound cards. Those who were really serious about audio could connect a daughterboard called the Creative Wave Blaster. This card used wavetable synthesis to provide more realistic instrument sounds than the Sound Blaster’s on board Yamaha FM synthesis chip.

    The DreamBlaster X2 is a modern daughterboard for Sound Blaster sound cards. Using the connector on the sound card, it has stereo audio input and MIDI input and output. If you’re not using a Sound Blaster, a 3.5 mm jack and USB MIDI are provided. Since the MIDI uses TTL voltages, it can be directly connected to an Arduino or Raspberry Pi.

    This card uses a Dream SAM5000 series DSP chip, which can perform wavetable synthesis with up to 81 polyphonic voices. It also performs reverb, chorus, and equalizer effects. This chip sends audio data to a 24 bit DAC, which outputs audio into the sound card or out the 3.5 mm jack.

    The DreamBlaster X2 also comes with software to load wavetables, and wavetables to try out. We believe it will be the best upgrade for your 486 released in 2017. If you’re interested, you can order an assembled DreamBlaster. After the break, a review with audio demos.


    Filed under: digital audio hacks

    by Eric Evenchick at January 21, 2017 12:01 PM

    January 17, 2017

    GStreamer News

    GStreamer 1.11.1 unstable release (binaries)

    Pre-built binary images of the 1.11.1 unstable release of GStreamer are now available for Android, iOS, Mac OS X, and Windows 32/64-bit.

    The builds are available for download from: Android, iOS, Mac OS X and Windows.

    January 17, 2017 05:45 AM

    January 16, 2017

    open-source – CDM Create Digital Music

    Send MIDI messages faster than ever, right from the command line

    Quick! Send a MIDI control change message! Or some obscure parameter!

    Well, sometimes typing something is the easiest way to do things. And that’s why Geert Bevin’s new, free and open source tool SendMIDI is invaluable. Sorry to nerd out completely here, but I suspect this is going to be way more relevant to my daily life than anything coming out of NAMM this week.

    In this case, whether you know much about how to use a command line or not, there’s almost certainly no faster way of performing basic MIDI tasks. Anyone working with hardware is certain to want one. (Someone I suspect will make their own little standalone MIDI tool by connecting a Raspberry Pi to a little keyboard and carry it around like a MIDI terminal.)

    The commands are simple and obvious and easy to remember once you try them. Installation is dead-simple. Every OS is supported – build it yourself, install with Homebrew on macOS, or – the easiest method – grab a pre-built binary for Windows, Mac, or Linux.
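
    To give a flavour of the syntax (quoting from memory here, so double-check the project README for the exact spelling): sendmidi list prints the available MIDI output ports, and something like sendmidi dev "My Synth" cc 74 64 should send control change 74 with value 64 to a port named "My Synth".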

    And now with version 1.0.5, the whole thing is eminently usable and supports more or less the entire MIDI spec, minus MIDI Time Code (which you wouldn’t want to send this way anyway).

    So, now troubleshooting, sending obscure parameter changes, and other controls is simpler than ever. It’s a must for hardware lovers.

    Developers, that support for all operating systems is also evidence of how easy the brilliant open source C++ JUCE framework makes building. The ProJucer tool does all the magic. “But wait, I thought JUCE was for making ugly non-native GUIs,” I’m sure some people are saying. No, actually, that’s wrong on two counts. One, JUCE doesn’t necessarily have anything to do with GUIs; it’s a full-featured multimedia framework focused on music, and this tool shows your end result might not have a GUI at all. Two, if you’ve seen an ugly UI, that’s the developer’s fault, not JUCE’s – and very often you’ve seen beautiful GUIs built in JUCE, but as a result didn’t know that’s how they were built.

    But anyone should grab this, seriously.

    https://github.com/gbevin/SendMIDI

    The post Send MIDI messages faster than ever, right from the command line appeared first on CDM Create Digital Music.

    by Peter Kirn at January 16, 2017 04:30 PM

    Libre Music Production - Articles, Tutorials and News

    January 12, 2017

    GStreamer News

    GStreamer 1.11.1 unstable release

    The GStreamer team is pleased to announce the first release of the unstable 1.11 release series. The 1.11 release series is adding new features on top of the 1.0, 1.2, 1.4, 1.6, 1.8 and 1.10 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework. The unstable 1.11 release series will lead to the stable 1.12 release series in the next weeks. Any newly added API can still change until that point.

    Full release notes will be provided at some point during the 1.11 release cycle, highlighting all the new features, bugfixes, performance optimizations and other important changes.

    Binaries for Android, iOS, Mac OS X and Windows will be provided in the next days.

    Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

    January 12, 2017 03:00 PM

    January 11, 2017

    Scores of Beauty

    Working With Word

    I have regularly described the advantages of working with LilyPond and version control on this blog (for example in an ironic piece on things one would “miss” most with version control), but this has always been from the perspective of not having to use tools like Finale or MS Word. Now I have been hit by such a traditional workflow and feel the urge to comment on it from that perspective …

    Recently I had to update a paper presentation for the printed publication, and for that purpose I was given a document with editorial guidelines. Of course I wasn’t surprised that a Word document was expected, but some of the other regulations struck me as particularly odd. Why on earth should I use a specific font family and size for body text and another specific size for footnotes? The Word document won’t be printed directly, so my font settings most surely won’t end up in the final PDF anyway because they are superseded by some style definition in some professional DTP program. Well, I could of course simply ignore that as it doesn’t actually interfere with my work. But it is such a typical symptom of academics not having properly reflected on their tools that I can’t help but make that comment.

    Of course there are many reasonable rules, for example about abbreviations, but I must say when guidelines ask me to use italics for note names and super/subscript (without italics) for octave indications I feel almost personally insulted. Wow! All the authors of this compiled volume are expected to keep their formatting consistent, without even resorting to style sheets. I really wouldn’t want to be the typesetting person having to sort out that mess. What if I wanted to discern between different types of highlighting and give them differentiated appearance, for example by printing author names in small caps and note names in a semibold font variant? And what about consistently applying or not applying full and half spaces around hyphens, or rather around hyphens, en-dashes or em-dashes (hm, who is going to sort this out, given that many authors don’t know the difference and simply let their word processor do the job)?
    But guess what? When I complained and suggested at least providing a Word template with all the style sheets prepared, the reply I got was one of unashamed perplexity – they didn’t really understand what I was talking about. I would like to stress that I do not mean this personally (in case someone realizes I’m talking about them …), but this was a moment that got me thinking: if I have to “professionally” discuss these matters on such a basic level, how could I ever hope to get musicologists to embrace the world of text-based music editing?

    Well, I forced myself to stay calm and write the essay in LibreOffice (although I plead guilty to using stylesheets anyway). Just as expected, writing without being able to properly manage the document with Git feels really awkward. Being able to quickly switch “sessions” to work on another part or just to draft around without spoiling the previous state has become second nature to me, and I really missed that. On the other hand I have to admit that being able to retrace one’s steps or selectively undo arbitrary edits is something one rarely needs in practice. But knowing it would be possible really makes a difference and isn’t just a fancy or even nerdy idea. Think of the airbags built into your car: you will rarely if ever actually “use” them, but you wouldn’t want to drive without them anymore once you have got accustomed to this level of safety. But eventually I completed the text and submitted the Word document, handing things over and letting “them” deal with the crappy stuff. Unfortunately things weren’t over yet.

    After having submitted my paper I learned about another paper going into the proceedings that I would like to cross-reference. So I asked the editor if I could update my paper and he said yes, there’s still time for that. But now I’m somewhat at a loss because I ended up in exactly the situation I’ve always made fun of. What if I simply send an updated document? Oops, I can’t tell if the editor has already modified my initial submission – in that case he’d have to manually identify the changes and apply them to his copy. And maybe he’d even have to adjust my updates if they should conflict with any changes he has already made. Ah, word processors have this “trace changes” option (or whatever it’s called in English programs), shouldn’t this solve the issue? Hm, partially: I have to make sure that I only apply these changes in this session so the editor has a chance to identify them properly. Then he still has to apply them manually, so this approach still includes a pretty high risk of errors happening. OK, maybe I should simply describe the intended changes in an email? Oh my, this requires me to actually keep track of the changes myself so I can later refer to the “original” state in my message. Probably I’ll have to look into that original document state, maybe from the “sent” box of my email client? And all these options don’t take into account that when the document will eventually be considered final the editor will be sitting on a pile of copies and has to be particularly careful which version to pass on to the typesetter …

    Oh my, life as an author is just so much infinitely easier when your documents can healthily evolve under version control and when you can discuss and apply later edits through branches and pull requests, pointing others to exactly the changes you have applied. Of course the comparison is as lame as any comparison but to me having to develop a text with Word feels like being booked for a piano recital, showing up in the venue and instead of a Steinway D finding this on stage:


    In about a month I will have to submit yet another Word document for proceedings of another conference. I realize that I won’t change the world – at least not immediately – and this awkward request will come again. But I’m not exactly looking forward to repeating the experience of being restricted to Word, so I think I will definitely take the plunge and find a proper solution this time. Pandoc will allow me to write the text in its extended Markdown syntax and just export to Word when I’m ready to submit. Until then I have all the benefits of plain text, version control above all, but also the option of editing the document through the web interface and all the other advantages of semantic markup. Maybe I can even convince the (next) editor to stick to the Markdown representation for longer so we can do the review in Gitlab and only convert the Markdown to Word for the typesetter? Oh, wait: Pandoc should be able to export the Markdown to something the typesetter can directly import into InDesign, which would allow us (well, me) to completely avoid the intermediate step of a Word document.
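
    (For the curious, the conversion is a one-liner in both directions: something like pandoc essay.md -o essay.docx produces the Word file for submission, while pandoc essay.md -t icml -o essay.icml should give the typesetter a file that InDesign can place directly; check the Pandoc manual for the exact options.)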

    Wouter Soudan makes a strong point about this in his post From Word to Markdown to InDesign. His workflow is actually the other way round as it is looked at from the typesetter’s perspective: he uses Markdown as an intermediate state to clean up the potentially messy Word documents submitted by authors and to create a smooth pipeline to get things into InDesign – or optionally LaTeX.

    I have only scratched the surface so far, preparing the script and slides for a (yet another) presentation together with Jan-Peter Voigt. He has set up a toolchain using Pandoc to convert our Markdown content files to PDF piped through LaTeX. I will definitely investigate this path further as it feels really good and is efficient, and maybe this can be expanded to a smooth environment for authoring texts including music examples. Stay tuned for any further reports …

    by Urs Liska at January 11, 2017 12:00 PM

    January 09, 2017

    aubio

    0.4.4 released

    A new version of aubio, 0.4.4, is available.

    This version features a new log module that allows redirecting errors, warnings, and other messages coming from libaubio. As usual, these messages are printed to stderr or stdout by default.

    Another new feature is the --minioi option added to aubioonset, which lets you adjust the minimum Inter-Onset Interval (IOI) separating two consecutive events. This makes it easier to reduce the number of doubled detections.

    New demos have been added to the python/demos folder, including one using the pyaudio module to read samples from the microphone in real time.

    0.4.4 also comes with a bunch of fixes, including typos in the documentation, build system improvements, optimisations, and platform compatibility.

    read more after the break...

    January 09, 2017 03:35 PM

    January 07, 2017

    The Penguin Producer

    Composition in Storytelling

    During the “Blender for the 80s” series, I went into some of the basics of visual composition. In and of itself, that does well enough to give one a basic glimpse, but it’s really important to understand composition on its own terms. Composition is a key element of any visual …

    by Lampros Liontos at January 07, 2017 07:00 AM

    January 05, 2017

    KXStudio News

    Carla 2.0 beta5 is here!

    Hello again everyone, we're glad to bring you the 5th beta of the upcoming Carla 2.0 release.
    It has been more than a year since the last Carla release. This release fixes things that got broken in the meantime and continues the work towards Carla's 2.0 base features.
    There's quite a lot of changes under the hood, mostly bugfixes and minor but useful additions.
    With that being said, here are some of the highlights:

    Carla-Control is back!

    Carla-Control is an application to remotely control a Carla instance via network, using OSC messages.
    It stopped working shortly after Carla's move to 2.x development, but now it's back, and working a lot better.
    Currently works on Linux and Mac OS.


    Logs tab

    This was also something that was brought back in this release.
    It was initially removed from the 2.x series because it did not work so well.
    Now the code has been fixed up and brought to life.

    You can disable it in the settings if you prefer your messages to go to the console as usual.
    Sadly this does not work on Windows just yet, only for Linux and Mac OS.
    But for Windows, a Debug/Carla.exe file is included in this build (after you extract the exe as a zip file), which can be used to see the console window.


    MIDI Sequencer is dead, long live MIDI Pattern!

    The internal MIDI Sequencer plugin was renamed to MIDI Pattern, and received some needed attention.
    Some menu actions and parameters were added, to make it more intuitive to use.
    It's now exported as part of the Carla-LV2 plugins package, and available for Linux and Mac OS.


    More stuff

    • Add carla-jack-single/multi startup tools
    • Add 16 channel and 2+1 (sidechain) variant to Carla-Patchbay plugins
    • Add new custom menu when right-clicking empty rack & patchbay areas
    • Add command-line option for help and version arguments
    • Add command-line option to run Carla without UI (requires project file)
    • Add X11 UI to Carla-LV2
    • Remove MVerb internal plugin (conflicting license)
    • Remove Nekofilter internal plugin (use fil4.lv2 instead)
    • Implement plugin bridges for Mac OS and Windows
    • Implement Carla-LV2 MIDI out
    • Implement initial latency code, used for aligned dry/wet sound for now
    • Implement support for VST shell plugins under Linux
    • Implement sorting of LV2 scale points
    • Allow to scan and load 32bit AUs under Mac OS
    • Allow using the same midi-cc in multiple parameters for the same plugin
    • Allow Carla-VST to be built with Qt5 (Linux only)
    • Bypass MIDI events on carla-rack plugin when rack is empty
    • Find plugin binary when saved filename doesn't exist
    • Force usage of custom theme under Mac OS
    • New option to choose whether to put UIs on top of Carla (Linux only)
    • Make canvas draggable with mouse middle-click
    • Make it possible to force-refresh scan of LV2 and AU plugins
    • Plugin settings (force stereo, send CC, etc) are now saved in the project file
    • Renaming plugins under JACK driver mode now keeps the patchbay connections
    • Update modgui code for latest mod-ui, supports control outputs now
    • Lots and lots of bug fixes.

    There will still be 1 more beta release before going for a release candidate, so expect more cool stuff soon!

    Special Notes

    • Carla as plugin is still not available under Windows, to be done for the next beta.

    Downloads

    To download Carla binaries or source code, jump into the KXStudio downloads section.
    If you're using the KXStudio repositories, you can simply install "carla-git" (plus "carla-lv2" and "carla-vst" if you're so inclined).
    Bug reports and feature requests are welcome! Jump into Carla's GitHub project page for those.

    by falkTX at January 05, 2017 10:23 PM

    January 04, 2017

    drobilla.net

    Jalv 1.6.0

    jalv 1.6.0 has been released. Jalv is a simple but fully featured LV2 host for Jack which exposes plugin ports to Jack, essentially making any LV2 plugin function as a Jack application. For more information, see http://drobilla.net/software/jalv.

    Changes:

    • Support CV ports if Jack metadata is enabled (patch from Hanspeter Portner)
    • Fix unreliable UI state initialization (patch from Hanspeter Portner)
    • Fix memory error on preset save resulting in odd bundle names
    • Improve preset support
    • Support numeric and string plugin properties (event-based control)
    • Support thread-safe state restoration
    • Update UI when internal plugin state is changed during preset load
    • Add generic Qt control UI from Amadeus Folego
    • Add PortAudio backend (compile time option, audio only)
    • Set Jack port order metadata
    • Allow Jack client name to be set from command line (thanks Adam Avramov)
    • Add command prompt to console version for changing controls
    • Add option to print plugin trace messages
    • Print colorful log if output is a terminal
    • Exit on Jack shutdown (patch from Robin Gareus)
    • Report Jack latency (patch from Robin Gareus)
    • Exit GUI versions on interrupt
    • Fix semaphore correctness issues
    • Use moc-qt4 if present for systems with multiple Qt versions
    • Add Qt5 version

    by drobilla at January 04, 2017 05:24 PM

    Lilv 0.24.2

    lilv 0.24.2 has been released. Lilv is a C library to make the use of LV2 plugins as simple as possible for applications. For more information, see http://drobilla.net/software/lilv.

    Changes:

    • Fix saving state to paths that contain URI delimiters (#, ?, etc)
    • Fix comparison of restored states with paths

    by drobilla at January 04, 2017 04:48 PM

    January 03, 2017

    Linux – CDM Create Digital Music

    New tools for free sound powerhouse Pd make it worth a new look

    Pure Data, the free and open source cousin of Max, can still learn some new tricks. And that’s important – because there’s nothing that does quite what it does, with a free, visual desktop interface, permissive license, and embeddable and mobile versions integrated with other software, free and commercial alike. A community of some of its most dedicated developers and artists met late last year in the NYC area. What transpired offers a glimpse of how this twenty-year-old program might enter a new chapter – and some nice tools you can use right now.

    To walk us through, attendee Max Neupert worked with the Pdcon community to contribute this collaborative report.

    For many participants, it was an epiphany of sorts. Finally, they met the people face-to-face whom they only knew by a nickname or acronym from the [Pd] forum or the infamous mailinglist.

    In 2016, we’ve finally seen a new edition of the semi-regular Pure Data Convention. Co-hosted by Stevens Institute of Technology in Hoboken, NJ and New York University in Manhattan, the event packed six days with workshops, concerts, exhibitions, and peer-reviewed paper/poster presentations.

    Pure Data, (or Pd for short) is an ever-growing patcher programming language for audio, video and interaction. With 20 years under its belt, Pure Data is not the newest kid in town, but it’s built specifically with the idea of preservation in mind. It’s also the younger sibling of Max/MSP, with a more bare-bones look, but open-source and with a permissive BSD license.

    Since the advent of libpd, Pure Data has been embedded in many apps where it serves as an audio engine. At the Pd convention, Dan Wilcox, Peter Brinkmann, and Tal Kirshboim presented a look back on six years with libpd. Chris McCormick’s PdDroidParty, Dan Wilcox’ PdParty, and Daniel Iglesia’s MobMuPlat are building on libpd and simplifying the process of running a Pd patch on a mobile device. For the Pd convention, they joined forces and gave a workshop together.

    What was the most exciting part of the Convention? That answer will be different depending on who you ask. For the electronic music producer, it might have been Peter Brinkmann’s presentation of Ableton Link for Pure Data, allowing synchronization and latency compensation with Live.

    Previously on CDM: Free jazz – how to use Ableton Link sync with Pure Data patches

    For the Max/MSP convert, it might be the effort to implement objects from Max as externals for Pd. That library of objects, aptly named cyclone [after Cycling ’74], has been around for a while, but has now seen a major update by Alexandre Porres, Derek Kwan, and Matthew Barber.

    Cyclone: A set of Pure Data objects cloned from Max/MSP [GitHub]

    Hint: not all patches look as messy as this. The insanity of the Ninja Jamm patch.

    If you’re a musicology nerd, Reiner Kraemer, et al. might have grabbed you with their analysis tools for (not only) Renaissance music.

    https://github.com/ELVIS-Project/VIS-for-Pd

    Or what about the effort to extend Frank Barknecht’s idea of list-abstractions to arrays by Matthew Barber, or Eric Lyons’ powerful FFTease and LyonPotpourri tools?

    Pd already comes in many flavors. Amongst them are terrific variants, like Pd-L2Ork and its development branch Purr-Data [not a typo], which were presented at the convention by Ivica Bukvic and Jonathan Wilkes. Purr-Data is a glimpse into the possible future of Pd: its interface is rendered as SVG instead of with Tcl/Tk. Ed.: Layman’s terms – you have a UI that’s modern and sleek and flexible, not fugly and rigid like the one you probably know.

    Pd compiles for different processors and platforms. This is getting complex, and it’s important to make sure internal objects and externals behave the way they were intended to across these variants. IOhannes M zmölnig’s research on “fully automated object testing” takes care of that, and with double-precision and 64-bit builds it is an essential stepping stone to keeping Pd solid. IOhannes is also the only member of the community who attended all four Pd conventions since Graz in 2004.

    Katja Vetter moderated the open workbench sessions, where “the compilers” discussed development and maintenance of Pd. She also performed as “Instant Decomposer” in an incredibly witty, poetic and musically impressive one-woman act.

    Katja always makes an impression with her outfit.

    Cyborg Onyx Ashanti.

    The concerts in the evening program were a demonstration of the variety and quality of the Pure Data scene, from electroacoustic music, interface based experiments, mobile and laptop orchestras to the night of algo-rave.

    The participants of the 2016 Pure Data Convention. Organizers Jaime Oliver (leftmost), Sofy Yuditskaya (somewhat in the middle) and Ricky Graham (too busy to pose on the picture).

    For all the excellent electronic means of keeping a community running, the conventions remain an important way to grow the human connections between its members, and to get things done. We are looking forward to the next gathering in 2018 — this time it might be in Athens, I’ve overheard. Ed.: If anyone wants to join for an interim meeting in 2017, I’m game to use the power of CDM to help make that happen!

    Performances

    Watch some of the unique, experimental performances featured during the conference (many more are online):

    Video archive:
    http://www.nyu-waverlylabs.org/pdcon16/concerts/

    More resources

    Some useful stuff found during our Telegram chat:
    Loads of abstractions and useful things by William Brent

    Ed Kelly’s software and abstractions, including some rather useful tools; Ed developed Ninja Tune iOS/Android remix app Ninja Jamm‘s original Pd patch

    Full program:
    http://www.nyu-waverlylabs.org/pdcon16/program/

    Chat on Telegram about Pd (useful free chat client, Telegram):
    https://telegram.me/puredata

    Places to share patches:
    http://www.pdpatchrepo.info/
    http://patchstorage.com/

    The post New tools for free sound powerhouse Pd make it worth a new look appeared first on CDM Create Digital Music.

    by Peter Kirn at January 03, 2017 11:53 PM

    January 01, 2017

    digital audio hacks – Hackaday

    Circuit Bent CD Player Is Glitch Heaven

    Circuit bending is the art of creatively short circuiting low voltage hardware to create interesting and unexpected results. It’s generally applied to things like Furbys, old Casio keyboards, or early consoles to create audio and video glitches for artistic effect. It’s often practiced with a random approach, but by bringing in a little knowledge, you can get astounding results. [r20029] decided to apply her knowledge of CD players and RAM to create this glitched out Sony Discman.

    Portable CD players face the difficult problem of vibration and shocks causing the laser to skip tracks on the disc, leading to annoying stutters in audio playback. To get around this, better models feature a RAM chip acting as a buffer that allows the player to read ahead. The audio is played from the RAM, giving the laser time to find its track again and refill the buffer when shocks occur. As long as the laser can get back on track fast enough before the buffer runs out, the listener won’t hear any audible disturbances.
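
    In software terms the anti-skip memory behaves like a simple FIFO that the laser fills faster than the playback side drains it. A purely illustrative sketch, with made-up numbers:

        from collections import deque

        buffer = deque(maxlen=10 * 44100)       # roughly ten seconds of mono samples

        def laser_reads(samples):               # runs ahead of real time while on track
            buffer.extend(samples)

        def dac_plays(n):                       # always drains at the playback rate
            return [buffer.popleft() for _ in range(min(n, len(buffer)))]

        laser_reads(range(5 * 44100))           # the laser has buffered five seconds ahead
        chunk = dac_plays(44100)                # one second of playback served from RAM
        # after a shock, laser_reads() stops being called for a moment,
        # but dac_plays() keeps returning audio until the deque runs dry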

    [r20029] soldered wires to the leads of the RAM chip, and broke everything out into banana jacks to create a patch bay for experimenting. Shorting the various leads of the chip allows both the data and the addressing of the RAM to be manipulated. This can lead to audio samples being played back out of sync, samples being mashed up with addresses, and all manner of other weird combinations. This jumbled, disordered playback of damaged samples is what creates the glitchy sounds desired. [r20029] notes that certain connections on the patch bay will cause playback to freeze. Turning the anti-skip feature off and back on will allow playback to resume.

    The write-up highlights the basic methodology of the hack if you wish to replicate it – simply find the anti-skip RAM in your own CD player by looking for address lines, and break out the pins to a patch bay yourself. This should be possible on most modern CD players with anti-skip functionality; it would be interesting to see it in action on a model that can also play back MP3 files from a data CD.

    Circuit bending is a fun and safe way to get into electronics, and you can learn a lot along the way. Check out our Intro to Circuit Bending to get yourself started.


    Filed under: digital audio hacks, portable audio hacks

    by Lewin Day at January 01, 2017 09:00 PM

    December 31, 2016

    The Penguin Producer

    Blender for the 80s: Outlined Silhouettes

    Having a landscape is nice and all, but what’s the point if there isn’t anything on the landscape?  In this article, we will populate the landscape with black objects containing bright neon silhouettes.   For this tutorial, we’ll place some silhouettes in our composition.  I will assume you’ve read the …

    by Lampros Liontos at December 31, 2016 07:00 AM

    December 29, 2016

    aubio

    Sonic Runway takes aubio to the Playa

    We just learned that aubio was at Burning Man this year, thanks to the amazing work of Rob Jensen and his friends on the Sonic Runway installation.

    Sonic Runway — photo by George Krieger

    Burning Man is an annual gathering that takes place in the middle of a vast desert in Nevada. About 70,000 people attended this year’s 30th edition of the festival.

    Sonic Runway — photo by Jareb Mechaber

    The idea behind Sonic Runway is to visualise the speed of sound by building a 300 meter (1000 feet) long corridor, materialized by 32 gates of colored lights.

    Each gate illuminates at the exact moment the sound, emitted from one end of the runway, reaches it.

    The light patterns were created on the fly, using aubio to analyze the sound in real time and have the LED lights flash in sync with the music.
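
    The timing itself is simple to work out. A rough sketch, assuming evenly spaced gates and a speed of sound of about 343 m/s:

        SPEED_OF_SOUND = 343.0              # m/s in dry air at around 20 °C
        LENGTH, GATES = 300.0, 32           # corridor length in metres, number of gates

        spacing = LENGTH / (GATES - 1)      # just under 10 m between gates
        delays = [i * spacing / SPEED_OF_SOUND for i in range(GATES)]
        # the first gate fires immediately, the last roughly 0.87 s later,
        # i.e. about 28 ms from one gate to the next
        print(["%d ms" % round(d * 1000) for d in delays[:4]])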

    To cover the significant cost of hardware, the whole installation was funded by dozens of backers in a successful crowd-funding campaign.

    read more after the break...

    December 29, 2016 01:45 PM

    December 24, 2016

    digital audio hacks – Hackaday

    Lo-Fi Greeting Card Sampler

    We’re all familiar with record-your-own-message greeting cards. Generally they’re little more than a cute gimmick for a friend’s birthday, but [dögenigt] saw that these cards had more potential.

    After sourcing a couple of cheap modules from eBay, the first order of business was to replace the watch batteries with a DC power supply. Following the art of circuit bending, he then set about probing contacts on the board. Looking to control the pitch of the recorded message, [dögenigt] found two pads that, when touched, changed the speed of playback. Wiring these two points to the lugs of a potentiometer allowed the pitch to be varied continuously. Not yet satisfied, [dögenigt] wanted to enable looped playback, and found a pin that went low when the message was finished playing. Wiring this back to the play button allowed the recording to loop continuously.

    [dögenigt] now has a neat little sampler on his hands for less than $10 in parts. To top it off, he housed it all in a sweet 70s intercom enclosure, using the Call button to activate recording, and even made it light sensitive with an LDR.

    We’ve seen a few interesting circuit bends over the years – check out this digitally bent Roland TR-626 or this classic hacked Furby.

    Check out the video under the break.


    Filed under: digital audio hacks, musical hacks

    by Lewin Day at December 24, 2016 09:01 AM

    The Penguin Producer

    Blender for the 80s: The Starry Sky

    When dealing with wireframe landscapes, you usually also see a starry sky, so let’s see if we can add a starfield in Blender.   A Note about Scenes and Layers Before we begin, however, we need to discuss “Scenes” and “Render Layers.” About Scenes A scene is a group of …

    by Lampros Liontos at December 24, 2016 07:00 AM

    December 22, 2016

    ardour

    How (not) to provide useful user feedback, Lesson 123a

    Finding a piece of software not to your liking or not capable of what you need is just fine (expected, almost).

    And then there's this sort of thing.

    Can you make it user friendly? Fucking ridiculous. I use Sonar,plug in my dongle/breakout box,and it just works. One setting change for in and out for the duo or quad capture. No one in the business has anything good to say about Ardour,if they've even heard of it. I'm not trying to be rode. It's a suggestion. Make it user friendly.

    To our friends at Cakewalk: you're welcome.

    by paul at December 22, 2016 11:05 AM

    digital audio hacks – Hackaday

    An Eye-Catching Raspberry Pi Smart Speaker

    [curcuz]’s BoomBeastic mini is a Raspberry Pi based smart connected speaker. But don’t dis it as just another media center kind of project. His blog post is more of a How-To guide on setting up container software, enabling OTA updates and such, and can be a good learning project for some. Besides, the design is quite elegant and nice.

    The hardware is simple. There’s the Raspberry Pi — he’s got instructions on making it work with the Pi2, Pi2+, Pi3 or the Pi0. Since the Pis have limited audio capabilities, he’s using a DAC, the Adafruit I2S 3W Class D Amplifier Breakout for the MAX98357A, to drive the speaker. The I2S used by that part is Inter-IC Sound — a three-wire peer-to-peer audio bus — not to be confused with I2C. For some basic visual feedback, he’s added an 8×8 LED matrix with an I2C interface. A speaker rounds out the BOM. The enclosure is inspired by the Pimoroni PiBow, which is a stack of laser-cut MDF sheets. The case design went through four iterations, but the final result looks very polished.

    On the software side, the project uses Mopidy — a Python application that runs in a terminal or in the background on devices that have network connectivity and audio output. Out of the box, it is an MPD and HTTP server. Additional front-ends for controlling Mopidy can be installed from extensions, enabling Spotify, Soundcloud and Google Music support, for example. To allow over-the-air programming, [curcuz] is using resin.io which helps streamline management of devices that are hard to reach physically. The whole thing is containerized using Docker. Additional instructions on setting up all of the software and libraries are posted on his blog post, and the code is hosted on GitHub.

    There are a couple of “to-dos” on his list which would make this even more interesting. Synced audio is one: in a multi-device environment, the possibility to sync several units and have them reproduce the same audio. The other would be to add an Emoji and Equalizer display mode for the LED matrix. Let [curcuz] know if you have any suggestions.


    Filed under: digital audio hacks, Raspberry Pi

    by Anool Mahidharia at December 22, 2016 12:00 AM

    December 21, 2016

    digital audio hacks – Hackaday

    I Think I Failed. Yes, I Failed.

    Down the rabbit hole you go.

    In my particular case I am testing a new output matching transformer design for an audio preamplifier and using one of my go-to driver circuit designs. Very stable, and very reliable. Whack it together and off you go to test-and-measurement land without a care in the world. This particular transformer is designed to be driven by a class A amplifier operating at 48 volts in a pro audio setting, the sort where you turn the knobs with your pinky in the air. Extra points if you can find some sort of long-out-of-production parts to throw in there for audiophile cred, and I want some of that.

    Let’s use some cool retro transistors! I merrily go along for hours designing away. Carefully balancing the current of the long-tailed pair input. Picking just the right collector power resistor and capacitor value to drive the transformer. Calculating the negative feedback circuit for proper low frequency cutoff and high frequency stability, and into the breadboard the parts go — jumper clips, meter probes, and test leads abound — a truly joyful event.

    All of the voltages check out, frequency response is what you would expect, and a slight tweak to the feedback loop brought everything right into happiness. Time to fire up the trusty old HP 334A Distortion Analyzer. Those old machines require you to calibrate the input circuit and the voltmeter, tune a filter to the fundamental frequency you are applying to the device under test, and step down to lower and lower orders of distortion levels until the meter happily sits somewhere in the middle of a range.

    Most modern circuits, even in cheap products, go right down to sub-0.1% total harmonic distortion levels without even a thought, and I expected this to be much the same. The look of horror must have been pronounced on my face when the distortion level of my precious circuit was something more akin to a clock radio! A frantic search began. Was it a bad jumper, or a dirty lead in the breadboard, or an unseated component? Was my function generator in some state of disrepair? Is the Stephen King story Maximum Overdrive coming true, and is my bench going to eat me alive? All distinct possibilities in this state of panic.

    After a little break, as the panic and need to find an exact singular problem began to fade I realized something. It was doing exactly what it was supposed to be doing.

    The input part of choice in this case is a mostly forgotten ’60s Hitachi PNP silicon part, the 2SA565, in a (here comes the audiophile cred as we speak) TO-1 package with the long leads so perfect for point-to-point assembly. (More on this aspect another time.) After all, these parts adorned the audio stages of countless Japanese radios and such. A PNP small-signal BJT is as good as any, right? Also, these surplus store caps and resistors are perfectly good. They all measure out ‘good’ on the meter, after all. These jumper leads and meter probes are Pomona. Best you can get. No worries there. And on and on the excuses and rationalizations come.

    By this point no amount of optimism or delusion could really help. The grown-up hiding inside my head spoke up and the truth was obvious. How could a pile of old noisy parts and wiring more like spaghetti than a proper electronic device do any better? I am trying to reach orbit with a bottle rocket of my own design. I lost perspective. So eager was I to test my new widget that I completely neglected good scientific practice, on the faith that previous experience could guide me through a lack of proper setup and experimental control. Just the crosstalk on those jumpers and probes could account for this problem, not to mention noisy, out-of-spec old parts.

    It could well be impossible to ever find all of the possible causes. I built failure in from the start, just for the sake of having something that used parts nutcase audiophiles would find more visually appealing. I had better go find out where I lost my integrity on this one. Perhaps I set it down with my wallet and keys when I got home from work today. I think I will go clean my bench and lay out a PCB with new, modern components so I can actually get this test done.

    This is a standard long-tailed pair input circuit used in most linear audio designs. A very handy thing to be familiar with, as it is extremely linear and adaptable. Shown here in its standard audio configuration, including high-frequency shelving for stability and a low-pass filter for DC drift reduction.


    Filed under: digital audio hacks, Featured, Original Art

    by Charles Alexanian at December 21, 2016 06:00 PM

    Libre Music Production - Articles, Tutorials and News

    LSP Plugins anniversary release 1.0.18

    Vladimir Sadovnikov has just released version 1.0.18 of his audio plugin suite, LSP plugins. This release celebrates one year since LSP plugins 1.0.0 release. All LSP plugins are available in LADSPA, LV2, LinuxVST and standalone JACK formats.

    This release includes the following new plugins and changes -

    by Conor at December 21, 2016 12:56 PM

    MOD Devices Blog

    MODx meetup

    Greetings, music lovers!

    Things have been getting exciting at MOD headquarters in Berlin lately - we’ve been hard at work getting MOD Duos into the hands of musicians all around the world, and also hard at work implementing new features, fixes & effects for the musicians who are jamming on their MOD Duos already. In amongst all that we got to meet with some of you at our first MODx user community meetup, right here in Berlin!

    Hosted at Neukölln’s amazing music tech co-working, studio & event space Noize Fabrik, and organised and promoted with our friends Musik Hackspace Berlin, we held a free workshop on creative effects processing with over 30 musicians from a range of different backgrounds. From cellists & guitarists to MPC beat producers & Max MSP nerds, it seemed like there was a little bit of everything, so some very interesting jam sessions ensued.

    Since we were experimenting with things like re-ordering effects & processors, trying the same effects at different points in a signal chain, and sidechaining or otherwise combining different sound sources or using one sound to modulate another, the Duo’s web browser GUI made it super easy to demonstrate the principles we were working with and hear the changes immediately. It was great to see all of the different effects setups being dreamed up & implemented by such a varied group of musicians. I have a background in organising & hosting music technology hackathons, and seeing what happens when a group of artists, musicians & technologists are gathered in a room with the right technology in their hands never ceases to amaze me. Surrounded by a group of people creating amazing sounds with the Duo, I had a realisation: much like with the Arduino and Raspberry Pi, the most powerful part of the MOD Duo is not the hardware or the software - it is the community of people who use and develop for it. Open source devices for producing or processing sounds are now in the hands of more musicians than ever before, and by sharing our creations, learning and collaborating with each other, we become part of an ecosystem that fosters creativity and equips each of us with more tools to realise our musical ideas.

    Anyone who knows me outside of what I’ve been up to at MOD may know already that I am a huge nerd for Cycling74’s Max/MSP, and one thing we’ve been working on at MOD lately is the ability to compile Gen~ code from inside Max into LV2 plugins that can be used on the Duo! The exciting possibilities offered by this integration encouraged me to develop my own skills, and saw me delving further into DSP design than I’ve ever gone before, resulting in the creation of my first LV2 plugin. I can’t wait to see where this new rabbit-hole goes…

    We’ll be looking at how to make our own plugins from Max MSP Gen~ code in a workshop at our next MODx meetup, when MOD’s in-house Linux audio guru FalkTX will show us the ropes. For anyone with a bit of existing knowledge on Max MSP it’ll be a great way to see how you can expand your existing skillset with the ability to create VST and LV2 plugins without having to learn a whole new development platform, and if you’re a MOD Duo user it’ll equip you with the knowledge you need to take the creative freedom you’ve found in Max and put it under your feet with the Duo. I envisage some mind-bending Max-infused guitar effects featuring at my next gig…

    So stay tuned for more info about the next MODx meetup in Berlin, we would love to see you there! If you can’t make it, never fear - The MOD community is everywhere! Why not come and say hi on our forum or join the MOD Duo user group on Facebook? The conversations we have with our community at events as well as via the forum or social media shape the way we work at MOD. If you have dream features or ideas, requests for future development, or need some help with something you’ve developed yourself then we would love to hear from you or see you at one of our future events.

    Oh and finally, if you’ve not joined us yet, what are you waiting for? This is the stomp box revolution. Musicians of all kinds, empowering ourselves with open source technology to change the way we play and perform music. Buy a MOD Duo at moddevices.com and join the revolution today.

    That’s all from me. Happy holidays from the team at MOD. You’ll hear more from us after Christmas, and in the meantime keep making music, keep loving life & keep enjoying your MOD Duo!

    • Adam @ MOD HQ

    December 21, 2016 05:20 AM

    December 20, 2016

    open-source – CDM Create Digital Music

    Spaceship Delay is an insane free plug-in inspired by hardware

    Spaceship Delay is a free modeling plug-in for Mac and Windows with some wild effects. And it’s made possible partly thanks to the openness of hardware from KORG (and us). The plug-in itself you shouldn’t miss, and if you’re interested in how it’s made, there’s a story there, too.

    First, the plug-in — it’s really cool, and really out there, not so much a tame modeling effect as a crazy bundle of extreme sonic possibilities. In fact, it’s as much a multi-effects processor as it is a delay.

    Here it is in action, just quickly applying some of the sounds to a drum loop (and making use of its “German/Canadian” MeeBlip filter model):

    There are tons of extras packed in there, and the unruly quality of it to me is part of the appeal. (I’m planning on making something with this one, absolutely.)

    You get three delay modes: single, ping pong, and dual/stereo, plus:

    • Delay time, time subdivisions, tap tempo, sync
    • Feedback
    • Modulation
    • Attack control for triggering via dynamic
    • Modeled “spring” tape reverb based on the Dynacord Echocord Super 76
    • Bitcrusher
    • Tube preamp
    • Vintage phaser
    • Modeled synth filters from the KORG MS-20 and MeeBlip anode/triode
    • Monotron delay-inspired delay
    • Freeze switch (opening up use as a looper)
    • Loads of presets

    Plus there’s extensive online help to assist you in navigating all these choices. And I totally read it. Really. No, okay, I didn’t, I just played with the knobs. But I did have a look, and it looks nice.

    VST, AU, AAX formats
    32-bit, 64-bit
    macOS, Windows

    There are a lot of possibilities here, from subtle to experimental, useful for pretty much anything from drums to vocals to synth to guitar.

    But the story behind the modeling is also fascinating. Creator Dr. Ivan Cohen has delved deep into the theory of modeling, and has been writing about the process on his blog. It’s definitely of interest to developers, but makes a good read for anyone curious about vintage and new hardware and different designs of filters and the like. (No doctorate in DSP required.)

    Open designs have long been a part of the history of electronic music technology. In the analog days, it was fairly typical to publish circuit designs. These were ostensibly for purposes of repair, but naturally people read and learned from them and produced modified versions of their own. Then along came digital tech, and much of the creativity of the business disappeared into the black boxes of chips – not only to protect intellectual property, but because of the nature of the chips themselves.

    Now, we’ve come full circle. Researchers discuss design and modeling in academic circles. And fueled by online communities interested in hacking and the open source movement, hardware makers increasingly share their designs. That’s included KORG publishing MS-20 filter circuits and encouraging modifications, and of course our own MeeBlip project.

    What’s cool is that Ivan has used that openness to learn from these designs and try his own implementations, all in a context we never envisioned. So you can apply something inspired by the MeeBlip and Korg filters in a new digital environment.

    The vintage Super 76 inspired the tape delay model – and is reason alone to take a look at this plug-in.

    Not all of the modeling is perfect yet. But that’s fun, too, as you get some weird and unexpected effects.

    Here’s his story on the original version:

    https://musicalentropy.github.io/Spaceship-Delay/

    And an in-depth discussion of why he used these filters and what inspired him:

    The filters in Spaceship Delay

    Grab the plug-in here, and consider casting your vote in the KVR Developer Challenge:

    http://www.kvraudio.com/product/spaceship-delay-by-musical-entropy/details

    Totally free as in beer.

    The post Spaceship Delay is an insane free plug-in inspired by hardware appeared first on CDM Create Digital Music.

    by Peter Kirn at December 20, 2016 05:53 PM

    December 19, 2016

    Linux – CDM Create Digital Music

    Here’s how MOTU says they’re improving latency on their new interfaces

    You’d be forgiven for not noticing, but the top audio interfaces are one of the things that have been steadily getting better. That is, the handful of makers really focused on serving musicians (and other audio and audiovisual applications) have improved interface quality, added a lot of features and connectivity, and improved driver performance.

    MOTU is one of those makers on a short list that I hear good experiences with. But this fall, when a press release crossed my desk saying they had improved low-latency performance, I wanted a bit more detail than the marketing language was offering. So I spoke to MOTU’s Jim Cooper to clarify a bit.

    I know a lot of MOTU boxes are out there in the wild among our CDM readers, so I’d love to hear from those of you using them. (And I don’t want to just favor one vendor – I’d be happy to repeat this conversation with others, as these are the sort of chats I get to have with manufacturers, and it’s nice to be able to share them.)

    TL/DR version: MOTU will give you lower round-trip latency on their latest boxes.

    Also, some quick notes about what makes the UltraLite mk4 nice:

    • iOS, Linux. It now supports USB class-compliant operation, so you can use it with iOS (or even Linux, in fact, even though MOTU don’t mention that).
    • Browser mixing. You can access a 48-channel mixer in your Web browser, meaning this does double duty as a mixer – and your computer becomes the interface.
    • Any input, any output. You can route signals in a customizable router, so any input can go to any output.
    • Quality! MOTU has put in what they say are “super high quality” converters; certainly, my research says you should have some good results.

    CDM: Can you go into some detail on the new low latency drivers for the UltraLite?

    Sure! Our new low-latency drivers were years in development. These drivers (and the firmware in the hardware, too) are still actively tweaked and optimized, and we regularly release driver updates to further improve performance.

    Which hardware is supported? I know MOTU has an integrated driver model, so that means you should see these benefits across the line?

    The low latency drivers for the UltraLite-mk4 are for all audio interfaces in our new generation “Pro Audio” family. This covers the latest releases of UltraLite-mk4, the new 624 and 8A interfaces we announced last week, and all MOTU AVB/TSN capable hardware (UltraLite AVB, 1248, 16A, 8M etc.)

    What did you do from a technical standpoint to make this work?

    The short answer is…we started from scratch, spent a lot of time optimizing, looking at profilers, and optimizing some more. We have learned a lot from our 20 years of writing audio drivers and making audio interfaces. Starting from scratch meant that we could fully capitalize on those lessons learned. At the same time, operating systems have improved along with computer hardware. We can now count on machines having multiple cores and supporting Intel-intrinsic (SSE) operations, which helps a lot.

    Okay, this is the one I’m most keen to know: how does performance compare on Windows versus macOS?

    It depends on the machine and the software being used. Let’s assume most people have a decent, healthy computer and that we’re talking about USB.

    For latency performance, we expect both platforms to perform well. Both should be able to do under 3 ms patch-thru or better. That’s like having your head about three feet further from an audio source.

    For CPU performance, it’s mostly negligible on both platforms. The lower your buffer size, the more CPU we use, which has always been the case. In Windows this is generally more true, so there will be a minor difference between platforms.

    We want to mention that when connected via Thunderbolt, performance is a little better (for both Mac and Windows). Thunderbolt is also slightly more efficient with regard to CPU usage. But the main point is, with these new drivers, USB holds up remarkably well in comparison to Thunderbolt, given common industry perceptions.

    Yeah, I’m currently spec’ing out PCs with Thunderbolt on. There have been some under-the-hood improvements I know to Windows audio lately. Any you would comment on, or that have implications for your projects?

    Which improvements are you referring to? Since Vista they’ve had the MMCSS API, which gives DAWs a way to prioritize audio threads over most of the system, which really helps. That helps ASIO drivers quite a bit, too. Kernel drivers still have the limitations of poor timer accuracy and DPC scheduling, which make it more difficult to deliver audio buffers. But we have found ways to address those issues and deliver extremely solid performance.

    Ed.: Well, we’re a bit behind, honestly, in tracking Windows changes. I hope to remedy that soon. If you found Vista annoying and PC hardware options lacking back then, the changes we reported on long ago are now part of an OS that’s friendlier and more mature, and I think PC hardware has improved, too. I know there have been some other efforts on Windows audio that we need to keep up to date on. And meanwhile on the Mac side, Sierra has fixed some things, too.

    What should users of the UltraLite mk4 expect in real world usage?

    A generational improvement in both the driver performance and the overall features and performance of the hardware. On today’s absolute fastest computers, we can achieve full, round-trip monitoring with RTL as low as 1.6 ms with a 32 sample buffer setting at 96 kHz. If you’re running a bunch of effects and tracks, then it’s probably a good idea to bump that up a bit. But even on a good machine (like what most of us have), you can easily achieve 3-4 ms RTL under most practical situations these days.

    Thanks, Jim.
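
    Those figures are easy to sanity-check. A quick sketch, assuming roughly 343 m/s for the speed of sound:

        speed_of_sound = 343.0                  # m/s
        print(0.003 * speed_of_sound)           # 3 ms of latency is about 1.03 m (~3.4 ft) of air
        print(32 / 96000 * 1000)                # one 32-sample buffer at 96 kHz is about 0.33 ms
        # so a 1.6 ms round trip leaves room for input and output buffering
        # plus the converters' own latency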

    Okay, so you can add low latency features to the other stuff that’s nice on the UltraLite.

    Meanwhile, MOTU’s 624 and 8A are shipping now. Interestingly, they include both USB3 and Thunderbolt. So if you need a mobile interface to swap between machines and not all of them have Thunderbolt, especially on Windows, you’ve got options. I would note that Thunderbolt is spreading fast on the PC, though.

    The big deal with the 624 and 8A is that you get 32-34 channels of audio I/O, the ESS Sabre32 DACs with 132 dB dynamic range, and networked capabilities via AVB. I’m guessing AVB isn’t so relevant to most CDM readers, but for those of you needing to combine audio across computers and interfaces, it’s hugely powerful.

    And like the other recent interfaces including the UltraLite, you get standalone mixing functionality you can access via any Web browser (even on mobile) on a WiFi network.

    There’s also a suite of analysis tools with FFT, oscilloscopes, and visual analyzers.

    The AVB stuff on the flagship offerings was nice, but I suspect these could be even bigger – well under a grand, and with I/O that fits a lot of needs.

    More:
    http://motu.com/

    The post Here’s how MOTU says they’re improving latency on their new interfaces appeared first on CDM Create Digital Music.

    by Peter Kirn at December 19, 2016 04:17 PM

    digital audio hacks – Hackaday

    Tape-Head Robot Listens to the Floor

    We were just starting to wonder exactly what we’re going to do with our old collection of cassette tapes, and then along comes art robotics to the rescue!

    Russian tech artist [::vtol::] came up with another unique device to make us smile. This time, it’s a small remote-controlled, two-wheeled robot. It could almost be a line follower, but instead of detecting the cassette tapes that criss-cross over the floor, it plays whatever it passes by, using two spring-mounted tape heads. Check it out in action in the video below.

    Some of the tapes are audiobooks by sci-fi author [Stanislaw Lem] (whom we recommend!), while others are just found tapes. Want to find out what’s on them? Just drive.

    We’ve featured [::vtol::]’s work before, which ranges from the conceptual, like this piece that broadcasts poetry in successive BSSIDs from what amounts to a cultured WiFi throwie, to the beautiful, like this visualization of brainwaves using ferrofluid and antifreeze.


    Filed under: digital audio hacks, misc hacks

    by Elliot Williams at December 19, 2016 06:01 AM

    December 18, 2016

    Scores of Beauty

    LilyPond’s Freedom

    Oops, I have to plead guilty to some vanity-Googling of my name in combination with “LilyPond”. OK, this is embarrassing, but on the bright side it revealed some blog posts I didn’t know about yet. And there’s one in particular that I want to recommend today, because it’s a post that actually should have appeared here a long time ago (and my mention in it is actually very minor). Joshua Nichols wrote a very interesting piece on software freedom, which I suggest you read here: https://joshdnichols.com/2015/11/16/why-i-love-lilypond-freedom/

    by Urs Liska at December 18, 2016 01:50 PM

    December 17, 2016

    The Penguin Producer

    Blender for the 80s: Wireframe Mesh

    In this article, we talk about the wireframe mesh used as the ground and mountains of many pieces of science fiction art in the 80s. One of the foundations of 80s art was based in computer graphics.  Computers of the time were not very beefy; it was not possible to …

    by Lampros Liontos at December 17, 2016 07:00 AM

    December 16, 2016

    open-source – CDM Create Digital Music

    FEEDBOXES are autonomous sound toys that play along with you

    We live in an age when we can jam along with machines as well as with humans. And maybe it’s about time that they fed us some clever grooves instead of, you know, fake news and stuff.

    Our friend Krzysztof Cybulski, of Warsaw-based panGenerator, shares his FEEDBOXES. They’re “autonomous” sound objects, capable of responding to audio inputs with perpetually-transforming responses.

    It’s all thanks to elegant use of feedback loops – meaning you can toy with these techniques yourself.

    Now that’s a better kind of echo chamber.

    It also makes use of the awesome, free PdDroidParty by Chris McCormick, which in turn is based on the free libpd library and Pure Data.

    It’s not the first time Krzysztof has built instruments around feedback, messing about in the panGenerator workshop for the joy of it. See his feedback synth, too:

    It’s worth checking out all that panGenerator are doing; they’re really one of the smartest and most imaginative interaction design shops at the moment, and representative of Poland’s brainpower at its finest.

    https://www.facebook.com/pangenerator/

    The post FEEDBOXES are autonomous sound toys that play along with you appeared first on CDM Create Digital Music.

    by Peter Kirn at December 16, 2016 06:02 PM

    December 11, 2016

    The Penguin Producer

    Discussion: 80s Design

    I am in my 40s.  I had my childhood and adolescence centered in the 1980s, and as a result, I have a liking for the art from that period.  Additionally, the artwork from that period is both visually stimulating and simple at the same time, allowing them to be the …

    by Lampros Liontos at December 11, 2016 04:57 PM

    December 09, 2016

    Linux – CDM Create Digital Music

    Ableton or FL Studio or Bitwig, Maschine Jam integrates with everything

    First, there was software – and mapping it manually to controllers. Then, there was integrated hardware made for specific software – but you practically needed a different device for each tool. Maschine Jam is a third wave: it’s deeply integrated with software workflows, but it can swap from one tool to another without having to change how you work.

    That’s possible because Maschine Jam is focused on some fairly specific workflows as far as triggering patterns, creating melodies and rhythms, and controlling parameters. The “jam” part is really focused on live control. So it’s not quite about deep sample editing and studio production like Ableton Push or Maschine Studio, but it is then adaptable to lots of other contexts.

    In short, even if you keep your beloved Push in the studio, Maschine Jam wants to be the lightweight live gigging controller you toss in your backpack.

    And it doesn’t necessarily force you to choose a particular tool. Even if you never touch Maschine, it’s now a reasonable controller for Ableton Live, FL Studio, and Bitwig Studio in its own right. And significantly, if you do use Maschine, you can now switch between working with Maschine and your DAW of choice, and the control mappings stay the same. (Of course, that may make you decide you want two Jams, but you get the picture.)

    I was already impressed by Maschine Jam’s Ableton Live integration. It’s not a Push, mind – there’s no velocity sensitivity, and you will sometimes miss the availability of displays on the hardware. (That means looking at the computer screen, which is part of what these controllers could free you from.) But it’s also lighter, boasts integrated touch strips for mixing and parameter control, and lots of quick workflow shortcuts that make it really handy playing live. When Gerhard first introduced Push, he talked about it as a way to start tracks. And it remains a powerful hardware window into the production process. But now I find Jam fits the rest of the picture: quick jam sessions and playing live.

    Oh yeah, and there’s the price: US$399 street, which of course includes Maschine and all the Komplete 11 Select features. That’s not a bad deal on the hardware controller alone, and it’s a stupidly good deal once you figure in that it gives you entry to all the software.

    But now a new update deepens the integration with Ableton Live, Max for Live, FL Studio, and Bitwig Studio, too, giving you a range of choices on Mac, Windows, and Linux.

    As other controllers attempting to be universal live controllers have faded into the background, Maschine Jam seems to realize the promise. Let’s look at how integration works in each.

    Why Maschine, Why Jam?

    If I had to show just one feature that explains how Jam is a bit different than Launchpad Push APC grid blah blah more grids blah blah….

    Well, it’s this. Maschine’s locking and morphing means that you can experiment with capturing and then transforming different settings. There are some especially deep possibilities here when you combine it with Reaktor Blocks, synth lovers.

    So before we start controlling other software, let’s have a look at that:

    Ableton Live, Max for Live

    Maschine Jam already works in Ableton Live for clip triggering and (crucially) mixing with fader strips. Clip triggering works exceptionally well, in fact: while NI’s grid lacks velocity sensitivity, the compact pads are ideal for this use case and deliver a responsive ‘snap’ when pressed. Device parameter control is there, too, though you may slightly miss having a screen for knowing which control is which.

    Here’s the basic Ableton integration. It’s very, very similar to what you get with Ableton Push – but now you can swap between working this way in Maschine and working this way in Ableton. And honestly, part of the appeal to me of Jam is that it does less – so there’s a limited set of stuff that you get really quick at.

    (In the very small tweaks department, the update also adds triplet access, finally.)

    Where things get interesting in today’s update is that now you’ve got a dedicated Max for Live template, too. That opens up lots of other clever features – or even locking the Jam to a Max patch whilst another controller does something else.

    Now, I know Ableton may be a bit squeamish about this being an Ableton controller that lacks their branding and collaboration. But as a user of Live since version 1, part of the ongoing appeal to me of this tool is its versatility and the ability to use a variety of hardware in different situations. So I do hope the Abletons warm up to what NI have done here.

    FL Studio

    Intrepid FL Studio users have hacked all sorts of smart ways of playing live over the years. Now, more recent versions of FL are really nicely equipped for live performance.

    And FL is really an ideal match for Jam. It has long had step sequencing as an integrated, native feature, and now combines the level of steps/notes with larger clips and patterns.

    It’s a really lovely environment. In fact, just … possibly mute the video you’re about to see, because while the music will appeal to someone, it sort of reinforces this idea that FL is just for certain music genres. It’s not. You can do anything you like. And FL’s architecture and efficiency I think are top notch.

    MIDI, Logic

    You can also use the MIDI template included with Maschine Jam to control software. It’s not nearly as deep as the other examples here, but it is interesting. Here’s an example with Apple Logic Pro:

    Bitwig Studio

    I’ve sort of saved the best for last. Bitwig benefit from having a new architecture rather than loads of ancient legacy code. And as a result, the environment hardware makers have for compatibility is really ideal.

    Native Instruments have partnered with Bitwig directly as I understand it in order to deliver a template with deep integration. The basic mold is what you get from Ableton – control Maschine, switch and control Bitwig, get pattern creation and sequencing and mixing and parameter control in each.

    But there are some subtle and important differences here.

    Fine fader control. The best one to me is this one – SHIFT gives you fine-adjustment on the touch strips for more precision, as in Jam.

    Note events light up on running patterns.

    Bitwig’s onscreen overlay works. That actually gets a bit confusing in Ableton Live, which lacks Maschine’s heads-up display. Actually, it’d be great if Live had this, for Max patchers and custom controllers.

    Global swing support. Again, as in Jam. That really adds to the hardware/groove feel of the integration, though.

    Switch projects from hardware. You had me at “switch projects.”

    Change drum machines using the built-in Bitwig drum machines when sequencing (via SELECT).

    SHIFT+SOLO to change pattern length.

    And this is definitely the best video, because it comes from Thavius Beck.

    More on this from our friends at AskAudio:


    Bitwig Studio 1.3.15 Adds Comprehensive Support For Maschine JAM

    You’ll want the latest version of Bitwig Studio. This being Bitwig, it’s even ready for Ubuntu.

    Bitwig Downloads

    The post Ableton or FL Studio or Bitwig, Maschine Jam integrates with everything appeared first on CDM Create Digital Music.

    by Peter Kirn at December 09, 2016 06:44 PM

    December 08, 2016

    OSM podcast

    December 07, 2016

    open-source – CDM Create Digital Music

    PushPull is a crazy futuristic squeezebox instrument you can make

    PushPull will blow apart your idea of what a typical controller – or an accordion – might be. It’s a bit like a squeezebox that fell from outer space, coupling bellows with colored lights, sensors, mics, and extra controls. And you can now make one yourself, thanks to copious documentation.

    You may have seen the instrument in action over the last couple of years – gasping in the dark.

    PushPull Balgerei 2014 from 3DMIN on Vimeo.

    But with more complete documentation, you get greater insight into how the thing was made – and you could even follow the instructions to make your own.

    Things you expect to see: bellows, valves, keys.

    Things you might not expect: RGB LEDs lighting up the instrument, six capacitive touch sensors, six-direction inertial sensing (for motion), microphones, rotary encoders.

    And many of the parts are fabricated via 3D printing. That combines with some more traditional techniques – yes, including cutting, folding, and gluing. It’s all under a permissive Creative Commons attribution license. (That’s a bit scant for open source hardware, actually, in that they might consider some other license, too. But it gets the job done.)

    It’s eminently hackable, too, with X-OSC messages sent wirelessly from its sensors, loads of moddable electronics, and recently even integration with Bela, the lovely low-latency embedded platform.
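
    To give a sense of what that hackability looks like in practice, here is a minimal sketch of listening to the sensor stream with the python-osc package. The port and the catch-all handler are placeholders rather than the instrument’s actual OSC layout:

        from pythonosc import dispatcher, osc_server

        def on_sensor(address, *values):
            # e.g. map a bellows-pressure reading onto a synth parameter here
            print(address, values)

        d = dispatcher.Dispatcher()
        d.set_default_handler(on_sensor)        # catch every incoming address

        # port 9000 is a placeholder; use whatever the board is configured to send to
        server = osc_server.BlockingOSCUDPServer(("0.0.0.0", 9000), d)
        server.serve_forever()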

    The project is the work of Amelie Hinrichsen, Till Bovermann, and Dominik Hildebrand Marques Lopes, who combine overlapping skills in art, product design, soundmaking, music, industrial engineering, and hardware and software engineering. PushPull itself is part of the innovative 3DMIN instrument design project in Berlin, a multi-organization project.

    Check out the instructions for more:

    http://3dmin.github.io/

    The post PushPull is a crazy futuristic squeezebox instrument you can make appeared first on CDM Create Digital Music.

    by Peter Kirn at December 07, 2016 04:22 PM

    MOD Devices Blog

    MOD Duo 1.2 update now available

    Hello again, music lovers! We’ve been having a great time recently at MOD. We hosted our first MODx meetup in Berlin, gathering existing members of the MOD Duo user community along with attendees of a Musik Hackspace workshop on the creative application of effects for music production & performance. It was great to see MOD Duos in the hands of so many talented & creative people, who used them to try out the different effects techniques discussed during the workshop. We also ended up enjoying some amazing impromptu jams spanning many different genres - it was a real treat! We’ll be hosting another MODx meetup in Berlin very soon, so if you want to be notified of future events, please join our mailing list at moddevices.com or the MOD Duo User Group on Facebook.

    We’re also very pleased to announce the release of software update 1.2 for your Duo. Check out some of the amazing new features we’ve added:

    • Favorites
      There are now so many pedals & plugins available for the Duo that it was starting to take some time to find those favorite ones which you re-use in lots of your amazing pedalboard creations. Not any more! You can now mark any plugin as a favorite and have all of those appear in a single category. Mein Lieblings!

    • Tap Tempo
      You can now assign a control to tap tempo! There are now a bunch of pedals in the Delay, Modulator, Spatial & Generator categories which support the new tap tempo feature, and I’m sure more & more will start to integrate it (there’s a rough sketch of the basic idea just after this list). Auf Tempo!

    • Zeroconf support
      Zeroconf support (also known as “Bonjour”) means you can now connect to your MOD using http://modduo.local instead of using the IP address. Null-Konfiguration!

    • Custom ranges for MIDI CCs
      Have you ever found that you wanted finer control over a smaller range of one of your pedal parameters when using a MIDI controller? Well, worry no longer! You can now set custom ranges when using the MIDI learn function. Benutzerdefinierte!

    • Several minor web interface changes
      You’ll also notice a few changes to the Duo’s web interface. Glänzend und neu!
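
    Incidentally, here’s a rough idea of what a tap tempo control typically does under the hood – this is just an illustrative Python sketch, not MOD code: keep timestamps of the last few taps, average the intervals between them, and turn that average into BPM.

    import time

    class TapTempo:
        """Rough tap-tempo estimator: average the gaps between recent taps."""

        def __init__(self, max_taps=4, timeout=2.0):
            self.max_taps = max_taps  # how many recent taps to average over
            self.timeout = timeout    # seconds of silence before the tap history resets
            self.taps = []

        def tap(self, now=None):
            """Register one tap; return the estimated BPM, or None if too few taps so far."""
            now = time.monotonic() if now is None else now
            if self.taps and now - self.taps[-1] > self.timeout:
                self.taps = []        # too long since the last tap: start over
            self.taps.append(now)
            self.taps = self.taps[-self.max_taps:]
            if len(self.taps) < 2:
                return None
            intervals = [b - a for a, b in zip(self.taps, self.taps[1:])]
            return 60.0 / (sum(intervals) / len(intervals))  # seconds per beat -> BPM

    # Four taps half a second apart work out to 120 BPM.
    tt = TapTempo()
    for i in range(4):
        print(tt.tap(now=i * 0.5))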

    For the changelog and discussion about the update, as well as more detailed information on the features mentioned above, please see this post on the MOD Forum. The next time you open the MOD web interface you’ll receive an update notification, and the update process is simple to initiate.

    As always, please get in touch if you have any issues, and in the meantime keep making music, keep loving life & keep enjoying your MOD Duo!

    “Alles ist SUPER” - Adam @ MOD HQ

    December 07, 2016 05:20 AM

    December 06, 2016

    Pid Eins

    Avoiding CVE-2016-8655 with systemd

    Just a quick note: on recent versions of systemd it is relatively easy to block the vulnerability described in CVE-2016-8655 for individual services.

    Since systemd release v211 there's an option RestrictAddressFamilies= for service unit files which takes away the right to create sockets of specific address families for processes of the service. In your unit file, add RestrictAddressFamilies=~AF_PACKET to the [Service] section to make AF_PACKET unavailable to it (i.e. a blacklist), which is sufficient to close the attack path. Safer, of course, is a whitelist of address families, which you can define by dropping the ~ character from the assignment. Here's a trivial example:

    …
    [Service]
    ExecStart=/usr/bin/mydaemon
    RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
    …
    

    This restricts access to socket families, so that the service may access only AF_INET, AF_INET6 or AF_UNIX sockets, which is usually the right, minimal set for most system daemons. (AF_INET is the low-level name for the IPv4 address family, AF_INET6 for the IPv6 address family, and AF_UNIX for local UNIX socket IPC).

    Starting with systemd v232 we added RestrictAddressFamilies= to all of systemd's own unit files, always with the minimal appropriate set of socket address families.

    With the upcoming v233 release we'll provide a second method for blocking this vulnerability. Using RestrictNamespaces= it is possible to limit which types of Linux namespaces a service may get access to. Use RestrictNamespaces=yes to prohibit access to any kind of namespace, or set RestrictNamespaces=net ipc (or similar) to restrict access to a specific set (in this case: network and IPC namespaces). Given that user namespaces have been a major source of security vulnerabilities in the past months, it's probably a good idea to block namespaces on all services which don't need them (which is probably most of them).
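
    To make this concrete, here's a minimal sketch of how the two options might be combined in a single unit file, extending the trivial example above (same placeholder daemon; note that RestrictNamespaces= needs v233 or newer):

    …
    [Service]
    ExecStart=/usr/bin/mydaemon
    RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
    RestrictNamespaces=yes
    …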

    Of course, ideally, distributions such as Fedora, as well as upstream developers, would turn on the various sandboxing settings systemd provides, like these, by default, since they know best which kind of address families or namespaces a specific daemon needs.

    by Lennart Poettering at December 06, 2016 11:00 PM

    December 04, 2016

    The Penguin Producer

    Where’s Walldo?

    A lot of what makes professional-looking video is not in the editing; it’s in the recording. There are several shooting techniques that can help you capture more professional-looking footage for your project. To make things clearer, let’s start with a video describing the basic shots …

    by Lampros Liontos at December 04, 2016 01:57 AM

    December 01, 2016

    ardour

    Ardour 5.5 released

    Ardour 5.5 is now available, with a variety of new features and many notable and not-so-notable fixes. Among the notable new features are support for VST 2.4 plugins on OS X, the ability to have MIDI input follow MIDI track selection, support for Steinberg CC121, Avid Artist & Artist Mix Control surfaces, "fanning out" of instrument outputs to new tracks/busses and the often requested ability to do horizontal zoom via vertical dragging on the rulers. There are also the usual always-ongoing improvements to scripting and OSC support.

    As in the past, some features, including OS X VST support, Instrument Fanout, and Avid Artist support, were made possible by sponsorship from Harrison Consoles.

    Download  

    Read more below ...

    by paul at December 01, 2016 11:43 AM

    November 30, 2016

    GStreamer News

    GStreamer 1.10.2 stable release (binaries)

    Pre-built binary images of the 1.10.2 stable release of GStreamer are now available for Windows (32/64-bit), iOS, Mac OS X, and Android.

    See /releases/1.10/ for the full list of changes.

    The builds are available for download from: Android, iOS, Mac OS X and Windows.

    November 30, 2016 05:45 PM

    November 29, 2016

    GStreamer News

    GStreamer 1.10.2 stable release

    The GStreamer team is pleased to announce the second bugfix release in the stable 1.10 release series of your favourite cross-platform multimedia framework!

    This release only contains bugfixes and it should be safe to update from 1.10.0. For a full list of bugfixes see Bugzilla.

    See /releases/1.10/ for the full release notes.

    Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

    Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

    November 29, 2016 02:00 PM

    Revamped documentation and gstreamer.com switch-off

    The GStreamer project is pleased to announce its new revamped documentation featuring a new design, a new navigation bar, search functionality, source code syntax highlighting as well as new tutorials and documentation about how to use GStreamer on Android, iOS, macOS and Windows.

    It now contains the former gstreamer.com SDK tutorials which have kindly been made available by Fluendo & Collabora under a Creative Commons license. The tutorials have been reviewed and updated for GStreamer 1.x. The old gstreamer.com site will be shut down with redirects pointing to the updated tutorials and the official GStreamer website.

    Thanks to everyone who helped make this happen.

    This is just the beginning. Our goal is to provide a more cohesive documentation experience for our users going forward. To that end, we have converted most of our documentation into markdown format. This should hopefully make it easier for developers and contributors to create new documentation, and to maintain the existing one. There is a lot more work to do, so do get in touch if you want to help out. The documentation is maintained in the new gst-docs module.

    If you encounter any problems or spot any omissions or outdated content in the new documentation, please file a bug in bugzilla to let us know.

    November 29, 2016 12:00 PM

    PipeManMusic

    Stay In Bed For Christmas

    So I've recorded a little Christmas tune for those who are over the hype. I hope you like it. Check it out, share it, buy it – I'd really appreciate it.




    by Daniel Worth (noreply@blogger.com) at November 29, 2016 07:05 AM

    November 28, 2016

    open-source – CDM Create Digital Music

    A call for emotion in musical inventions, at Berlin hacklab

    Moving beyond stale means of framing questions about musical interface or technological invention, we’ve got a serious case of the feels.

    For this year’s installment of the MusicMakers Hacklab we host with CTM Festival in Berlin, we look to the role of emotion in music and performance. And that means we’re calling on not just coders or engineers, not just musicians and performers, but psychologists and neuroscientists and more, too.

    The MusicMakers Hacklab I was lucky enough to found has now been running with multiple hosts and in multiple countries, bringing together artists and makers of all stripes to experiment with new performances. The format is this: get everyone together in a room, and insist on people devising new ideas and working collaboratively. Then, over the course of a week, turn those ideas into performances and put those performances in front of an audience.

    This year, we hope talks and performances will tackle this issue of emotion in some new ways – the embodiment of feeling and mind in the work. It comes hot on the heels of working in Mexico City with arts collective Interspecifics and MUTEK Festival, in collaboration with CTM. (Leslie García has been instrumental in collaborating and bringing the event to Mexico.)

    The open call to come to Berlin is available for submissions through late Wednesday. If you can make it at the beginning of February, you can soak up all CTM Festival has to offer and make something new.

    The theme:

    Now that our sense of self is intertwined with technology, what can we say about our relationship with those objects beyond the rational? The phrase “expression” is commonly associated with musical technology, but what is being expressed, and how? In the 2017 Hacklab, participants will explore the irrational and non-rational, the sense of mind as more than simply computer, delving into the deeper frontiers of our own human wetware.

    Building on 2016’s venture into the rituals of music technology, we will encourage social and interpersonal dynamics of our musical creations. We invite new ideas about how musical performance and interaction evoke feelings, and how they might realize emotional needs.

    I’m really eager to share how we bring music psychology and cognition into the discussion, too, so stay tuned.

    And I think that’s part of the point. Skills with code and wires are great, but they’re just part of the picture. Everything you can bring in performance technique, in making stuff, in ideas – this is all part of the technology of music, too. We have to keep pushing beyond our own comfortable skills, keep drawing connections between media, if we want to move forward.

    Berlin native Byrke Lou joins us and brings her own background in performance and inter-disciplinary community, which makes me still more excited.

    Full description and application form link:

    MusicMakers Hacklab:
    Emotional Invention. In collaboration with CDM, Native Instruments and the SHAPE Platform.

    The post A call for emotion in musical inventions, at Berlin hacklab appeared first on CDM Create Digital Music.

    by Peter Kirn at November 28, 2016 08:05 PM