planet.linuxaudio.org

August 19, 2017

open-source – CDM Create Digital Music

Here are some of our favorite MeeBlip triode synth jams

We say “play” music for a reason – synths are meant to be fun. So here are our favorite live jams from the MeeBlip community, with our triode synth.

And, of course, whether you’re a beginner or more advanced, this can give you some inspiration for how to set up a live rig – or give you some idea of what triode sounds like if you don’t know already. We picked just a few of our favorites, but if we missed you, let us know! (audio or video welcome!)

First, Olivier Ozoux has churned out some amazing jam sessions with the triode, from unboxing to studio. (He also disassembled our fully-assembled unit to show the innards.)

The amazing Gustavo Bravetti is always full of virtuosity playing live; here, that distinctive triode sound cuts through a table full of gear. Details:

Again ARTURIA’s BeatStep Pro in charge of randomness (accessory percussion and subtle TB303). Practically all sounds generated on the black boxes, thanks Elektron, and last but not least MeeBlip’s [triode] as supporting melody synth. Advanced controls from Push and Launch Control using Performer, made with Max by Cycling ’74.

Here’s a triode with the Elektron Octatrack as sequencer, plus a Moog Minitaur and Elektron Analog RYTM. That user also walks through the wavetable sounds packed into the triode for extra sonic variety.

Novation’s Circuit and MeeBlip triode pair for an incredible, low power, low cost, ultra-portable, all-in-one rig. We get not one but two examples of that combo, thanks to Pete Mitchell Music and Ken Shorley. It’s like peanut butter and chocolate:

One nice thing about triode: that sub oscillator can fatten up and round out the single oscillator of a 303. We teamed up with Roland’s Nick de Friez when the lovely little TB-03 came out to show how these two can work together. Just route the distinctive 303-style sequencer to triode’s MIDI in, and have some fun:

Here’s triode as the heart of a rig with KORG’s volca series (percussion) and Roland’s TB-03 (acid bass) – adding some extra bottom. Thank you, Steven Archer, for your hopeful machines:

Get yours:
http://meeblip.com

The post Here are some of our favorite MeeBlip triode synth jams appeared first on CDM Create Digital Music.

by Peter Kirn at August 19, 2017 02:09 PM

August 16, 2017

ardour

Ardour 5.11 released

We are pleased to announce the availability of Ardour 5.11. Like 5.10, this is primarily a bug-fix release, though it also includes VCA automation graphical editing, a new template management dialog and various other useful new features.

Download  

Read more below for the full list of features, improvements and fixes.


by paul at August 16, 2017 06:32 PM

digital audio hacks – Hackaday

The Best Stereo Valve Amp In The World

There are few greater follies in the world of electronics than that of an electronic engineering student who has just discovered the world of hi-fi audio. I was once that electronic engineering student and here follows a tale of one of my follies. One that incidentally taught me a lot about my craft, and I am thankful to say at least did not cost me much money.

Construction more suited to 1962 than 1992.

It must have been some time in the winter of 1991/92, and being immersed in student radio and sound-and-light I was party to an intense hi-fi arms race among the similarly afflicted. Some of my friends had rich parents or jobs on the side and could thus afford shiny amplifiers and the like, but I had neither of those and an elderly Mini to support. My only option therefore was to get creative and build my own. And since the ultimate object of audio desire a quarter century ago was a valve (tube) amp, that was what I decided to tackle.

Nowadays, building a valve amp is a surprisingly straightforward process, as there are many online suppliers who will sell you a kit of parts from the other side of the world. Transformer manufacturers produce readily available products for your HT supply and your audio output matching, so to a certain extent your choice of amp is simply a case of picking your preferred circuit and assembling it. Back then, however, the world of electronics had extricated itself from the world of valves a couple of decades earlier, so getting your hands on the components was something of a challenge. I sidestepped the power supply problem by using a scrap Dymar Electronics instrument enclosure which had built-in HT and heater rails ready to go, but finding transformers and high-voltage capacitors was another matter.

Pulling the amplifier out of storage in 2017, I’m going in blind. I remember roughly what I did, but the details have been obscured by decades of other concerns. So in an odd meeting with my barely-adult self, it’s time to take a look at what I made. Where did I get it right, and just how badly did I get it wrong?

Lovingly hand-drawn from life, missing the PSU components.

The amp itself sits in the removable portion of the Dymar chassis; I can’t remember what the dead instrument was, but Dymar produced a range of instruments as modules for a backplane. The front panel is a piece of sheet steel I cut myself, still painted in British Leyland Champagne Beige, the colour of that elderly Mini. It has a volume control, a DIN input socket that must have seemed cool only to me in 1992, and a Post Office Telephones terminal block for the speakers. Inside the chassis the amp is mounted on a piece of aluminium sheet: on top, a pair of PCL86 triode/pentode valves, a pair of output transformers and a supply smoothing capacitor; underneath, all the smaller components on tag strips. Though I say it myself, it’s a tidier job than I remember.

1969’s hot new device, already obsolete by 1980.

The circuit is simple enough, a single-ended Class A audio amplifier that I lifted, along with the PCL86 and the original output transformers, from a commonly available (at the time) scrap ITT TV set. These triode/pentodes were the integrated amplifier device of their day, as ubiquitous as an LM386 in later decades, containing a triode as preamplifier and a power output pentode, and capable of delivering a few watts of audio at reasonable quality with very few external components. They were also dirt cheap, the “P” signifying a 300mA series heater chain as used in TV sets, considerably less desirable than the “E” versions with their standard 6.3V heaters. Not a problem for me, as the Dymar PSU had a 12V rail that could happily supply the 300mA heater current for each of a couple of PCL86s.

My choice of parts must have been limited to those my university’s RS trade counter had in stock with the required working voltage, and they are a mixed bag you wouldn’t remotely class as audio grade. There are a couple of enormous 450V 33μF electrolytics, and 250VAC Class Y 0.1μF polymer capacitors intended for use in power supply filters. I seem to have followed the idea of using a small and a large capacitor in parallel, probably for some youthful hi-fi mumbo-jumbo idea about frequency response. Otherwise the resistors look like carbon film components, something that probably made more sense to me in the early 1990s than it does now.

On top of the chassis, the original transformers taken from scrap TV sets turned out to be of such low quality that they tended to “sing” at any kind of volume, so I shelled out on a pair of the only valve audio output transformers I could find at the time, something that must have been a relic of a bygone era in the RS catalogue. The original valves were a pair of PCL86s from old TVs, but I replaced them with a “matched” pair of brand new PCL86s. I remember these cost me 50p (about 90¢ in ’92) each at a radio rally, and were made in Yugoslavia with a date code of January 1980. The new valves didn’t make any difference, but they made me feel better.

How did this amplifier perform, and what did I learn from it?

Under the hood, and it’s all a bit messy.

In the first instance, it performed 110%, because I had a valve amp and nobody else did. The air of mystique surrounding this rarest of audio devices neatly sidestepped the fact that it wasn’t the best of valve amps, but that didn’t matter. Being a Class A amplifier with new components, it came to the party with the lowest theoretical distortion its circuit topology could offer. Another area of shameless bragging rights for my younger self, but in reality all it meant was that it got hot.

The sound at first power-on was crisp and sibilant, but with an obvious frequency response problem: it was bass-to-mid heavy, and not in a good way. Here was my first learning opportunity: I had just received an object lesson in real audio transformers not behaving like theoretical audio transformers. It had an impressive impulse response though; square waves came through it beautifully square on my battered old ’scope.

I could only go so far listening to a hi-fi that might have been a little fi but certainly wasn’t hi. My attention turned to that frequency response problem, and since we’d just been through the series of lectures dealing with negative feedback, I considered myself an expert in such matters who could fix it with ease. I cured the frequency response hump with a feedback resistor from output to input, playing around with values until I lit upon 330K as about right.
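As a reminder of the theory involved (this is the textbook relation, not anything specific to this amplifier): with open-loop gain $A$ and feedback fraction $\beta$, the closed-loop gain becomes

```latex
A_{\mathrm{cl}} = \frac{A}{1 + A\beta}
```

For large $A\beta$ this tends towards $1/\beta$, which is why a feedback resistor flattens gain peaks so effectively – while quietly reshaping the phase response at the same time.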

The Best Stereo Valve Amp In The World. Yeah, right.

Here was my second learning experience. I’d made a pretty reasonable amplifier as it happens, and it sounded rather good through my junk-shop Wharfedale Linton speakers with cheap Maplin bass drivers. I could indulge my then-held taste in tedious rock music, and pretend that I’d reached a state of hi-fi Higher Being. But of course, I hadn’t. I’d got my flat frequency response, but I’d shot my phase response to hell, and thus my impulse response had all the timing of a British Rail local stopping service. The ’scope showed square waves would eventually get there, but oh boy did they take their time. The sound had an indefinable wooliness to it; it was clear as a bell, but the sibilance had gone. I came away knowing more about the complex and unexpected effects of audio circuitry than I ever expected to, and with an amp that still had some bragging rights, but not as the audio genius I had hoped I might be.

The amplifier saw me through my days as a student, and into my first couple of years in the wider world. Eventually the capacitor failed in the Dymar PSU, and I bought a Cambridge Audio amp that has served me ever since. The valve amp has sat forlornly on the shelf, a reminder of a past glory that maybe one day I’ll resuscitate. Perhaps I’ll give it a DSP board programmed to cure its faults. Fortunately I have other projects from my student days that have better stood the test of time.

So. There’s my youthful folly, and what I learned from it. How about you, are there any projects from your past that seemed a much better idea at the time than they do now?


Filed under: classic hacks, digital audio hacks, Hackaday Columns, Interest, Original Art

by Jenny List at August 16, 2017 05:01 PM

August 12, 2017

Libre Music Production - Articles, Tutorials and News

FLOSS music convention in Germany in November

On the 4th and 5th of November 2017 you can attend the Sonoj Convention in Cologne, Germany. Admission is free. You will be able to enjoy demonstrations, talks and workshops about music production with open source software. Hands-on tutorials and workflow presentations can be expected. The Sonoj Convention is a great opportunity to meet like-minded people, and maybe even to have engaging discussions! Everyone is welcome, whatever your musical or technological background.

by admin at August 12, 2017 08:25 PM

August 11, 2017

digital audio hacks – Hackaday

We Should Stop Here, It’s Bat Country!

[Roland Meertens] has a bat detector, or rather, he has a device that can record ultrasound – the type of sound that bats use to echolocate. What he wants is a bat detector. When he discovered bats living behind his house, he set to work creating a program that would use his recorder to detect when bats were around.

[Roland]’s workflow consists of breaking up a recording from his backyard into one-second clips, loading them into a Python program, and running some machine learning code to determine whether each clip is a recording of a bat, using this to estimate the number of bats flying around. He uses several Python libraries to do this, including TensorFlow and LibROSA.

The Python code breaks each one-second clip into twenty-two parts. For each part, he determines the max, min, mean, standard deviation, and max-min of the sample – if multiple parts of the signal share certain features (such as a high standard deviation), the software has detected a bat call. Armed with this, [Roland] turned to machine learning so that he could offload the work of detecting the bats, again using Python, this time with the Keras library.
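A minimal sketch of that statistical pre-filter might look like the following (the function names and threshold values here are invented for illustration; [Roland]’s actual code and thresholds differ):

```python
import numpy as np

def clip_features(clip, n_parts=22):
    """Split a one-second clip into n_parts windows and compute
    max, min, mean, standard deviation and max-min for each."""
    parts = np.array_split(np.asarray(clip, dtype=float), n_parts)
    return np.array([[p.max(), p.min(), p.mean(), p.std(), p.max() - p.min()]
                     for p in parts])  # shape: (n_parts, 5)

def looks_like_bat(clip, std_threshold=0.25, min_parts=3):
    """Flag a clip as a candidate bat call if several of its windows
    show a high standard deviation (hypothetical threshold values)."""
    feats = clip_features(clip)
    return int((feats[:, 3] > std_threshold).sum()) >= min_parts
```

Clips flagged this way could then be handed off to a Keras classifier for the actual machine learning step.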

With a 95% success rate, [Roland] now has a bat detector – one that works pretty well, too. For more on detecting bats and machine learning, check out the bat detector in this list of ultrasonic projects and this IDE for working with TensorFlow and machine learning.


Filed under: digital audio hacks

by Rich Hawkes at August 11, 2017 05:00 AM

August 09, 2017

Pid Eins

All Systems Go! 2017 Speakers

The All Systems Go! 2017 Headline Speakers Announced!

Don't forget to send in your submissions to the All Systems Go! 2017 CfP! Proposals are accepted until September 3rd!

A couple of headline speakers have been announced now:

  • Alban Crequy (Kinvolk)
  • Brian "Redbeard" Harrington (CoreOS)
  • Gianluca Borello (Sysdig)
  • Jon Boulle (NStack/CoreOS)
  • Martin Pitt (Debian)
  • Thomas Graf (covalent.io/Cilium)
  • Vincent Batts (Red Hat/OCI)
  • (and yours truly)

These folks will also review your submissions as part of the papers committee!

All Systems Go! is an Open Source community conference focused on the projects and technologies at the foundation of modern Linux systems — specifically low-level user-space technologies. Its goal is to provide a friendly and collaborative gathering place for individuals and communities working to push these technologies forward.

All Systems Go! 2017 takes place in Berlin, Germany on October 21st+22nd.

To submit your proposal now please visit our CFP submission web site.

For further information about All Systems Go! visit our conference web site.

by Lennart Poettering at August 09, 2017 10:00 PM

August 03, 2017

open-source – CDM Create Digital Music

Export to hardware, virtual pedals – this could be the future of effects

If your computer and a stompbox had a love child, MOD Duo would be it – a virtual effects environment that can load anything. And now, it does Max/MSP, too.

MOD Devices’ MOD Duo began its life as a Kickstarter campaign. The idea – turn computer software into a robust piece of hardware – wasn’t itself so new. Past dedicated audio computer efforts have come and gone. But it is genuinely possible in this industry to succeed where others have failed, by getting your timing right, and executing better. And the MOD Duo is starting to look like it does just that.

What the MOD Duo gives you is essentially a virtualized pedalboard where you can add effects at will. Set up the effects you want on your computer screen (in a Web browser), and even add new ones by shopping for sounds in a store. But then, get the reliability and physical form factor of hardware, by uploading them to the MOD Duo hardware. You can add additional footswitches and pedals if you want additional control.

Watch how that works:

For end users, it can stop there. But DIYers can go deeper with this as an open box. Under the hood, it’s running LV2 plug-ins, an open, Linux-centered plug-in format. If you’re a developer, you can create your own effects. If you like tinkering with hardware, you can build your own controllers, using an Arduino shield they made especially for the job.

And then, this week, the folks at Cycling ’74 take us on a special tour of integration with Max/MSP. It represents something many software patchers have dreamed of for a long time. In short, you can “export” your patches to the hardware, and run them standalone without your computer.

This says a lot about the future, beyond just the MOD Duo. The technology that allows Max/MSP to support the MOD Duo is gen~ code, a more platform-agnostic, portable core inside Max. This hints at a future when Max runs in all sorts of places – not just mobile, but other hardware, too. And that future was of interest both to Cycling ’74 and the CEO of Ableton, as revealed in our interview with the two of them.

Even broader than that, though, this could be a way of looking at what electronic music looks like after the computer. A lot of people assume that ditching laptops means going backwards. And sure enough, there has been a renewed interest in instruments and interfaces that recall tech from the 70s and 80s. That’s great, but – it doesn’t have to stop there.

The truth is, form factors and physical interactions that worked well on dedicated hardware may start to have more of the openness, flexibility, intelligence, and broad sonic canvas that computers did. It means, basically, it’s not that you’re ditching your computer for a modular, a stompbox, or a keyboard. It’s that those things start to act more like your computer.

Anyway, why wait for that to happen? Here’s one way it can happen now.

Darwin Grosse has a great walk-through of the MOD Duo and how it works, followed by how to get started with it:

The MOD Duo Ecosystem (an introduction to the MOD Duo)

Content You Need: The MOD Duo Package (on how to work with Max)

An alternative: the very affordable OWL Pedal is similar in function, minus that slick browser interface. It can load Max gen~ code, too:

https://hoxtonowl.com/

New Tutorials including Max MSP on the OWL!

Pd users, that works, too – via Heavy (I think on the MOD, as well):

OWL & Heavy – a Pd patch on the OWL

by Peter Kirn at August 03, 2017 01:07 PM

August 02, 2017

Libre Music Production - Articles, Tutorials and News

MOD Duo and Max/MSP integration

MOD Duo and Max/MSP integration

Max/MSP users can now easily convert their Gen objects into LV2 plugins, add them to the roster of MOD Duo plugins and bring them to the stage!

by yassinphilip at August 02, 2017 10:21 AM

August 01, 2017

MOD Devices Blog

NEW! MOD Duo and Max/MSP integration!

Max/MSP users can now easily convert their gen~ objects into LV2 plugins, add them to the roster of MOD Duo plugins and bring them to the stage!

 

More power to performing digital musicians

There’s no shortage of signal processing environments available to musicians who want to manipulate digital audio. Their use has spread to homes, studios and even stages everywhere. We’ve all seen this revolution take place, with computers popping up at concerts and the advent of laptop music performance. But a computer is not an instrument and a musician shouldn’t become a mere button pusher or mouse handler.

That’s where we come in. The MOD Duo is a computing platform for performing musicians: a computer in a box, optimised to process audio during live performances. And since our creative platform is based on an open format, it can be useful to scores of artists and developers.

The Max/MSP software is one of the greatest and most powerful tools in this field, and it has become one of the most widely used visual programming languages for music and multimedia since its inception in the 1980s. For months now, we’ve been collaborating with Cycling ’74, the developers and maintainers of Max/MSP, to provide a new stage experience for their users and encourage developers to port their patches and objects to the MOD Duo plugin store.

We’ve come up with a Max package that takes the code exported from Max/MSP gen~ objects, compiles it into an LV2 plugin and puts it onto the Duo. The whole idea is to simplify turning Max/MSP patches into plugins that can be used on stage, without the burden of a computer and with the added controllability provided by the Duo.

“Wait a minute… I’m confused. What is Gen?”

If you’re not familiar with Max or have never heard of Gen, here’s an overview, courtesy of our friends over at Cycling’74:

“Gen is a new approach to the relationship between patchers and code. The patcher is the traditional Max environment – a graphical interface for linking bits of functionality together. With embedded scripting such as the js object, text-based coding became an important part of working with Max as it was no longer confined to simply writing Max externals in C. Scripting however still didn’t alter the logic of the Max patcher in any fundamental way because the boundary between patcher and code was still the object box. Gen represents a fundamental change in that relationship. The Gen patcher is a new kind of Max patcher where Gen technology is accessed.”

If you are an aficionado and were just waiting for this kind of solution to appear, we’ve come up with documentation to make the process of getting your Gen-based plugins to the MOD Duo as effortless as possible, with a wiki entry and a tutorial that shows you how to create your own plugins.

You can export your gen~ code straight from Max with the new MOD Duo package.

Why is it cool to have this integration?

This is no small feat.

We’re significantly shortening the learning curve for adding personalized plugins to the Duo, and also allowing digital musicians to take their Max/MSP objects to the stage without a computer. These new plugins will be fully compatible with the 200+ already available, allowing the creation of elaborate audio chains.

Right now, after being added to users’ machines, these new plugins can be posted to the forum and we will publish them manually on the plugin store (we’re working on automating this process). Soon, when our commercial plugin store is set up and ready to go, Max/MSP wizards (and all of the MOD community) will be able to offer their creations for a fee, creating a new business in the process and promoting the development of more sophisticated audio apps by programmers. Until the commercial store arrives, demo versions of these plugins can still be published.

In the future, we’ll keep adding new integrations and documentation for other languages and protocols such as Pure Data, Faust and OSC. Creating plugins for the Duo will be within everyone’s reach.

What are the current plugins that come from Max/MSP gen~ objects?

It all started a while ago, with the official gen~ plugin export project that Cycling ’74 created for building audio applications and plugins. Our software developer, the legendary falkTX, then started an implementation of that focused on LV2 and Linux, which he added to DISTRHO, his own open-source project providing cross-platform audio plugins.

At that time, he and our intern Nino de Wit began to run some tests and develop plugins from gen~ code. From this effort, the initial project was born. Shortly afterwards, Nino began developing his own, more complex plugins. When Cycling ’74 became aware of this, they contacted us and we decided to build a seamless integration between the two platforms.

Here are the plugins derived from Max/MSP gen~ objects, conceived during Nino’s internship at MOD HQ in Berlin. These little gems have been making many MOD users happy since they came around. Here’s a glimpse at the type of plugin this integration will enable users to create:

Shiroverb

Shiroverb is a shimmer reverb based on two Max/MSP gen~ patches: “Gigaverb”, ported from the implementation by Juhana Sadeharju, and “Pitch-Shift”.

Modulay

Modulay is an analog-style delay with variable types of modulation based on the setting of the morph control: all the way counterclockwise is chorus, 12 o’clock is vibrato, and all the way clockwise is flanger, with every setting in between morphing from one effect to the other.

Larynx

Larynx is a simple sine-modulated vibrato with a tone control.

Harmless

Harmless is a wave-shapeable harmonic tremolo with a stereo phase control.

Pedalboards section!

You can check out and listen to the Shiro plugins in action in these sweet pedalboards that our community has created and shared (and load them into your Duo at the click of a button):

Swell Boost:

Everything a multi-layered guitarist needs in their arsenal: a succulent and smooth shimmer swell pad on one path, and a shrieking shrill boost on another that cuts through the mix like a Japanese ginsu knife! The best part is the 4-way toggle switch at the start of the chain, which lets the source signal flow constantly while the 1st and 4th switches toggle either the pad or the boost (or BOTH, if you want the NUCLEAR option!). This lets the guitarist cut the pad or the boost while the pad’s trails remain in the mix, as the source signal never changes.

 

ShimmerMachine:

 Using the Harmless plugin combined with the Larynx on a Novation Circuit.

 

Harmless JCM:

Guitarix JCM-800 and the Shiro Harmless modulator. Such a beautiful sound! Add a little looper and you’re good to go 🙂

 

Soap Bubbles:

Psychedelic sound based on Larynx, Chorus and some Panning.

 

Modulay Madness:

Enjoy the Modulay in a simple guitar setup.

 

201b Pad Shiroverb:

Huge pad sound with a parallel path for melody, played on a bass.

 

KalimbaJammSessionMOD

Pedalboard used during the Startup Garden at Wallifornia Music Tech. We wanted to show visitors that you can use the MOD Duo with acoustic instruments too, so we created this nice pad using a synth, a sequencer and a kalimba with a pickup for some solo play. Listen to that tremolo!

 

Makeshift Pitchshift:

Using the ‘shimmer’ in the Shiroverb as a pitch-shifter on the bass.

 

We want to know if you are as thrilled as we are with this integration. Do you look forward to creating your own plugins from Max/MSP? Are you excited about the commercial store? Share your thoughts in the comments below!

 

PS: Special Offer

If you buy a MOD Duo before September 30th 2017, you get Max 7 for 9 months COMPLETELY FREE.

If you are already a MOD user, you can also get Max 7 for 9 months for free by completing the Great Book of Pedalboards form.

by Mauricio Dwek at August 01, 2017 08:49 AM

July 31, 2017

open-source – CDM Create Digital Music

The Viktor NV-1 is a powerful synth running in your browser

Its name is Viktor, and it’s a synth you can play with for free in a browser – with a mouse, or finger, or keyboard, or even MIDI.

Not news, but – heck of a lot of fun to play with.

Now part of a growing number of Web Audio (and even Web MIDI) synths, the Viktor NV-1 is a surprisingly powerful diversion. You get three oscillators, two envelopes (one for amplitude, one for filter), a filter, LFO, reverb, delay, compressor, and loads of controls.

Because it lives in a browser, it’s also easy to save and share presets with others. So, for instance, here you go:

https://goo.gl/ugqbkT

The developer also has a lovely explanation of how this works:

It’s built on top of the Web Audio API (WAA). The WAA is very nicely organized and easy to use. Basically it provides a variety of node types (responsible for sound generation, editing or analysis) which you combine to your liking, creating a graph through which your sound is shaped.

Also worth noting – how it was built:

Web Audio API, Web MIDI API, Local Storage (through npm module “store”). For the effects section I used Tuna.js.

AngularJS, webaudio-controls (I regret this decision, since these controls are full of bugs and I had to fix several of them before releasing), Bootstrap, Font Awesome, the Orbitron font and Stylus are what I used for the UI.

Instead of using Angular alone, for dependency management, I use Browserify, which provides the nice CommonJS format/style of module creation and requiring.

Angular isn’t very Browserify-friendly, so I had to do some stitching in my initial setup (browserify-shim, browserify-ng-html2js etc.), but once the setup was ready, development really felt like a breeze.

Grunt and multiple grunt-contrib-‘s are used for the build (and development rebuild).

I drew the images on Pixelmator.

Try it:

http://nicroto.github.io/viktor/

Or grab the code (fully open source):

https://github.com/nicroto/viktor

The browser synth is the work of Nikolay Tsenkov.

by Peter Kirn at July 31, 2017 11:47 PM

blog4

new exclusive Notstandskomitee track released

The new Notstandskomitee track Ungetuem can be found exclusively on this compilation by Silent Method Records, currently as a download but soon also on vinyl and cassette.
https://silentmethodrecords.bandcamp.com/album/z-e-n-va-compilation

by herrsteiner (noreply@blogger.com) at July 31, 2017 02:29 PM

July 29, 2017

digital audio hacks – Hackaday

Bessel Filter Design

Once you fall deep enough into the rabbit hole of any project, specific information starts getting harder and harder to find. At some point, trusting experts becomes necessary, even if that information is hard to find, obtuse, or incomplete. [turingbirds] was having this problem with Bessel filters, namely that all of the information about them was scattered around the web and in textbooks. For anyone else who is having trouble with these particular filters, or simply wants to learn more about them, [turingbirds] has put together a guide with all of the information he has about them.

For those who don’t design audio circuits full-time, a Bessel filter is a linear filter with a maximally flat group delay, meaning it preserves the waveshape of signals within its passband rather than distorting them the way other filter types can. [turingbirds]’s guide goes into the foundations of where the filter coefficients come from, instead of blindly using lookup tables as he had been doing.
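If you want to experiment with a Bessel response before reaching for a soldering iron, SciPy can design a digital approximation in a couple of lines (the order, cutoff and sample rate below are arbitrary example values, not anything from [turingbirds]’s guide):

```python
import numpy as np
from scipy import signal

fs = 48000  # sample rate, Hz
# 4th-order low-pass Bessel filter with a 2 kHz cutoff;
# norm='phase' normalizes the phase response at the cutoff frequency.
b, a = signal.bessel(N=4, Wn=2000, btype='low', fs=fs, norm='phase')

# The defining property: group delay is nearly constant across the
# passband, so waveshapes pass through largely undistorted.
w, gd = signal.group_delay((b, a), fs=fs)
```

Plotting gd against w shows the flat passband delay that distinguishes a Bessel design from, say, a Butterworth of the same order.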

For anyone else who uses these filters often, this design guide looks to be a helpful tool. Of course, if you’re new to the world of electronic filters there’s no reason to be afraid of them. You can even get started with everyone’s favorite: an Arduino.


Filed under: digital audio hacks

by Bryan Cockfield at July 29, 2017 05:00 PM

July 26, 2017

MOD Devices Blog

Top 5 Greatest Things About Our Time at Wallifornia Music Tech

We were at Wallifornia Music Tech during Les Ardentes festival in Liège and it was a memorable week. Here’s a short account of our adventures.

 

Greetings MOD Community,

 

There’s a lot going on and the coming weeks will be full of unveilings, but we had to take some time to share some of the brilliant moments we had earlier this month at Wallifornia Music Tech, during the Les Ardentes music festival in Liège, Belgium.

These are the 5 greatest things that happened during the Startup Acceleration Program, the Wallifornia Music Tech hackathon and the Startup Program, and some of the concerts we attended.

 

5 – Spending Time in the Lovely Liège

 

I had been to Liège once, a couple of years ago, and spent the whole time at the university for a conference. The weather was not good and I didn’t get to see much of the city. This time, however, the weather was surprisingly warm and we went out to see some of the sights and enjoy what the town has to offer. We stayed at a quaint little place at Rue Pierreuse, in an artsy neighbourhood on top of a hill.

In general, Belgian people were just incredibly friendly and thoughtful, making sure that we had everything we needed at all times and always proud to show us the hidden gems in their city. In this sense, a special acknowledgement must go to the team from Leansquare – Alice, Clémentine, Gérôme, Roald and Ben, in particular – who were responsible for the excellent organisation of the Startup Acceleration Program. They have an amazing co-working space in the heart of the city and took care of every little detail like a well-tuned machine and with a constant smile.


Everyone is happy in Belgium.

Also, Les Ardentes music festival in itself was a spectacular event, in a wonderful location by the river, with an awesome lineup mixing nostalgic headliners, up-and-coming favourites and fresh new acts (more on that later!). The logistics and infrastructure were super well handled for such a big festivity and we managed to enjoy some nice concerts along the way.

 

4 – Seeing Some Sweet Hackathon Action

We were partners and sponsors of the hackathon during the Wallifornia Music Tech Living Lab and provided some MOD Duos and our API for the hackers to use in their projects. The hackathon was masterfully organised and conducted by Luann Williams and Travis Laurendine, who are, among other things, the people responsible for the SXSW hackathon.

They did a great job motivating the teams and keeping things sailing smoothly for the dozens of hard-at-work, exhausted hackers.


Travis and Luann counting the jury’s votes for best hack.

During this hackathon, we met two amazing lords of bits, bytes and bobs, Tom Brückner and Jean-Michel Dewez, who decided to include a little bit of MOD in their hacks. Tom made a web app that provided information on a given song based on Musimap‘s artificial intelligence API. He used data from our pedalboard feed API in order to propose the corresponding pedalboard and ended up as second runner-up.

Jean-Michel, aka Chantal Goret, an 8-bit virtuoso, wanted to use the Duo with Beatmotor, his hand-crafted MIDI controller and instrument. It was built from an old cigar box, an Arduino board, some knobs, buttons and an ultrasound sensor, with a Teensy board sending MIDI notes to the Duo. For this superb retro hack, he won first prize!


Chatting about 8-bit music hacks and the Duo with Jean-Michel, winner of the hackathon, with his cigar-case 8-bit MIDI instrument/controller.

 

3 – Sharing an Intense Week With Eight Fantastic Startups

We spent the whole week with an outstanding group of startuppers from all over the world. There was so much creativity flowing in these intensive training sessions that we all came out fueled with ideas and benefitted from our shared experiences.

I’ll try to summarise all their projects because you should definitely keep an eye out for these gals and guys:

  • Beatlinks: A game (and animation) set in a living “Musiverse” that teaches DJ skills to kids.
  • Big Boy Systems: The first recording system that unites binaural sound and a 3D camera in order to create the ultimate immersive experience.
  • Paperchain: They provide data services for the music industry, from the collection and organisation of rights information to the identification of unclaimed royalties.
  • Roadie: An app that uses an AI to help bands with tour schedules, based on data from streaming services and social media.
  • Sofasession: They have developed an app for online music collaboration and another that connects music students with music schools.
  • Soundbops: A toy that teaches the fundamentals of music theory to young children. Their Kickstarter is coming out soon – stay tuned!
  • Warm: A huge real-time radio monitor that allows musicians to find out where their songs are being played.
  • WIP Music: The so-called Tinder for Music. An app that connects musicians to their audience and the venues that can host them.

 

2 – Meeting Trombone Shorty and His Band Backstage

Thanks to our new friend Travis Laurendine, aka Roi Lion d’Orléans, aka Ideas Gardener, we went backstage to meet Trombone Shorty and his band after their concert.

First, a brief word about their performance. It’s been a while since I’ve seen such energy on stage, and there were several mind-blowing moments when I sort of lost it. The whole band is an example of groove, joy and technique.

We met them all: guitarist Pete Murano, drummer Joey Peebles, bass player Mike Bass-Bailey, tenor sax BK Jackson, baritone sax Dan Oestreicher and the man himself, Troy “Trombone Shorty” Andrews. Travis had sent them a video of the previous day at the hackathon with a short demo of the device and they had gone looking on the website.

Suffice to say, they wanted a Duo. Dan even knew about the MOD Duo from before and is now preparing some demos for us. He plays baritone sax but also has a one-man band so we’re very excited.


Magical moment: Dan Oestreicher (baritone sax, centre left) and Mike Bass-Bailey (bass, centre right) from Trombone Shorty’s band after the concert with their new companion Duo and footswitch extension.

 

1 – Winning the Startup Acceleration Program

We spent the whole week learning and gathering input from a tremendous team of coaches and experts. We were expected to hone our pitches and enthral a jury of investors and influential music business advisors.

We all worked very hard to perfect our presentations and find a way to squeeze every last bit of information into under 7 minutes. Gianfranco was selected as the first speaker and gave it his all.

You can see his pitch for the MOD Duo below:

He was asked questions by industry experts such as Rishi Patel, Virginie Berger and Ted Cohen, and later sat down to meet with them and other investors.

In the end, we were honoured to take home the title of best startup of the Accelerate & Invest program, which crowned a great week and, we hope, presages even greater things to come.


Receiving a sizable check from Armonia CEO Virginie Berger and music industry legend Ted Cohen

 

Honourable mentions: the Ramen at the restaurant next to Leansquare and the Boulets avec Frites, the jam sessions we held with our Dutch acolytes Pjotr Lasschuit and Jesse Verhage at our booth during the Startup Garden using kalimbas, Novation Circuits, synths and a wide assortment of controllers, meeting Belgian geniuses Hermutt Lobby, La Femme’s retro-punk concert…

 

by Mauricio Dwek at July 26, 2017 09:33 PM

July 25, 2017

digital audio hacks – Hackaday

Designing the Atom Smasher Guitar Pedal

[Alex Lynham] has been creating digital guitar pedals for a while and after releasing the Atom Smasher, a glitchy lo-fi digital delay pedal, he had people start asking him how he designed digital effects pedals rather than analog effects. In fact, he had enough interest, that he wrote an article on it.

The article starts with some background on [Alex], the pedals he’s built and why he chose not to work on pedals full-time. Eventually, the article gets to how [Alex] designed the Atom Smasher. He starts by describing the chip he used, the same one that many hobbyists, as well as commercial builders, use for delay-based effects – the SpinSemi FV-1.

The FV-1 is an SMD chip used for digital delays and other effects that require a delay line – reverbs, choruses, flangers, etc. It’s programmed with an assembly-style language called SpinASM. [Alex] goes over some of the tools and references he used when designing the pedal. He also has a list of tips for would-be effects pedal designers that apply whether you’re designing digital or analogue effects.
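As a language-neutral sketch (in Python rather than SpinASM, and not [Alex]’s actual code), the core of any delay-line effect boils down to three steps: read a delayed sample, write the input plus feedback, then mix dry and wet:

```python
class DelayLine:
    """Minimal feedback delay core: the building block behind delays,
    choruses and flangers (which modulate the read position)."""
    def __init__(self, length):
        self.buf = [0.0] * length   # circular buffer of `length` samples
        self.pos = 0

    def process(self, x, feedback=0.5, mix=0.5):
        y = self.buf[self.pos]                  # read the delayed sample
        self.buf[self.pos] = x + feedback * y   # write input plus feedback
        self.pos = (self.pos + 1) % len(self.buf)
        return (1.0 - mix) * x + mix * y        # blend dry and wet signals

# An impulse through a 4-sample delay (wet only) comes out 4 samples later:
d = DelayLine(4)
print([d.process(x, feedback=0.0, mix=1.0) for x in [1.0, 0, 0, 0, 0]])
# -> [0.0, 0.0, 0.0, 0.0, 1.0]
```

On the FV-1 the same read/write pair maps onto delay-memory instructions, with the mix handled in the accumulator.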

[Alex] ends his article saying that, in the future, he might make the schematic and code available, but for the moment he’s not. The FV-1 is an interesting chip, and [Alex]’s article gives a nice high-level look at its features and how to develop for it. For some interesting guitar pedal related articles, check out this one using effects pedals to get better audio in your car, and here’s one about playing with DSP and designing a pedal with it.


Filed under: digital audio hacks, musical hacks

by Rich Hawkes at July 25, 2017 05:00 AM

July 24, 2017

Libre Music Production - Articles, Tutorials and News

LMP Asks #23: An interview with Jacek from ZARAZA


This time we talk to Jacek from ZARAZA, one of the two members of this experimental/industrial doom/death/sludge metal band.

Where do you live and what do you do for a living?

I (Jacek) currently live in Ecuador, after immigrating here from Canada about 1.5 years ago. Originally I am Polish, immigrated to Canada in 1990 when I was 20.

by admin at July 24, 2017 09:55 PM

July 21, 2017

Linux – CDM Create Digital Music

Aphex Twin gave us a peek inside a 90s classic. Here’s what we learned.

Aphex Twin’s “Vordhosbn” just got a surprising video reveal, showing how the track was made. So let’s revisit trackers and 90s underground music culture.

You’re probably familiar with the term “white label,” but where did that term originate? Back in the early days of DJing, DJs were very territorial about their crate digging. Sometimes, in order to avoid rival DJs peeking at their decks to ID their selections (this is way before the days of Shazam, remember), DJs would rip the labels off a particularly rare record, leaving only white label residue with no identifying information.

Similarly, the 90s were an interesting time for music production. With the advent of computer sequencers, music became more complex – and in the wild west days before YouTube tutorials, concert phone vids, and everyone using Ableton Live, there was legitimate mystery behind how some of the most complex electronic music was made. Max? SuperCollider? Some homebrew software unavailable to the plebs?

If mystery in electronic music production was a game in the 90s, then Richard D. James was its undisputed winner. As Aphex Twin and a host of other pseudonyms, he created mind-bending sequences. As an interview subject, he was equal parts prankster and cagey. Sure, there was an idea of what the IDM greats were up to – Autechre and Plaid used Max, Squarepusher used Reaktor, Aphex used…something? The mystery has always been part of James’ appeal – here is a man who has claimed to sleep only four hours a night, or to have built or heavily modified all of his hardware, or to be sitting on hundreds if not thousands of unreleased tracks, among other tall tales.

Around 2014, something flipped with Richard D. James. After releasing Syro, his first album in 13 years as Aphex Twin, he opened the floodgates with a massive hard drive dump onto SoundCloud – seems he wasn’t lying about all those tracks after all. Following up on this, today marks the debut of a custom Bleep store for Aphex Twin, including loads of unreleased bonus tracks to go with his albums.

Of most interest to the nerds, however, has got to be this seemingly innocuous video, in which we get a trollingly-effected screencast video of Drukqs track “Vordhosbn”, playing out in the vintage tracker PlayerPro. James had previously identified PlayerPro as his main environment for making Drukqs – now we have video of it in action:

So, there we have it. A classic Aphex Twin track with the curtain drawn up. What can we learn from this video? A few things:

  • PlayerPro’s tracks were all monophonic, so the chords in “Vordhosbn” had to be made using multiple tracks
  • As expected with a tracker, it’s largely built from samples – likely from James’ substantial hardware collection
  • Hey, those oscilloscopes and spectral displays are fun

Perhaps what’s best about this video is that it shows an Aphex classic for what it is – a track, composed in much the same way as any other electronic musician might do it. It doesn’t detract from the special qualities of Aphex’s music, but it does show us what was really going on behind all the mystery – music-making.

Keep Track of It

It’s worth spending a moment to celebrate trackers. Long before the days of piano rolls, trackers were the best way to make intricate sequences using a computer. YouTube is riddled with classic jungle tracks from the mid-90s using software like OctaMed:

For a dedicated community, trackers are still the way to go. And there’s no better tracker around now than Renoise – whose developers have done a fantastic job bringing the tracker workflow into the 21st century. Check out this video of Venetian Snares’ “Vache” done in Renoise:

Like most trackers, Renoise has something of a steep learning curve to get all the key commands right; once you’re there, however, you’ll find it to be a very nimble environment for wild micro-edits and crazy sequences. There’s definitely a reason why it remains a tool of choice for breakcore producers!

Do you use a tracker? What do you think of the workflow? What’s the best way for someone to get started with a tracker? Let us know in the comments!

Ed.: PlayerPro is available as free software for Mac, Windows, Linux … and yes, even FreeBSD.

https://sourceforge.net/projects/playerpro/

Returning CDM contributor David Abravanel is a marketer, musician, and technologist living in New York. He loves that shiny digital crunch. Follow him at http://dhla.me

The post Aphex Twin gave us a peek inside a 90s classic. Here’s what we learned. appeared first on CDM Create Digital Music.

by David Abravanel at July 21, 2017 04:25 PM

July 17, 2017

Pid Eins

casync Video

Video of my casync Presentation @ kinvolk

The great folks at kinvolk have uploaded a video of my casync presentation at their offices last week.

The slides are available as well.

Enjoy!

by Lennart Poettering at July 17, 2017 10:00 PM

July 16, 2017

GStreamer News

Orc 0.4.27 bug-fix release

The GStreamer team is pleased to announce another maintenance bug-fix release of liborc, the Optimized Inner Loop Runtime Compiler. Main changes since the previous release:

  • sse: preserve non volatile sse registers, needed for MSVC
  • x86: don't hard-code register size to zero in orc_x86_emit_*() functions
  • Fix incorrect asm generation on 64-bit Windows when building with MSVC
  • Support build using the Meson build system

Direct tarball download: orc-0.4.27.

July 16, 2017 05:00 PM

July 15, 2017

GStreamer News

GStreamer 1.12.2 stable release (binaries)

Pre-built binary images of the 1.12.2 stable release of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

July 15, 2017 10:20 AM

July 14, 2017

open-source – CDM Create Digital Music

Here’s how to download your own music from SoundCloud, just in case

SoundCloud’s financial turmoil has prompted users to consider, what would happen if the service were switched off? Would you lose some of your own music?

Frankly, we all should have been thinking about that sooner. To be very clear: there is no reason you should ever have a file you care about in just one location, no matter how secure and reliable you imagine that location to be. Key files are best kept in at least one online backup and in at least one locally accessible location (so you can get at them even without a fast connection).

There’s also no reason at this point to think SoundCloud is going to disconnect without warning – or indeed any indication from SoundCloud executives, publicly or privately, that they expect the service is going away. While recent staff cuts were painful for the whole organization, both those who remained and those who left, every suggestion is that the service is going to continue.

SoundCloud publicly has said as much. (Though, sorry – SoundCloud, you really shouldn’t be surprised. Vague messaging, no solid numbers on revenue, and a tendency not to go on record and talk to the press have made apocalyptic leaks the main picture people get of the company. In a week when you cut nearly half your staff and have limited explanation of what your plan is, then yeah, you wind up having to use the Twitter airhorn because people will panic.)

But the question of what’s happening to SoundCloud is immaterial. If you’ve got content that’s on SoundCloud and nowhere else, you’re crazy. This is really more of a wake-up call: always, always have redundancy.

The reality is, with any cloud service, you’re trusting someone else with your data, and your ability to get at that data is dependent on a single login. You might well be the failure point, if you lock yourself out of your own account or if someone else compromises it.

There’s almost never a scenario, then, where it makes sense to have something you care about in just one place, no matter how secure that place is. Redundancy neatly saves you from having to plan for every contingency.

Okay, so … yeah, if you are then nervous about some music you care about being on SoundCloud and aren’t sure if it’s in fact backed up someplace else, you really should go grab it.

Here’s one open source tool (hosted on GitHub, too) that downloads music.
http://downloader.soundcloud.ruud.ninja/

A more generalized tool, for downloading from any site that has links with downloads:
http://jdownloader.org/

(DownThemAll, the Firefox add-on, also springs to mind.)

Two services offering similar features are hoping they can attract SoundCloud users by helping them migrate their accounts automatically. (I don’t know what the audio fidelity of that copy is, or whether it includes the original file; I have to test this – and test whether these offerings really boast a significant competitive advantage.)
https://www.orfium.com/
http://hearthis.at

Could someone create a public mirror of the service? Yes, though – it wouldn’t be cheap. Jason Scott (of Internet Archive fame) tweets that it could cost up to $2 million, based on the amount of data:

(Anybody want to call Martin Shkreli? No?)

My hope is that SoundCloud does survive independently. Any acquisition would likewise be crazy not to maintain users and content; that’s the whole unique value proposition of the service, and there’s still nothing else quite like it. (The fact that there’s nothing quite like it, though, may give you pause on a number of levels.)

My guess is that the number of CDM readers and creators is far from enough to overload a service built to stream to millions of users, so I feel reasonably safe endorsing this use. That said, of course, SoundClouders also read CDM, so they might choose to limit or slow API access. Let’s see.

My advice, though: do grab the stuff you hold dear. Put it on an easily accessible drive. And make sure the media folders on that drive also have an automated backup – I really like cloud backup services like CrashPlan and Backblaze (or, if you have a server, your own scripts). But the best backup plan is one that you set and forget, one you only have to think about when you need it, and one that will be there in that instance.

Let us know if you find a better workflow here.

Thanks to Tom Whitwell of Music thing for raising this and for the above open source tip.

I expect … this may generate some comments. Shoot.

The post Here’s how to download your own music from SoundCloud, just in case appeared first on CDM Create Digital Music.

by Peter Kirn at July 14, 2017 04:06 PM

GStreamer News

GStreamer 1.12.2 stable release

The GStreamer team is pleased to announce the second bugfix release in the stable 1.12 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.12.x.

See /releases/1.12/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

July 14, 2017 11:00 AM

July 11, 2017

MOD Devices Blog

MOD Duo 1.4 Update Now Available

Dearest community,

 

After several weeks of testing, our latest software update is available!

This one took a bit longer since the testing period largely involved the Beta testing of our first peripheral, the footswitch extension (soon to receive its official name – stay tuned!), and also of the Arduino shield.

As usual, you can upgrade your MOD Duo by clicking on the update icon on the bottom right-hand corner, then on ‘Download’ and finally ‘Upgrade Now’. Wait for a few minutes while the MOD updates itself automatically and enjoy your added features.

Here’s the rundown of release 1.4:

 

Control Chain

Control Chain is MOD’s custom standard for connecting external devices. It is an open standard (covering hardware, communication protocol, cables and connectors). Anything the MOD Duo’s hardware actuators can do right now, you can also do over Control Chain.

Compared to MIDI, Control Chain is far more powerful. For example, instead of using hard-coded values as MIDI does, Control Chain has what is called a device descriptor, and its assignment (or mapping) message contains full information about the parameter being assigned, such as the parameter name, absolute value, range and any other data. Having all that information on the device side allows developers to create powerful peripherals that can, for example, show the absolute parameter value on a display, use different LED colours to indicate a specific state, and so on.
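To illustrate the idea only (a hypothetical sketch, not the actual Control Chain message format or API), an assignment that carries full parameter metadata is what lets a peripheral render a real label instead of a bare 0–127 value:

```python
from dataclasses import dataclass

# Hypothetical illustration of the concept, NOT the real Control Chain
# wire format: the point is that the assignment carries full metadata.
@dataclass
class Assignment:
    parameter_name: str   # e.g. "Gain"
    value: float          # absolute value, not a 0-127 MIDI byte
    minimum: float
    maximum: float
    unit: str             # e.g. "dB"

gain = Assignment("Gain", -6.0, -60.0, 12.0, "dB")

# A display-equipped peripheral can show the real value directly:
print(f"{gain.parameter_name}: {gain.value} {gain.unit}")  # -> Gain: -6.0 dB
```

With plain MIDI CC, by contrast, the controller only ever sees an opaque 0–127 number and has to be configured by hand to know what it means.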

And remember: you can daisy chain up to 4 Control Chain peripherals to your MOD Duo!

You can read more about Control Chain here.

Usability Changes

Some small but very handy usability changes were made, following user requests. These include:

  • It’s now possible to MIDI learn using pitchbend
  • You can change parameter ranges without having to re-learn a MIDI CC
  • You can delete the initial/first pedalboard preset (to better organise your “scenes”)
  • And we’ve also reduced the CPU usage with control-output intensive plugins.

Web Interface

  • Plugins now have an information icon on top of them in the builder that shows their info when clicked (the icons hide when the screen is too small)


  • The Duo’s own actuators now have the “MOD:” prefix to differentiate them from those of Control Chain devices
  • You can now always close addressing and pedalboard presets dialogues with the “ESC” key, independent of focus

There are also quite a few more changes and tweaks. Visit our changelog on the wiki to see all changes since v1.3.2.

 

That’s it! The next upgrade is already being tested, lots of cool new features on the horizon…

Remember: many of these tweaks and new features were added because of your comments on our forum. So, keep making sweet music with your MOD Duos and let us know of any issues or improvements you’d desire!

by Mauricio Dwek at July 11, 2017 05:25 PM

July 10, 2017

GStreamer News

GStreamer Conference 2017 - Call for Papers

This is a formal call for papers (talks) for the GStreamer Conference 2017, which will take place on 21-22 October 2017 in Prague (Czech Republic), just before the Embedded Linux Conference Europe (ELCE).

The GStreamer Conference is a conference for developers, community members, decision-makers, industry partners, and anyone else interested in the GStreamer multimedia framework and open source multimedia.

The call for papers is now open and talk proposals can be submitted.

You can find more details about the conference on the GStreamer Conference 2017 web page.

Talk slots will be available in varying durations from 20 minutes up to 45 minutes. Whatever you're doing or planning to do with GStreamer, we'd like to hear from you!

We also plan on having another session with short lightning talks / demos / showcase talks for those who just want to show what they've been working on or do a mini-talk instead of a full-length talk.

The deadline for talk submissions is Sunday 13 August 2017, 23:59 UTC.

We hope to see you in Prague!

July 10, 2017 02:00 PM

July 05, 2017

blog4

new Notstandskomitee music video

First official video for the new album The Golden Times by Notstandskomitee, made for the track Exhaust. Listen to the album at
https://notstandskomitee.bandcamp.com

by herrsteiner (noreply@blogger.com) at July 05, 2017 03:34 PM

July 04, 2017

fundamental code

Linux & Multi-Screen Touch Screen Setups

While working on the Zyn-Fusion UI I ended up getting a touch screen to help with the testing process. After getting the screen, buying several incorrect HDMI cables, and setting up the screen I found out that the touch events weren’t working as expected. In fact they were often showing up on the wrong screen. If I disabled my primary monitor and only used the touch screen, then events were spot on, so this was only a multi-monitor setup issue.

So, what caused the problem and how can it be fixed?

Well, by default the mouse/touch events emitted by the new screen were scaled to the total available area, treating multiple screens as a single larger screen. Fortunately, X11 provides one solution through xinput. Just running the xinput tool lists a collection of devices that provide mouse and keyboard events to X11.

mark@cvar:~$ xinput
| Virtual core pointer                          id=2    [master pointer  (3)]
|   > Virtual core XTEST pointer                id=4    [slave  pointer  (2)]
|   > PixArt USB Optical Mouse                  id=8    [slave  pointer  (2)]
|   > ILITEK Multi-Touch-V3004                  id=11   [slave  pointer  (2)]
| Virtual core keyboard                         id=3    [master keyboard (2)]
    > Virtual core XTEST keyboard               id=5    [slave  keyboard (3)]
    > Power Button                              id=6    [slave  keyboard (3)]
    > Power Button                              id=7    [slave  keyboard (3)]
    > AT Translated Set 2 keyboard              id=9    [slave  keyboard (3)]
    > Speakup                                   id=10   [slave  keyboard (3)]

In this case the monitor is device 11, which has its own set of properties.

mark@cvar:~$ xinput list-props 11
Device 'ILITEK Multi-Touch-V3004':
        Device Enabled (152):   1
        Coordinate Transformation Matrix (154): 1.000000, 0.000000, 0.000000, 0.000000, 1.000000, 0.000000, 0.000000, 0.000000, 1.000000
        Device Accel Profile (282):     0
        Device Accel Constant Deceleration (283):       1.000000
        Device Accel Adaptive Deceleration (284):       1.000000
        Device Accel Velocity Scaling (285):    10.000000
        Device Product ID (272):        8746, 136
        Device Node (273):      "/dev/input/event13"
        Evdev Axis Inversion (286):     0, 0
        Evdev Axis Calibration (287):   <no items>
        Evdev Axes Swap (288):  0
        Axis Labels (289):      "Abs MT Position X" (689), "Abs MT Position Y" (690), "None" (0), "None" (0)
        Button Labels (290):    "Button Unknown" (275), "Button Unknown" (275), "Button Unknown" (275), "Button Wheel Up" (158), "Button Wheel Down" (159)
        Evdev Scrolling Distance (291): 0, 0, 0
        Evdev Middle Button Emulation (292):    0
        Evdev Middle Button Timeout (293):      50
        Evdev Third Button Emulation (294):     0
        Evdev Third Button Emulation Timeout (295):     1000
        Evdev Third Button Emulation Button (296):      3
        Evdev Third Button Emulation Threshold (297):   20
        Evdev Wheel Emulation (298):    0
        Evdev Wheel Emulation Axes (299):       0, 0, 4, 5
        Evdev Wheel Emulation Inertia (300):    10
        Evdev Wheel Emulation Timeout (301):    200
        Evdev Wheel Emulation Button (302):     4
        Evdev Drag Lock Buttons (303):  0

Notably xinput provides a property to describe a coordinate transformation which can be used to remap the x and y values of the cursor events. The transformation matrix here is a 3x3 matrix used to transform 2D coordinates and is a fairly common sight in computer graphics. It translates from \((x,y)\) to \((x',y')\) as defined by:

$$ \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} a & b & c\\ d & e & f\\ h & i & j \end{bmatrix} \times \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} $$

The transformation matrix allows for stretching, shearing, translation, flipping, scaling, etc. For the sorts of problems you may see introduced by a multi-monitor setup I would only expect people to care about translating (\(t\)) the events and then re-scaling (\(s\)) them to the offset area. Using these two parameters, the transformation matrix equation is simplified to:

$$ \begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} s_x & 0 & t_x\\ 0 & s_y & t_y\\ 0 & 0 & 1 \end{bmatrix} \times \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} $$

Or without the matrix representation:

$$ \begin{aligned} x' &= s_x x + t_x\\ y' &= s_y y + t_y \end{aligned} $$

With that background out of the way, let’s see how this applied to my specific monitor setup:

2017 monitors

As I mentioned earlier, the touch events were scaled to the dimensions of the larger virtual screen. Since the touch screen spans the full height, the y axis is mapped correctly, but the x axis is mapped across pixels 0..3200 (both screens) instead of pixels 1281..3200 (the touch screen only). Since xinput scales these parameters based upon the total screen size, we can divide by the total x size (3200) to find that the x axis maps to 0..1 rather than 0.4..1.0. Solving the above equations, we can remap the touch events using \(s_x=0.6\) and \(t_x=0.4\). This results in the transformation matrix:

$$ \begin{bmatrix} 0.6 & 0 & 0.4\\ 0 & 1 & 0\\ 0 & 0 & 1 \end{bmatrix} $$

The last step is to provide the new transformation matrix to xinput:

xinput set-prop 11 'Coordinate Transformation Matrix' 0.6 0 0.4 0 1 0 0 0 1

Now cursor events map onto the correct screen accurately and the code to change the xinput properties can be easily put into a shell script.
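The per-setup arithmetic can be scripted as well. Here is a small sketch that derives the matrix from screen geometry and prints the xinput command (the geometry below, a 1920×1080 touch screen to the right of a 1280-pixel-wide monitor, and device id 11 are illustrative assumptions, not necessarily the exact setup above):

```python
def touch_matrix(total_w, total_h, x_off, y_off, w, h):
    """Row-major Coordinate Transformation Matrix for a touch screen
    covering the w x h region at (x_off, y_off) of the virtual screen."""
    sx, sy = w / total_w, h / total_h           # scale to the screen's share
    tx, ty = x_off / total_w, y_off / total_h   # translate to its offset
    return [sx, 0.0, tx, 0.0, sy, ty, 0.0, 0.0, 1.0]

# Assumed geometry: 1920-wide touch screen right of a 1280-wide monitor.
m = touch_matrix(3200, 1080, 1280, 0, 1920, 1080)
print("xinput set-prop 11 'Coordinate Transformation Matrix' "
      + " ".join(str(v) for v in m))
```

Run at login (or from a udev/autostart hook), this keeps the mapping correct without redoing the algebra whenever the layout changes.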

July 04, 2017 04:00 AM

June 30, 2017

rncbc.org

Qtractor 0.8.3 - The Stickiest Tauon is out!

Howdy!

Qtractor 0.8.3 (stickiest tauon) is out!

Changes for this mostly just bug-fix beta release:

  • Make sure any just recorded clip filename is not reused while over the same track and session. (CRITICAL)
  • LV2 Plug-in worker/schedule interface ring-buffer sizes have been increased to 4KB.
  • Fixed track-name auto-incremental numbering suffix when modifying any other track property.
  • WSOLA vs. (lib)Rubberband time-stretching options are now individualized on a per audio clip basis.
  • Long overdue, some brand new and fundamental icons revamp.
  • Fixed a tempo-map node add/update/remove rescaling with regard to clip-lengths and automation/curve undo/redo.
  • Fixed a potential Activate automation/curve index clash, or aliasing, for any plug-ins that change upstream their parameter count or index order, on sessions saved with the old plug-in versions and vice-versa.

Description:

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.

Website:

http://qtractor.org
http://qtractor.sourceforge.net

Project page:

http://sourceforge.net/projects/qtractor

Downloads:

http://sourceforge.net/projects/qtractor/files

Git repos:

http://git.code.sf.net/p/qtractor/code
https://github.com/rncbc/qtractor.git
https://gitlab.com/rncbc/qtractor.git
https://bitbucket.org/rncbc/qtractor.git

Wiki (help still wanted!):

http://sourceforge.net/p/qtractor/wiki/

License:

Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Enjoy && Keep the fun, always.


by rncbc at June 30, 2017 07:00 PM

June 28, 2017

blog4

TMS concert in Hamburg 1.7.2017

After Saturday's blast of a noise night at XB Liebig, getting ready for the next gig at Primal Uproar in Hamburg, where TMS will perform on Saturday 1.7.2017
https://www.tixforgigs.com/site/Pages/Shop/ShowEvent.aspx?ID=18672

we put a recording of the XB Liebig concert on Mixcloud.

by herrsteiner (noreply@blogger.com) at June 28, 2017 03:13 PM

June 27, 2017

Pid Eins

mkosi — A Tool for Generating OS Images

Introducing mkosi

After blogging about casync I realized I never blogged about the mkosi tool that combines nicely with it. mkosi has been around for a while already, and it's time to make it a bit better known. mkosi stands for Make Operating System Image, and is a tool for precisely that: generating an OS tree or image that can be booted.

Yes, there are many tools like mkosi, and a number of them are quite well known and popular. But mkosi has a number of features that I think make it interesting for a variety of use-cases that other tools don't cover that well.

What is mkosi?

What are those use-cases, and what precisely sets mkosi apart? mkosi is definitely a tool with a focus on developer's needs for building OS images, for testing and debugging, but also for generating production images with cryptographic protection. A typical use-case would be to add a mkosi.default file to an existing project (for example, one written in C or Python), thus making it easy to generate an OS image for it. mkosi will put together the image with development headers and tools, compile your code in it, run your test suite, then throw away the image again, and build a new one, this time without development headers and tools, and install your build artifacts in it. This final image is then "production-ready", and only contains your built program and the minimal set of packages you configured otherwise. Such an image could then be deployed with casync (or any other tool of course) to be delivered to your set of servers, or IoT devices or whatever you are building.

mkosi is supposed to be legacy-free: the focus is clearly on today's technology, not yesteryear's. Specifically this means that we'll generate GPT partition tables, not MBR/DOS ones. When you tell mkosi to generate a bootable image for you, it will make it bootable on EFI, not on legacy BIOS. The GPT images generated follow specifications such as the Discoverable Partitions Specification, so that /etc/fstab can remain unpopulated and tools such as systemd-nspawn can automatically dissect the image and boot from them.

So, let's have a look at the specific images it can generate:

  1. Raw GPT disk image, with ext4 as root
  2. Raw GPT disk image, with btrfs as root
  3. Raw GPT disk image, with a read-only squashfs as root
  4. A plain directory on disk containing the OS tree directly (this is useful for creating generic container images)
  5. A btrfs subvolume on disk, similar to the plain directory
  6. A tarball of a plain directory

When any of the GPT choices above are selected, a couple of additional options are available:

  1. A swap partition may be added in
  2. The system may be made bootable on EFI systems
  3. Separate partitions for /home and /srv may be added in
  4. The root, /home and /srv partitions may be optionally encrypted with LUKS
  5. The root partition may be protected using dm-verity, thus making offline attacks on the generated system hard
  6. If the image is made bootable, the dm-verity root hash is automatically added to the kernel command line, and the kernel together with its initial RAM disk and the kernel command line is optionally cryptographically signed for UEFI SecureBoot

Note that mkosi is distribution-agnostic. It currently can build images based on the following Linux distributions:

  1. Fedora
  2. Debian
  3. Ubuntu
  4. ArchLinux
  5. openSUSE

Note though that not all distributions are supported at the same feature level currently. Also, as mkosi is based on dnf --installroot, debootstrap, pacstrap and zypper, and those packages are not packaged universally on all distributions, you might not be able to build images for all those distributions on arbitrary host distributions.

The GPT images are put together in a way that they aren't just compatible with UEFI systems, but also with VM and container managers (that is, at least the smart ones, i.e. VM managers that know UEFI, and container managers that grok GPT disk images) to a large degree. In fact, the idea is that you can use mkosi to build a single GPT image that may be used to:

  1. Boot on bare-metal boxes
  2. Boot in a VM
  3. Boot in a systemd-nspawn container
  4. Directly run a systemd service off it, using systemd's RootImage= unit file setting

Note that in all four cases the dm-verity data is automatically used if available to ensure the image is not tampered with (yes, you read that right, systemd-nspawn and systemd's RootImage= setting automatically do dm-verity these days if the image has it.)
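As a sketch of the fourth case, a unit file using that setting might look like the following; the image path and daemon name here are illustrative, not from the post:

```ini
[Unit]
Description=Demo service running directly off a mkosi-built GPT image

[Service]
# RootImage= mounts the (optionally dm-verity protected) disk image as the
# service's root file system. Path and binary name are hypothetical.
RootImage=/srv/images/image.raw
ExecStart=/usr/bin/mydaemon
```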

Mode of Operation

The simplest usage of mkosi is by simply invoking it without parameters (as root):

# mkosi

Without any configuration this will create a GPT disk image for you, will call it image.raw and drop it in the current directory. The distribution used will be the same one as your host runs.

Of course in most cases you want more control about how the image is put together, i.e. select package sets, select the distribution, size partitions and so on. Most of that you can actually specify on the command line, but it is recommended to instead create a couple of mkosi.$SOMETHING files and directories in some directory. Then, simply change to that directory and run mkosi without any further arguments. The tool will then look in the current working directory for these files and directories and make use of them (similar to how make looks for a Makefile…). Every single file/directory is optional, but if they exist they are honored. Here's a list of the files/directories mkosi currently looks for:

  1. mkosi.default — This is the main configuration file, here you can configure what kind of image you want, which distribution, which packages and so on.

  2. mkosi.extra/ — If this directory exists, then mkosi will copy everything inside it into the images built. You can place arbitrary directory hierarchies in here, and they'll be copied over whatever is already in the image, after it was put together by the distribution's package manager. This is the best way to drop additional static files into the image, or override distribution-supplied ones.

  3. mkosi.build — This executable file is supposed to be a build script. When it exists, mkosi will build two images, one after the other in the mode already mentioned above: the first version is the build image, and may include various build-time dependencies such as a compiler or development headers. The build script is also copied into it, and then run inside it. The script should then build whatever shall be built and place the result in $DESTDIR (don't worry, popular build tools such as Automake or Meson all honor $DESTDIR anyway, so there's not much to do here explicitly). It may also run a test suite, or anything else you like. After the script has finished, the build image is removed again, and a second image (the final image) is built. This time, no development packages are included, and the build script is not copied into the image again — however, the build artifacts from the first run (i.e. those placed in $DESTDIR) are copied into the image.

  4. mkosi.postinst — If this executable script exists, it is invoked inside the image (inside a systemd-nspawn invocation) and can adjust the image as it likes at a very late point in the image preparation. If mkosi.build exists, i.e. if the dual-phased development build process is used, then this script will be invoked twice: once inside the build image and once inside the final image. The first parameter passed to the script clarifies which phase it is run in.

  5. mkosi.nspawn — If this file exists, it should contain a container configuration file for systemd-nspawn (see systemd.nspawn(5) for details), which shall be shipped along with the final image and shall be included in the check-sum calculations (see below).

  6. mkosi.cache/ — If this directory exists, it is used as package cache directory for the builds. This directory is effectively bind mounted into the image at build time, in order to speed up building images. The package installers of the various distributions will place their package files here, so that subsequent runs can reuse them.

  7. mkosi.passphrase — If this file exists, it should contain a pass-phrase to use for the LUKS encryption (if that's enabled for the image built). This file should not be readable to other users.

  8. mkosi.secure-boot.crt and mkosi.secure-boot.key should be an X.509 key pair to use for signing the kernel and initrd for UEFI SecureBoot, if that's enabled.
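Put together, a project directory using these optional files might look like the following. This is a hypothetical demo layout, not taken from the post; every file is optional and mkosi simply picks up whichever of them exist in the working directory:

```shell
# Hypothetical demo layout for a mkosi-built project.
mkdir -p demo-project/mkosi.extra/etc demo-project/mkosi.cache
cd demo-project
touch mkosi.default                    # main configuration file
touch mkosi.build mkosi.postinst       # build and post-install scripts
chmod +x mkosi.build mkosi.postinst    # both must be executable
ls -1
```

From inside demo-project, a plain `mkosi` invocation would then honor all of these files, much like make finding a Makefile.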

How to use it

So, let's come back to our most trivial example, without any of the mkosi.$SOMETHING files around:

# mkosi

As mentioned, this will create an image file image.raw in the current directory. How do we use it? Of course, we could dd it onto some USB stick and boot it on a bare-metal device. However, it's much simpler to first run it in a container for testing:

# systemd-nspawn -bi image.raw

And there you go: the image should boot up, and just work for you.

Now, let's make things more interesting. Let's still not use any of the mkosi.$SOMETHING files around:

# mkosi -t raw_btrfs --bootable -o foobar.raw
# systemd-nspawn -bi foobar.raw

This is similar to the above, but we made three changes: it's no longer GPT + ext4, but GPT + btrfs. Moreover, the system is made bootable on UEFI systems, and finally, the output is now called foobar.raw.

Because this system is bootable on UEFI systems, we can run it in KVM:

qemu-kvm -m 512 -smp 2 -bios /usr/share/edk2/ovmf/OVMF_CODE.fd -drive format=raw,file=foobar.raw

This will look very similar to the systemd-nspawn invocation, except that this uses full VM virtualization rather than container virtualization. (Note that the way to run a UEFI qemu/kvm instance appears to change all the time and is different on the various distributions. It's quite annoying, and I can't really tell you what the right qemu command line is to make this work on your system.)

Of course, it's not all raw GPT disk images with mkosi. Let's try a plain directory image:

# mkosi -d fedora -t directory -o quux
# systemd-nspawn -bD quux

Of course, if you generate the image as plain directory you can't boot it on bare-metal just like that, nor run it in a VM.

A more complex command line is the following:

# mkosi -d fedora -t raw_squashfs --checksum --xz --package=openssh-clients --package=emacs

In this mode we explicitly pick Fedora as the distribution to use, ask mkosi to generate a compressed GPT image with a root squashfs, compress the result with xz, and generate a SHA256SUMS file with the hashes of the generated artifacts. The image will contain the SSH client as well as everybody's favorite editor.

Now, let's make use of the various mkosi.$SOMETHING files. Let's say we are working on some Automake-based project and want to make it easy to generate a disk image off the development tree with the version you are hacking on. Create a configuration file:

# cat > mkosi.default <<EOF
[Distribution]
Distribution=fedora
Release=24

[Output]
Format=raw_btrfs
Bootable=yes

[Packages]
# The packages to appear in both the build and the final image
Packages=openssh-clients httpd
# The packages to appear in the build image, but absent from the final image
BuildPackages=make gcc libcurl-devel
EOF

And let's add a build script:

# cat > mkosi.build <<EOF
#!/bin/sh
./autogen.sh
./configure --prefix=/usr
make -j `nproc`
make install
EOF
# chmod +x mkosi.build

And with all that in place we can now build our project into a disk image, simply by typing:

# mkosi

Let's try it out:

# systemd-nspawn -bi image.raw

Of course, if you do this you'll notice that building an image like this can be quite slow. And slow build times are actively hurtful to your productivity as a developer. Hence let's make things a bit faster. First, let's make use of a package cache shared between runs:

# mkdir mkosi.cache

Building images now should already be substantially faster (and generate less network traffic) as the packages will now be downloaded only once and reused. However, you'll notice that unpacking all those packages and the rest of the work is still quite slow. But mkosi can help you with that. Simply use mkosi's incremental build feature. In this mode mkosi will make a copy of the build and final images immediately before dropping in your build sources or artifacts, so that building an image becomes a lot quicker: instead of always starting totally from scratch a build will now reuse everything it can reuse from a previous run, and immediately begin with building your sources rather than the build image to build your sources in. To enable the incremental build feature use -i:

# mkosi -i

Note that if you use this option, the package list is not updated anymore from your distribution's servers, as the cached copy is made after all packages are installed, and hence until you actually delete the cached copy the distribution's network servers aren't contacted again and no RPMs or DEBs are downloaded. This means the distribution you use becomes "frozen in time" this way. (Which might be a bad thing, but also a good thing, as it makes things kinda reproducible.)

Of course, if you run mkosi a couple of times you'll notice that it won't overwrite the generated image when it already exists. You can either delete the file yourself first (rm image.raw) or let mkosi do it for you right before building a new image, with mkosi -f. You can also tell mkosi to not only remove any such pre-existing images, but also remove any cached copies of the incremental feature, by using -f twice.

I wrote mkosi originally in order to test systemd, and quickly generate a disk image of various distributions with the most current systemd version from git, without all that affecting my host system. I regularly use mkosi for that today, in incremental mode. The two commands I use most in that context are:

# mkosi -if && systemd-nspawn -bi image.raw

And sometimes:

# mkosi -iff && systemd-nspawn -bi image.raw

The latter I use only if I want to regenerate everything based on the very newest set of RPMs provided by Fedora, instead of a cached snapshot of it.

BTW, the mkosi files for systemd are included in the systemd git tree: mkosi.default and mkosi.build. This way, any developer who wants to quickly test something with current systemd git, or wants to prepare a patch based on it and test it can check out the systemd repository and simply run mkosi in it and a few minutes later he has a bootable image he can test in systemd-nspawn or KVM. casync has similar files: mkosi.default, mkosi.build.

Random Interesting Features

  1. As mentioned already, mkosi will generate dm-verity enabled disk images if you ask for it. For that use the --verity switch on the command line or Verity= setting in mkosi.default. Of course, dm-verity implies that the root volume is read-only. In this mode the top-level dm-verity hash will be placed along-side the output disk image in a file named the same way, but with the .roothash suffix. If the image is to be created bootable, the root hash is also included on the kernel command line in the roothash= parameter, which current systemd versions can use to both find and activate the root partition in a dm-verity protected way. BTW: it's a good idea to combine this dm-verity mode with the raw_squashfs image mode, to generate a genuinely protected, compressed image suitable for running in your IoT device.

  2. As indicated above, mkosi can automatically create a check-sum file SHA256SUMS for you (--checksum) covering all the files it outputs (which could be the image file itself, a matching .nspawn file using the mkosi.nspawn file mentioned above, as well as the .roothash file for the dm-verity root hash.) It can then optionally sign this with gpg (--sign). Note that systemd's machinectl pull-tar and machinectl pull-raw commands can download these files and the SHA256SUMS file automatically and verify things on download. In other words: what mkosi outputs is perfectly ready for downloads using these two systemd commands.

  3. As mentioned, mkosi is big on supporting UEFI SecureBoot. To make use of that, place your X.509 key pair in two files mkosi.secure-boot.crt and mkosi.secure-boot.key, and set SecureBoot= or --secure-boot. If so, mkosi will sign the kernel/initrd/kernel command line combination during the build. Of course, if you use this mode, you should also use Verity=/--verity=, otherwise the setup makes only partial sense. Note that mkosi will not help you with actually enrolling the keys you use in your UEFI BIOS.

  4. mkosi has minimal support for GIT checkouts: when it recognizes it is run in a git checkout and you use the mkosi.build script stuff, the source tree will be copied into the build image, but with all files excluded by .gitignore removed.

  5. There's support for encryption in place. Use --encrypt= or Encrypt=. Note that the UEFI ESP is never encrypted though, and the root partition only if explicitly requested. The /home and /srv partitions are unconditionally encrypted if that's enabled.

  6. Images may be built with all documentation removed.

  7. The password for the root user and additional kernel command line arguments may be configured for the image to generate.
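The check-sum handling in point 2 above is plain sha256sum format, so verifying a download by hand is straightforward. A minimal sketch: the block fabricates a tiny stand-in "image" so it is self-contained, and with --sign you would additionally run `gpg --verify` on the SHA256SUMS file:

```shell
# Stand-in for a downloaded mkosi artifact; contents are obviously fake.
printf 'not a real image' > image.raw

# What mkosi --checksum produces: a SHA256SUMS file covering its outputs.
sha256sum image.raw > SHA256SUMS

# What a consumer does after downloading: verify the artifacts against it.
sha256sum -c SHA256SUMS   # prints "image.raw: OK"
```

machinectl pull-raw performs this same verification automatically, as noted above.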

Minimum Requirements

Current mkosi requires Python 3.5, and has a number of dependencies, listed in the README. Most notably you need a somewhat recent systemd version to make use of its full feature set: systemd 233. Older versions are already packaged for various distributions, but much of what I describe above is only available in the most recent release mkosi 3.

The UEFI SecureBoot support requires sbsign which currently isn't available in Fedora, but there's a COPR.

Future

It is my intention to continue turning mkosi into a tool suitable for:

  1. Testing and debugging projects
  2. Building images for secure devices
  3. Building portable service images
  4. Building images for secure VMs and containers

One of the biggest goals I have for the future is to teach mkosi and systemd/sd-boot native support for A/B IoT style partition setups. The idea is that the combination of systemd, casync and mkosi provides generic building blocks for building secure, auto-updating devices in a generic way, even though all pieces may be used individually, too.

FAQ

  1. Why are you reinventing the wheel again? This is exactly like $SOMEOTHERPROJECT! — Well, to my knowledge there's no tool that integrates this nicely with your project's development tree, and can do dm-verity and UEFI SecureBoot and all that stuff for you. So nope, I don't think this is exactly like $SOMEOTHERPROJECT, thank you very much.

  2. What about creating MBR/DOS partition images? — That's really out of focus to me. This is an exercise in figuring out how generic OSes and devices in the future should be built and an attempt to commoditize OS image building. And no, the future doesn't speak MBR, sorry. That said, I'd be quite interested in adding support for booting on Raspberry Pi, possibly using a hybrid approach, i.e. using a GPT disk label, but arranging things in a way that the Raspberry Pi boot protocol (which is built around DOS partition tables), can still work.

  3. Is this portable? — Well, depends what you mean by portable. No, this tool runs on Linux only, and as it uses systemd-nspawn during the build process it doesn't run on non-systemd systems either. Then again, you should be able to create images for any architecture you like with it, though if you want the image bootable on bare-metal systems, only systems doing UEFI are supported (systemd-nspawn should still work fine on them).

  4. Where can I get this stuff? — Try GitHub. And some distributions carry packaged versions, but I think none of them carry the current v3 yet.

  5. Is this a systemd project? — Yes, it's hosted under the systemd GitHub umbrella. And yes, during run-time systemd-nspawn in a current version is required. But no, the code-bases are separate otherwise, if only because systemd is a C project, and mkosi a Python one.

  6. Requiring systemd 233 is a pretty steep requirement, no? — Yes, but the feature we need kind of matters (systemd-nspawn's --overlay= switch), and again, this isn't supposed to be a tool for legacy systems.

  7. Can I run the resulting images in LXC or Docker? — Humm, I am not an LXC nor Docker guy. If you select directory or subvolume as image type, LXC should be able to boot the generated images just fine, but I didn't try. Last time I looked, Docker doesn't permit running proper init systems as PID 1 inside the container, as they define their own run-time without intention to emulate a proper system. Hence, no I don't think it will work, at least not with an unpatched Docker version. That said, again, don't ask me questions about Docker, it's not precisely my area of expertise, and quite frankly I am not a fan. To my knowledge neither LXC nor Docker are able to run containers directly off GPT disk images, hence the various raw_xyz image types are definitely not compatible with either. That means if you want to generate a single raw disk image that can be booted unmodified both in a container and on bare-metal, then systemd-nspawn is the container manager to go for (specifically, its -i/--image= switch).

Should you care? Is this a tool for you?

Well, that's up to you really.

If you hack on some complex project and need a quick way to compile and run your project on a specific current Linux distribution, then mkosi is an excellent way to do that. Simply drop the mkosi.default and mkosi.build files in your git tree and everything will be easy. (And of course, as indicated above: if the project you are hacking on happens to be called systemd or casync be aware that those files are already part of the git tree — you can just use them.)

If you hack on some embedded or IoT device, then mkosi is a great choice too, as it will make it reasonably easy to generate secure images that are protected against offline modification, by using dm-verity and UEFI SecureBoot.

If you are an administrator and need a nice way to build images for a VM or systemd-nspawn container, or a portable service then mkosi is an excellent choice too.

If you care about legacy computers, old distributions, non-systemd init systems, old VM managers, Docker, … then no, mkosi is not for you, but there are plenty of well-established alternatives around that cover that nicely.

And never forget: mkosi is an Open Source project. We are happy to accept your patches and other contributions.

Oh, and one unrelated last thing: don't forget to submit your talk proposal and/or buy a ticket for All Systems Go! 2017 in Berlin — the conference where things like systemd, casync and mkosi are discussed, along with a variety of other Linux userspace projects used for building systems.

by Lennart Poettering at June 27, 2017 10:00 PM

Audio – Stefan Westerfeld's blog

27.06.2017 beast-0.11.0 released

Beast is a music composition and modular synthesis application. beast-0.11.0 is now available at beast.testbit.eu. Support for Soundfont (.sf2) files has been added. On multicore CPUs, Beast now uses all cores for synthesis, which improves performance. Debian packages also have been added, so installation should be very easy on Debian-like systems. And as always, lots of other improvements and bug fixes went into Beast.

Update: I made a screencast of Beast which shows the basics.

by stw at June 27, 2017 01:17 PM

autostatic.com

RPi 3 and the real time kernel

As a beta tester for MOD I thought it would be cool to play around with netJACK which is supported on the MOD Duo. The MOD Duo can run as a JACK master and you can connect any JACK slave to it as long as it runs a recent version of JACK2. This opens a plethora of possibilities of course. I’m thinking about building a kind of sidecar device to offload some stuff to using netJACK, think of synths like ZynAddSubFX or other CPU greedy plugins like fat1.lv2. But more on that in a later blog post.

So first I need to set up a sidecar device and I sacrificed one of my RPi's for that, an RPi 3. Flashed an SD card with Raspbian Jessie Lite and started to do some research on the status of real time kernels and the Raspberry Pi, because I'd like to use a real time kernel to get sub 5ms system latency. I compiled real time kernels for the RPi before but you had to jump through some hoops to get those running, so I hoped things would have improved somewhat. Well, that's not the case: after having compiled a first real time kernel, the RPi froze as soon as I tried to run apt-get install rt-tests. After having applied a patch to fix how the RPi folks implemented the FIQ system the kernel compiled without issues:

Linux raspberrypi 4.9.33-rt23-v7+ #2 SMP PREEMPT RT Sun Jun 25 09:45:58 CEST 2017 armv7l GNU/Linux

And the RPi seems to run stable with acceptable latencies:

Histogram of the latency on the RPi with a real time kernel during 300000 cyclictest loops

So that’s a maximum latency of 75 µs, not bad. I also spotted some higher values around 100 but that’s still okay for this project. The histogram was created with mklatencyplot.bash. I used a different invocation of cyclictest though:

cyclictest -Sm -p 80 -n -i 500 -l 300000

And I ran hackbench in the background to create some load on the RPi:

(while true; do hackbench > /dev/null; done) &

Compiling a real time kernel for the RPi is still not a trivial thing to do and it doesn't help that the few howto's on the interwebs are mostly copy-paste work, incomplete and contain routines that are unclear or even unnecessary. One thing that struck me too is that the howto's about building kernels for RPi's running Raspbian don't mention the make deb-pkg routine to build a real time kernel. This will create deb packages that are just so much easier to transfer and install than rsync'ing the kernel image and modules. Let's break down how I built a real time kernel for the RPi 3.

First you’ll need to git clone the Raspberry Pi kernel repository:

git clone -b 'rpi-4.9.y' --depth 1 https://github.com/raspberrypi/linux.git

This will only clone the rpi-4.9.y branch into a directory called linux without any history so you’re not pulling in hundreds of megs of data. You will also need to clone the tools repository which contains the compiler we need to build a kernel for the Raspberry Pi:

git clone https://github.com/raspberrypi/tools.git

This will end up in the tools directory. Next step is setting some environment variables so subsequent make commands pick those up:

export KERNEL=kernel7
export ARCH=arm
export CROSS_COMPILE=/path/to/tools/arm-bcm2708/gcc-linaro-arm-linux-gnueabihf-raspbian/bin/arm-linux-gnueabihf-
export CONCURRENCY_LEVEL=$(nproc)

The KERNEL variable is needed to create the initial kernel config. The ARCH variable is to indicate which architecture should be used. The CROSS_COMPILE variable indicates where the compiler can be found. The CONCURRENCY_LEVEL variable is set to the number of cores to speed up certain make routines like cleaning up or installing the modules (not the number of jobs, that is done with the -j option of make).

Now that the environment variables are set we can create the initial kernel config:

cd linux
make bcm2709_defconfig

This will create a .config inside the linux directory that holds the initial kernel configuration. Now download the real time patch set and apply it:

cd ..
wget https://www.kernel.org/pub/linux/kernel/projects/rt/4.9/patch-4.9.33-rt23.patch.xz
cd linux
xzcat ../patch-4.9.33-rt23.patch.xz | patch -p1

Most howto’s now continue with building the kernel, but that will result in a kernel that freezes your RPi: the FIQ system implementation causes lock-ups of the RPi when using threaded interrupts, which is the case with real time kernels. That part needs to be patched, so download the patch and dry-run it:

cd ..
wget https://www.osadl.org/monitoring/patches/rbs3s/usb-dwc_otg-fix-system-lockup-when-interrupts-are-threaded.patch
cd linux
patch -i ../usb-dwc_otg-fix-system-lockup-when-interrupts-are-threaded.patch -p1 --dry-run

You will notice one hunk will fail, you will have to add that stanza manually so note which hunk it is for which file and at which line it should be added. Now apply the patch:

patch -i ../usb-dwc_otg-fix-system-lockup-when-interrupts-are-threaded.patch -p1

And add the failed hunk manually with your favorite editor. With the FIQ patch in place we’re almost set for compiling the kernel but before we can move on to that step we need to modify the kernel configuration to enable the real time patch set. I prefer doing that with make menuconfig. Then select Kernel Features - Preemption Model - Fully Preemptible Kernel (RT) and select Exit twice. If you’re asked if you want to save your config then confirm. In the Kernel features menu you could also set the the timer frequency to 1000 Hz if you wish, apparently this could improve USB throughput on the RPi (unconfirmed, needs reference). For real time audio and MIDI this setting is irrelevant nowadays though as almost all audio and MIDI applications use the hr-timer module which has a way higher resolution.

With our configuration saved we can start compiling. Clean up first, then disable some debugging options which could cause some overhead, compile the kernel and finally create ready to install deb packages:

make clean
scripts/config --disable DEBUG_INFO
make -j$(nproc) deb-pkg

Sit back, enjoy a cuppa and when building has finished without errors deb packages should be created in the directory above the linux one. Copy the deb packages to your RPi and install them on the RPi with dpkg -i. Open up /boot/config.txt and add the following line to it:

kernel=vmlinuz-4.9.33-rt23-v7+

Now reboot your RPi and it should boot with the realtime kernel. You can check with uname -a:

Linux raspberrypi 4.9.33-rt23-v7+ #2 SMP PREEMPT RT Sun Jun 25 09:45:58 CEST 2017 armv7l GNU/Linux

Since Raspbian uses almost the same kernel source as the one we just built it is not necessary to copy any dtb files. Also running mkknlimg is not necessary anymore, the RPi boot process can handle vmlinuz files just fine.

The basis of the sidecar unit is now done. Next up is tweaking the OS and setting up netJACK.

Edit: there’s a thread on LinuxMusicians referring to this article which already contains some very useful additional information.

The post RPi 3 and the real time kernel appeared first on autostatic.com.

by jeremy at June 27, 2017 09:25 AM

June 22, 2017

GStreamer News

GStreamer 1.12.1 stable release (binaries)

Pre-built binary images of the 1.12.1 stable release of GStreamer are now available for Windows 32/64-bit, iOS, Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

June 22, 2017 01:15 PM

June 21, 2017

rncbc.org

Vee One Suite 0.8.3 - A Summer'17 release


Howdy!

The Vee One Suite of old-school software instruments, comprising synthv1, a polyphonic subtractive synthesizer; samplv1, a polyphonic sampler synthesizer; and drumkv1, yet another drum-kit sampler, is out in a hot Summer'17 release!

Still available in dual form:

  • a pure stand-alone JACK client with JACK-session, NSM (Non Session Management) and both JACK MIDI and ALSA MIDI input support;
  • an LV2 instrument plug-in.

The Vee One Suite are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

So here they go again!

 

synthv1 - an old-school polyphonic synthesizer

synthv1 0.8.3 (summer'17) released!

synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.

LV2 URI: http://synthv1.sourceforge.net/lv2

change-log:

  • Added StartupWMClass entry to desktop file.
  • Long overdue, some brand new and fundamental icons revamp.

website:
http://synthv1.sourceforge.net

downloads:
http://sourceforge.net/projects/synthv1/files

git repos:
http://git.code.sf.net/p/synthv1/code
https://github.com/rncbc/synthv1.git
https://gitlab.com/rncbc/synthv1.git
https://bitbucket.org/rncbc/synthv1.git


 

samplv1 - an old-school polyphonic sampler

samplv1 0.8.3 (summer'17) released!

samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.

LV2 URI: http://samplv1.sourceforge.net/lv2

change-log:

  • Added StartupWMClass entry to desktop file.
  • Long overdue, some brand new and fundamental icons revamp.
  • Play (current sample) menu item has been added to the sample display right-click context-menu, for triggering the current sample as an internal MIDI note-on/off event.

website:
http://samplv1.sourceforge.net

downloads:
http://sourceforge.net/projects/samplv1/files

git repos:
http://git.code.sf.net/p/samplv1/code
https://github.com/rncbc/samplv1.git
https://gitlab.com/rncbc/samplv1.git
https://bitbucket.org/rncbc/samplv1.git


 

drumkv1 - an old-school drum-kit sampler

drumkv1 0.8.3 (summer'17) released!

drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.

LV2 URI: http://drumkv1.sourceforge.net/lv2

change-log:

  • Added StartupWMClass entry to desktop file.
  • Long overdue, some brand new and fundamental icons revamp.
  • Left-clicking on each element fake-LED now triggers it as an internal MIDI note-on/off event. A Play (current element) menu item has also been added to the element list and sample display right-click context-menus.

website:
http://drumkv1.sourceforge.net

downloads:
http://sourceforge.net/projects/drumkv1/files

git repos:
http://git.code.sf.net/p/drumkv1/code
https://github.com/rncbc/drumkv1.git
https://gitlab.com/rncbc/drumkv1.git
https://bitbucket.org/rncbc/drumkv1.git


 

Enjoy && have fun ;)

by rncbc at June 21, 2017 06:00 PM

June 20, 2017

Audio – Stefan Westerfeld's blog

20.06.2017 spectmorph-0.3.3 released

A new version of SpectMorph, my audio morphing software, is now available on www.spectmorph.org. The main improvement is that SpectMorph now supports portamento and vibrato. In VST hosts with MPE support (such as Bitwig), the pitch of each note can be controlled by the sequencer, so sliding from a C major chord to a D minor chord is possible. There is also a new portamento/mono mode, which should work with any host.

by stw at June 20, 2017 03:29 PM

GStreamer News

GStreamer 1.12.1 stable release

The GStreamer team is pleased to announce the first bugfix release in the stable 1.12 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.12.x.

See /releases/1.12/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

June 20, 2017 09:30 AM

June 19, 2017

Pid Eins

All Systems Go! 2017 CfP Open

The All Systems Go! 2017 Call for Participation is Now Open!

We’d like to invite presentation proposals for All Systems Go! 2017!

All Systems Go! is an Open Source community conference focused on the projects and technologies at the foundation of modern Linux systems — specifically low-level user-space technologies. Its goal is to provide a friendly and collaborative gathering place for individuals and communities working to push these technologies forward.

All Systems Go! 2017 takes place in Berlin, Germany on October 21st+22nd.

All Systems Go! is a 2-day event with 2-3 talks happening in parallel. Full presentation slots are 30-45 minutes in length and lightning talk slots are 5-10 minutes.

We are now accepting submissions for presentation proposals. In particular, we are looking for sessions including, but not limited to, the following topics:

  • Low-level container executors and infrastructure
  • IoT and embedded OS infrastructure
  • OS, container, IoT image delivery and updating
  • Building Linux devices and applications
  • Low-level desktop technologies
  • Networking
  • System and service management
  • Tracing and performance measuring
  • IPC and RPC systems
  • Security and Sandboxing

While our focus is definitely more on the user-space side of things, talks about kernel projects are welcome too, as long as they have a clear and direct relevance for user-space.

Please submit your proposals by September 3rd. Notification of acceptance will be sent out 1-2 weeks later.

To submit your proposal now please visit our CFP submission web site.

For further information about All Systems Go! visit our conference web site.

systemd.conf will not take place this year; All Systems Go! replaces it. All Systems Go! welcomes all projects that contribute to Linux user space, which, of course, includes systemd. Thus, anything you think was appropriate for submission to systemd.conf is also fitting for All Systems Go!

by Lennart Poettering at June 19, 2017 10:00 PM

casync — A tool for distributing file system images

Introducing casync

In the past months I have been working on a new project: casync. casync takes inspiration from the popular rsync file synchronization tool as well as the probably even more popular git revision control system. It combines the idea of the rsync algorithm with the idea of git-style content-addressable file systems, and creates a new system for efficiently storing and delivering file system images, optimized for high-frequency update cycles over the Internet. Its current focus is on delivering IoT, container, VM, application, portable service or OS images, but I hope to extend it later in a generic fashion to become useful for backups and home directory synchronization as well (but more about that later).

The basic technological building blocks casync is built from are neither new nor particularly innovative (at least not anymore), however the way casync combines them is different from existing tools, and that's what makes it useful for a variety of use-cases that other tools can't cover that well.

Why?

I created casync after studying how today's popular tools store and deliver file system images. To briefly name a few: Docker has a layered tarball approach, OSTree serves the individual files directly via HTTP and maintains packed deltas to speed up updates, while other systems operate on the block layer and place raw squashfs images (or other archival file systems, such as ISO 9660) for download on HTTP shares (in the better cases combined with zsync data).

Neither of these approaches appeared fully convincing to me when used in high-frequency update cycle systems. In such systems, it is important to optimize towards a couple of goals:

  1. Most importantly, make updates cheap traffic-wise (for this most tools use image deltas of some form)
  2. Put boundaries on disk space usage on servers (keeping deltas between all version combinations clients might want to update between would suggest keeping an exponentially growing number of deltas on servers)
  3. Put boundaries on disk space usage on clients
  4. Be friendly to Content Delivery Networks (CDNs), i.e. serve neither too many small nor too many overly large files, and only require the most basic form of HTTP. Provide the repository administrator with high-level knobs to tune the average file size delivered.
  5. Simplicity to use for users, repository administrators and developers

I don't think any of the tools mentioned above are really good on more than a small subset of these points.

Specifically: Docker's layered tarball approach dumps the "delta" question onto the feet of the image creators: the best way to make your image downloads minimal is basing your work on an existing image clients might already have, and inherit its resources, maintaining full history. Here, revision control (a tool for the developer) is intermingled with update management (a concept for optimizing production delivery). As container histories grow individual deltas are likely to stay small, but on the other hand a brand-new deployment usually requires downloading the full history onto the deployment system, even though there's no use for it there, and likely requires substantially more disk space and download sizes.

OSTree's serving of individual files is unfriendly to CDNs (as many small files in file trees cause an explosion of HTTP GET requests). To counter that, OSTree supports placing pre-calculated delta images between selected revisions on the delivery servers, which means a certain amount of revision management that leaks into the clients.

Delivering direct squashfs (or other file system) images is almost beautifully simple, but of course means every update requires a full download of the newest image, which is both bad for disk usage and generated traffic. Enhancing it with zsync makes this a much better option, as it can reduce generated traffic substantially at very little cost of history/meta-data (no explicit deltas between a large number of versions need to be prepared server side). On the other hand server requirements in disk space and functionality (HTTP Range requests) are minus points for the use-case I am interested in.

(Note: all the mentioned systems have great properties, and it's not my intention to badmouth them. The only point I am trying to make is that for the use case I care about — file system image delivery with high-frequency update cycles — each system comes with certain drawbacks.)

Security & Reproducibility

Besides the issues pointed out above I wasn't happy with the security and reproducibility properties of these systems. In today's world where security breaches involving hacking and breaking into connected systems happen every day, an image delivery system that cannot make strong guarantees regarding data integrity is out of date. Specifically, the tarball format is famously nondeterministic: the very same file tree can result in any number of different valid serializations depending on the tool used, its version and the underlying OS and file system. Some tar implementations attempt to correct that by guaranteeing that each file tree maps to exactly one valid serialization, but such a property is always only specific to the tool used. I strongly believe that any good update system must guarantee on every single link of the chain that there's only one valid representation of the data to deliver, that can easily be verified.

What casync Is

So much for the background on why I created casync. Now, let's have a look at what casync actually is like, and what it does. Here's a brief technical overview:

Encoding: Let's take a large linear data stream, split it into variable-sized chunks (the size of each being a function of the chunk's contents), and store these chunks in individual, compressed files in some directory, each file named after a strong hash value of its contents, so that the hash value may be used as a key for retrieving the full chunk data. Let's call this directory a "chunk store". At the same time, generate a "chunk index" file that lists these chunk hash values plus their respective chunk sizes in a simple linear array. The chunking algorithm is supposed to create variable, but similarly sized chunks from the data stream, and do so in a way that the same data results in the same chunks even if placed at varying offsets. For more information see this blog story.

Decoding: Let's take the chunk index file, and reassemble the large linear data stream by concatenating the uncompressed chunks retrieved from the chunk store, keyed by the listed chunk hash values.

As an extra twist, we introduce a well-defined, reproducible, random-access serialization format for file trees (think: a more modern tar), to permit efficient, stable storage of complete file trees in the system, simply by serializing them and then passing them into the encoding step explained above.

Finally, let's put all this on the network: for each image you want to deliver, generate a chunk index file and place it on an HTTP server. Do the same with the chunk store, and share it between the various index files you intend to deliver.

Why bother with all of this? Streams with similar contents will result in mostly the same chunk files in the chunk store. This means it is very efficient to store many related versions of a data stream in the same chunk store, thus minimizing disk usage. Moreover, when transferring linear data streams, chunks already known on the receiving side can be reused, thus minimizing network traffic.

Why is this different from rsync or OSTree, or similar tools? Well, one major difference between casync and those tools is that we remove file boundaries before chunking things up. This means that small files are lumped together with their siblings and large files are chopped into pieces, which permits us to recognize similarities in files and directories beyond file boundaries, and makes sure our chunk sizes are pretty evenly distributed, without the file boundaries affecting them.

The "chunking" algorithm is based on the buzhash rolling hash function. SHA256 is used as the strong hash function to generate digests of the chunks. xz is used to compress the individual chunks.
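The whole encode/decode scheme can be sketched in a few lines of Python. This is an illustrative toy, not casync's actual algorithm or on-disk format: the shift-xor rolling hash below is a crude stand-in for buzhash, and the size and mask constants are arbitrary choices.

```python
import hashlib
import lzma
import os

MASK = 0x0FFF                    # cut where hash & MASK == 0 -> ~4 KiB average
MIN_SIZE, MAX_SIZE = 1024, 16384 # keep chunk sizes within sane bounds

def chunk(data):
    """Split data at content-defined boundaries: same content, same chunks."""
    chunks, start, h = [], 0, 0
    for i, b in enumerate(data):
        # 32-bit shift-xor hash; old bytes fall out of the hash after ~32
        # steps, so the cut decision depends only on a small sliding window.
        h = ((h << 1) ^ b) & 0xFFFFFFFF
        size = i - start + 1
        if (size >= MIN_SIZE and (h & MASK) == 0) or size >= MAX_SIZE:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def store(chunks, store_dir):
    """Write xz-compressed chunks named by their SHA256; return the index."""
    os.makedirs(store_dir, exist_ok=True)
    index = []
    for c in chunks:
        digest = hashlib.sha256(c).hexdigest()
        index.append((digest, len(c)))
        path = os.path.join(store_dir, digest)
        if not os.path.exists(path):  # identical chunks are stored only once
            with open(path, "wb") as f:
                f.write(lzma.compress(c))
    return index

def assemble(index, store_dir):
    """Reassemble the stream by fetching chunks from the store by hash."""
    parts = []
    for digest, _size in index:
        with open(os.path.join(store_dir, digest), "rb") as f:
            parts.append(lzma.decompress(f.read()))
    return b"".join(parts)
```

Storing two similar streams into the same store directory then deduplicates automatically, since chunks with identical content hash to the same file name.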

Here's a diagram, hopefully explaining a bit how the encoding process works, despite my crappy drawing skills:

Diagram

The diagram shows the encoding process from top to bottom. It starts with a block device or a file tree, which is then serialized and chunked up into variable sized blocks. The compressed chunks are then placed in the chunk store, while a chunk index file is written listing the chunk hashes in order. (The original SVG of this graphic may be found here.)

Details

Note that casync operates on two different layers, depending on the use-case of the user:

  1. You may use it on the block layer. In this case the raw block data on disk is taken as-is, read directly from the block device, split into chunks as described above, compressed, stored and delivered.

  2. You may use it on the file system layer. In this case, the file tree serialization format mentioned above comes into play: the file tree is serialized depth-first (much like tar would do it) and then split into chunks, compressed, stored and delivered.

The fact that it may be used on both the block and file system layer opens it up for a variety of different use-cases. In the VM and IoT ecosystems shipping images as block-level serializations is more common, while in the container and application world file-system-level serializations are more typically used.

Chunk index files referring to block-layer serializations carry the .caibx suffix, while chunk index files referring to file system serializations carry the .caidx suffix. Note that you may also use casync as a direct tar replacement, i.e. without the chunking, just generating the plain linear file tree serialization. Such files carry the .catar suffix. Internally .caibx files are identical to .caidx files; the only difference is semantic: .caidx files describe a .catar file, while .caibx files may describe any other blob. Finally, chunk stores are directories carrying the .castr suffix.

Features

Here are a couple of other features casync has:

  1. When downloading a new image you may use casync's --seed= feature: each block device, file, or directory specified is processed using the same chunking logic described above, and is used as preferred source when putting together the downloaded image locally, avoiding network transfer of it. This of course is useful whenever updating an image: simply specify one or more old versions as seed and only download the chunks that truly changed since then. Note that using seeds requires no history relationship between seed and the new image to download. This has major benefits: you can even use it to speed up downloads of relatively foreign and unrelated data. For example, when downloading a container image built using Ubuntu you can use your Fedora host OS tree in /usr as seed, and casync will automatically use whatever it can from that tree, for example timezone and locale data that tends to be identical between distributions. Example: casync extract http://example.com/myimage.caibx --seed=/dev/sda1 /dev/sda2. This will place the block-layer image described by the indicated URL in the /dev/sda2 partition, using the existing /dev/sda1 data as seeding source. An invocation like this could be typically used by IoT systems with an A/B partition setup. Example 2: casync extract http://example.com/mycontainer-v3.caidx --seed=/srv/container-v1 --seed=/srv/container-v2 /srv/container-v3, is very similar but operates on the file system layer, and uses two old container versions to seed the new version.

  2. When operating on the file system level, the user has fine-grained control on the meta-data included in the serialization. This is relevant since different use-cases tend to require a different set of saved/restored meta-data. For example, when shipping OS images, file access bits/ACLs and ownership matter, while file modification times hurt. When doing personal backups OTOH file ownership matters little but file modification times are important. Moreover different backing file systems support different feature sets, and storing more information than necessary might make it impossible to validate a tree against an image if the meta-data cannot be replayed in full. Due to this, casync provides a set of --with= and --without= parameters that allow fine-grained control of the data stored in the file tree serialization, including the granularity of modification times and more. The precise set of selected meta-data features is also always part of the serialization, so that seeding can work correctly and automatically.

  3. casync tries to be as accurate as possible when storing file system meta-data. This means that besides the usual baseline of file meta-data (file ownership and access bits), and more advanced features (extended attributes, ACLs, file capabilities), a number of more exotic attributes are stored as well, including Linux chattr(1) file attributes, as well as FAT file attributes (you may wonder why the latter? — EFI is FAT, and /efi is part of the comprehensive serialization of any host). In the future I intend to extend this further, for example storing btrfs sub-volume information where available. Note that as described above every single type of meta-data may be turned off and on individually, hence if you don't need FAT file bits (and I figure it's pretty likely you don't), then they won't be stored.

  4. The user creating .caidx or .caibx files may control the desired average chunk length (before compression) freely, using the --chunk-size= parameter. Smaller chunks increase the number of generated files in the chunk store and increase HTTP GET load on the server, but also ensure that sharing between similar images is improved, as identical patterns in the images stored are more likely to be recognized. By default casync will use a 64K average chunk size. Tweaking this can be particularly useful when adapting the system to specific CDNs, or when delivering compressed disk images such as squashfs (see below).

  5. Emphasis is placed on making all invocations reproducible, well-defined and strictly deterministic. As mentioned above this is a requirement to reach the intended security guarantees, but is also useful for many other use-cases. For example, the casync digest command may be used to calculate a hash value identifying a specific directory in all desired detail (use --with= and --without= to pick the desired detail). Moreover the casync mtree command may be used to generate a BSD mtree(5) compatible manifest of a directory tree, .caidx or .catar file.

  6. The file system serialization format is nicely composable. By this I mean that the serialization of a file tree is the concatenation of the serializations of all files and file sub-trees located at the top of the tree, with zero meta-data references from any of these serializations into the others. This property is essential to ensure maximum reuse of chunks when similar trees are serialized.

  7. When extracting file trees or disk image files, casync will automatically create reflinks from any specified seeds if the underlying file system supports it (such as btrfs, ocfs, and future xfs). After all, instead of copying the desired data from the seed, we can just tell the file system to link up the relevant blocks. This works both when extracting .caidx and .caibx files — the latter of course only when the extracted disk image is placed in a regular raw image file on disk, rather than directly on a plain block device, as plain block devices do not know the concept of reflinks.

  8. Optionally, when extracting file trees, casync can create traditional UNIX hard-links for identical files in specified seeds (--hardlink=yes). This works on all UNIX file systems, and can save substantial amounts of disk space. However, this only works for very specific use-cases where disk images are considered read-only after extraction, as any changes made to one tree will propagate to all other trees sharing the same hard-linked files, as that's the nature of hard-links. In this mode, casync exposes OSTree-like behavior, which is built heavily around read-only hard-link trees.

  9. casync tries to be smart when choosing what to include in file system images. Implicitly, file systems such as procfs and sysfs are excluded from serialization, as they expose API objects, not real files. Moreover, the "nodump" (+d) chattr(1) flag is honored by default, permitting users to mark files to exclude from serialization.

  10. When creating and extracting file trees casync may apply an automatic or explicit UID/GID shift. This is particularly useful when transferring container images for use with Linux user name-spacing.

  11. In addition to local operation, casync currently supports HTTP, HTTPS, FTP and ssh natively for downloading chunk index files and chunks (the ssh mode requires installing casync on the remote host, but an sftp mode not requiring that should be easy to add). When creating index files or chunks, only ssh is supported as a remote back-end.

  12. When operating on block-layer images, you may expose locally or remotely stored images as local block devices. Example: casync mkdev http://example.com/myimage.caibx exposes the disk image described by the indicated URL as local block device in /dev, which you then may use the usual block device tools on, such as mount or fdisk (only read-only though). Chunks are downloaded on access with high priority, and at low priority when idle in the background. Note that in this mode, casync also plays a role similar to "dm-verity", as all blocks are validated against the strong digests in the chunk index file before passing them on to the kernel's block layer. This feature is implemented through Linux' NBD kernel facility.

  13. Similarly, when operating on file-system-layer images, you may mount locally or remotely stored images as regular file systems. Example: casync mount http://example.com/mytree.caidx /srv/mytree mounts the file tree image described by the indicated URL as a local directory /srv/mytree. This feature is implemented through Linux' FUSE kernel facility. Note that special care is taken that the images exposed this way can be packed up again with casync make and are guaranteed to return the bit-by-bit exact same serialization that they were mounted from. No data is lost or changed while passing things through FUSE (OK, strictly speaking this is a lie, we do lose ACLs, but that's hopefully just a temporary gap to be fixed soon).

  14. In IoT A/B fixed size partition setups the file systems placed in the two partitions are usually much shorter than the partition size, in order to keep some room for later, larger updates. casync is able to analyze the super-block of a number of common file systems in order to determine the actual size of a file system stored on a block device, so that writing a file system to such a partition and reading it back again will result in reproducible data. Moreover this speeds up the seeding process, as there's little point in seeding the white-space after the file system within the partition.
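The seeding behavior described in feature 1 boils down to a simple idea: build a local hash-to-chunk map from the seeds, and fetch from the network only the chunks that map misses. Here is a small Python sketch of that idea; the function names are invented for this example (not casync's API), it assumes chunks were already produced by some content-defined chunker, and a plain dict stands in for the remote HTTP chunk store.

```python
import hashlib

def sha(c):
    """Hex SHA256 of a chunk, the key used in the chunk store."""
    return hashlib.sha256(c).hexdigest()

def extract_with_seed(index, remote_store, seed_chunks):
    """Reassemble the image described by index, preferring local seed chunks.

    index:        ordered list of chunk hashes describing the new image
    remote_store: dict hash -> chunk bytes, standing in for the HTTP store
    seed_chunks:  chunks of one or more old local images (no history
                  relationship to the new image is required, just content)
    """
    local = {sha(c): c for c in seed_chunks}
    downloaded = 0
    parts = []
    for h in index:
        if h in local:
            parts.append(local[h])         # reuse, no network transfer
        else:
            parts.append(remote_store[h])  # fetch only what changed
            downloaded += 1
    return b"".join(parts), downloaded
```

Only chunks absent from every seed are fetched, which is why even an unrelated but similar tree (the Fedora /usr example above) can cut the download.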

Example Command Lines

Here's how to use casync, explained with a few examples:

$ casync make foobar.caidx /some/directory

This will create a chunk index file foobar.caidx in the local directory, and populate the chunk store directory default.castr located next to it with the chunks of the serialization (you can change the name for the store directory with --store= if you like). This command operates on the file-system level. A similar command operating on the block level:

$ casync make foobar.caibx /dev/sda1

This command creates a chunk index file foobar.caibx in the local directory describing the current contents of the /dev/sda1 block device, and populates default.castr in the same way as above. Note that you may as well read a raw disk image from a file instead of a block device:

$ casync make foobar.caibx myimage.raw

To reconstruct the original file tree from the .caidx file and the chunk store of the first command, use:

$ casync extract foobar.caidx /some/other/directory

And similar for the block-layer version:

$ casync extract foobar.caibx /dev/sdb1

or, to extract the block-layer version into a raw disk image:

$ casync extract foobar.caibx myotherimage.raw

The above are the most basic commands, operating on local data only. Now let's make this more interesting, and reference remote resources:

$ casync extract http://example.com/images/foobar.caidx /some/other/directory

This extracts the specified .caidx onto a local directory. This of course assumes that foobar.caidx was uploaded to the HTTP server in the first place, along with the chunk store. You can use any command you like to accomplish that, for example scp or rsync. Alternatively, you can let casync do this directly when generating the chunk index:

$ casync make ssh.example.com:images/foobar.caidx /some/directory

This will use ssh to connect to the ssh.example.com server, and then place the .caidx file and the chunks on it. Note that this mode of operation is "smart": this scheme will only upload chunks currently missing on the server side, and not re-transmit what is already available.

Note that you can always configure the precise path or URL of the chunk store via the --store= option. If you do not do that, then the store path is automatically derived from the path or URL: the last component of the path or URL is replaced by default.castr.

Of course, when extracting .caidx or .caibx files from remote sources, using a local seed is advisable:

$ casync extract http://example.com/images/foobar.caidx --seed=/some/exising/directory /some/other/directory

Or on the block layer:

$ casync extract http://example.com/images/foobar.caibx --seed=/dev/sda1 /dev/sdb2

When creating chunk indexes on the file system layer casync will by default store meta-data as accurately as possible. Let's create a chunk index with reduced meta-data:

$ casync make foobar.caidx --with=sec-time --with=symlinks --with=read-only /some/dir

This command will create a chunk index for a file tree serialization that has three features above the absolute baseline supported: 1s granularity time-stamps, symbolic links and a single read-only bit. In this mode, all the other meta-data bits are not stored, including nanosecond time-stamps, full UNIX permission bits, file ownership or even ACLs or extended attributes.

Now let's make a .caidx file available locally as a mounted file system, without extracting it:

$ casync mount http://example.com/images/foobar.caidx /mnt/foobar

And similar, let's make a .caibx file available locally as a block device:

$ casync mkdev http://example.com/images/foobar.caibx

This will create a block device in /dev and print the used device node path to STDOUT.

As mentioned, casync is big on reproducibility. Let's make use of that to calculate a digest identifying a very specific version of a file tree:

$ casync digest .

This digest will include all meta-data bits casync and the underlying file system know about. Usually, to make this useful you want to configure exactly what meta-data to include:

$ casync digest --with=unix .

This makes use of the --with=unix shortcut for selecting meta-data fields. Specifying --with=unix selects all meta-data that traditional UNIX file systems support. It is a shortcut for writing out: --with=16bit-uids --with=permissions --with=sec-time --with=symlinks --with=device-nodes --with=fifos --with=sockets.

Note that when calculating digests or creating chunk indexes you may also use the negative --without= option to remove specific features, starting from the most precise set:

$ casync digest --without=flag-immutable

This generates a digest with the most accurate meta-data, but leaves one feature out: chattr(1)'s immutable (+i) file flag.
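A toy version of such a selectable-metadata digest can be written in a few lines of Python. The feature names and hashing layout here are invented for illustration; they are not casync's actual --with=/--without= flags or serialization, only the idea that the digest is reproducible and that each meta-data field is opt-in.

```python
import hashlib
import os
import stat

def tree_digest(root, features=frozenset()):
    """Reproducible digest of a directory tree with selectable meta-data.

    Traversal order is fixed (sorted), so the same tree plus the same
    feature set always yields the same digest.
    """
    h = hashlib.sha256()
    for dirpath, dirnames, filenames in os.walk(root):
        dirnames.sort()                        # deterministic walk order
        for name in sorted(filenames):
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            h.update(os.path.relpath(path, root).encode())
            with open(path, "rb") as f:
                h.update(f.read())             # content is always included
            if "permissions" in features:      # opt-in meta-data fields
                h.update(stat.S_IMODE(st.st_mode).to_bytes(2, "big"))
            if "sec-time" in features:
                h.update(int(st.st_mtime).to_bytes(8, "big"))
    return h.hexdigest()
```

With "sec-time" excluded, touching a file's mtime leaves the digest unchanged; including it makes the digest track modification times, mirroring how the selected feature set changes what the digest attests to.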

To list the contents of a .caidx file use a command like the following:

$ casync list http://example.com/images/foobar.caidx

or

$ casync mtree http://example.com/images/foobar.caidx

The former command will generate a brief list of files and directories, not too different from tar t or ls -al in its output. The latter command will generate a BSD mtree(5) compatible manifest. Note that casync actually stores substantially more file meta-data than mtree files can express, though.

What casync isn't

  1. casync is not an attempt to minimize serialization and downloaded deltas to the extreme. Instead, the tool is supposed to find a good middle ground, that is good on traffic and disk space, but not at the price of convenience or requiring explicit revision control. If you care about updates that are absolutely minimal, there are binary delta systems around that might be an option for you, such as Google's Courgette.

  2. casync is not a replacement for rsync, or git or zsync or anything like that. They have very different use-cases and semantics. For example, rsync permits you to directly synchronize two file trees remotely. casync just cannot do that, and it is unlikely it ever will.

Where next?

casync is supposed to be a generic synchronization tool. Its primary focus for now is delivery of OS images, but I'd like to make it useful for a couple other use-cases, too. Specifically:

  1. To make the tool useful for backups, encryption is missing. I have pretty concrete plans how to add that. When implemented, the tool might become an alternative to restic, BorgBackup or tarsnap.

  2. Right now, if you want to deploy casync in real-life, you still need to validate the downloaded .caidx or .caibx file yourself, for example with some gpg signature. It is my intention to integrate with gpg in a minimal way so that signing and verifying chunk index files is done automatically.

  3. In the longer run, I'd like to build an automatic synchronizer for $HOME between systems from this. Each $HOME instance would be stored automatically in regular intervals in the cloud using casync, and conflicts would be resolved locally.

  4. casync is written in a shared library style, but it is not yet built as one. Specifically this means that almost all of casync's functionality is supposed to be available as C API soon, and applications can process casync files on every level. It is my intention to make this library useful enough so that it will be easy to write a module for GNOME's gvfs subsystem in order to make remote or local .caidx files directly available to applications (as an alternative to casync mount). In fact the idea is to make this all flexible enough that even the remoting back-ends can be replaced easily, for example to replace casync's default HTTP/HTTPS back-ends built on CURL with GNOME's own HTTP implementation, in order to share cookies, certificates, … There's also an alternative method to integrate with casync in place already: simply invoke casync as a sub-process. casync will inform you about a certain set of state changes using a mechanism compatible with sd_notify(3). In future it will also propagate progress data this way and more.

  5. I intend to add a new seeding back-end that sources chunks from the local network. After downloading the new .caidx file off the Internet, casync would then search for the listed chunks on the local network first before retrieving them from the Internet. This should speed things up on all installations that have multiple similar systems deployed in the same network.
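
The sub-process integration mentioned above (casync reporting state changes through an sd_notify(3)-compatible mechanism) can be sketched in Python. The socket handling below follows the sd_notify protocol itself; exactly which keys a given casync version emits is left open here, so the parsing is kept generic.

```python
# Sketch: supervise a child process (e.g. casync) and collect the
# sd_notify(3)-style "KEY=VALUE" datagrams it sends. The socket mechanics
# follow the sd_notify protocol; the keys emitted by casync are an
# assumption, so we just collect whatever arrives.
import os
import socket
import subprocess
import tempfile
import time

def run_with_notify(argv, timeout=5.0):
    """Run argv with NOTIFY_SOCKET set; return the last value seen per key."""
    sock_path = os.path.join(tempfile.mkdtemp(), "notify")
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
    sock.bind(sock_path)
    sock.settimeout(0.2)
    proc = subprocess.Popen(argv, env=dict(os.environ, NOTIFY_SOCKET=sock_path))
    updates = {}
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            data = sock.recv(4096)
        except socket.timeout:
            if proc.poll() is not None:
                break  # child has exited and the queue is drained
            continue
        for line in data.decode(errors="replace").splitlines():
            key, _, value = line.partition("=")
            if key:
                updates[key] = value
    proc.wait()
    sock.close()
    return updates
```

A real caller would pass something like ["casync", "extract", ...] as argv and react to the status updates as they arrive.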

Further plans are listed tersely in the TODO file.

FAQ:

  1. Is this a systemd project? — casync is hosted under the github systemd umbrella, and the projects share the same coding style. However, the code-bases are distinct and without interdependencies, and casync works fine both on systemd systems and systems without it.

  2. Is casync portable? — At the moment: no. I only run Linux and that's what I code for. That said, I am open to accepting portability patches (unlike for systemd, which doesn't really make sense on non-Linux systems), as long as they don't interfere too much with the way casync works. Specifically this means that I am not too enthusiastic about merging portability patches for OSes lacking the openat(2) family of APIs.

  3. Does casync require reflink-capable file systems to work, such as btrfs? — No it doesn't. The reflink magic in casync is employed when the file system permits it, and it's good to have it, but it's not a requirement, and casync will implicitly fall back to copying when it isn't available. Note that casync supports a number of file system features on a variety of file systems that aren't available everywhere, for example FAT's system/hidden file flags or xfs's projinherit file flag.

  4. Is casync stable? — I just tagged the first, initial release. While I have been working on it for quite some time and it is quite featureful, this is the first time I have advertised it publicly, and it has hence received very little testing outside of its own test suite. I am also not fully ready to commit to the stability of the current serialization or chunk index format, though I don't see any breakages coming for it. casync is pretty light on documentation right now, and does not even have a man page; I intend to correct that soon.

  5. Are the .caidx/.caibx and .catar file formats open and documented? — casync is Open Source, so if you want to know the precise format, have a look at the sources for now. It's definitely my intention to add comprehensive docs for both formats, however. Don't forget this is just the initial version right now.

  6. casync is just like $SOMEOTHERTOOL! Why are you reinventing the wheel (again)? — Well, because casync isn't "just like" some other tool. I am pretty sure I did my homework, and that there is no tool just like casync right now. The tools coming closest are probably rsync, zsync, tarsnap and restic, but each is quite a different beast.

  7. Why did you invent your own serialization format for file trees? Why don't you just use tar? — That's a good question, and other systems — most prominently tarsnap — do that. However, as mentioned above tar doesn't enforce reproducibility. It also doesn't really do random access: if you want to access some specific file you need to read every single byte stored before it in the tar archive to find it, which is of course very expensive. The serialization casync implements places a focus on reproducibility, random access, and meta-data control. Much like traditional tar it can still be generated and extracted in a stream fashion though.

  8. Does casync save/restore SELinux/SMACK file labels? — At the moment not. That's not because I wouldn't want it to, but simply because I am not a guru of either of these systems, and didn't want to implement something I do not fully grok nor can test. If you look at the sources you'll find that there's already some definitions in place that keep room for them though. I'd be delighted to accept a patch implementing this fully.

  9. What about delivering squashfs images? How well does chunking work on compressed serializations? – That's a very good point! Usually, if you apply a chunking algorithm to a compressed data stream (let's say a tar.gz file), then changing a single bit at the front will propagate into the entire remainder of the file, so that minimal changes will explode into major changes. Thankfully this doesn't apply that strictly to squashfs images, as squashfs provides random access to files and directories and thus breaks up its compression streams at regular intervals to make seeking easy. This is beneficial for systems employing chunking, such as casync, as it means single bit changes might affect their vicinity but will not explode in an unbounded fashion. In order to achieve the best results when delivering squashfs images through casync, the block size of squashfs and the chunk size of casync should be matched up (using casync's --chunk-size= option). How precisely to choose both values is left as a research subject for the user, for now.

  10. What does the name casync mean? – It's a synchronizing tool, hence the -sync suffix, following rsync's naming. It makes use of the content-addressable concept of git hence the ca- prefix.

  11. Where can I get this stuff? Is it already packaged? – Check out the sources on GitHub. I just tagged the first version. Martin Pitt has packaged casync for Ubuntu. There is also an ArchLinux package. Zbigniew Jędrzejewski-Szmek has prepared a Fedora RPM that hopefully will soon be included in the distribution.
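
The reflink-with-fallback behaviour described in FAQ item 3 boils down to "try a clone ioctl, copy the bytes if that fails". A minimal Python sketch of that pattern (an illustration of the strategy, not casync's actual code):

```python
# Try a reflink clone via the Linux FICLONE ioctl (the mechanism behind
# cp --reflink); fall back to a plain byte-for-byte copy when the file
# system doesn't support it. Not casync's actual implementation.
import fcntl
import shutil

FICLONE = 0x40049409  # _IOW(0x94, 9, int) from linux/fs.h

def clone_or_copy(src_path, dst_path):
    """Clone src into dst if the file system permits, else copy."""
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        try:
            fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())
            return "reflink"
        except OSError:
            # Real code would inspect errno (EOPNOTSUPP, EXDEV, ...);
            # for this sketch any failure simply triggers the fallback.
            pass
    shutil.copyfile(src_path, dst_path)
    return "copy"
```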
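
FAQ item 7's random-access point can be illustrated with a toy serialization that carries an offset index up front, so a reader can seek straight to one member where tar must be scanned from the start. The format below is invented purely for this example:

```python
# Toy archive with a length-prefixed JSON offset index, so any member can
# be read without scanning the whole blob. An invented format, only meant
# to illustrate why an index enables random access where plain tar doesn't.
import io
import json
import struct

def pack(files):
    """Serialize {name: bytes} into header-index + concatenated bodies."""
    body = io.BytesIO()
    index = {}
    for name, data in files.items():
        index[name] = (body.tell(), len(data))
        body.write(data)
    header = json.dumps(index).encode()
    return struct.pack(">I", len(header)) + header + body.getvalue()

def read_member(blob, name):
    """Fetch one member directly via the index, without a full scan."""
    (hlen,) = struct.unpack_from(">I", blob, 0)
    index = json.loads(blob[4:4 + hlen])
    off, size = index[name]
    start = 4 + hlen + off
    return blob[start:start + size]
```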
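
On FAQ item 9: the reason chunking tolerates local changes is that chunk boundaries are derived from the content itself, not from fixed offsets. A toy content-defined chunker makes the idea concrete (the hash and parameters here are illustrative; casync's real splitter uses a proper rolling hash):

```python
# Toy content-defined chunker: cut wherever a content-derived hash matches
# a bit pattern, bounded by min/max chunk sizes. Only an illustration of
# the principle behind casync's chunking, not its actual algorithm.
def chunk(data, mask=0x3F, min_size=32, max_size=4096):
    """Split bytes into chunks whose boundaries depend on local content."""
    chunks, start, h = [], 0, 0
    for i in range(len(data)):
        h = ((h << 1) ^ data[i]) & 0xFFFFFFFF  # crude content hash
        length = i - start + 1
        if (length >= min_size and (h & mask) == 0) or length >= max_size:
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks
```

Because a boundary decision only looks at content since the previous cut, an edit perturbs chunks near the change while distant chunks keep their old boundaries, which is what lets unmodified chunks be shared and skipped on download.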

Should you care? Is this a tool for you?

Well, that's up to you really. If you are involved with projects that need to deliver IoT, VM, container, application or OS images, then maybe this is a great tool for you — but other options exist, some of which are linked above.

Note that casync is an Open Source project: if it doesn't do exactly what you need, prepare a patch that adds what you need, and we'll consider it.

If you are interested in the project and would like to talk about this in person, I'll be presenting casync soon at Kinvolk's Linux Technologies Meetup in Berlin, Germany. You are invited. I also intend to talk about it at All Systems Go!, also in Berlin.

by Lennart Poettering at June 19, 2017 10:00 PM

June 18, 2017

GStreamer News

GStreamer 1.10.5 stable release (binaries)

Pre-built binary images of the 1.10.5 stable release of GStreamer are now available for Windows 32/64-bit, Android, iOS and Mac OS X.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

June 18, 2017 10:00 PM

June 17, 2017

KXStudio News

DPF-Plugins v1.1 released

With some minor things finally done and all reported bugs squashed, it's time to tag a new release of DPF-Plugins.

The initial 1.0 version was not really advertised/publicized before, as there were still a few things I wanted done first - but they were already usable as-is.
The base framework used by these plugins (DPF) will get some deep changes soon, so better to have this release out now.

I will not write a changelog here, it was just many small changes here and there for all the plugins since v1.0.
Just think of this release as the initial one. :P

The source code plus Linux, macOS and Windows binaries can be downloaded at https://github.com/DISTRHO/DPF-Plugins/releases/tag/v1.1.
The plugins are released as LADSPA, DSSI, LV2, VST2 and JACK standalone.

As this is the first time I show off the plugins like this, let's go through them a little bit...
The order shown is more or less the order in which they were made.
Note that most plugins here were made/ported as a learning exercise, so not everything is new.
Many thanks to António Saraiva for the design of some of these interfaces!

Mini-Series

This is a collection of small but useful plugins, based on the good old LOSER-Dev Plugins.
This collection currently includes 3 Band EQ, 3 Band Splitter and Ping Pong Pan.


MVerb

Studio quality, open-source reverb.
Its release was intended to provide a practical demonstration of Dattorro’s figure-of-eight reverb structure and provide the open source community with a high quality reverb.
This is a DPF'ied build of the original MVerb plugin, allowing a proper Linux version with UI.


Nekobi

Simple single-oscillator synth based on the Roland TB-303.
This is a DPF'ied build of the nekobee project, allowing LV2 and VST builds of the plugin, plus a nicer UI with a simple cat animation. ;)


Kars

Simple Karplus-Strong plucked-string synth.
This is a DPF'ied build of the karplong DSSI example synth, written by Chris Cannam.
It implements the basic Karplus-Strong plucked-string synthesis algorithm (Kevin Karplus & Alex Strong, "Digital Synthesis of Plucked-String and Drum Timbres", Computer Music Journal 1983).
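
The algorithm cited above is compact enough to sketch: fill a delay line with a burst of noise, then repeatedly feed back the average of adjacent samples, which damps the string over time. A minimal Python version, with parameters chosen purely for illustration:

```python
# Minimal Karplus-Strong pluck in the spirit of the 1983 paper: a noise
# burst in a circular delay line, with a two-point average (a gentle
# lowpass) plus a decay factor in the feedback path.
import random

def pluck(freq, duration, sample_rate=44100, decay=0.996):
    """Return duration seconds of a plucked-string tone at freq Hz."""
    period = int(sample_rate / freq)          # delay-line length sets pitch
    rng = random.Random(0)
    buf = [rng.uniform(-1.0, 1.0) for _ in range(period)]
    out = []
    for n in range(int(duration * sample_rate)):
        sample = buf[n % period]
        nxt = buf[(n + 1) % period]
        buf[n % period] = decay * 0.5 * (sample + nxt)  # averaging feedback
        out.append(sample)
    return out
```

The averaging step is what makes the tone darken and die away like a real string: each pass around the loop removes a little high-frequency energy.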


ndc-Plugs

DPF'ied ports of some plugins from Niall Moody.
See http://www.niallmoody.com/ndcplugs/plugins.htm for the original author's page.
This collection currently includes Amplitude Imposer, Cycle Shifter and Soul Force plugins.


ProM

projectM is an awesome music visualizer.
This plugin makes it work as an audio plugin (LV2 and VST).

glBars

This is an OpenGL bars visualization plugin (as seen in XMMS and XBMC/Kodi).
Adapted from the jack_glbars project by Nedko Arnaudov.

by falkTX at June 17, 2017 12:36 PM

June 15, 2017

ardour

Ardour 5.10 released

We are pleased to announce the availability of Ardour 5.10. This is primarily a bug-fix release, with several important fixes for recent selection/cut/copy/paste regressions along with fixes for many long standing issues large and small.

This release also sees the arrival of VCA slave automation, along with improvements in overall VCA master/slave behaviour. There are also significant extensions to Ardour's OSC support.

Read more below for the full list of features, improvements and fixes.


by paul at June 15, 2017 01:40 PM

GStreamer News

GStreamer 1.10.5 stable release

The GStreamer team is pleased to announce the fifth bugfix release in the stable 1.10 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.10.0. It is most likely the last release in the stable 1.10 release series.

See /releases/1.10/ for the full release notes.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

June 15, 2017 11:30 AM

June 10, 2017

KXStudio News

KXStudio 14.04.5 release and future plans

Hello there, it's time for another KXStudio ISO release! KXStudio 14.04.5 is here!

Lots have changed in the applications and plugins for Linux Audio (even in KXStudio itself), so it was about time to see those ISO images updated.
From what the user can see, it might appear as if nothing has truly changed. After all, this is an updated image still based on Ubuntu 14.04, like those from 2 years ago.
But there have been a great many releases of our beloved software, enough to deserve this small ISO update.
There is no list of changes this time, sorry. The main thing worth mentioning is that base system is exactly the same, with only applications and plugins updated.
You know the saying - if it ain't broke, don't fix it!

Before you ask.. no, there won't be a 16.04 based ISO release.
When 2016 started, KDE5 was not in good enough shape, and it would have needed a lot of work (and time) to port all the changes made for KDE4 over to KDE5.
KDE5 is a lot better now than it used to be, but we missed the opportunity there.

The current plan is to slowly migrate everything we have into KDE5 (meta-packages, scripts, tweaks, artwork, etc) and do a new ISO release in May 2018.
(Yes, this means using Ubuntu 18.04 as base)
The choice of KDE Plasma as desktop environment is not set in stone, other (lighter) desktops have appeared recently that will be considered.
In the end it depends if it will be stable and good enough for audio production.

You can download the new ISOs on the KXStudio website, at http://kxstudio.linuxaudio.org/Downloads#LiveDVD.

And that's it for now.
We hope you enjoy KXStudio, be it the ISO "distribution" release or the repositories.

by falkTX at June 10, 2017 10:15 PM

June 09, 2017

Linux – CDM Create Digital Music

Ableton have now made it easy for any developer to work with Push 2

You know Ableton Push 2 will work when it’s plugged into a computer and you’re running Ableton Live. You get bi-directional feedback on the lit pads and on the screen. But Ableton have also quietly made it possible for any developer to make Push 2 work – without even requiring drivers – on any software, on virtually any platform. And a new library is the final piece in making that easy.

Even if you’re not a developer, that’s big news – because it means that you’ll likely see solutions for using Push 2 with more than just Ableton Live. That not only improves Push as an investment, but ensures that it doesn’t collect dust or turn into a paperweight when you’re using other software – now or down the road.

And it could also mean you don’t always need a computer handy. Push 2 uses standards supported on every operating system, so this could mean operation with an iPad or a Raspberry Pi. That’s really what this post-PC thing is all about. The laptop still might be the best bang-for-your-buck equation in the studio, but maybe live you want something in the form of a stompbox, or something that goes on a music stand while you sing or play.

If you are a developer, there are two basic pieces.

First, there’s the Push Interface Description. This bit tells you how to take control of the hardware’s various interactions.

https://github.com/Ableton/push-interface

Now, it was already possible to write to the display, but it was a bit of work. Out this week is a simple C++ code library you can bootstrap, with example code to get you up and running. It’s built in JUCE, the tool of choice for a whole lot of developers, mobile and desktop alike. (Thanks, ROLI!)

https://github.com/Ableton/push2-display-with-juce

Marc Resibois created this example, but credit to Ableton for making this public.

Here’s an example of what you can do, with Marc demonstrating on the Raspberry Pi:

This kind of openness is still very much unusual in the hardware/software industry. (Novation’s open source Launchpad Pro firmware API is another example; it takes a different angle, in that you’re actually rewriting the interactions on the device. I’ll cover that soon.)

But I think this is very much needed. Having hardware/software integration is great. Now it’s time to take the next step and make that interaction more accessible to users. Open ecosystems in music are unique in that they tend to encourage, rather than discourage sales. They increase the value of the gear we buy, and deepen the relationships makers have with users (manufacturers and independent makers alike). And these sorts of APIs also, ironically, force hardware developers to make their own iteration and revision easier.

It’s also a great step in a series of steps forward on openness and interoperability from Ableton. Whereas the company started with relatively closed hardware APIs built around proprietary manufacturer relationships, Ableton Link and the Push API and other initiatives are making it easier for Live and Push users to make these tools their own.

The post Ableton have now made it easy for any developer to work with Push 2 appeared first on CDM Create Digital Music.

by Peter Kirn at June 09, 2017 12:01 PM

June 08, 2017

Linux – CDM Create Digital Music

ROLI now make a $299, ultra-compact expressive keyboard

ROLI are filling out their mobile line of controllers, Blocks, with a two-octave keyboard – and that could change a lot. In addition to the wireless Bluetooth, battery-powered light-up X/Y pad and touch shortcuts, now you get something that looks like an instrument. The Seaboard Block is an ultra-mobile, expressive keyboard for your iOS gadget or computer, and it’s available for $299, including in Apple Stores.

If you wanted a new-fangled “expressive” keyboard – a controller on which you can move your fingers into and around the keys for extra expression – ROLI already had one strong candidate. The Seaboard RISE is a beautiful, futuristic, slim device with a familiar key layout and a price of US$799. It’ll feel a bit weird playing a piano sound on it if you’re a keyboardist, since the soft, spongy keys will be new to you. But you’ll know where the notes are, and it’ll be responsive. Then, switch to any more unusual sound – synths, physical modeled instruments, and the like – and it becomes simply magical. Finally, you have a new physical interface for your new, unheard sounds.

For me, the RISE was already a sweet spot. But I’ll be honest, I can still imagine holding back because of the price. And it doesn’t fit in my backpack, or my easyJet-friendly rollaway.

Size and price matter. So the Seaboard Block, if it feels good, could really be the winner. And even if you passed up that X/Y pad and touch controller, you might take a second look at this one. (Plus, it makes those Blocks make way more sense.)


We’ll get one in to test when they ship later this month. But ROLI also promise a touch and feel similar to the RISE (if not quite as deep, since the Block is slimmer). I found the previous Blocks to be responsive, but not as expressive as the RISE – so that’s good news.

What you get is a two-octave keyboard in a small-but-playable minikey form factor, USB-C for charging and MIDI out, and connectors for snap-and-play use with other Blocks.

For those of you not familiar, the Seaboard line also includes what ROLI somewhat confusingly call “5D Touch.” (“Help! I’m trapped in a tesseract and wound up in a wormhole to an evil dimension and now there’s a version of me with an agonizer telling me to pledge allegiance to the Terran Empire!”)

What this means in practical terms is, you can push your fingers into the keys and make something happen, or slide them up and down the surface of the keys and make something happen, or wiggle and bend between notes, or run your finger along a continuous touch strip below the keys and get glissandi. And that turns out to be really, really useful. Also, I can’t overstate this enough – if you have even basic keyboard skills, having a piano-style layout is enormously intuitive. (By the same token, the Linnstrument seems to make sense to people used to frets.)
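
For the curious, per-finger gestures like these are commonly carried over MIDI using the MPE convention: one channel per touch, with press, slide and bend sent as channel pressure, CC 74 and pitch bend respectively. Here is a rough sketch of that mapping; it follows the general convention, not necessarily ROLI's exact implementation, and the fixed strike velocity is an assumption for simplicity:

```python
# Sketch of an MPE-style mapping for one touch: note on, channel pressure
# for press, CC 74 for slide along the key, 14-bit pitch bend for glide.
# The fixed velocity of 100 is an assumption made for this example.
def touch_to_midi(channel, note=None, press=None, slide=None, bend=None):
    """Return raw MIDI messages (as byte lists) for one touch update."""
    msgs = []
    if note is not None:
        msgs.append([0x90 | channel, note, 100])          # note on, fixed strike
    if press is not None:                                 # press: 0.0..1.0
        msgs.append([0xD0 | channel, int(press * 127)])   # channel pressure
    if slide is not None:                                 # slide: 0.0..1.0
        msgs.append([0xB0 | channel, 74, int(slide * 127)])
    if bend is not None:                                  # glide: -1.0..1.0
        value = 8192 + int(bend * 8191)                   # 14-bit, 8192 = centre
        msgs.append([0xE0 | channel, value & 0x7F, value >> 7])
    return msgs
```

Keeping each touch on its own channel is what lets a synth bend or press one note without dragging the others along, which is exactly the expressiveness described above.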

Add an iPhone or iPad running iOS 9 or later, and you instantly can turn this into an instrument – no wires required. The free Noise app gives you tons of sounds to start with. That means this is probably the smallest, most satisfying jam-on-the-go instrument I can imagine – something you could fit into a purse, let alone a backpack, and use in a hotel room or on a bus without so much as a wire or power connection. (With ten hours battery life, I’m fairly certain the Seaboard Block will run out of battery later than my iPhone does).

Regular CDM readers probably will want it to do more than that for three hundred bucks. So, you do get compatibility with various other tools. Ableton Live, FXpansion Strobe2, Native Instruments Kontakt and Massive, Bitwig Studio, Apple Logic Pro (including the amazing Sculpture), Garageband, SampleModeling SWAM, and the crazy-rich Spectrasonics Omnisphere all work out of the box.


You can also develop your own tools with a rich open SDK and API. That includes some beautiful tools for Max/MSP. Not a Max owner? There’s even a free 3-month license included. (Dedicated tools for integrating the Seaboard Block are coming soon.)

The SDK actually to me makes this worth the investment – and worth the wait to see what people come up with. I’ll have a full story on the SDK soon, as I think this summer is the perfect time for it.

The Touch block, which previously seemed a bit superfluous, also now looks useful, as it gives you additional hands-on control of how the keyboard responds. That X/Y pad makes a nice combo, too. But my guess is, for most of us, you may drop those and just use the keyboard – and of course modularity allows you to do that.

ROLI aren’t without competition (somewhat amazingly, given these devices were once limited to experimental one-offs). The forthcoming JOUE, from the creator of the JazzMutant Lemur, is an inbound Kickstarter-backed product. And I have to say, it’s truly extraordinary – the touch sensitivity and precision is unmatched on the market. But there isn’t an obvious controller template or app combo to begin with, so it’s more a specialist device. The ROLI instrument works out of the box with an app, and will be in physical Apple Stores. And the ROLI has a specific, fixed playing style the JOUE doesn’t quite match. My guess is the two will be complementary, and there’s even reason for JOUE lovers to root for ROLI – because ROLI are developing the SDK, tools, instrument integration, and user base that could help other devices to succeed. (Think JOUE, Linnstrument, Madrona Labs Soundplane, not to mention the additions to the MIDI spec.)

Anyway, this is all big news – and coming on the heels of Ableton’s acquisition of Cycling ’74, the makers of Max/MSP, this week may prove a historic one. What was once the fringe experimentation of the academic community is making a real concerted entry into the musical mainstream. Now the only remaining question, and it’s a major one, is whether the weirdo stuff catches on. Well, you have a hand in that, too – weirdos, assemble!

https://roli.com/products/blocks/seaboard-block

The post ROLI now make a $299, ultra-compact expressive keyboard appeared first on CDM Create Digital Music.

by Peter Kirn at June 08, 2017 04:41 PM

Arturia AudioFuse: all the connections, none of the hidden settings

After a long wait, Arturia’s AudioFuse interface has arrived. And on paper, at least, it’s like audio interface wish fulfillment.

What do you want in an interface? You want really reliable, low-latency audio. You want all the connections you need. (Emphasis on what you need, because that’s tricky – not everyone needs the same thing.) And you want to be able to access the settings without having to dive through menus or load an application.


That last one has often been a sticking point. Even when you do find an interface with the right connections and solid driver reliability and performance, a lot of the time the stuff you change every day is buried in some hard-to-access menus, or even more likely, on some application you have to load on your computer and futz around with.

And oh yeah — it’s €/$599. That’s aggressively competitive when you read the specs.

I requested one of these for review when I met with Arturia at Musikmesse in Frankfurt some weeks ago, so this isn’t a review – that’s coming. But here are some important specs.


Connections

Basically, you get everything you need as a solo musician/producer – 4 outs (so you can do front/rear sound live, for instance), 4 ins, plus phono pre’s for turntables, two mic pres (not just one, as some boxes annoyingly have), and MIDI.

Plus, there’s direct monitoring, separate master / monitor mix channels (which is great for click tracks, cueing for DJs or live, and anything that requires a separate monitor mix, as well as tracking), and a lot of sync and digital options.

It’s funny, this is definitely on my must-have list, but it’s hard to find a box that does this without getting an expansive (and expensive) interface that may have more I/O than one person really needs.

This is enough for pretty much all the tracking applications one or two people recording will need, plus the monitoring options you need for various live, DJ, and studio needs, and A/B monitor switching you need in the studio. It also means as a soloist, you can eliminate a lot of gear – also important when you’re on the go.

Their full specs:

2 DiscretePRO microphone preamps
2 RIAA phono preamps
4 analog inputs
2x Mic/Instrument/Line (XLR / 1/4″ TRS)
2x Phono/Line (RCA / 1/4″ TRS)
4 analog outputs (1/4″ TRS)
2 analog inserts (1/4″ TRS)
ADAT in/out
S/PDIF in/out
Word clock in/out
MIDI in/out
24-bit next-generation A-D/D-A converters at up to 192kHz sampling rate
Talkback with dedicated built-in microphone (up to 96 kHz Sample Rate)
A/B speaker switching
Direct monitoring
2 independent headphone outputs
Separate master and monitor mix channels
USB interface with PC, Mac, iOS, Android and Linux compatibility
3-port USB hub
3 models: Classic Silver, Space Grey, Deep Black
Aluminum chassis, hard leather-covered top cover

Arturia also promise high-end audio performance, to the tune of “dual state-of-the-art mic preamps with a class-leading >131dB A-weighted EIN rating.” I’ll try to test that with some people who are better engineers than I am when we get one in.

Also cute – a 3-port USB hub. So this could really cut down the amount of gear I pack.

Now, my only real gripe is, while USB improves compatibility, I’d love a Thunderbolt 3/USB-C version of this interface, especially as that becomes the norm on Mac and PC. Maybe that will come in the future; it’s not hard to imagine Arturia making two offerings if this box is a success. USB remains the lowest common denominator, and this is not a whole lot of simultaneous I/O, so USB makes some sense. (Thunderbolt should theoretically offer stable lower latency performance by allowing smaller buffer sizes.)


And dedicated controls

This is a big one. You’ll read a lot of the above on specs, but then discover that audio interfaces make you launch a clumsy app on your PC or Mac and/or dive into menus to get into settings.

That’s doubly annoying in studio use where you don’t want to break flow. How many times have you been in the middle of a session and lost time and concentration because some setting somewhere wasn’t set the way you intended, and you couldn’t see it? (“Hey, why isn’t this recording?” “Why is this level wrong?” “Why can’t I hear anything?” “Ugh, where’s the setting on this app?” … are … things you may hear if you’re near me in a studio, sometimes peppered with less-than-family-friendly bonus words.)

So Arturia have made an interface that has loads of dedicated controls. Maybe it doesn’t have a sleek, scifi minimalist aesthetic as a result, but … who cares?

Onboard dedicated controls that don’t require menu diving include: talking mic, dedicated input controls, A/B monitor switching, and a dedicated level knob for headphones.

And OS compatibility

This is the other thing – there are some great interfaces that lack support for Linux and mobile. So, for instance, if you want to rig up a custom Raspberry Pi for live use or something like that, this can double as the interface. Or you can use it with Android and iOS, which with increasingly powerful tablets starts to look viable, especially for mobile recording or stage use.

Arturia tell us performance, depending on your system, should be reliably in the territory of 4.5ms – well within what you’re likely to need, even for live (and you can still monitor direct). Some tests indicate performance as low as 3.5ms.


Plus a nice case and cover

Here’s an idea that’s obviously a long time coming. The AudioFuse not only has an adorable small form factor and aluminum chassis, but there’s a cover for it. So no more damage and scratches or even breaking off knobs when you tote this thing around – that to me is an oddly huge “why doesn’t everyone do this” moment.

The lid has a doubly useful feature – it disables the controls when it’s on, so you can avoid bumping something onstage.

Dimensions:
69 × 126 × 126 mm

Weight:
950 g

I’m very eager to get this in my hands. Stay tuned.

For more:
https://www.arturia.com/audiofuse/details

The post Arturia AudioFuse: all the connections, none of the hidden settings appeared first on CDM Create Digital Music.

by Peter Kirn at June 08, 2017 03:13 PM

June 07, 2017

MOD Devices Blog

MOD travels around the world – Part 2

Last year, Gianfranco wrote a post about the international events MOD Devices has attended, and because there’s been a lot of activity recently and a lot more to come in the near future, we’re doing a Part Deux, with all the latest event recaps and news. Enjoy!

 

Ok, so we’re a music technology startup and these are three of the greatest words you can say whenever someone asks you “- and what do YOU do?” at an event. But we’re also part of the free/libre/open source software community, which is what makes us a bit of an exotic fish in certain environments. Yet this is what gives us our edge and the ability to try to change the game and provide a creative platform that empowers its users.

At every event we go to, we’re constantly pitching and demonstrating the Duo (and, as of April, its new peripherals) to everyone we meet, and it’s interesting to see that each event has its own specificity, each crowd its expectations, each musician his or her own particular needs. As we have these conversations, we get some wonderful feedback, broaden the community and make some friends in the process. It’s both exhausting and really fascinating!

 

Musikmesse 2017

Last April, we went back to Frankfurt and took part in the Musikmesse again. This time, we weren’t accompanied by the musical mastermind who thought of a world without musical instruments, but we had a great team composed of Pjotr, Jesse, Gian and myself. We were located in the electric guitar hall and relied on our beautiful Pedalboard Builder interface to lure attendees to our booth. Also, Pjotr and Jesse’s trumpet and Circuit MOD jams were bound to get us some attention. At one point, they caught the eye of a French podcast crew and I ended up being interviewed for the great Les Sondiers channel (you can check it out below).

We made friends all around us, but a special nod must go to luthier Jean-Luc Moscato and bass virtuoso Jeff Corallini, who were right next to us. With his 7-string bass, Jeff was always impressing everyone who walked by. Someone filmed a nice impromptu jam that happened at some point. Our own Pjotr Lasschuit got some trumpet action there as well:

With music booming everywhere, we were happy to explore some of the other (quieter) halls and check out the latest gear. I was particularly impressed by this super versatile MIDI wind instrument.

All in all, we got another great sense of our place in this impressive and innovative industry and, like during NAMM earlier this year, we took another step forward in gathering momentum, creating some buzz and starting collaborations.

 

LAC 2017

The Linux Audio Conference has been THE community event for us since our first time there in 2013. This year, it was held in Saint Etienne, co-organized by the GRAME from Lyon and the CIEREC from Saint Etienne’s Jean Monnet University. It’s always a great opportunity to meet, chat and have a drink or two with our community’s developers, enthusiasts and supporters.

This year, we held a workshop on the “Origins, features and roadmap of the MOD Duo” and were really thrilled with the dialogue it sparked.

 

 

 

 

 

There was also a very insightful keynote speech by Paul Davis, developer of JACK and Ardour among other great achievements. He presented his view on the state of Linux audio and open-source development in general, and he even mentioned MOD Devices as an example of an open-source-based company striving to get proper marketing promotion (indeed we are!). I was also super excited about the music tutor developed by Marc Groenewegen from the Utrecht School of Music and Technology. We talked a little bit after his session and, along with Robin Gareus, we imagined how we could soon have a music tutor plugin for the Duo. You can check out these (and other) talks on the YouTube channel of Université Jean Monnet here.

The evenings were filled with musical performances and our own Jeremy Jongepier, AKA AutoStatic, closed off the second night with a MOD-fueled concert. He totally owned the stage with his Duo, guitar and MIDI controllers, all the while downing a nice cold beer: very RocknRoll! The video for that is here and starts at around 2:40:00.

 

Upcoming events

From attending these events we've come to realize that we're really reconciling these two aspects – the investor-friendly startup and the idealistic FLOSS developer – which isn't always easy, but they're actually two sides of the same coin. We're looking to take the best from both worlds: bring some much-needed investment and new business models to the FLOSS world, and provide evolving, innovative devices based on FLOSS to the music market.

The next events we’ll attend are a perfect place to continue to position ourselves as a company with a different outlook and mindset on the musical effects game.

 

Sónar+D MarketLab

Next week, we will be in Barcelona for a very exciting event. It will be our second participation at the Sónar+D after being selected as a finalist for the 2015 Startup Competition. We will have a booth at the MarketLab this time, which is, as the organizers put it, “a space where the creators of the year’s most outstanding technology initiatives present the projects that they have developed in creative labs, media labs, universities and businesses. A place for trying out innovations that explore new forms of creation, production and marketing, and which in turn fosters relationships between professionals in the creative industries and the general public”. Who knows, maybe Björk will come and test the Duo out…

 

Les Ardentes Start-up Garden

In early July, we are headed to Liège, in Belgium, to be one of 30 startups at the Living Lab of the Wallifornia MusicTech, held during the Les Ardentes Music Festival. This will be another great opportunity to show the Duo to a broad audience, from musicians to investors. Good music and great conversations on the horizon, what more could we ask for?

 

That's it for now, but there'll be more next semester, for sure! And if any of you will be in Spain or Belgium for our next two rendezvous, we'd love to see you, so drop us a line 🙂

by Mauricio Dwek at June 07, 2017 12:11 PM

May 30, 2017

blog4

Notstandskomitee concert video

The Notstandskomitee concert at Fraction Bruit #17, Loophole Berlin, 27.5.2017 with tracks from the new album The Golden Times:



by herrsteiner (noreply@blogger.com) at May 30, 2017 02:58 PM


May 23, 2017

blog4

new Notstandskomitee album and Berlin concert

Block4 has set the release date of the new Notstandskomitee album The Golden Times to this Friday, the 26. May 2017! It will be released exclusively on Bandcamp at the usual address https://notstandskomitee.bandcamp.com
On Saturday, the 27. May 2017, Malte Steiner will play a new Notstandskomitee set with new realtime visuals at the final Fraction Bruit event in Loophole, Berlin.

The Golden Times Are About To Come


by herrsteiner (noreply@blogger.com) at May 23, 2017 04:48 PM

Body Interfaces: 10.1.1 100 Continue

The performance Body Interfaces: 10.1.1 100 Continue by my better half Tina Mariane Krogh Madsen at the Sofia Underground Performance Art Festival, 28.4.2017


by herrsteiner (noreply@blogger.com) at May 23, 2017 04:02 PM

May 19, 2017

Libre Music Production - Articles, Tutorials and News

Paul Davis, Ardour and JACK creator/developer, talks at Linux Audio Conference 2017

The Linux Audio Conference 2017 is under way and this year, Ardour and JACK creator/developer, Paul Davis talked about Linux audio and his thoughts on where things currently stand with his presentation "20 years of open source audio: Success, Failure and the In-between".

by Conor at May 19, 2017 11:39 PM

LSP plugins 1.0.24 released

Vladimir Sadovnikov has just released version 1.0.24 of his audio plugin suite, LSP plugins. All LSP plugins are available in LADSPA, LV2, LinuxVST and standalone JACK formats.

by Conor at May 19, 2017 05:21 PM

Ardour 5.9 is released

Ardour 5.9 has recently been released with new features, including many improvements and fixes.

This release includes -

by Conor at May 19, 2017 05:16 PM

Drumgizmo 0.9.14 is released

The Drumgizmo team have officially announced version 0.9.14 of their drum sampling plugin.

DrumGizmo is an open source, multichannel, multilayered, cross-platform drum plugin and stand-alone application. It enables you to compose drums in MIDI and mix them with a multichannel approach, comparable to mixing a real drumkit that has been recorded with a multimic setup.

by Conor at May 19, 2017 05:03 PM

May 15, 2017

ardour

Ardour 5.9 released

Ardour 5.9 is now available, representing several months of development that spans some new features and many improvements and fixes.

Among other things, some significant optimizations were made to redraw performance on OS X/macOS that may be apparent if you are using Ardour on that platform. There were further improvements to tempo and MIDI related features and lots of small improvements to state serialization. Support for the Presonus Faderport 8 control surface was added (see the manual for some quite thorough documentation).

As usual, there are also dozens or hundreds of other fixes based on continuing feedback from wonderful Ardour users worldwide.

Read more below for the full list of features, improvements and fixes.

Download  

read more

by paul at May 15, 2017 03:54 PM

May 10, 2017

rncbc.org

Qtractor 0.8.2 - A Stickier Tauon release


And now for something ultimately pretty much expected: the Qstuff* pre-LAC2017 release frenzy wrap up!

Qtractor 0.8.2 (a stickier tauon) is released!

Change-log:

  • Track-name uniqueness is now being enforced, by adding an auto-incremental number suffix whenever necessary.
  • Attempt to raise an internal transient file-name registry to prevent automation/curve files from proliferating across several session load/save (re)cycles.
  • Track-height resizing now gives immediate visual feedback.
  • A brand new user preference global option is now available: View/Options.../Plugins/Editor/Select plug-in's editor (GUI) if more than one is available.
  • More gradient eye-candy on main track-view and piano-roll canvases, now showing left and right edge fake-shadows.
  • Fixed the time entry spin-boxes when changing time offset or length fields in BBT time format across any tempo/time-signature change nodes.
  • French (fr) translation update (by Olivier Humbert, thanks).

Description:

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. The target platform is Linux, where the JACK Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures, evolving as a fairly featured Linux desktop audio workstation GUI, especially dedicated to the personal home studio.

Website:

http://qtractor.org
http://qtractor.sourceforge.net

Project page:

http://sourceforge.net/projects/qtractor

Downloads:

http://sourceforge.net/projects/qtractor/files

Git repos:

http://git.code.sf.net/p/qtractor/code
https://github.com/rncbc/qtractor.git
https://gitlab.com/rncbc/qtractor.git
https://bitbucket.org/rncbc/qtractor.git

Wiki (help still wanted!):

http://sourceforge.net/p/qtractor/wiki/

License:

Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Enjoy && Have fun, always.

by rncbc at May 10, 2017 07:00 PM

May 06, 2017

Libre Music Production - Articles, Tutorials and News

ZARAZA releases new album entirely recorded with Libre Music tools

Ecuadorian / Canadian experimental veterans ZARAZA have just released their 3rd album Spasms of Rebirth.

It was entirely recorded using Libre Music tools:

  • Fedora 25
  • Ardour (all mixing)
  • Guitarix (all guitars and bass)
  • Calf plugins (for mixing in Ardour)
  • Audacity (mastering)

by admin at May 06, 2017 09:56 PM

DrumGizmo version 0.9.13 now available

DrumGizmo is an open source, multichannel, multilayered, cross-platform drum plugin and stand-alone application. It enables you to compose drums in MIDI and mix them with a multichannel approach, comparable to mixing a real drumkit that has been recorded with a multimic setup.

Included in this release is:

by admin at May 06, 2017 09:52 PM

New Drumgizmo version released with major new feature, diskstreaming!

Version 0.9.13 of drum sampling plugin, Drumgizmo has recently been released with the much anticipated diskstreaming feature.

DrumGizmo is an open source, multichannel, multilayered, cross-platform drum plugin and stand-alone application. It enables you to compose drums in MIDI and mix them with a multichannel approach, comparable to mixing a real drumkit that has been recorded with a multimic setup.

by Conor at May 06, 2017 09:38 PM

MOD Devices Blog

Tutorial: Arduino & Control Chain

Hi there once again fellow MOD-monsters! As some of you might know, we are currently in the beta testing phase for our new Control Chain footswitch extension. At the same time, we have also released the brand new Arduino Control Chain shield, allowing you to build your own awesome controllers.

If you’re thinking, hey Jesse, what is all that Control Chain talk about?

Control Chain is an open standard (including hardware, communication protocol, cables and connectors) developed to connect external controllers to the MOD: for example, footswitch extensions, expression pedals and so on.
Compared to MIDI, Control Chain is way more powerful. For example, instead of using hard-coded values as MIDI does, Control Chain has what is called a device descriptor, and its assignment (or mapping) message contains the full information about the parameter being assigned, such as parameter name, absolute value, range and any other data. Having all that information on the device side allows developers to create powerful peripherals that can, for example, show the absolute parameter value on a display, use different LED colors to indicate a specific state, etc. Pretty neat, right?
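To make that contrast concrete, here is a rough C++ sketch of the information each message carries. The type and field names are hypothetical illustrations, not the actual Control Chain protocol structures:

```cpp
#include <cstdint>
#include <string>

// A MIDI Control Change message is three hard-coded bytes; the meaning of
// the controller number and its 0-127 value is fixed by convention only.
struct MidiCC {
    uint8_t channel;     // 0-15
    uint8_t controller;  // 0-127
    uint8_t value;       // 0-127, no name, no units, no range
};

// A Control Chain-style assignment (hypothetical field names) carries the
// full parameter description, so the peripheral itself can show the name
// and absolute value on a display, or pick LED colors from the state.
struct ControlChainAssignment {
    std::string parameter_name;  // e.g. "Gain"
    float       value;           // absolute value, not a 7-bit fraction
    float       minimum;         // lower bound of the parameter range
    float       maximum;         // upper bound of the parameter range
};
```

Because the assignment describes the parameter rather than just encoding a number, the controller needs no hard-coded knowledge of what it is mapped to.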

Until now, you could find two examples, for a simple momentary button and a potentiometer, on our GitHub page, but today we will add a new example: we will build a Control Chain device with expression pedal inputs.

What do I need?

  1. One Arduino Uno or Due
  2. One Arduino Control Chain shield
  3. One stereo (TRS) jack for every expression pedal input that you want (max: 4 on the Uno, 8 on the Due)
  4. A soldering iron, some wire and some soldering tin
  5. (Optional) Something to put your final build in

The schematic

Because the Arduino has very high impedance analog inputs, there is no need for any current-limiting resistor. We can simply hook up the TRS jacks as follows: tip to 5V, ring to signal and sleeve to ground.*

(*) Not all expression pedals are created equal; some manufacturers use a different mapping than the one described above.
Another common mapping is: tip to signal, ring to 5V, sleeve to ground (for example on the Roland EV-5).

The code

The Arduino code is quite simple: it reads the ADC values using the analogRead() function and stores them in variables. The Control Chain library takes care of the rest.

The code is written in such a way that you can change the define at the top to the number of ports that you want, without rewriting any code. Do you want 3 expression pedal ports?

#define amountOfPorts 3

The maximum number of ports for an Arduino Uno is 4. The Arduino Due can provide a maximum of 8 ports.

The build

  1. Solder wires to your TRS jack inputs
  2. Twist the wires together
  3. Solder the sleeves to the ground strip on the CC shield
  4. Solder the tips to the 5v strip on the CC shield
  5. Solder the rings to the corresponding analog inputs on the CC shield

Attach the CC shield to the Arduino; your device should now look a little like this:

  1. Follow the instructions on our Github Page and install the dependencies
  2. Change the define in the code to the amount of ports connected
  3. Upload the code to your Arduino
  4. Time for a test drive!
    1. Connect the MOD Duo to the “main” Control Chain port on your new device

    2. Connect your expression pedals and try them out with your MOD Duo!
  5. (Optional) Create an enclosure for (semi-)permanent installation. I used an old smartphone box that I had lying around somewhere 🙂

The end result

You just built your own Control Chain device, hopefully the first of many more to come. We are looking forward to seeing what all you wonderful people come up with! Don't hesitate to come and talk to us on the forums if you have any questions about Control Chain devices, the Arduino shield or our favourite musicians.

Talk to you later!

P.S. Vulfpeck is great

by Jesse Verhage at May 06, 2017 09:16 PM

GStreamer News

GStreamer 1.12.0 stable release (binaries)

Pre-built binary images of the 1.12.0 stable release of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

May 06, 2017 12:30 PM

May 04, 2017

GStreamer News

GStreamer 1.12.0 stable release

The GStreamer team is pleased to announce the first release in the stable 1.12 release series. The 1.12 release series is adding new features on top of the 1.0, 1.2, 1.4, 1.6, 1.8 and 1.10 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

Full release notes can be found here.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the next days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

May 04, 2017 04:00 PM

May 02, 2017

digital audio hacks – Hackaday

Robotic Glockenspiel and Hacked HDD’s Make Music

[bd594] likes to make strange objects. This time it's a robotic glockenspiel and hacked HDDs. [bd594] is no stranger to Hackaday either, as we have featured many of his past projects before, including the useless candle and recreating the song Funky Town from old junk.

His latest project is quite exciting. He has combined his robotic glockenspiel with a hacked hard drive rhythm section to play audio, controlled via a PIC 16F84A microcontroller. The song choice is Axel F. If you had a cell phone around the early 2000s, you were almost guaranteed to have used this song as a ringtone at some point or another. This is where music is headed these days anyway; the sooner we can replace the likes of Justin Bieber with a robot the better. Or maybe we already have?

 


Filed under: digital audio hacks, robots hacks

by Jack Laidlaw at May 02, 2017 08:00 PM

rncbc.org

Vee One Suite 0.8.2 - The Pre-LAC2017 Release frenzy continues...


The Qstuff* pre-LAC2017 release frenzy continues...

The Vee One Suite of old-school software instruments (synthv1, a polyphonic subtractive synthesizer; samplv1, a polyphonic sampler synthesizer; and drumkv1, yet another drum-kit sampler) are all joining the traditional pre-LAC release frenzy!

All available in dual form:

  • a pure stand-alone JACK client with JACK-session, NSM (Non Session Manager) and both JACK MIDI and ALSA MIDI input support;
  • a LV2 instrument plug-in.

The common change-log for this second batch release goes as follows:

  • A custom knob/spin-box behavioral option has been added, Configure/Knob edit mode, to avoid abrupt changes upon editing values (still the default behavior) and only take effect (Deferred) when Enter is pressed or the spin-box loses focus.
  • The main GUI has been partially revamped, after replacing some rotary knob/dial combos with kinda more skeuomorphic fake-LED radio-buttons or check-boxes.
  • A MIDI In(put) status fake-LED is now featured on the bottom-left status bar, adding up to eye-candy as usual (applies to all); also, each drum element key/sample now has its own fake-LED flashing on respective MIDI note-on/off events (applies to drumkv1 only).
  • Alias-free/band-limited wavetable oscillators have been fixed to prevent cross-octave, polyphonic interference. (applies to synthv1 only).
  • A brand new and specific user preference option is now available as Help/Configure.../Options/Use GM standard drum names (default being yes/true/on; applies to drumkv1 only).

The Vee One Suite are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

So here they are again!

synthv1 - an old-school polyphonic synthesizer

synthv1 0.8.2 (pre-lac2017) is out!

synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.

LV2 URI: http://synthv1.sourceforge.net/lv2

website:
http://synthv1.sourceforge.net

downloads:
http://sourceforge.net/projects/synthv1/files

git repos:
http://git.code.sf.net/p/synthv1/code
https://github.com/rncbc/synthv1.git
https://gitlab.com/rncbc/synthv1.git
https://bitbucket.org/rncbc/synthv1.git

samplv1 - an old-school polyphonic sampler

samplv1 0.8.2 (pre-lac2017) is out!

samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.

LV2 URI: http://samplv1.sourceforge.net/lv2

website:
http://samplv1.sourceforge.net

downloads:
http://sourceforge.net/projects/samplv1/files

git repos:
http://git.code.sf.net/p/samplv1/code
https://github.com/rncbc/samplv1.git
https://gitlab.com/rncbc/samplv1.git
https://bitbucket.org/rncbc/samplv1.git

drumkv1 - an old-school drum-kit sampler

drumkv1 0.8.2 (pre-lac2017) is out!

drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.

LV2 URI: http://drumkv1.sourceforge.net/lv2

website:
http://drumkv1.sourceforge.net

downloads:
http://sourceforge.net/projects/drumkv1/files

git repos:
http://git.code.sf.net/p/drumkv1/code
https://github.com/rncbc/drumkv1.git
https://gitlab.com/rncbc/drumkv1.git
https://bitbucket.org/rncbc/drumkv1.git

Enjoy && share the fun ;)

by rncbc at May 02, 2017 07:00 PM

May 01, 2017

GStreamer News

GStreamer 1.12.0 release candidate 2 (1.11.91, binaries)

Pre-built binary images of the 1.12.0 release candidate 2 of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

May 01, 2017 04:00 PM

April 30, 2017

m3ga blog

What do you mean ExceptT doesn't Compose?

Disclaimer: I work at Ambiata (our Github presence) probably the biggest Haskell shop in the southern hemisphere. Although I mention some of Ambiata's coding practices, in this blog post I am speaking for myself and not for Ambiata. However, the way I'm using ExceptT and handling exceptions in this post is something I learned from my colleagues at Ambiata.

At work, I've been spending some time tracking down exceptions in some of our Haskell code that have been bubbling up to the top level and killing a complex multi-threaded program. On Friday I posted a somewhat flippant comment to Google Plus:

Using exceptions for control flow is the root of many evils in software.

Lennart Kolmodin who I remember from my very earliest days of using Haskell in 2008 and who I met for the first time at ICFP in Copenhagen in 2011 responded:

Yet what to do if you want composable code? Currently I have
type Rpc a = ExceptT RpcError IO a
which is terrible

But what do we mean by "composable"? I like the wikipedia definition:

Composability is a system design principle that deals with the inter-relationships of components. A highly composable system provides recombinant components that can be selected and assembled in various combinations to satisfy specific user requirements.

The ensuing discussion, which also included Sean Leather, suggested that these two experienced Haskellers were not aware that with the help of some combinator functions, ExceptT composes very nicely and results in more readable and more reliable code.

At Ambiata, our coding guidelines strongly discourage the use of partial functions. Since the type signature of a function doesn't include information about the exceptions it might throw, the use of exceptions is strongly discouraged. When using library functions that may throw exceptions, we try to catch those exceptions as close as possible to their source and turn them into errors that are explicit in the type signatures of the code we write. Finally, we avoid using String to hold errors. Instead we construct data types to carry error messages and render functions to convert them to Text.

In order to properly demonstrate the ideas, I've written some demo code and made it available in this GitHub repo. It compiles and even runs (providing you give it the required number of command line arguments) and hopefully does a good job demonstrating how the bits fit together.

So let's look at the naive version of a program that doesn't do any exception handling at all.


  import Data.ByteString.Char8 (readFile, writeFile)

  import Naive.Cat (Cat, parseCat)
  import Naive.Db (Result, processWithDb, renderResult, withDatabaseConnection)
  import Naive.Dog (Dog, parseDog)

  import Prelude hiding (readFile, writeFile)

  import System.Environment (getArgs)
  import System.Exit (exitFailure)

  main :: IO ()
  main = do
    args <- getArgs
    case args of
      [inFile1, infile2, outFile] -> processFiles inFile1 infile2 outFile
      _ -> putStrLn "Expected three file names." >> exitFailure

  readCatFile :: FilePath -> IO Cat
  readCatFile fpath = do
    putStrLn "Reading Cat file."
    parseCat <$> readFile fpath

  readDogFile :: FilePath -> IO Dog
  readDogFile fpath = do
    putStrLn "Reading Dog file."
    parseDog <$> readFile fpath

  writeResultFile :: FilePath -> Result -> IO ()
  writeResultFile fpath result = do
    putStrLn "Writing Result file."
    writeFile fpath $ renderResult result

  processFiles :: FilePath -> FilePath -> FilePath -> IO ()
  processFiles infile1 infile2 outfile = do
    cat <- readCatFile infile1
    dog <- readDogFile infile2
    result <- withDatabaseConnection $ \ db ->
                 processWithDb db cat dog
    writeResultFile outfile result

Once built as per the instructions in the repo, it can be run with:


  dist/build/improved/improved Naive/Cat.hs Naive/Dog.hs /dev/null
  Reading Cat file 'Naive/Cat.hs'
  Reading Dog file 'Naive/Dog.hs'.
  Writing Result file '/dev/null'.

The above code is pretty naive and there is zero indication of what can and cannot fail or how it can fail. Here's a list of some of the obvious failures that may result in an exception being thrown:

  • Either of the two readFile calls.
  • The writeFile call.
  • The parsing functions parseCat and parseDog.
  • Opening the database connection.
  • The database connection could terminate during the processing stage.

So let's see how the use of the standard Either type, ExceptT from the transformers package and combinators from Gabriel Gonzalez's errors package can improve things.

Firstly, the types of parseCat and parseDog were ridiculous. Parsers can fail with parse errors, so these should both return an Either type. Just about everything else should be in the ExceptT e IO monad. Let's see what that looks like:


  {-# LANGUAGE OverloadedStrings #-}
  import           Control.Exception (SomeException)
  import           Control.Monad.IO.Class (liftIO)
  import           Control.Error (ExceptT, fmapL, fmapLT, handleExceptT
                                 , hoistEither, runExceptT)

  import           Data.ByteString.Char8 (readFile, writeFile)
  import           Data.Monoid ((<>))
  import           Data.Text (Text)
  import qualified Data.Text as T
  import qualified Data.Text.IO as T

  import           Improved.Cat (Cat, CatParseError, parseCat, renderCatParseError)
  import           Improved.Db (DbError, Result, processWithDb, renderDbError
                               , renderResult, withDatabaseConnection)
  import           Improved.Dog (Dog, DogParseError, parseDog, renderDogParseError)

  import           Prelude hiding (readFile, writeFile)

  import           System.Environment (getArgs)
  import           System.Exit (exitFailure)

  data ProcessError
    = ECat CatParseError
    | EDog DogParseError
    | EReadFile FilePath Text
    | EWriteFile FilePath Text
    | EDb DbError

  main :: IO ()
  main = do
    args <- getArgs
    case args of
      [inFile1, infile2, outFile] ->
              report =<< runExceptT (processFiles inFile1 infile2 outFile)
      _ -> do
          putStrLn "Expected three file names, the first two are input, the last output."
          exitFailure

  report :: Either ProcessError () -> IO ()
  report (Right _) = pure ()
  report (Left e) = T.putStrLn $ renderProcessError e


  renderProcessError :: ProcessError -> Text
  renderProcessError pe =
    case pe of
      ECat ec -> renderCatParseError ec
      EDog ed -> renderDogParseError ed
      EReadFile fpath msg -> "Error reading '" <> T.pack fpath <> "' : " <> msg
      EWriteFile fpath msg -> "Error writing '" <> T.pack fpath <> "' : " <> msg
      EDb dbe -> renderDbError dbe


  readCatFile :: FilePath -> ExceptT ProcessError IO Cat
  readCatFile fpath = do
    liftIO $ putStrLn "Reading Cat file."
    bs <- handleExceptT handler $ readFile fpath
    hoistEither . fmapL ECat $ parseCat bs
    where
      handler :: SomeException -> ProcessError
      handler e = EReadFile fpath (T.pack $ show e)

  readDogFile :: FilePath -> ExceptT ProcessError IO Dog
  readDogFile fpath = do
    liftIO $ putStrLn "Reading Dog file."
    bs <- handleExceptT handler $ readFile fpath
    hoistEither . fmapL EDog $ parseDog bs
    where
      handler :: SomeException -> ProcessError
      handler e = EReadFile fpath (T.pack $ show e)

  writeResultFile :: FilePath -> Result -> ExceptT ProcessError IO ()
  writeResultFile fpath result = do
    liftIO $ putStrLn "Writing Result file."
    handleExceptT handler . writeFile fpath $ renderResult result
    where
      handler :: SomeException -> ProcessError
      handler e = EWriteFile fpath (T.pack $ show e)

  processFiles :: FilePath -> FilePath -> FilePath -> ExceptT ProcessError IO ()
  processFiles infile1 infile2 outfile = do
    cat <- readCatFile infile1
    dog <- readDogFile infile2
    result <- fmapLT EDb . withDatabaseConnection $ \ db ->
                 processWithDb db cat dog
    writeResultFile outfile result

The first thing to notice is that changes to the structure of the main processing function processFiles are minor but all errors are now handled explicitly. In addition, all possible exceptions are caught as close as possible to the source and turned into errors that are explicit in the function return types. Sceptical? Try replacing one of the readFile calls with an error call or a throw and see it get caught and turned into an error as specified by the type of the function.

We also see that despite having many different error types (which happens when code is split up into many packages and modules), a constructor for an error type higher in the stack can encapsulate error types lower in the stack. For example, this value of type ProcessError:


  EDb (DbError3 ResultError1)

contains a DbError which in turn contains a ResultError. Nesting error types like this aids composition, as does the separation of error rendering (turning an error data type into text to be printed) from printing.

We also see that the use of combinators like fmapLT, together with the nested error types of the previous paragraph, means that ExceptT monad transformers do compose.

Using ExceptT with the combinators from the errors package to catch exceptions as close as possible to their source and converting them to errors has numerous benefits including:

  • Errors are explicit in the types of the functions, making the code easier to reason about.
  • It's easier to provide better error messages and more context than what is normally provided by the Show instance of most exceptions.
  • The programmer spends less time chasing the source of exceptions in large complex code bases.
  • More robust code, because the programmer is forced to think about and write code to handle errors instead of error handling being an optional afterthought.

Want to discuss this? Try reddit.

April 30, 2017 02:22 AM

April 27, 2017

rncbc.org

The QStuff* Pre-LAC2017 Release frenzy started...

Greetings!

The Qstuff* pre-LAC2017 release frenzy is getting started...

Enjoy the first batch, more to come and have fun!

 

QjackCtl - JACK Audio Connection Kit Qt GUI Interface

QjackCtl 0.4.5 (pre-lac2017) is now released!

QjackCtl is a(n ageing but still) simple Qt application to control the JACK sound server, for the Linux Audio infrastructure.

Website:
http://qjackctl.sourceforge.net
Project page:
http://sourceforge.net/projects/qjackctl
Downloads:
http://sourceforge.net/projects/qjackctl/files

Git repos:

http://git.code.sf.net/p/qjackctl/code
https://github.com/rncbc/qjackctl.git
https://gitlab.com/rncbc/qjackctl.git
https://bitbucket.com/rncbc/qjackctl.git

Change-log:

  • On some desktop shells, the system tray icon blinking on XRUN occurrences has been found responsible for excessive CPU usage; this "eye-candy" effect is now optional as far as Setup/Display/Blink server mode indicator goes.
  • Added French man page (by Olivier Humbert, thanks).
  • Make builds reproducible byte for byte, by getting rid of the configure build date and time stamps.

Flattr this

 

Qsynth - A fluidsynth Qt GUI Interface

Qsynth 0.4.4 (pre-lac2017) is now released!

Qsynth is a FluidSynth GUI front-end application written in C++ around the Qt framework using Qt Designer.

Website:
http://qsynth.sourceforge.net
Project page:
http://sourceforge.net/projects/qsynth
Downloads:
http://sourceforge.net/projects/qsynth/files

Git repos:

http://git.code.sf.net/p/qsynth/code
https://github.com/rncbc/qsynth.git
https://gitlab.com/rncbc/qsynth.git
https://bitbucket.com/rncbc/qsynth.git

Change-log:

  • Added French man page (by Olivier Humbert, thanks).
  • Make builds reproducible byte for byte, by getting rid of the configure build date and time stamps.


 

Qsampler - A LinuxSampler Qt GUI Interface

Qsampler 0.4.3 (pre-lac2017) is now released!

Qsampler is a LinuxSampler GUI front-end application written in C++ around the Qt framework using Qt Designer.

Website:
http://qsampler.sourceforge.net
Project page:
http://sourceforge.net/projects/qsampler
Downloads:
http://sourceforge.net/projects/qsampler/files

Git repos:

http://git.code.sf.net/p/qsampler/code
https://github.com/rncbc/qsampler.git
https://gitlab.com/rncbc/qsampler.git
https://bitbucket.com/rncbc/qsampler.git

Change-log:

  • Added French man page (by Olivier Humbert, thanks).
  • Make builds reproducible byte for byte, by getting rid of the configure build date and time stamps.


 

QXGEdit - A Qt XG Editor

QXGEdit 0.4.3 (pre-lac2017) is now released!

QXGEdit is a live XG instrument editor, specialized in editing MIDI System Exclusive files (.syx) for the Yamaha DB50XG, and thus probably a baseline for many other XG devices.

Website:
http://qxgedit.sourceforge.net
Project page:
http://sourceforge.net/projects/qxgedit
Downloads:
http://sourceforge.net/projects/qxgedit/files

Git repos:

http://git.code.sf.net/p/qxgedit/code
https://github.com/rncbc/qxgedit.git
https://gitlab.com/rncbc/qxgedit.git
https://bitbucket.com/rncbc/qxgedit.git

Change-log:

  • Added French man page (by Olivier Humbert, thanks).
  • Added one decimal digit to the randomize percentage input spin-boxes on the General Options dialog.
  • Make builds reproducible byte for byte, by getting rid of the configure build date and time stamps.


 

QmidiCtl - A MIDI Remote Controller via UDP/IP Multicast

QmidiCtl 0.4.3 (pre-lac2017) is now released!

QmidiCtl is a MIDI remote controller application that sends MIDI data over the network, using UDP/IP multicast. It was inspired by multimidicast (http://llg.cubic.org/tools) and designed to be compatible with ipMIDI for Windows (http://nerds.de). QmidiCtl was primarily designed for Maemo-enabled handheld devices, namely the Nokia N900, and has also been promoted to the Maemo package repositories. Nevertheless, QmidiCtl is still effective as a regular desktop application as well.

Website:
http://qmidictl.sourceforge.net
Project page:
http://sourceforge.net/projects/qmidictl
Downloads:
http://sourceforge.net/projects/qmidictl/files

Git repos:

http://git.code.sf.net/p/qmidictl/code
https://github.com/rncbc/qmidictl.git
https://gitlab.com/rncbc/qmidictl.git
https://bitbucket.com/rncbc/qmidictl.git

Change-log:

  • Added French man page (by Olivier Humbert, thanks).
  • Make builds reproducible byte for byte, by getting rid of the configure build date and time stamps.


 

QmidiNet - A MIDI Network Gateway via UDP/IP Multicast

QmidiNet 0.4.3 (pre-lac2017) is now released!

QmidiNet is a MIDI network gateway application that sends and receives MIDI data (ALSA-MIDI and JACK-MIDI) over the network, using UDP/IP multicast. Inspired by multimidicast and designed to be compatible with ipMIDI for Windows.

Website:
http://qmidinet.sourceforge.net
Project page:
http://sourceforge.net/projects/qmidinet
Downloads:
http://sourceforge.net/projects/qmidinet/files

Git repos:

http://git.code.sf.net/p/qmidinet/code
https://github.com/rncbc/qmidinet.git
https://gitlab.com/rncbc/qmidinet.git
https://bitbucket.com/rncbc/qmidinet.git

Change-log:

  • Added French man page (by Olivier Humbert, thanks).
  • Make builds reproducible byte for byte, by getting rid of the configure build date and time stamps.


 

License:

All of the Qstuff* are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

 

Enjoy && keep the fun!

by rncbc at April 27, 2017 07:00 PM

GStreamer News

GStreamer 1.12.0 release candidate 2 (1.11.91)

The GStreamer team is pleased to announce the second release candidate of the stable 1.12 release series. The 1.12 release series is adding new features on top of the 1.0, 1.2, 1.4, 1.6, 1.8 and 1.10 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

Full release notes will be provided with the 1.12.0 release, highlighting all the new features, bugfixes, performance optimizations and other important changes. An initial, unfinished version of the release notes can be found here already.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the coming days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

April 27, 2017 03:00 PM

April 26, 2017

open-source – CDM Create Digital Music

ArduTouch is an all-in-one Arduino synthesizer learning kit for $30

This looks like a near-perfect platform for learning synthesis with Arduino – and it’s just US$30 (with an even-lower $25 target price).

It’s called ArduTouch, a new Arduino-compatible music synth kit. It’s fully open source – everything you need to put this together is available on GitHub. And it’s the work of Mitch Altman, something of a celebrity in DIY/maker circles.

Mitch is the clever inventor of the TV B-Gone – an IR blaster that lets you terminate TV power in places like airport lounges – plus brainwave-tickling gear like the Neurodreamer and Trip Glasses. (See his company, Cornfield Electronics.) Indeed, some ten years ago when CDM hosted its first MusicMakers / Handmade Music event in New York, Mitch happened to be in town and put us all in a pleasant, totally drug-free trance state with his glasses. He’s also a music fan, though, so it’s great to see him get back into music synthesis.

And ArduTouch is hugely clever. It’s an Arduino clone, but instead of just some headers and pins for connecting wires (boring), it also adds a PCB touch keyboard for playing notes, some extra buttons and pots so you can control sounds, and an all-important amp and speaker, so you can hear the results on just the board. (You’ll obviously want to plug into extra gear for more power and loudness.)

You don’t have to code. Just put this together, and you can start making music.

That’s already pretty cool, but the real magic comes in the form of two additional ingredients:

Software. ArduTouch is a new library that enables the synthesis capabilities of the board. This means you can also customize synth functionality (like adding additional control or modifying the sound), or create your own synths.

Tutorials. When you want to go deeper, the other side of this is a set of documentation to teach you the basics of DSP (digital signal processing) using the board and library.

In other words, what you’ve got is an all-hardware course on DSP coding, on a $30 board. And that’s just fabulous. I’ve always thought working at a low level with hardware is a great way to get into the basics, especially for those with no previous coding background.

Looks like I’ve got a summer project. Stay tuned. And thanks, Mitch!

This obviously needs videos and sound samples and the like so — guess we should get on that!

ardutouch

https://github.com/maltman23/ArduTouch

In the meantime, though, here’s Mitch with some great inspiration on what hacking and making is about. Mitch is uncommonly good at teaching and explaining and generally being a leader for all kinds of people worldwide. Have a look:

He also walks people through the hackerspace movement and where it came from – especially meaningful to us, as the hacklabs and knowledge transfer projects we host are rooted directly in this legacy (including via Mitch’s own contributions). This talk is really must-watch, as it’s one of the best explanations I’ve seen on what this is about and how to make it work:

Don’t know how to solder? Mitch has you covered:

And for a how-to that’s equally important, Mitch talks about how to do what you love:

The post ArduTouch is an all-in-one Arduino synthesizer learning kit for $30 appeared first on CDM Create Digital Music.

by Peter Kirn at April 26, 2017 11:27 AM

April 24, 2017

Audio – Stefan Westerfeld's blog

24.04.2017 spectmorph-0.3.2 and www.spectmorph.org updates

Finally, after taking the time to integrate improvements, spectmorph-0.3.2 was released. The main feature is certainly the new unison effect. By adding up multiple detuned copies of the same sound, it produces the illusion of multiple instruments playing the same notes. Of course this is just a cheap approximation of what it would sound like if you really recorded multiple real instruments playing the same notes, but it at least makes the sound “seem” fatter than without the effect.

At the same time, the website was redesigned and improved. Besides the new look and feel, there is now also a piece of music called “Morphing Motion” which was made with the newest version of the SpectMorph VST plugin.

Visit www.spectmorph.org to get the new version or listen to the audio demos.

by stw at April 24, 2017 04:33 PM

April 20, 2017

Libre Music Production - Articles, Tutorials and News

Unfa : Helm's deep


In this extensive video, Unfa uses Helm, the new light gun of the Linux Audio Synthesis arsenal, to compose a full drums+bass+melody track on the fly!

 

by yassinphilip at April 20, 2017 08:38 AM

April 18, 2017

Scores of Beauty

OOoLilyPond: Creating musical snippets in LibreOffice documents

Combining text and music

If you want to create a document with lots of text and some small musical snippets, e.g. an exercise sheet or a musical analysis, what software can you use?

Of course it’s possible to do the entire project in LilyPond or another notation program, inserting passages of text between multiple scores – in LilyPond by combining \markup and \score sections:

\markup { "A first motif:" }
\score { \relative c' { c4 d e f  g2 g } }
\markup { "A second motif:" }
\score { \relative c'' { a4 a a a  g1 } }

OLy01

However, it is clear that notation programs are not originally designed for that task, so many people prefer WYSIWYG word processors like LibreOffice Writer or Microsoft Word that instantly show what the final document will look like. In these text documents music fragments can be inserted as image files that can for example be generated with LilyPond from .ly input files. Of course these images are then static, and to be able to modify the music examples one has to manage the additional files with some care. That’s when things might get a little more complicated…

Wouldn’t it be a killer feature to be able to edit the scores directly from within the word processor document, without having to keep track of and worry about additional files? Well, you may be surprised to learn that this has already been possible for quite some time, and I take the relaunch of OOoLilyPond as an opportunity to show it to you.

What is OOoLilyPond?

OOoLilyPond (OLy) is a LibreOffice extension that allows you to insert snippets of LilyPond code into LibreOffice Writer, Draw and Impress documents and transparently handles the rendering through LilyPond. So you can write LilyPond code, have it rendered as a score, and be able to modify it again.

OOoLilyPond was originally written by Samuel Hartmann and had its first launch in 2006 (hence the name, as the only open-source successor of StarOffice was OpenOffice.org).
Samuel continued the development until 2009, when a stable version 0.4.0 with new features was released. In the following years, OLy was occasionally mentioned in LilyPond’s user forums, so there might be several people who use it regularly – including myself. Being a music teacher, I work with it every day. Well, almost…

In 2014, LilyPond’s new 2.19 release showed a different behaviour when invoked by the command used in OLy. This led to a somewhat mysterious error message, and the macro execution was aborted. It was therefore impossible to use OLy with LilyPond’s new development versions. Of course I googled the problem, but there was no answer.

At some point I wanted to get to the bottom of it. I’m one of those guys who have to unscrew anything they get their hands on. OLy is open source and published under the GPL, so why hesitate? After studying the code for a while, I finally found that the problem was surprisingly small and easy to fix. I posted my solution on the LilyPond mailing list and also began to experiment with new features.

Urs Liska and Joram Berger had already contacted Samuel in the past. They knew that he did not have the time to further work on OOoLilyPond, but he would be glad if someone else could take over the development of the project.

Urs and Joram also contributed lots of work, knowledge and ideas, so that we were finally able to publish a new release that can be adapted to the slightly different characteristics of LibreOffice and OpenOffice, that can be translated into other languages, that can make use of vector graphics etc. This new take on the project now has its home within openLilyLib.

How to get and install it

The newest release will always be found at github.com/openlilylib/LO-ly/releases where the project is maintained. Look for an *.oxt file with a name similar to OOoLilyPond-0.X.X.oxt and download it:

OLy-Downloads-01

For anyone who doesn’t want to read the release notes, there’s a simple Download page as well.

In LibreOffice, open the extension manager (Tools -> Extension Manager), click the “Add” button which will open a file dialog. Select the *.oxt file you’ve just downloaded and confirm with the “Open” button.

When asked for whom you want to install the extension, you can choose “only for me”, which won’t require administrator privileges on your system. After successful installation, close the extension manager; you will probably be asked to restart LibreOffice.

Now LibreOffice will have a new “OLy” toolbar. It contains a single “OLy” button that launches the extension.

OLy-Toolbar-01

Launching for the first time

Here we go: Create a new Writer document and click the OLy button. (Don’t worry if you get some error messages telling you that LilyPond could not be executed. Just click “OK” to close the message boxes. We’ll fix that in a moment.)

Now you should see the OOoLilyPond Editor window.

First, let’s open the configuration dialog by clicking the “Config” button at the bottom:

OLy-Editor-Window-01

A new window will pop up:

OLy-Config-Window-01

Of course, you need to have LilyPond installed on your system. In the “LilyPond Executable” field, you need to specify the executable file for LilyPond. On startup, OLy has tried to guess its correct (default) location. If that didn’t work, you already got some error messages.  😉

For a Windows system, you need to know the program folder (probably C:\Program Files (x86)\LilyPond on 64-bit Windows or C:\Program Files\LilyPond on 32-bit Windows systems).
In the subfolder \usr\bin\ you will find the executable file lilypond.exe.

If you are working with Linux, relax and smile. Usually, you simply need to specify lilypond as the command, without any path settings. As far as I know, that also applies to the Mac OS family, which is based on Unix as well.

On the left side, there are two frames titled “Insert Images”. Depending on the Office version you are using (OpenOffice or LibreOffice), you should click the appropriate options.

For the moment, all remaining settings can be left at their default values. In case you’ve messed up anything, there’s also a “Reset to Default” button.

At the right bottom, click “OK” to apply the settings and close the dialog. Now you are back in the main Editor window. It contains some sample code, so just click the “LilyPond” button at the bottom right.

In the background, LilyPond is now translating the code into a *.png graphic file which will be inserted into Writer. The code itself is invisibly saved inside the document.

After a few seconds, the editor window should disappear, and a newly created image should show up.

How to work with it

If you want to modify an existing OLy object, click on it to select it in Writer. Then, hit the “OLy” button.

The Editor window will show the code as it has been entered before. Here you can modify it, e.g. change some pitches (there’s also no need to keep the comments) and click the “LilyPond” button again. OLy will generate a new image and replace the old one.

To insert a new OLy object, just make sure that no existing object is selected when hitting the “OLy” button.

Templates

In the Editor window, you might have noticed that you were not presented an entire LilyPond file, but only an excerpt of it. This is because OLy always works with a template. It allows you to quickly enter short snippets while not having to care about any other settings for layout etc.

The snippet you just created is based on the template Default.ly which looks (more or less) like this:

\transpose %{OOoLilyPondCustom1%}c c'%{OOoLilyPondEnd%}
{
  %{OOoLilyPondCode%}
  \key e \major 
  e8 fis gis e fis8 b,4. | 
  e2\fermata \bar "|."
  %{OOoLilyPondEnd%}
}

\include "lilypond-book-preamble.ly"
#(set-global-staff-size %{OOoLilyPondStaffSize%}20%{OOoLilyPondEnd%})

\paper {
  #(define dump-extents #t)
  ragged-right = ##t
  line-width = %{OOoLilyPondLineWidth%}17\cm%{OOoLilyPondEnd%}
}

\layout {
  indent = #0
  \context {
    \Score
    \remove "Bar_number_engraver"
  }
}

In the Editor window, there are five text fields: the big “Code” area on top, and four additional small fields named “Line Width”, “Staff Size”, “Custom 1” and “Custom 2”. They contain the template parts that are enclosed by tags, i.e. preceded by %{OOoLilyPondCode%}, %{OOoLilyPondLineWidth%}, %{OOoLilyPondStaffSize%}, %{OOoLilyPondCustom1%} and %{OOoLilyPondCustom2%} respectively, each terminated by %{OOoLilyPondEnd%}. (Those tags themselves are ignored by LilyPond because they are comments.)

All remaining parts of the template stay “invisible” to the user and cannot be changed. Don’t worry, you can modify existing templates and create your own.

A template must at least have a Code section, other sections are optional. There is a template Direct to LilyPond which only consists of a Code section and contains no “invisible” parts at all. You can use it to paste ordinary *.ly files into your document. But please keep in mind that the resulting graphic should be smaller than your paper size.

Most templates (the ones without [SVG] inside the file name) make use of \include "lilypond-book-preamble.ly" which results in a cropped image. Any whitespace around the music is automatically removed.

Below the code view, there is a dropdown field that lets you choose which template to use. Of course, different templates have different default code in their Code sections.

When switching the template, the code field will always update to the corresponding default code as long as you haven’t made any edits yet. However, this will not happen automatically if you have already made changes. To have your current code replaced anyway, tick the “Default Code” checkbox.

The “Edit” button will open a new dialog where you can edit the current template. Optionally, you can save it under a new file name.

Easier editing

You are probably used to a particular text editor when working on LilyPond files. Of course, you can use it for OLy templates as well. The path to the template files can be found (and changed) in the configuration dialog. Here you can also specify where your text editor’s executable file is located. You can use any text editor like Mousepad, Notepad etc., but if you don’t yet know Frescobaldi, you really should give it a try.

Back in the main OLy window, another button might be useful: “Open as temp. file in Ext. Editor”. It saves the entire snippet into a *.ly file – not only the contents of the “Code” field, but also the other fields and the “invisible” parts between them. This file is opened in the external editor you’ve specified before. If you use an IDE like Frescobaldi, you can instantly preview your changes.

As soon as editing is finished, save your changes (without changing the file name). You can now close your external editor.

Back in OLy, hit the “Import from temp. file” button to load the updated file back into OLy. In the text fields you will recognize the changes you have applied. Hit the “LilyPond” button to insert the graphic into your document.

A word of caution: Only changes to the Code, Line Width, Staff Size, Custom 1 and Custom 2 fields are recognized. Changes to the “invisible” parts of the template are ignored! If you intend to modify those sections as well, you need to create a new template.

A very last word of caution: If you use a template that is modified or created by yourself, and you share your Office document with other collaborators, you have to share your template as well.

To be continued…

OLy can be configured for using vector graphic formats (*.svg or *.eps) instead of *.png. They offer better quality, especially for printing. However, some additional things will have to be considered. This will soon be covered in a follow-up post.

 

by Klaus Blum at April 18, 2017 04:10 PM

April 13, 2017

News – Ubuntu Studio

Ubuntu Studio 17.04 Released

We are happy to announce the release of our latest version, Ubuntu Studio 17.04 Zesty Zapus! As a regular version, it will be supported for 9 months. Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a complete list […]

by eylul at April 13, 2017 10:31 PM

April 10, 2017

GStreamer News

GStreamer 1.12.0 release candidate 1 (1.11.90, binaries)

Pre-built binary images of the 1.12.0 release candidate 1 of GStreamer are now available for Windows 32/64-bit, iOS and Mac OS X and Android.

The builds are available for download from: Android, iOS, Mac OS X and Windows.

April 10, 2017 02:00 PM

aubio

0.4.5 released

A new version of aubio, 0.4.5, is available.

This version features:

  • a new aubio python command line tool to extract information from sound files
  • improved default parameters for onset detection, using adaptive spectral whitening and compression
  • support for libswresample

New options --miditap-note and --miditap-velo have been added to aubioonset and aubiotrack to adjust the note and velocity of the midi note emitted by onsets and beats.

0.4.5 also comes with a bunch of fixes, including improved documentation, build system fixes, and platform compatibility.

Many thanks to Martin Hermant (@MartinHM), Sebastian Böck (@superbock), Travis Seaver (@tseaver) and others for their help and contributions.

read more after the break...

April 10, 2017 01:02 PM

digital audio hacks – Hackaday

Custom Media Center Maintains Look of 70s Audio Components

Slotting a modern media center into an old stereo usually means adding Bluetooth and a Raspberry Pi to an amp or receiver, and maybe adding a few discrete connectors on the back panel. But this media center for a late-70s Braun hi-fi (translated) goes many steps beyond that — it fabricates a component that never existed.

The article is in German, and the Google translation is a little spotty, but it’s pretty clear what [Sebastian Schwarzmeier] is going for here. The Braun Studio Line of audio components was pretty sleek, and to avoid disturbing the lines of his stack, he decided to create a completely new component and dub it the “M301.”

The gutted chassis of an existing but defunct A301 amplifier became the new home for a Mac Mini, Blu-Ray drive, and external hard drive. An HDMI port added to the back panel blends in with the original connectors seamlessly. But the breathtaking bit is a custom replacement hood that looks like what the Braun designers would have come up with if “media center” had been a term in the 70s.

From the brushed aluminum finish, to the controls, to the logo and lettering, everything about the component that never was shows an attention to detail that really impresses. But if you prefer racks of servers to racks of audio gear, this media center built into a server chassis is sure to please too.

Thanks to [Sascho] and [NoApple4Me] for the nearly simultaneous tips on this one.


Filed under: classic hacks, digital audio hacks

by Dan Maloney at April 10, 2017 05:01 AM

April 07, 2017

GStreamer News

GStreamer 1.12.0 release candidate 1 (1.11.90)

The GStreamer team is pleased to announce the first release candidate of the stable 1.12 release series. The 1.12 release series is adding new features on top of the 1.0, 1.2, 1.4, 1.6, 1.8 and 1.10 series and is part of the API and ABI-stable 1.x release series of the GStreamer multimedia framework.

Full release notes will be provided with the 1.12.0 release, highlighting all the new features, bugfixes, performance optimizations and other important changes.

Binaries for Android, iOS, Mac OS X and Windows will be provided in the coming days.

Check out the release notes for GStreamer core, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx, or download tarballs for gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

April 07, 2017 02:00 PM

April 04, 2017

Linux – CDM Create Digital Music

Waveform woos DAW switchers with clean UI, features, Raspberry Pi

The struggle to make an all-in-one computer production tool that’s different continues.

Tracktion, a lesser-known “indie” DAW that has seen a rapid resurgence in recent builds, is now back in a new generation version dubbed Waveform. As usual, the challenge is to make something that kind of does everything, and necessarily needs to do all the things the competition does, while still being somehow different from that competition.

Waveform’s answer is to build on Tracktion’s clean UI by making it yet more refined. It builds on its open workflow by adding modular mixing and enhanced “racks” for processing. And it runs on Linux – including Ubuntu 14.04 and 16.04, the Mate desktop environment, and ultra-cheap Raspberry Pi hardware.

waveform-screen-modular-mix

For producers, there’s also some sweeteners added. There’s an integrated synth/sampler called Collective. And you get a whole bunch of MIDI generators, interestingly – atop a step sequencer already included with Tracktion, there are new pattern and chord generators. That’s interesting, in that it moves the DAW into the territory of things like FL Studio – or at the very least assumes you may want to use this environment to construct your ideas.

waveform-screen-composition-tools

waveform-screen-collective-hd

Oh yeah, and there’s a cute logo that looks, let’s be honest here, very reminiscent of something that rhymes with Bro Drools. (sorry)

Obligatory promo vid:

It looks attractive, certainly, and seems to go up against the likes of Studio One for clean-and-fresh DAW (plus standbys like Reaper). But this is a crowded field, full of people who don’t necessarily have time to switch from one tool to another. Pricing runs $99-200 for the full version depending on bundled features, and upgrades are $50 or free for Tracktion users — meaning they’ll be happy, I suspect.

If you’re up for reviewing this, let us know.

https://www.tracktion.com/products/waveform

The post Waveform woos DAW switchers with clean UI, features, Raspberry Pi appeared first on CDM Create Digital Music.

by Peter Kirn at April 04, 2017 10:42 PM

April 03, 2017

open-source – CDM Create Digital Music

This software is like getting a modular inside your computer, for free

Modular synthesizers present some beautiful possibilities for sound design and composition. For constructing certain kinds of sounds, and certain automated rhythmic and melodic structures, they’re beautiful – and endure for a reason.

Now, that description could fit both software and hardware modulars. And of course, hardware has some inarguable, irreplaceable advantages. But the same things that make it great to work with can also be limiting. You can’t dynamically change patches without some plugging and replugging, you’re limited by what modules you’ve got bolted into a rack, and … oh yeah, apart from size and weight, these things cost money.

So let’s sing the praises of computers for a moment – because it’s great that we can choose either, or both.

Money alone is reason enough. I think anyone with a cheap-ass laptop and absolutely no cash should still get access to the joy of modular. Deeper pockets don’t mean more talent. And beyond that, there are advantages to working with environments that are dynamic, computerized, and even open and open source. That’s true enough whether you use them on their own or in conjunction with hardware.

Enter Automatonism, by Johan Eriksson.

It’s free, it’s open source, it’s a collection of modules built in Pure Data (Pd). That means you can run it on macOS, Windows, and Linux, on a laptop or on a Raspberry Pi, or even build patches you use in games and apps.

And while there are other free modular tools for computers, this one is uniquely hardware modular-like in its design — meaning it’s more approachable, and uses the signal flow and compositional conception from that gear. Commercial software from Native Instruments (REAKTOR Blocks) and Softube (Modular) has done that, and with great sound and prettier front panels, but this may be the most approachable free and open source solution. (And it runs everywhere Pd runs, including mobile platforms.)

Sure, you could build this yourself, but this saves loads of time.

automatonism

You get 67 modules, covering all the basics (oscillators and filters and clocks and whatnot) and some nice advanced stuff (FM, granular delays, and so on).

The modules are coupled with easy-to-follow documentation for building your basic West Coast and East Coast synth patches, too. And the developer promises more modules are coming – or you can build your own, using Pd.

Crucially, you can also use all of this in real-time — whereas Pd normally is a glitchy mess while you’re patching. Johan proves that by doing weird, wonderful live patching performances:

If you know how to use Pd, this is all instantly useful – and even advanced users I’m sure will welcome it. But you really don’t need to know much about Pd.

The developer claims you don’t need to know anything, and includes easy instructions. But you’ll want to know something, as the first question on the video tells me. Let’s just solve this right now:

Q. I cannot get my cursor to change from the pointer finger to an arrow. I can drag modules and connect them but I can’t change any parameters. What am I missing?

A. That’s because Pure Data has two modes of operation: EDIT mode and PERFORMANCE mode. EDIT mode, the pointer finger, lets you drag objects around and connect cables, while PERFORMANCE mode, the arrow, lets you interact with sliders and other GUI objects. Swap between the two via the Edit menu in Pure Data, or with the shortcut Cmd-E [Ctrl-E on Windows/Linux].

Now you’re ready!

This is also a bit like a software-with-concept-album release, as the developer has created a wild, ear-tickling IDM EP to go with it. It should give you an idea of the range of sounds possible with Automatonism; of course, your own musical idiom can be very different, if you like, using the same tools. I suspect some hardware lovers will listen to this and say “ah, that sounds like a computer, not warm analog gear.” To that, I say: first, I love Pd’s computer-ish character, and second, you can design sounds, process, mix, and master to make the end result sound like anything you want anyway, if you know what you’re doing.

Johan took a pretty nerdy, Pd purist angle on this, and … I love it for what it is!

AUTOMATONISM #1 by Automatonism

But this is truly one of the best things I’ve seen with Pd in a long time — and perhaps the best-documented project for the platform yet, full stop.

It’s definitely becoming part of my music toolkit. Have a look:

https://www.automatonism.com/

The post This software is like getting a modular inside your computer, for free appeared first on CDM Create Digital Music.

by Peter Kirn at April 03, 2017 08:57 PM

April 01, 2017

digital audio hacks – Hackaday

Cerebrum: Mobile Passwords Lifted Acoustically with NASB

 

There are innumerable password hacking methods, but recent advances in acoustic and accelerometer sensing have opened up the door to side-channel attacks, where passwords or other sensitive data can be extracted from the acoustic properties of the electronics and the human interface to the device. A recent and dramatic example is the hacking of RSA encryption simply by listening to the frequencies of sound a processor puts out when crunching the numbers.

Now there is a new long-distance hack on the scene. The Cerebrum system represents a recent innovation in side-channel password attacks leveraging acoustic signatures of mobile and other electronic devices to extract password data at stand-off distances.

Research scientists at cFREG provide a compelling demonstration of the Cerebrum prototype. It uses Password Frequency Sensing (PFS), where the acoustic signature of a password being entered into an electronic device is acquired, sent up to the cloud, passed through a proprietary deep learning algorithm, and decoded. Demonstrations and technical details are shown in the video below.

Many of these methods have been shown previously, as explained by MIT researcher T. M. Gil in his iconic paper,

“In recent years, much research has been devoted to the exploration of von Neumann machines; however, few have deployed the study of simulated annealing. In fact, few security experts would disagree with the investigation of online algorithms [25]. STEEVE, our new system for game-theoretic modalities, is the solution to all of these challenges.”

To counter this argument, the researchers at cFREG have taken it to a much higher and far more accurate level.

Measurements

The Cerebrum team began their work by prototyping systems to increase the range of their device. The first step was to characterize the acoustic analog front end and transducers with particular attention paid to the unorthodox acoustic focusing element:

The improvements are based on the ratio of Net Air-Sugar Boundaries (NASB), using off-the-shelf marshmallows. Temperature probing is integral to calibrating this performance, and with this success they moved on to field testing the long-range system.

Extending the Range

The prototype was tested by interfacing a magnetic loop antenna directly onto the Cerebrum through a coax-to-marshmallow transition. By walking the street with a low-profile loop antenna, numerous passwords were successfully detected and decoded.

War Driving with PFS

To maximize range, additional antenna apertures were added and mounted onto a mobile platform, including a log periodic, an X-band parabolic dish, and a magnetic loop antenna to capture any and all low frequency data. In this configuration it was possible to collect vast quantities of passwords out to upwards of half a mile from the vehicle, resulting in a treasure trove of credentials.

 

Without much effort, the maximum range and overall performance of the Cerebrum PFS were dramatically increased, opening up a vast array of additional applications. This is an existing and troubling vulnerability, but the researchers have a recommended fix, which implements meaningless calculations into mobile devices when processing user input. The erroneous sound created will be enough to fool the machine learning algorithms… for now.


Filed under: digital audio hacks, Fiction, security hacks

by Gregory L. Charvat at April 01, 2017 11:01 PM