February 13, 2020

Audio – Stefan Westerfeld's blog

SpectMorph 0.5.1 released

A new version, SpectMorph 0.5.1, is now available. SpectMorph is a VST/LV2/JACK synthesis engine based on the idea of analyzing audio samples and combining them using morphing.

As you can see in the screenshot, there are a few new LFO waveforms available (saw, square and random).

On Windows and macOS, there has never been a need for users to compile anything: you could just download SpectMorph, install it and use it. On Linux, we provide packages for Ubuntu, and there are also distribution packages for Arch Linux. But this means that as a user of any other Linux distribution, you had to build SpectMorph from source, which may be too difficult for the average user.

This release improves the situation: there are now generic 64-bit Linux binaries available, which provide the VST/LV2 plugin (statically linked). These binaries should run on just about any Linux distribution. Note that this is a new feature, so please let me know if the generic binaries don’t work for you.

This release also contains a few fixes and the detailed list of changes can be found here.

Finally, let me recommend two YouTube videos (if you haven’t watched these yet):

by stw at February 13, 2020 10:51 AM

February 07, 2020

Is Open Source a diversion from what users really want?

@paul wrote:

When I started working on Ardour, it never occurred to me to do anything other than use the GNU General Public License (GPL), the most well-known way to release “open source” software. At that time, it was a choice driven by a combination of:

  • my passionate belief in what is more appropriately called "free/libre software"
  • an awareness that I'd probably need help developing Ardour. The open source model seemed to me the best way to make it possible for others to contribute (no matter what their motivations might have been).
  • the desirability of being able to use dozens of software libraries released under GPL-related licenses

Of course, developing software with complexity on the level of Ardour’s is never going to be easy, and finding other people willing and able to contribute to such a project is always going to be hard, whether you’re an open source project or a proprietary company.

However, underlying both of those reasons why I wanted to use the GPL was a conviction that access to the source code was critical to both:

  1. giving users the freedom they deserved
  2. attracting developers (or even just "power users") to contribute to the project.

I remain convinced that access to source code is a fundamental part of the "four freedoms" that Richard Stallman has outlined as the basis of the concept of "free/libre software". But as described at great length and exhaustive detail by Berlin-based electronic musician and developer Louigi Verona, it's not quite that simple.

Meanwhile, how could anyone really contribute to the project in any substantive way without source code access? If they were going to add functionality, or extend it or in some other way modify it, source code access seems like a basic and absolute requirement.

A recent thread on our forums has made me revisit these assumptions, and this has led me to have some doubts about what "open source" really means for a project like Ardour.

Forum member Musocity wants to be able to extend Ardour without having to build the program from source. If you read the thread, you'll see both myself and co-developer Robin Gareus pushing back on this concept several times. Nevertheless, Musocity continues to use Reaper as a counter-example in which much greater levels of user-driven "extensions" are possible without any access to the source code and without any requirement to rebuild the program.

My gut reaction continues to be something along the lines of

Are you kidding me? We give you full access to everything in Ardour, not just some pre-selected functionality exposed via a scripting language. You can build it on almost any platform, add/remove/enhance almost anything you can imagine ... and you keep pushing for a 2nd-rate scripting interface just so that you can do stuff without dealing with the build process?

But this forum thread has made me keep returning to two points I mentioned above. Specifically in the form of follow up questions:

  1. are users truly being given freedom by confronting them with a technological infrastructure that almost none of them can comprehend?
  2. does the requirement to rebuild the program, or from a different perspective, to write C++ code, attract or deter developers to/from the project?

These are not easy questions to answer.

Let's start by pointing out what Ardour already offers: a very substantial Lua API providing access to the majority of the program, and with it the ability to write both DSP code and higher-level functionality, all without rebuilding Ardour or dealing with C++. This has all been Robin Gareus' effort, and he has done an amazing job (aided by just how suitable Lua is for this sort of thing - partly why Reaper uses it too, no doubt). What is missing is the ability to construct arbitrary graphical user interface components from Lua. This puts distinct limits on what can be done with scripting in Ardour, even given how much is already possible.

Reaper stands as an existence proof of what can be done when the scripting capabilities are essentially all-encompassing. Ableton Live, with both Max4Live and other "scripting systems" offers slightly less extensive capabilities, but still somewhat more than Ardour in terms of GUI integration.

Nevertheless, it remains the case that nobody except for Reaper's (or Live's) developers can modify fundamental aspects of the program. The work that we have been doing on Ardour 6.0 would never have been possible to do via Lua, and that would be true in the context of Reaper or Live as well. So the first thing that we need to note is:

  1. the scripting interface for a DAW can vary in terms of what it makes possible to accomplish.
  2. this is particularly true in terms of integration into the "main body" of DAW's own GUI.
  3. no matter what the scripting interface offers, it does not allow anyone to do fundamental work on the internals of the DAW. A DAW that cannot do cue monitoring will never become one that can because of a script extension. The same goes for full latency compensation, misdesigned region/clip lists and an almost inexhaustible list of other features that cannot be implemented via a script extension system.

Musocity, it appears, doesn't really care about any of this: they've seen what you can do with Reaper's scripting interface, and it seems entirely reasonable to them that the same ought to be true in Ardour whether or not the entire program's source code is available.

Which brings us to the second aspect of why this is complicated. Even 20 years ago there were full time web developers. There were also so-called "application developers" who typically worked entirely inside database-connected development tools to create ways to view and edit data. In the two decades since Ardour started, we've seen an entire generation of people whose job description includes "programming" but who have never (or almost never) compiled a piece of software in their life. The web development infrastructure that has grown up over the last two decades has seen huge numbers of people creating software in ways that never require them to take "source code" and transform it into "a program". They write "scripts" (be it in Javascript, Java, Python, Ruby or whatever they prefer) and the necessary magic happens to ensure that what they've written actually executes, somehow. Even for the most sophisticated web development stacks, where there's some notion of "build systems" and "deployment", these only have a superficial resemblance to the workflow involved with a desktop application written in a compiled language.

In my limited experience interacting with people who develop on a web stack, the build process for a program like Ardour is frequently a massive stumbling block to any participation they might have considered. They may not mind dealing with poorly documented (or undocumented) APIs, complex data structures and mind-boggling program control flow. But tell them that after they make a change, they have to "build" the program and that in some circumstances this will take several minutes to complete ... enthusiasm starts dying rapidly. And now consider what happens when these developers are on platforms (primarily Windows and macOS) where they cannot issue a single command (e.g. apt-get build-dep ardour) to set up their build environment, but must painstakingly build and install 2GB of third-party library source code before they can even build Ardour for the first time.

It's not entirely surprising that a project like this doesn't have many active developers at any given point in time!

Of course, there are other factors too: really getting into Ardour development means being comfortable with (in no particular order): real time programming, parallel/thread programming, cross-platform development, C++ idioms, model-view-controller design, the GTK+ toolkit, some level of DSP knowledge, a non-trivial understanding of audio and MIDI hardware, the MIDI protocol, and lots more. Even if Ardour was paying more developers, it would be extremely hard to find people with the right background and outlook.

When I released Ardour under the GPL, my vision was that by virtue of it being an open source project (technically orthogonal to its status as "free/libre software"), it would be possible, even encouraged, for other developers to participate and get involved in extending its capabilities. Musocity's forum thread, and their insistence that "all this should be possible by scripting" has made me wonder if this belief was ever true, and in particular if it is still true.

Why isn't the Reaper model better? Technically-inclined users can do insane things with a script, and in so doing can easily address most of the things that particular users want the program to do. Almost no Reaper user cares that they cannot build Reaper from source, cannot modify the fundamentals of the program, cannot redistribute a modified Reaper to their colleagues/friends. It matters much more to them that someone outside of the Reaper team can cook up a script that can do "just about anything". That's what freedom looks like to Reaper users (or so it appears), and giving them the source to Reaper would barely change that, if at all.

Verona touches on so many aspects of this in his piece. The demographics and background of computer users in 2020 are very, very different from the way things were when Stallman originated the concept of "free/libre software". Back then, "freedom to tinker" really did mean the freedom to read and edit source code, and to rebuild programs from source. Today, even though this is still a foundational aspect of the concept of "free/libre software", the freedom that many users want doesn't come from source code access at all. It comes from applications that enable their users to easily customize things to "about the level that most users care about".

Nevertheless, the concept of free/libre software is still vitally important to me and millions of other people. As mentioned above, even the best script extension system (or any form of program customizability) cannot replace source code (and build system) access in terms of providing the kind of freedom to tinker (and thus freedom to learn) that Stallman (and many others) envisaged.

But perhaps for applications like Ardour, ones that do not yet exist, there ought to be a different development pathway. I remember once wondering if we should have implemented the entire GUI in PyGTK (i.e. Python). We didn't, and most of my curiosity was about whether it would have helped or hindered our development process. However, had we done so, one of the consequences would have been that many changes to the program would have been made simpler, easier to access and would require no "rebuild".

I wonder if going forward, large-scale apps like Ardour ought to (as Reaper did relatively early in its life) consider the "script extension system" to be a vital and critical part of the application infrastructure. This would mean, for example, writing large parts of "core functionality" using this system, rather than dropping back into C++ to get things done. There are precedents for this: GNU Emacs, for example, is at some level written in C, but almost everything about the program is actually constructed in Emacs Lisp, its own "scripting extension". The C core of Emacs is so small and so irrelevant that it almost doesn't matter that it is there: if you want to modify or extend Emacs, you (almost always) write Lisp, not C.

Forcing the "core" developers to, as the saying goes, "eat the same shit" as regular users forces them to focus their attention on the quality of said "shit". Removing the need to rebuild the application after most changes opens the application to contributions from people who cannot deal with (1) the idea of compilation and/or (2) the reality of compilation.

Would we have attracted more developers over the years if Ardour had been more accessible to programmers with skills in Python, Javascript or Ruby? It's hard to know. I have no idea how many people (absolute or as a fraction of the user base) have written notable extensions for Reaper (or Live). It's possible that it would make no difference whatsoever, and would merely divert developer time away from one level (C++) where we can function efficiently and happily and divert it to another ("scripting extension") that doesn't actually enable much at all.

I don't know the answers to any of the questions I've mentioned above. I do know that Robin did an amazing job bringing an incredible level of Lua scripting into Ardour, and that the things you can't do with it are very much a result of our joint intention - the intention that people who want to modify or extend Ardour should plan on working on the (open) source code for the program, not on convincing us to expand the scope of our scripting support.

But perhaps that's wrong. What do you think?


by Paul Davis (@paul) at February 07, 2020 06:51 AM

February 04, 2020

digital audio hacks – Hackaday

How To Hack A Portable Bluetooth Speaker By Skipping The Bluetooth

Portable Bluetooth speakers have joined the club of ubiquitous personal electronics. What was once an expensive luxury is now widely accessible thanks to a prolific landscape of manufacturers mass producing speakers to fit every taste and budget. Some have even become branded promotional giveaway items. As a consequence, nowadays it’s not unusual to have a small collection of them, a fertile field for hacking.

But many surplus speakers are put on a shelf for “do something with it later” only to collect dust. Our main obstacle is a side effect of market diversity: with so many different speakers, a hack posted for one speaker wouldn’t apply to another. Some speakers are amenable to custom firmware, but only a small minority have attracted a software development community. It doesn’t help that most Bluetooth audio modules are opaque, their development toolchains difficult to obtain.

So what if we just take advantage of the best parts of these speakers: great audio fidelity, portability, and the polished look of a consumer good, using them as hosts for our own audio-based hacks? Let’s throw the Bluetooth overboard but embrace all those other things. Now hacking these boxes just requires a change of mindset and a little detective work. I’ll show you how to drop an Arduino into a cheap speaker as the blueprint for your own audio adventures.

Directing the Hacker Mindset at Myriad Bluetooth Speakers

There are way too many different speakers out there for one hack to rule them all. But by changing our Bluetooth speaker mindset from “it’s a reprogrammable computer” to “it’s an integrated collection of useful electronic components”, we turn market diversity into our ally.

Look at this from the perspective of Bluetooth speaker manufacturers: they want their Bluetooth speaker to stand out from competitors, and the most obvious way is in their selection of loudspeaker drivers. Surprising the customer with big sound from a little box is key for success, so each product can offer a unique combination for driving the audio, all housed inside an eye-catching enclosure that lets consumers tell one portable Bluetooth speaker from another.

Tailoring the loudspeaker selection has cascading effects through the rest of the system. For the best sound, they will need matching audio amplifier modules, which will have their own power requirements, which dictate battery performance, and so on. To allow this differentiation, these components are kept out of the tightly integrated mystery black boxes. Fortunately for hardware hackers, such an architecture also makes the components easy to reuse:

  1. A rechargeable battery.
  2. Ability to charge that battery from USB.
  3. A low-power standby mode to monitor presses of the power button.
  4. Protection of the battery from over-discharge.
  5. A voltage regulator supplying battery power to the device.
  6. An audio line-in jack.
  7. Volume up/down control.
  8. Amplifier and driver.

All of these are useful for projects, already neatly packaged in a mass-produced enclosure.

Putting Theory Into Practice With An Example

Now that we have a general background, let’s apply this concept to a specific example. But before we begin, an obligatory note in case it is not obvious to any beginners reading this: This activity very definitely voids the warranty (do it, it’s worth it!), and modern portable electronics use lithium chemistry batteries that can be dangerous if mistreated.

The Bluetooth speaker used in this example is a “Rugged Portable Bluetooth Speaker” sold by North American electronics retailer Best Buy under one of their house brands. A search of its FCC ID pointed to Lightcomm Technology Co. as the manufacturer. The “rugged” claim starts with a layer of soft rubber wrapped around its exterior. That, plus reinforcements inside the case, allows the speaker to absorb some level of abuse. I wanted to preserve this shock-absorbing exterior and, thankfully, it was easy to open non-destructively. Even more care would be needed if this were a waterproof speaker (it wasn’t) and moisture barriers needed to be preserved. Alternatively, if the plan is to transfer the internals to another enclosure, the condition of the original box would not matter.

Once the circuit board has been extracted, the Bluetooth interface module should immediately stand out as the most sophisticated component sitting close to an antenna. A search for ATS2823 confirmed it is a module designed and sold for integration into Bluetooth audio products. Its MIPS M4K core and associated flash storage could be a promising start for firmware hacking, but the point of this example is to demonstrate how to hack a speaker utilizing existing firmware. So we will leave the module as-is.

Solder to the External Audio Input

The easiest way to pipe audio into this system is to pretend to be an external audio source. We want the system to believe we are connected via an audio cable plugged into the line-in jack, but for compactness we’d prefer to do this without using an actual cord. This approach is easy, nondestructive, and preserves the existing volume control mechanism. There are a lot of different ways to implement an audio jack, so some exploration with a multimeter will be required. We need to find the standardized contacts for: audio input left channel, right channel, and ground. (Wikipedia reference: “Phone connector (audio)“)

It’ll be a little trickier to decipher the plug detection scheme, as it is not standardized. In this particular example, there is a fourth pin that floats in the absence of an audio plug. When an audio plug is present, the pin is grounded. Soldering a wire to permanently ground that detection pin will keep this speaker constantly in “playing external audio” mode.

Or Connect To Amplifier Directly

An alternative approach is to bypass existing input and volume control, sending audio directly to the amplifier chip. To find this chip, we start with the voice coil wires and backtrack. It’ll likely be the largest component near those voice coil wires. Once the amplifier chip is found, consult the datasheet to find the input pins to cut free from the circuit and rewire for audio input that bypasses existing control.

But even if we wish to maintain existing volume control, it is still useful to locate the audio amplifier chip. It is the most power-hungry component on the circuit board, and peak power requirements for the system are dictated by the amount of power this amplifier will draw when playing loudly. Therefore it is half the puzzle of calculating our available power. This particular Bluetooth device uses a Mixinno MIX2052 chip sitting adjacent to the voice coil wire connector, with a peak power of 6 watts.

Tap Into Power Supply

The other half of the puzzle is the voltage regulator delivering power to the amplifier chip. Similar to how we look for our amplifier near our voice coil wires, we can look for our regulator sitting near inductors, capacitors, and diodes. Once the power module is found, read its data sheet to determine peak power output.

The power budget for our hack would be constrained by power figures for those two components. Most microcontrollers consume maximum power during bootup. So as long as the audio source stays quiet during this time, we would have a little extra power to support boot. Somewhere between the regulator and the amplifier is also the best place to tap power. It allows us to piggyback on the existing power management circuit that shuts down the amplifier when entering low power mode, cutting power to our hack at the same time.

In the case of this board, there was one prominent coil and a Techcode TD8208 step-up regulator was found next to it. Configured to deliver 5 volts, this regulator can deliver 1A and tolerate brief spikes not to exceed 2A. This wouldn’t be enough to feed a Raspberry Pi 4, but plenty for an Arduino Nano.
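As a back-of-the-envelope check on those numbers, the budget can be sketched in a few lines. The regulator and amplifier figures are the ones quoted above; the Arduino Nano's ~100 mA draw at 5 V is my assumption, not a measured value from this build:

```cpp
// Rough power budget for the example build: TD8208 boost regulator set
// for 5 V at 1 A continuous (brief spikes up to 2 A), MIX2052 amplifier
// peaking at 6 W, and an assumed ~100 mA draw for the Arduino Nano.
struct Budget {
    double continuous_w;   // sustained output of the regulator
    double spike_w;        // short-term output during 2 A spikes
    double mcu_w;          // assumed draw of the Arduino Nano
    double amp_headroom_w; // continuous power left for the amplifier
};

Budget speaker_budget() {
    const double volts = 5.0;
    Budget b;
    b.continuous_w   = volts * 1.0;              // 5 W
    b.spike_w        = volts * 2.0;              // 10 W, briefly
    b.mcu_w          = volts * 0.1;              // ~0.5 W
    b.amp_headroom_w = b.continuous_w - b.mcu_w; // 4.5 W sustained
    return b;
}
```

Under these assumptions, the 6 W amplifier peak plus the Nano (about 6.5 W) fits under the 10 W spike tolerance but not the 5 W continuous rating, which is consistent with keeping the audio quiet while the microcontroller draws its peak boot current.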

Repurpose Control Button

So far the functionality of three of the four buttons on this speaker has been preserved: power, volume up, and volume down. The fourth button initiates Bluetooth pairing or answers a phone call. We’re cutting BT out of the equation, so it is no longer useful and can be repurposed.

On this speaker, SW4 is normally open and pulls to ground when pressed, making it trivial to reuse. I cut the trace leading to the Bluetooth interface module and soldered a wire so the switch now pulls an Arduino pin to ground when pressed.
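With a pull-up enabled on that Arduino pin, a press simply reads as a low level, but a mechanical switch bounces for a few milliseconds. A minimal debouncer along these lines (the names and the 20 ms window are illustrative, not from this build) could sit between digitalRead() and the rest of the sketch:

```cpp
#include <cstdint>

// Debounces the repurposed, active-low SW4: the pin reads 0 while
// pressed, and update() returns true exactly once per clean press.
// On an Arduino you would call update(digitalRead(pin), millis())
// from loop().
class Debouncer {
public:
    bool update(int level, uint32_t now_ms) {
        if (level != last_level_) {        // edge seen: restart the timer
            last_change_ = now_ms;
            last_level_ = level;
        }
        bool stable_low = (level == 0) && (now_ms - last_change_ >= 20);
        bool fired = stable_low && !reported_;
        if (fired)
            reported_ = true;              // report each press only once
        if (level != 0)
            reported_ = false;             // released: arm for next press
        return fired;
    }
private:
    int last_level_ = 1;                   // pull-up keeps the idle line high
    uint32_t last_change_ = 0;
    bool reported_ = false;
};
```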

Tuck Everything Back In

A few pieces of internal plastic reinforcements for ruggedness were cut away to create enough volume for an Arduino Nano inside this enclosure. It is no longer quite as rugged, but now it is far more interesting as a platform for sound hacks. To conclude this proof of concept, the Arduino Nano is using the Mozzi audio library to play the classic Wilhelm scream whenever our repurposed button is pressed.


Build Your Own Bleepy Bloopy Buzzy Box

Bluetooth used to be the novelty, with plenty of hacks adding Bluetooth to existing audio equipment, playing Bluetooth audio out of unexpected devices, or building Bluetooth speakers from scratch. But now that Bluetooth speakers are ubiquitous, we’re approaching the point where Bluetooth is not necessarily the center of attention. Skipping the Bluetooth in a portable Bluetooth speaker gives us a new platform for our noise maker hacks. Something small, fun, and easy to bring to our next hacker show-and-tell meetup!

by Roger Cheng at February 04, 2020 03:01 PM

February 01, 2020

News – Ubuntu Studio

Ubuntu Studio 20.04 LTS Wallpaper Contest

As we begin getting closer to the next release date of Ubuntu Studio 20.04 LTS, now is a great time to show what the best of the Ubuntu Studio Community has to offer! We know that many of our users are graphic artists and photographers and we would like to... Continue reading

by eeickmeyer at February 01, 2020 08:45 PM

January 31, 2020

SFZ Format News

New year, new work in progress

The most relevant additions to the website this month were the Instruments and Modulations sections, slowly adding one by one some sampled instrument libraries created and freely distributed over the net, and documenting in a generic way the various modulations used in SFZ. Some new opcodes were also added to our database, starting from some modulation aliases like amplitude_ccN, pan_ccN and tune_ccN up to the recent fil_gain. I would like to thank some people who contributed to the site, like falkTX for adding our news feed to the Linuxaudio Planet, jpcima, MatFluor, PaulFd and sfw. This website is an open-source, non-profit project; I hope to see more people involved in the future to help make it grow.

by redtide at January 31, 2020 12:00 AM

January 29, 2020

KXStudio News

KXStudio Monthly Report (January 2020)

Hello all, another monthly report about the KXStudio project is here.

A few days ago, Carla 2.1-RC1 was announced.
As mentioned in that post, Carla's frontend move to C++ has started, for performance, reliability and debugging reasons.
Even though it means a lot behind the scenes, visibly nothing will change (except performance).
Because of this, do not expect many UI related changes in Carla for the time being.

There were more package updates in the repositories. Those are:

  • lsp-plugins updated to 1.1.13
  • x42-plugins updated to 20200114
  • distrho-ports updated (added Temper as LV2 and VST plugin)
  • bchoppr added
  • bslizr added
  • bsequencer added
  • bshapr added
  • geonkick added
  • mod-cv-plugins added
  • noise-repellent added
  • regrader added

A few of those were made possible thanks to the LibraZik project, from which I imported a few.
I am quite grateful for them, and you should be too! :)

On a more personal side of things, I have started renting an office for work (both for employer and FLOSS stuff).
Its setup took most of the time on the holidays, and quite a fair bit in January too.
It is mostly done now, only final touches needed. It certainly helps as a kind of motivation boost, and as a way to keep focus too.

Next month will be slower than usual, as I plan to focus more on "boring" stuff like updating the website and documentation.
That is all for now.

Since I mentioned it, I leave you with a picture of the office (the working area).
See you next month!


by falkTX at January 29, 2020 10:37 PM

January 23, 2020


Intersecting Intel & AMD Instruction Set Extensions

In some of my projects, I’ve recently had the need to utilize FMA (fused multiply-add) or AVX instructions. Compiling C/C++ on x86_64 will by default only activate MMX and a few of the early SSE extensions. The utilized instruction set basically predates the Core 2, which was introduced in 2006. Math…
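The default-versus-opted-in distinction is easy to see with a plain fused multiply-add. `std::fma` always computes a*b + c with a single rounding step, but how it is compiled depends on the target flags; this sketch only assumes standard C++:

```cpp
#include <cmath>

// std::fma(a, b, c) computes a*b + c with one rounding. Built with
// -mfma (or -march=native on an FMA-capable CPU) this typically becomes
// a single vfmadd instruction; on the default x86_64 baseline described
// above it instead falls back to a libm call.
double fused(double a, double b, double c) {
    return std::fma(a, b, c);
}
```

Comparing the generated assembly with and without `-mfma` (e.g. via `g++ -O2 -S`) shows the difference directly.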

January 23, 2020 11:33 AM

News – Ubuntu Studio

Ubuntu Studio 19.04 reaches End Of Life

Our favorite Disco Dingo, Ubuntu Studio 19.04, has reached end-of-life and will no longer receive any updates. If you have not yet upgraded, please do so now or forever lose the ability to upgrade! Ubuntu Studio 20.04 LTS is scheduled for April of 2020. The transition from 19.10 to 20.04... Continue reading

by eeickmeyer at January 23, 2020 12:00 AM

January 20, 2020


The Big Crash at KUNSTEN.NU

My upcoming solo exhibition The Big Crash at Spanien19c in Aarhus, Denmark, running from February 8 till March 8, is featured on KUNSTEN.NU.

by herrsteiner at January 20, 2020 10:14 PM

January 19, 2020

KXStudio News

Carla 2.1 RC1 is here!

Hello again everyone, it is release day! (kinda, just a casual 4 days late...)

This is the announcement of the first release candidate of Carla 2.1.
I am skipping the beta phase as done for the 2.0 release and going straight into a Release Candidate.
This means there will be no more changes in the graphical user interface or engine/backend features, except when required for fixing bugs.

Carla projects/sessions are meant to be fully compatible between the 2.0 and 2.1 versions, except for features marked experimental.
The "native" API to access Carla as a plugin (as used by LMMS) is ABI- and API-wise backwards compatible with 2.0.
If this is not the case, consider it a bug that needs to be fixed.

As with the v2.0 release, the list of changes is a little big, so let's split it by parts.
First, the highlights and major changes, in no particular order of relevance.


Better CV Support

CV ports are now supported in the internal patchbay mode, meaning you do not need to use JACK with Carla in order to use CV plugins.

Automatable parameters can now be exposed as a CV port, so they can be controlled by regular CV sources or other plugins.
This is a kind of feature preview, as there are some limitations at the moment:

  • Parameter changes are not sample accurate
    (in a later version, Carla will split buffer up to 32 frames for more fine-grained control changes)
  • Not all plugin formats and parameter types are allowed to be controlled this way
    (to be extended as I test more compatibility)
  • Only available for parameter inputs, not outputs

In order to make CV more useful by default, a new internal "MIDI to CV" plugin was added, originally created by Bram Giesen.
More plugins will be added as needed; for now I recommend using ams-lv2 and mod-cv-plugins as they already do a lot.

Also, a new variant of Carla as plugin was created that provides audio, MIDI and 5 CV ports (for each side).
This allows CV signals to flow in and out of Carla as a plugin.


High-DPI support (work in progress)

Initial work was done to support high-DPI screens.
Note that this was not tested very extensively, due to lack of proper hardware, but the requirements in terms of code are all there.
There are still a few "normal" resolution bitmaps in use, to be replaced in future releases.
You can click on the screenshot on the left to see Carla rendered at 3x the resolution.

So for now, the situation is:

  • Most of the icons changed to scalable format
  • UI will scale with the desktop automatically, as Qt takes care of that for us
  • Some bitmaps still remain, to be replaced by vector images in a future release
  • Not extensively tested, feedback is welcome


Proper theme and Carla-Control for Windows

The Windows build stack changed from using the official Python and PyQt5 packages to the msys ones, allowing us to link against them using mingw (Carla does not support MSVC).
This makes it possible to use the proper "pro" theme that Linux and macOS already had, and also gets Carla-Control finally working on Windows.

Previously, the Carla Windows builds were using Qt's "fusion" theme (which the Carla "pro" theme is based on), which looks very similar but misses all of the custom tweaks made for Carla.
These include, for example, preventing pop-up menus from taking over the entire screen, and avoiding ugly thick lines being drawn where a small one was expected.

A small but important step towards cross-platform feature parity. \o/


VST2 plugin for macOS and Windows, plus exposed parameters

This is the final item that was missing for cross-platform feature parity.
We now have Carla as VST2 plugin running on both macOS and Windows!

Embedding of the full GUI on these systems is not possible, so a small "middleware" window is shown as the plugin custom UI.
Not the best experience, but allows Carla to finally work as VST2.

Additionally, 100 parameters are exposed to the host, dynamically used in the order of the plugins loaded.
So for example, if the first plugin in the rack has 20 parameters, the first 20 parameters of carla-vst will be mapped to that plugin.
This continues in order for the remaining plugin parameters until we reach 100 of them.
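As a rough sketch, the mapping logic might look like this (illustrative Python, not Carla's actual implementation; the function name and plugin counts are made up for the example):

```python
# Sketch of the described host-parameter scheme: 100 host parameters are
# assigned to the rack plugins in load order until the budget runs out.
# (Illustrative model only; not Carla's real code.)
MAX_HOST_PARAMS = 100

def map_host_params(plugin_param_counts):
    """Return a list of (plugin_index, param_index), one per host parameter."""
    mapping = []
    for plugin_idx, count in enumerate(plugin_param_counts):
        for param_idx in range(count):
            if len(mapping) == MAX_HOST_PARAMS:
                return mapping
            mapping.append((plugin_idx, param_idx))
    return mapping

# First plugin has 20 parameters: host params 0-19 map to it,
# and host param 20 becomes the first parameter of the second plugin.
m = map_host_params([20, 90])
```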

When Carla is loaded as an internal plugin, parameters will be dynamically available too.
This feature is not available in the LV2 version of Carla though, at least not yet.

Note: Carla plugins are not "notarized" yet, so they will not run under latest macOS 10.15/Catalina where this is a requirement.


Wine-native bridge, sorta experimental

This is a way to load native Linux plugin binaries inside Windows applications running under Wine, in case you need that for some reason.
Personally I made it so that I could run the native Carla inside FL Studio, which allows me to use its sequencer but not have to deal with Windows plugins.

This is available in the KXStudio repositories as the "carla-vst-wine" package; to make it work, copy /usr/lib/winvst/Carla* into your Wine VST dll folder.
It requires Carla to be installed system-wide, so it will not work if Carla was downloaded manually.

Building it is kinda tricky, as it requires building a native Windows DLL first, and then a few things with winegcc...
Packager documentation will be added soon to Carla's source code repository, so other Linux distributions can pick it up.

I demoed this feature at Sonoj last year (2019), you can watch it as the 3rd part of this video.


Refreshed add-plugin dialog and favorite plugins

The add-plugin dialog had a major overhaul; it now looks much better and shows more content at once.
The goal was to improve the user experience and make it clear that filters are available (this was not so obvious in previous versions).

The star in the left-most column of the table marks a plugin as a favorite, which adds it as a shortcut to the right-click menus on empty rack and patchbay areas.


Single-page and grouped plugin parameters

The dialog for the generic plugin parameter view also had an update.
All parameters are now placed in the same tab (separated only by input and output types), and grouped when supported by the plugin.
The options for mapping a parameter to a MIDI CC were taken out and replaced by a button that triggers a menu with the relevant options.

Note that, at the moment, only a few LV2 plugins support parameter groups.
This is because most hosts do not support the feature, so plugins have little incentive to implement it.
And with few plugins supporting it, hosts do not care much either. The usual circular-dependency deal...
But since the feature fits Carla quite nicely, it made sense to add it.

The group can be collapsed by clicking on it.

A similar feature will be added to the patchbay in a later release, so we can group audio ports too. :)

More UI changes

  • The rack items will dynamically show as many knobs as possible
  • You can now change the "skin" and color of any rack item, making it easy to identify certain plugins
  • Added buffer-size, sample-rate and xrun information to the status; clicking on the xrun counter will reset it to zero

Canvas changes

  • Right-clicking on a canvas group will show options for quickly connecting all ports to another group
  • Many small tweaks and fixes, plus a few extra actions, as contributed by Nikita Zlobin (to be documented in the user manual)
  • Support for Ardour-style inline displays, marked experimental in this release (sadly cannot be made stable until Carla v3.0)

Carla-control and OSC rework

Carla's OSC support has been reworked and now has its own dedicated page in the settings.
Carla-Control has been extended to support all non-local-dependent features of the main Carla (like patchbay management and transport controls).
This will be extended even further in future releases.

AU and VST3 support is back, by leveraging JUCE

Disabled during a previous 2.0 beta release, support for the JUCE library was removed and replaced by a heavily stripped-down version of it (while it was still GPLv2-licensed).
The reasons for that decision still stand, but in keeping with Carla's goals I decided to add back JUCE support - now completely optional.
It will always be possible to build Carla without JUCE, it is only used for extra hardware and plugin format support.
In fact, Linux builds by default do not use it, as there is no need for it.

Anyway, the published macOS and Windows Carla builds do use JUCE, which means Carla supports VST3 under macOS and Windows, and AU under macOS.
As a bonus, it is now possible to show the custom control panel of ASIO devices. :)

Worth noting is that JUCE does not support VST3 under Linux at this point, so neither does Carla even if you build it yourself with JUCE enabled.

Other changes

Among a bunch of small fixes and new implementations, here are some changes that deserve a mention:

  • Carla now requires Qt5, can no longer work with Qt4; but can still use LV2 Qt4 UIs with its built-in bridges
  • NSM is now supported for JACK applications
  • Added a 16 MIDI port mode for JACK applications
  • Added "Cancelable actions" during project and plugin bridges load, so they will no longer time-out; instead the user has the option to cancel them at anytime
  • Initial support for LV2 parameter API
  • Initial support for LV2 file paths, assuming plugin has no custom UI (click on the show-gui button to open a file dialog)

Notes for developers and packagers

  • Linking against the JACK library directly is now possible by using `make JACKBRIDGE_DIRECT=true`, which allows for building Carla as an internal client

Notes for users

The code for scanning plugins had a little rework, again, making some internal data structures change.
Because of this, a full rescan of your plugins is needed after the update.


To download Carla binaries or source code, jump on over to the KXStudio downloads section.
If you're using the KXStudio repositories, you can simply install "carla-git" (plus "carla-lv2" and "carla-vst" if you're so inclined).
Bug reports and feature requests are welcome! Jump on over to Carla's GitHub project page for those.

Future and final notes

I have started migrating Carla's frontend coding language from Python to C++ (for performance, reliability and debugging reasons).
There are a few canvas related things, currently experimental, that can never be made stable or fast due to how Python/PyQt works.
Also Carla is not scaling very well at the moment, and the addition of CV controlled parameters and inline-displays does not help its case.
So a move of the entire frontend to C++ makes quite a lot of sense.
Whenever this is finished a new release will be made.
But even though it means a lot behind the scenes, visibly nothing will change (except performance).
Because of this, do not expect many UI related changes in Carla for the time being.

A user manual for Carla has been started.
It proved to be quite helpful for development as well, as I had to justify why things are the way they are, and explain how they work.
Now that the Carla UI should not change much for a while, it is the right time for such a thing.
I personally dislike writing such things, but understand it can be quite useful.
The work-in-progress manual is at
(Not much to see there at the moment though, give me time)

That's it.
Please remember that this is a release candidate, and not the final release.
Some issues are expected, I will do my best to fix all reports that get to me.
If I don't know about the issues though, I can't fix them. So please report any issues you find, thanks!

by falkTX at January 19, 2020 02:13 PM

January 18, 2020

The Linux-audio-announce Archives

[LAA] Yoshimi V 1.7.0


Just one visible change, but a major one.
Where previously only a few controls gave an immediate response, now there are only a few that don't :)

More details are in /doc/Yoshimi_1.7.0_features.txt

Yoshimi source code is available from either:

Full build instructions are in 'INSTALL'.

Our list archive is at:
To post, email to:
yoshimi at

Will J Godfrey
Say you have a poem and I have a tune.
Exchange them and we can both have a poem, a tune, and a song.

by willgodfrey at (Will Godfrey) at January 18, 2020 03:20 PM

January 17, 2020

The Linux-audio-announce Archives

[LAA] Ninjas2 sample slicer plugin released (v.0.2.0)

Hi all,

I've updated Ninjas2 audio sample slicer plugin.
Source and binaries (linux/windows/mac) are available at :
readme :

From the readme:
Easy-to-use sample slicer, with quick slicing of samples and auto-mapping
of slices to MIDI note numbers.

# Intended usage:
Primarily targeted at chopping up loops or short (10 - 20 second)
samples. Think drum loops, vocal chops, etc. Currently there's no limit
on imported sample length. The user can play the slices using MIDI notes
and change the pitch with MIDI pitch bend.
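The auto-mapping idea can be sketched like this (an illustrative Python model; the base note of 60 and the function name are assumptions for the example, not Ninjas2's actual code):

```python
def map_slices_to_notes(num_slices, base_note=60):
    """Map slice indices to consecutive MIDI note numbers, starting at an
    assumed base note (middle C here). Returns {note: slice_index}."""
    notes = {}
    for i in range(num_slices):
        note = base_note + i
        if note > 127:  # MIDI note numbers are 7-bit, 0..127
            break
        notes[note] = i
    return notes

# 16 slices starting at middle C: notes 60..75 trigger slices 0..15
mapping = map_slices_to_notes(16)
```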

# Downloads:
Linux, Windows and Mac binaries for several architectures are
available here. There are no installers; just unzip and copy the
plugin to an appropriate location.

# New Features
 - redesigned interface
 - controls are grouped in Global, Slicing and Slice
 - the Slice box shows the currently selected slice number
 - keyboard
     - click on key to play slice
     - red dot on key indicates which slice is currently selected in
the waveform display
     - keys that don't have a slice mapped to them are greyed out

# Known Bugs and Limitations
- some hosts don't work very well with the lv2 version
     - zrythm and qtractor had trouble with the lv2 version but worked fine with the vst
     - ardour, carla and muse3 worked well with the lv2
- care should be taken when automating the playmodes and adsr
     - the automation is sent to the currently played note (slice); when multiple slices are played, this leads to "undefined behaviour"

by rghvdberg at (Rob van den Berg) at January 17, 2020 12:12 PM

January 15, 2020


Notstandskomitee musicvideo

Musicvideo for the track Ultracapacitor by Notstandskomitee from the album Deleted, released at the end of 2019 by the French label Serendip Lab. CGI created in Blender...

by herrsteiner ( at January 15, 2020 09:33 PM

January 06, 2020

digital audio hacks – Hackaday

Organic Audio: Putting Carrots as Audio Couplers To The Test

[Boltz999]'s carrot interconnect.
If there’s one thing that gives us joy here at Hackaday it’s a story of audio silliness. There is a rich vein of dubious products aimed at audiophiles which just beg to be made fun of, and once in a while we oblige. But sometimes an odd piece of audio equipment emerges with another purpose. Take [Boltz999]’s interconnects for example, which were born of necessity when there were no female-to-female phono adapters to connect a set of cables. Taking a baby carrot and simply plugging the phonos into its flesh delivered an audio connectivity solution that worked.

Does this mean that our gold-nanoparticle-plated oxygen-free directional audio cables are junk, and we should be heading for the supermarket to pick up a bag of root vegetables instead? I set out to test this new material in the secret Hackaday audio lab, located on an anonymous 1970s industrial estate in Milton Keynes, UK.

Characterising A Root Vegetable

The high point of an engineer's life comes as they measure the electrical properties of a root vegetable.

A quick search on the composition of a carrot reveals an 88% water content, with the other 12% being mostly carbohydrates, followed by small quantities of fat, protein, and a cocktail of those vitamins and minerals that caused our parents to be so enthusiastic about our younger selves eating them. In particular, about 0.4% of a carrot is made up of potassium, sodium, and calcium ions in solution, making the vegetable analogous to a sponge soaked in a weak electrolyte solution. Thus you’d expect it to be conductive, and to pass a line-level audio signal into a high-impedance load such as an audio amplifier. A quick DC resistance measurement of our test carrot showed a resistance that started at about 50K for distances up to about 10mm, rising slowly to near 100K across its roughly 80mm length. It’s probably beyond the scope of this piece to characterise the complex impedance of a carrot.

Attenuator Π-section circuit by SpinningSpark CC-BY-SA 3.0. R2 represents the width of the carrot, R1 and R3 each represent the carrot distance between pin and outer of a phono plug.

The hack that prompted all this though didn’t simply replace the pair of copper wires with ones made of carrot. By plugging the phono into the tap root, an extra 50K resistance is created between its two conductors as well as between it and the other phono, and the result is a resistor network. Because it’s not an unreasonable assumption that the two pieces of hi-fi equipment in the same rack could share an earth, rather than disappear down the rabbit hole of infinite meshes of resistors, it’s probably safe instead to think of it as something closer to the familiar Pi network attenuator. There are plenty of online calculators that could give you a performance figure for a given network, but in this case, with so many approximations and carrot-related guesses, their results would be rather meaningless. All that we need to know is that there will be some attenuation of any audio fed into the carrot.
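To put a rough number on it anyway, here is the simplified calculation under stated assumptions: an ideal 0 Ω source and a guessed 10 kΩ amplifier input impedance; only the ~50 kΩ figures come from the measurements above. The function is a sketch, not a full complex-impedance model.

```python
import math

# Rough Pi-attenuator model of the carrot interconnect. With an ideal
# (zero-impedance) source, the input shunt resistor R1 carries no weight;
# the voltage ratio is set by the series R2 against R3 in parallel with
# the load. Values other than the ~50k measurements are guesses.
def pi_attenuation_db(r2_series, r3_shunt, r_load):
    """Voltage attenuation (in dB) of a Pi network driven by an ideal source."""
    r_out = (r3_shunt * r_load) / (r3_shunt + r_load)  # R3 || R_load
    ratio = r_out / (r2_series + r_out)                # simple divider
    return 20 * math.log10(ratio)

db = pi_attenuation_db(50e3, 50e3, 10e3)  # roughly -17 dB with these guesses
```

The real-world result will differ; the point is only that a carrot-sized series resistance into a typical line input must attenuate the signal noticeably.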

Crisp Treble, and a Crunchy Midrange

The square wave performance of a carrot

Having discussed the theory, it’s time to move onto the practice. Standing in for a high-end audio source was my phone playing YouTube videos, and for a high-end hi-fi a set of amplified computer speakers. Surprisingly it worked, but unsurprisingly in doing so there was a noticeable attenuation that cut the volume by around half. Exactly as expected, but there was a further step of taking a look at it with a ‘scope.

Applying a handy 1kHz square wave, a 30% attenuation was immediately obvious (as well as the fact that maybe the secret lab’s ‘scope probes needed adjusting). We lacked an audio analyser to measure the harmonic distortion of the coupling, but there has to come a point at which characterising a vegetable comes to an end.
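For reference, converting the amplitude figures mentioned here into decibels is plain arithmetic:

```python
import math

def amplitude_to_db(ratio):
    """Convert a voltage/amplitude ratio to decibels."""
    return 20 * math.log10(ratio)

half_volume = amplitude_to_db(0.5)  # "cut the volume by around half": about -6 dB
scope_drop = amplitude_to_db(0.7)   # the 30% attenuation on the scope: about -3.1 dB
```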

So, we’ve proved the original story to be true: you can use a carrot in an audio interconnect. But how would we describe its sound? The answer, if you are fond of audiophile reviews, is that it adds an organic feel to the broader soundstage, with crisp treble notes, a crunchy midrange, and deep, earthy bass tones. Meanwhile, if you are simply looking for something to connect two cables, we’d suggest a carrot sounds better in the roasting pan.

Header image: Sajetpa [CC BY-SA 3.0]

by Jenny List at January 06, 2020 06:01 PM

December 29, 2019

Ardour: 20th birthday

@paul wrote:

It’s hard to pinpoint the precise day that a project like Ardour started. But if that’s the goal, then right now, December 28th 1999 is probably about as good a date as any. That means that today Ardour is 20 years old.

It’s hard to write this, let alone read it. If you had told me back in the last few days of 1999 that I was starting something that would last (at least) two decades, I really do not think I would have believed you. But I did start, and it has lasted, and so I thought that it would be worthwhile to put down on paper the story of how it started. I’ve told this story in person to countless people, and even described it at several conferences and meetings, but I’d like this version to be considered definitive.


In the winter of 1996, I left to become a stay-at-home parent to my 1 year old daughter. By 1998 her mother, worried that I was vanishing into what I lovingly referred to as “the zen of parenting” started to encourage me to pick up some of my own hobbies to balance the time spent with a very young child. I decided to start ultra-distance cycling again, something I had enjoyed before she was born (and was quite good at, which helps). I also decided to pursue my lifelong interest in electronic music, but this time by trying to make music myself. After a brief consultation with a student I had known from the University of Washington who was into some of the same sort of music as I was thinking of making, I ended up buying an Oberheim Matrix 6 synthesizer, a Doepfer MAQ16 sequencer and an Alesis FX unit. It didn’t take long before I started to realize that I would probably benefit from having a computer as part of whatever process I was developing.

Strange as it might seem, since I had already been a programmer since the mid-1980s, this was a troubling thought, because I had avoided having a computer at home until that time. I did have an X Terminal at home for a while during the period that I worked for Amazon, but this didn’t really count as a computer in the conventional sense. There was the dilemma that a Mac was clearly overpriced and I had taken a vow (yes, really) in the 1980s that I would never use Windows. While at Amazon, I had pushed for us to use Linux (Slackware) as the basis for a machine that we called “CC Motel” (as in “Credit Cards check in but they don’t check out”, a riff on a popular meme), and I had been working on Unix-based systems since 1986, so the idea of setting up a Linux machine seemed like an obvious one.

I ended up buying a second hand 486, deliberately staying well behind the leading edge so as to discourage me from doing more on the machine, and picked up a Turtle Beach Tropez+ audio interface for it. I installed an early version of Red Hat on the machine, eager to use a program I had found called “Multitrack” which promised to be a multitrack, multichannel digital audio workstation, or something like that.

It turned out that there was no device driver for the Tropez+ (I think there was for the Tropez, and I had assumed they were similar, driver-compatible hardware). I had written device drivers before, and wasn’t too intimidated by the idea of needing to do this. It didn’t take too long, although I also wrote a patch editor for the Tropez+ which was of some use since it had a wavetable synth on board. I never used that for anything, as it turned out.

Quite against my original intention, I was programming again, a few years after I thought I had decided to give it up for good. The original plan had been that my daughter’s mother would finish her post-doc in Philadelphia, we would all move back to the Pacific Northwest, I would become a farmer with the time to slowly bring a small farm into production my way while mom did research and teaching at some PNW institution. It didn’t quite work out like that. By the summer of 1999 we were divorced, and I had already written a couple of MIDI software tools to “help with my music”. The farming idea began to really slip away as that year rolled on, as I recognized that my energy had really been refocused on software, albeit in a wholly new context - music, audio and open source. (The phrase “my energy” refers to whatever I had left over after being the at-home parent for a 3 year old, which was still my primary role in life.)


Sometime during 1999, I became aware of the RME Digi series of audio interfaces. I still wasn’t really making any music, and so like most music gearheads decided that more equipment was obviously the answer. How could having 24 channels of I/O not help my compositional and performance processes? Winfried Rietsch in Vienna had already written a Linux driver for the RME card but his was based on the increasingly obsolete “OSS” audio driver architecture. I decided to take his work and use it as the basis of a driver for the ALSA system, which was more or less established by then as the new audio driver architecture for Linux. RME were helpful and cooperative (as they had been with Winfried), and by sometime in early December of 1999, I had a working ALSA driver for the massively multichannel (by the standards of that era) device.


Which then led to a fundamental problem: I had a working 24 channel I/O device for Linux, but what was I going to do with it? From reading around (Electronic Musician and Sound On Sound magazines in particular), it was obvious that I needed a “Digital Audio Workstation”. It had turned out that “Multitrack” was utterly useless. Another application, “Jazz++” seemed promising but on deeper investigation was also very, very far from doing what a DAW would do. I decided to call the company that made the 800lb gorilla of the DAW world, called “ProTools”. At that time PT was primarily a macOS application that had only recently appeared for Windows (and even then, was only supported on a single piece of Windows hardware).


I managed to get forwarded through a couple of layers at Digidesign and finally found myself talking to someone who seemed to understand what I was talking about. I asked them for the source code and offered to port ProTools to Linux, free of charge. They laughed, and made it clear that this was never, ever going to happen. I remember ending the phone call with an offhand remark to the effect of “oh well, I’ll just write it from scratch on my own”.


My daughter was going to spend New Years with her mother that year, and so on about December 28th 1999, I sat down and started to write what was initially going to be a 24 track hard disk recorder and playback application. It was initially called HDR32/96, the numbers coming from the fact that it recorded 32 bit floating point data to disk, and could function at up to 96kHz sample rates (though to be honest, it didn’t really care). It took me about 3-4 weeks to get this working (borrowing button images from photographs of the Mackie HDR24, which had just come out!).

So there it was: I could record and playback 24 channels of high quality digital audio on my Linux machine. Exciting, no?


Well, no was the answer. Being able to just record and playback really seemed like a very uninteresting capability the moment it was available. You could not edit the sound in any way at all, which from the perspective of creating music seemed like a total non-starter. I continued to work on polishing aspects of the program, and announced it in the still fairly nascent linux audio community. A young programmer called Taybin Rutkin soon got involved with the project, bringing lots of nice idiomatic C++ aspects to the codebase, and we would talk on the ardour-dev mailing list about what to do about the program’s inability to edit.

There was an exciting extremely powerful and supremely geeky audio editor called “snd”. Its developer, the venerable Bill Schottstaedt, ported it to GTK+ over a single weekend, mostly at our prompting, and this encouraged us to go down the path of merging snd and ardour. snd appeared insanely powerful - this was long before I really understood what “non-linear, non-destructive” editing really meant - and even had a Lisp interpreter builtin which seemed to allow for unimaginable possibilities. Alas, snd stumbled over something much more basic. Like so many audio projects of the time - certainly the open source ones - it assumed that you could either load all the audio into memory, and if not that, then you could read it from disk on-demand without any problems. Neither of these were (or are) true for the sorts of projects I imagined Ardour being useful for, which at that time were imagined to be something like 18GB of data spread across 24 tracks. snd could barely handle 6 tracks at that time, and even then it wasn’t reliable in terms of playback or recording without clicks and pops.


So, somewhere in the middle of 2000, Taybin and I decided that we would just write our own editor for Ardour. “How hard can it be?” we said to ourselves, and got started.


In a few days, it will be 2020. The editor is still a work in progress. Some of the features that Ardour had back in 2000 are announced with great pride in the new releases of other DAWs. Some of the most basic features of ProTools are still not available in Ardour. Some of Ardour’s design has crept into other DAWs without much fanfare, but we know where they got the idea :) Somebody starts the program somewhere on earth (at least) every 3 minutes, all day every day. The income from Ardour has grown to levels that are largely unprecedented for niche creation-centric applications in the open source world.


Every day I am thankful for the life that Ardour’s users have allowed me to lead for the last 20 years. Every day I am thankful for the incredible contributions of the nearly 80 other developers that have contributed to the program over the last 20 years. 20 years is a long time for a piece of software to be around, though in the DAW world, perhaps less so. It is notable that Ableton Live began its life at about the same time as Ardour, and has completely changed the zeitgeist of computer-based music production as well as creating jobs for hundreds of people and making a lot of money. By that standard, Ardour hasn’t been that much of a success. But I always remember the night desk man at a little hotel in Paris who told my daughter that his band used Ardour to record themselves every week. I think of the laptops that were distributed into the favelas of Brazil with Linux and Ardour preloaded. I remember the screenshot of Ardour recording the chatter during the Mars lander launch at NASA. And I remember everyone who has made Ardour what it is today. I can’t promise anything about the next 20 years, but I will do everything I can to keep advancing and improving Ardour in small and big ways. If you’ve been here since the beginning, THANK YOU. If you’ve just discovered Ardour, stick around - it should be a fun ride!


by @paul Paul Davis at December 29, 2019 02:44 AM

SFZ Format News

Happy new year!

Here we are with the latest relevant updates, the last ones for this year:

  • Added *_mod and *_dynamic opcodes
  • Added Cakewalk SFZv2 opcodes (work in progress) page
  • Added the SFZ test suite for sample instrument developers to the homepage
  • Improved SFZ syntax highlighting in Google Prettify for all pages
  • Search now works correctly, though it is slow and needs some more improvements

Happy new year!

by RedTide at December 29, 2019 12:00 AM

December 28, 2019

Qtractor 0.9.12 - The QStuff* Winter'19 Release batch #3

Greetings, just one last time!

Qtractor 0.9.12 (winter'19) is released!

The change-log is rather short but nevertheless:

  • Basic key-signature has been added to tempo, time-signature and location markers map.
  • MIDI clip editor (aka. piano-roll) horizontal and vertical splitter sizes (widths and heights, respectively) are now preserved as user preferences and also in session state.
  • Second attempt to fix the yet non-official CMake build configuration.


Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.


Project page:


Git repos:

Wiki (help wanted!):


Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Enjoy && Happy new year/decade's eve.

Donate to

by rncbc at December 28, 2019 12:00 PM

December 26, 2019

Vee One Suite 0.9.12 - The QStuff* Winter'19 Release batch #2

Greetings, a second time!

The Vee One Suite of old-school software instruments - synthv1, a polyphonic subtractive synthesizer; samplv1, a polyphonic sampler synthesizer; drumkv1, yet another drum-kit sampler; and padthv1, a polyphonic additive synthesizer - are all here and now released, making for the second QStuff* Winter'19 batch of the season.

All delivered in dual form:

  • a pure stand-alone JACK client with JACK-session, NSM (Non Session Management) and both JACK MIDI and ALSA MIDI input support;
  • a LV2 instrument plug-in.

Changes for this special season are pretty much the same for all the gang-of-four:

  • Custom color (palette) theme editor introduced; color (palette) theme changes are now effective immediately, except on default.
  • Second attempt to fix the yet non-official CMake build configuration.
  • Move QApplication construction/destruction from LV2 UI to plug-in instantiation and cleanup.

The Vee One Suite are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.


synthv1 - an old-school polyphonic synthesizer

synthv1 0.9.12 (winter'19) is out!

synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.



project page:


git repos:


samplv1 - an old-school polyphonic sampler

samplv1 0.9.12 (winter'19) is out!

samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.



project page:


git repos:


drumkv1 - an old-school drum-kit sampler

drumkv1 0.9.12 (winter'19) is out!

drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.



project page:


git repos:


padthv1 - an old-school polyphonic additive synthesizer

padthv1 0.9.12 (winter'19) is out!

padthv1 is an old-school polyphonic additive synthesizer with stereo fx.

padthv1 is based on the PADsynth algorithm by Paul Nasca, as a special variant of additive synthesis.



project page:


git repos:


Donate to

Enjoy && Have fun!

by rncbc at December 26, 2019 07:00 PM

December 18, 2019

GStreamer News

GStreamer Rust bindings 0.15.0 release

A new version of the GStreamer Rust bindings, 0.15.0, was released.

As usual this release follows the latest gtk-rs release, and a new version of the GStreamer plugins written in Rust was also released.

This new version features a lot of newly bound API for creating subclasses of various GStreamer types: GstPreset, GstTagSetter, GstClock, GstSystemClock, GstAudioSink, GstAudioSrc, GstDevice, GstDeviceProvider, GstAudioDecoder and GstAudioEncoder.

In addition to that, a lot of bugfixes and further API improvements have happened over the last few months that should make development of GStreamer applications or plugins in Rust as convenient as possible.

A new release of the GStreamer Rust plugins will follow in the coming days.

Details can be found in the release notes for gstreamer-rs and gstreamer-rs-sys.

The code and documentation for the bindings is available on the GitLab

as well as on

If you find any bugs, missing features or other issues please report them in GitLab.

December 18, 2019 05:00 PM

December 16, 2019

JACK Audio Connection Kit News

JACK mailing list is back!

The mailing list for the JACK Audio project is back! You can now find it here.

The archive was restored, but not the subscriber list (due to technical difficulties). You will have to re-subscribe in order to keep using the JACK mailing list.

by falkTX at December 16, 2019 12:00 AM