November 12, 2019

Audio – Stefan Westerfeld's blog

liquidsfz 0.1.0

Years ago, I implemented SF2 (“SoundFont”) support for Beast. This was fairly easy: FluidSynth provides everything needed to play back SF2 files in an easy-to-use library, available under the LGPL-2.1+. Since integrating FluidSynth is really easy, many other projects like LMMS, Ardour, MusE, MuseScore, QSynth,… support SF2 via FluidSynth.

For SFZ, I didn’t find anything that would be as easy to use as FluidSynth. Some projects ship their own implementation (MuseScore has Zerberus, Carla has its own version of SFZero). Both are effectively GPL licensed. Neither SFZ code would be easily integrated into Beast. Zerberus depends on the Qt toolkit. SFZero originally used JUCE and now uses a stripped down version of JUCE called water, which is Carla only (and should not be used in other projects).

LinuxSampler is also GPL, with one additional restriction that disallows usage in a proprietary context without permission. I am not a lawyer, but I think this means it is no longer GPL, so you cannot combine this code with other GPL software. A small list of reasons why Carla no longer uses LinuxSampler can be found here:

In any case, for Beast we want to keep our core libraries LGPL, which none of the projects I mentioned allow. So liquidsfz is my attempt to provide an easy-to-integrate SFZ player that can be used in Beast and other projects. I am releasing the very first version, “0.1.0”, today:

This first release should be usable, although only the most important SFZ opcodes are covered.

by stw at November 12, 2019 11:05 AM - LAD

LV2: The good, bad, and ugly

It occurred to me that I haven't really been documenting what I've been up to, a lot of which is behind the scenes in non-release branches, so I thought I would write a post about the general action around LV2 lately. I've also been asked several times about what the long-term strategy for LV2 is, if there should be an "LV3", whether LV* can start to really gain traction as a competitor to the big proprietary formats, and so on.

So, here it is, a huge brain dump on what's good, what's bad, what's ugly, and what I think should be done about it.

The Good

LV2 is different from other plugin standards in several ways. This is not always a good thing (which we'll get to shortly), but there are some things that have proven to be very good ideas, even if the execution was not always ideal:

  • Openness: Obvious, but worth mentioning anyway.

  • Extensibility: The general idea of building an extensible core, so that plugin and host authors can add functionality in a controlled way, is a great one. This allows developers to prototype new functionality to eventually be standardised, make use of additional functionality if it is available, and so on. Some problems, like ensuring things are documented, that implementations agree, and so on, get more difficult when anybody can add anything, but this is a price worth paying for not having a standardisation process block getting things done.

  • DSP and UI split: Also obvious in my opinion, but certainly not a universal thing. There are a lot of bad things to be said about the actual state of GUI support, but keeping them separate, without the option to have a pointer to the guts of a plugin instance, is the right approach. Having a well-defined way to communicate between GUI and DSP makes it easy to do the right thing. Multi-threaded realtime programming is hard, and plugins dropping out because of GUI activity and so on should not be a thing.

  • Standard implementation between host and plugins (for some things): This is a huge win in reducing the burden on both host and plugin authors, and allows both to rely on certain things being done right. This also makes a location where stronger validation and so on can happen, which we should exploit more. The war between host and plugin authors, trying to make things compatible with the arbitrary behaviour of countless implementations is largely why everyone hates plugins. This doesn't have to be a thing. We haven't actually done well in that area with LV2 (quite the opposite), but having a place to put that code is the right move.

  • Transparent communication: Though you technically can do just about anything with LV2, a "proper" plugin has a transparent control interface that works in a standard way. This gets you all kinds of things for free, like human-readable debug tracing, network transparency, and so on, and also encourages design that's better from a user point of view, like having good host controls for parameters, automation, accessibility, and so on. This is somewhat related to having a DSP and UI split. The benefits of having plugins be controlled in a standard way are endless, as are the awful things that happen when GUIs and audio code aren't forcefully kept at arm's reach.

The Bad

Now to the more interesting part. There are some nice ideas in LV2, and I think an idealised and cleaned up version of it that adheres to the main underlying design principles would be beautiful. In reality, however, LV2 is an atrocious mess in all kinds of ways:

  • Control ports: LV2 uses LADSPA-style control ports, which contain a single float. This is a tricky one to put in the "bad" category, since pragmatically grafting extensibility onto LADSPA is why LV2 has been moderately successful. It had to be that way: we needed working plugins, not a tedious standardisation process that goes nowhere (there's already GMPI for that). That said, control ports are incredibly limiting and that they still exist is an endless source of trouble: they are static, they require buffer splitting for sample accuracy, they can only convey a float, there is no hook to detect changes and do smoothing, and so on. A control protocol (something like MIDI except... good) is the right way to control plugins. Notes and controls and all the rest should be in the same stream, synchronous with audio. It's hard to migrate to such a reality, but there should be one consistent way to control a plugin, and it should be a stream of sample-accurate events. No methods, no threading and ABI nightmares, no ambiguity, just a nice synchronous stream of inputs, and a single run function that reads those and produces outputs.
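
The "stream of sample-accurate events" idea can be sketched in C. This is a hypothetical shape, not the actual LV2 Atom sequence API: the `ControlEvent` struct, the key value `1` for a gain control, and the buffer-splitting loop are all illustrative assumptions.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical sample-accurate control event: notes, parameter changes,
   and everything else arrive in one stream, sorted by frame offset. */
typedef struct {
    uint32_t frame; /* offset within this run() cycle */
    uint32_t key;   /* URID-style integer identifying the control */
    float    value;
} ControlEvent;

/* A single run function reads the synchronous input stream and produces
   output, splitting the buffer internally only where an event lands. */
static void run(float* out, uint32_t n_frames,
                const ControlEvent* events, size_t n_events,
                float* gain /* plugin state */)
{
    uint32_t frame = 0;
    for (size_t i = 0; i <= n_events; ++i) {
        /* Render up to the next event (or the end of the cycle). */
        const uint32_t until = (i < n_events) ? events[i].frame : n_frames;
        for (; frame < until; ++frame) {
            out[frame] = *gain;
        }
        /* Apply the event, then continue rendering with the new value. */
        if (i < n_events && events[i].key == 1 /* hypothetical gain key */) {
            *gain = events[i].value;
        }
    }
}
```

No methods, no callbacks: the host fills the event buffer, calls `run` once per cycle, and every control change takes effect at exactly the frame it carries.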

  • The connect_port method: Another LADSPA-ism. It means that before using any signal, the host must call a method on the plugin to connect it. This is an awful design: it forces both the host and the plugin to maintain more state than is necessary, and it's slow. I have written several plugins that would be completely stateless (essentially pure functions) except the spec requires the plugin to maintain all these pointers and implement methods to mutate them. Inputs and outputs should just be passed to the run method, so all of that goes away and everything is nicely scoped. As far as the basics of the C API are concerned, this is, in my opinion, the most egregious mistake.
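
The contrast can be shown side by side. These structs are minimal stand-ins, not the real LV2_Descriptor, and the fixed 0.5 gain is just an illustrative assumption:

```c
#include <stdint.h>

/* connect_port style (what LV2 inherited from LADSPA): the plugin must
   store and mutate buffer pointers between calls. */
typedef struct {
    const float* in;
    float*       out;
} Plugin;

static void connect_port(Plugin* p, uint32_t index, void* data) {
    if (index == 0) p->in  = (const float*)data;
    else            p->out = (float*)data;
}

static void run_stateful(Plugin* p, uint32_t n_frames) {
    for (uint32_t i = 0; i < n_frames; ++i)
        p->out[i] = p->in[i] * 0.5f;
}

/* Pure-function style (what the post argues for): buffers are passed
   straight to run, so this gain plugin needs no instance state at all. */
static void run_pure(const float* in, float* out, uint32_t n_frames) {
    for (uint32_t i = 0; i < n_frames; ++i)
        out[i] = in[i] * 0.5f;
}
```

Both compute the same output, but the second version has nothing to connect, nothing to keep consistent, and nothing for the host to get wrong between calls.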

  • Turtle: Everyone loves to hate Turtle. It's mostly a nice syntax (if the namespace prefix limitations are very annoying), but it's weird. Worse, people might search for "RDF" and find the confusing W3C trash-fire there. The underlying ideas are good, but that three-letter-acronym should be absolutely eliminated from the spec and documentation. The good thing in LV2 is really just "property-centric design", which can be explained in a simple way anyone can understand. It's more or less just "JSON with URIs" anyway, and nobody ever got fired for using JSON. Speaking of which, syntax-wise, JSON-LD is probably the way to go today. JSON is annoying in different ways, but this would allow LV2 data files to look completely typical to almost any developer, but still have the same meaning and have the same advantages under the hood of a real data model. This could actually be done without breaking anything in practice, but JSON-LD is much harder to implement so I'm not quite there yet. It would also be some work to write the vocabulary (vocabularies?), but it's doable.
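
To make the "same data model, different syntax" point concrete, here is a tiny plugin description in Turtle and its JSON-LD equivalent. The `lv2:` and `doap:` namespaces are the real ones; the plugin URI and name are made up for illustration:

```turtle
@prefix lv2:  <http://lv2plug.in/ns/lv2core#> .
@prefix doap: <http://usefulinc.com/ns/doap#> .

<http://example.org/eg-amp>
        a lv2:Plugin ;
        doap:name "Example Amp" .
```

```json
{
  "@context": {
    "lv2":  "http://lv2plug.in/ns/lv2core#",
    "doap": "http://usefulinc.com/ns/doap#"
  },
  "@id": "http://example.org/eg-amp",
  "@type": "lv2:Plugin",
  "doap:name": "Example Amp"
}
```

The second form looks completely typical to almost any developer, yet carries exactly the same triples as the first.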

  • Lack of quality control: Again a consequence of pragmatic evolution, but the lack of standard quality control has become a real problem. There has been progress made there, with things like lv2lint and lv2_validate, but it's not good enough. The biggest problem with plugins (and plugin hosts) in general is that most of them are just broken. There should be a standard test suite for both, that is as strict as possible, and its use should be strongly "encouraged" at the very least. The above-mentioned existence of standard code in-between hosts and plugins could be useful here, for example, hosts could just refuse to load non-conforming plugins outright.

  • Extension spam: The "standard" extensions are not all very good, or widely supported. They also aren't broken down and organized especially well in some cases. We are at least somewhat stuck with this for compatibility, but it makes things confusing. There are many reasons for this, but in general I think a better thought-out standardisation process, and a "sort of standard" staging ground to put contributions that some implementations agree on but aren't ideal or quite at "recommended standard" yet would help. I'm still not sure exactly how to do this, there's no best practice for such things out there that's easy to steal, but with the benefit of hindsight I think we could do much better.

  • Library spam: The standard host implementation is quite a few libraries. This is a mostly good thing, in that they have distinct purposes, different dependencies, and so on, but in practice it's annoying for packagers, or anyone who wants to vendor it. I think the best approach here is to combine them into a meta-package or "SDK", so libraries can still be properly split but without the maintenance burden. I am working towards this with "lv2kit". It's currently hard for outsiders to even figure out what they need, a one-stop "all the LV2 things" in a single package would help immensely, especially for people outside of the Linux world (where distributions package everything anyway, so nobody really cares).

  • C++ and other language bindings: Plugin interfaces more or less have to be in C. However, outside of POSIXland, nobody wants to actually write C. Virtually the entire audio industry uses C++. Good bindings are important. Python is also nice for some things. Rust would be great, and so on.

The Ugly

These are things that are just... well, ugly. Not really "bad" in concrete ways that matter much, but make life unpleasant all the same.

  • Extensibility only through the URI-based mechanism: In general, extensibility is good. The host can pass whatever features, and plugins can publish whatever interfaces, and everything is discoverable and degrades gracefully and so on. It works. The downside is that there's some syntactic overhead to that which can be annoying. We should have put sizes or versions in structs so they were also extensible in the classical way. For example, the connect_port problem mentioned above could be fixed by adding a new run method, but we can't literally add a new run method to LV2_Descriptor. We would have to make a separate interface, and have the host access it with extension_data, and so on, which makes things ugly. Maybe this is for the best, but ugliness matters. In general there are a few places where we could have used more typical C patterns. Weirdness matters too.
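
The extension_data detour looks roughly like this in C. The structs here are stripped-down sketches (the real LV2_Descriptor has more fields), and the "run2" interface and its URI are hypothetical:

```c
#include <stdint.h>
#include <string.h>

/* Minimal shape of the descriptor's extension hook. */
typedef struct {
    const void* (*extension_data)(const char* uri);
} Descriptor;

/* A hypothetical "run version 2" interface, published as an extension
   because the descriptor struct itself cannot grow a new method. */
typedef struct {
    void (*run2)(void* instance, uint32_t n_frames);
} Run2Interface;

static void my_run2(void* instance, uint32_t n_frames) {
    (void)instance;
    (void)n_frames;
    /* ... actual processing would go here ... */
}

static const Run2Interface run2_iface = { my_run2 };

/* The plugin answers queries for interfaces by URI; unknown extensions
   return NULL and degrade gracefully. */
static const void* my_extension_data(const char* uri) {
    if (!strcmp(uri, "http://example.org/ext/run2")) {
        return &run2_iface;
    }
    return NULL;
}
```

It works, and it degrades gracefully, but compared with simply adding a versioned field to the descriptor, every new method costs an interface struct, a URI, and a query.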

  • Extension organization: The list of specifications is a complete mess. It annoys me so much. I am not really sure about this: in some cases, an extension is a clearly separate thing, and having it be essentially a separate spec is great. In other cases, we've ended up with vaguely related grab-bags of things for lack of anywhere else to put them. I sometimes wonder if the KISS approach of just having one big namespace would have been the right way to go. It would mean fewer prefixes everywhere at the very least. Maybe we could use some other way of grouping things where it makes sense?

  • Static data: This is a tough one. One of the design principles of LV2 is that hosts don't need to load and run any code to just discover plugins, and information about them. This is great. However, whenever the need for something more dynamic comes along (dynamic ports, say), we don't have any great way to deal with it, because the way everything is described is inherently static. Going fully dynamic doesn't feel great either. I think the solution here is to take advantage of the fact that the data files are really just a syntax and the same data can be expressed in other ways. We already have all the fundamental bits here, Atoms are essentially "realtime-ready RDF" and can be round-tripped to Turtle without loss. My grand, if vague, vision here is that everything could just be the same conceptually, and the source of it be made irrelevant and hidden behind a single API. For example, a data file can say things like (pseudocode alert) <volume> hasType Float; <volume> minimumValue 0.0; <volume> maximumValue 1.0 but a message from a plugin can say exactly the same thing at run time. If the host library (lilv) handled all this nicely, hosts could just do lv2_get_minimum(gain) and not really care where the information came from. I think this is a much better approach than grafting on ever-more API for every little thing, but it would have to be done nicely with good support. I think the key here is to retain the advantages we have, but put some work into making really obvious and straightforward APIs for everything.
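
The "one API, any source" vision can be sketched with a tiny property store. None of these names are lilv API; the subject/predicate integers stand in for URIDs, and both a data-file loader and a runtime message reader would feed the same store:

```c
#include <stdint.h>
#include <stddef.h>

/* A property assertion: subject and predicate are URID-style integers. */
typedef struct {
    uint32_t subject;
    uint32_t predicate;
    float    value;
} Triple;

#define MAX_TRIPLES 16
static Triple store[MAX_TRIPLES];
static size_t n_triples = 0;

/* Called by whatever parsed the information: a static data file at load
   time, or a message from the plugin at run time. The store can't tell
   the difference, which is exactly the point. */
static void add_triple(uint32_t subj, uint32_t pred, float value) {
    if (n_triples < MAX_TRIPLES) {
        store[n_triples++] = (Triple){subj, pred, value};
    }
}

/* The host asks one question regardless of where the answer came from. */
static int get_float(uint32_t subj, uint32_t pred, float* out) {
    for (size_t i = 0; i < n_triples; ++i) {
        if (store[i].subject == subj && store[i].predicate == pred) {
            *out = store[i].value;
            return 1;
        }
    }
    return 0;
}
```

A host-side `lv2_get_minimum(gain)` would then be a thin wrapper over `get_float`, with the static/dynamic distinction hidden entirely behind the API.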

  • Overly dynamic URIDs: URIDs are a mechanism in LV2 where things are conceptually URIs (which makes everything extensible), but integers in practice for speed. Generally a URID is made at instantiation time by calling a host-provided mapping function. This is, for the most part, wonderful, but being always dynamic causes some problems. You need dynamic state to talk about URIs at all, which makes for a lot of boilerplate, and gets in the way of things like language bindings (you couldn't make a simple standalone template that gives you an Int atom for an int32_t, for example). I think it would be a good idea to have a static set of URIDs for things in the standard, so that lv2_minimum or whatever is just statically there, but preserve the ability to extend things with dynamic mapping. This is easy enough by adding the concept of a "minimum dynamic URID value", where everything less than that is reserved by the standard. Alternatively, or perhaps in addition, maybe having a standard loader to ease the pain of loading every little thing (like with OpenGL) would help make code cleaner and boilerplate free.
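
The "reserved static range" idea might look like this. The specific enum values, the cutoff of 1024, and the linear-scan map are all illustrative assumptions, not anything in the LV2 URID spec:

```c
#include <stdint.h>
#include <string.h>

/* URIDs below FIRST_DYNAMIC_URID would be statically reserved by the
   standard, so e.g. lv2:minimum is usable with no map call at all. */
enum {
    URID_NONE          = 0,
    URID_LV2_MINIMUM   = 1,
    URID_LV2_MAXIMUM   = 2,
    FIRST_DYNAMIC_URID = 1024
};

#define MAX_URIS 64
static const char* uri_table[MAX_URIS];
static uint32_t    n_uris = 0;

/* Host-provided map function: interns a URI as an integer, always
   returning a value in the dynamic range. Mapping the same URI twice
   yields the same URID. */
static uint32_t urid_map(const char* uri) {
    for (uint32_t i = 0; i < n_uris; ++i) {
        if (!strcmp(uri_table[i], uri)) {
            return FIRST_DYNAMIC_URID + i;
        }
    }
    uri_table[n_uris] = uri;
    return FIRST_DYNAMIC_URID + n_uris++;
}
```

Extensions keep the full dynamic mechanism, but standard vocabulary becomes compile-time constants, so a language binding could emit an Int atom for an int32_t without carrying any mapping state around.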

  • The Documentation Sucks: Of course, the documentation of everything always sucks, so you have to take this feedback with a grain of salt, but it's true of LV2. A lot of improvements here are blocked by the specification breakdown being set in stone, but it could be improved. I think the reference documentation is not the problem though, we really need example-driven documentation written as prose. This is a completely different thing to reference documentation and I think it's important to not confuse the two. There has been a bit of work adapting the "book" to be better in this sense, but it's not very far along. Once it's there, it needs to be brought to the forefront, and the reference documentation put in a place where it's clear it's about details. Optics matter.

The Work

I'm sure there are countless things floating around in my mind I've forgotten about at the moment, but that's all that comes to mind at a high level. There are, of course, countless little specific problems that need work (like inventing a control protocol for everything, and having it be powerful but pleasant to use), but I'm only focusing on the greater things about LV2 itself, as a specification family and a project. The big question, of course, is whether LV3 should be a thing. I am not sure, it's a hard question. My thinking is: maybe, but we should work towards it first. It's always tempting to throw out everything and Do It Right, but that never works out. The extensible nature of LV2 means that we can graft better things on over time, until all the various pieces feel right. I see no point in breaking the entire world with a grandiose LV3 project until, for example, we've figured out how we want to control plugins. I am a big believer in iterative design, and working code in general. We can build that in LV2. Maybe we can even do it and end up at more or less LV3 anyway, without causing any hard breakage. To that end, I have been improving things in general, to try and address some of the above, and generally bring the software up to a level of quality I am happy with:

  • Portability: The LV2 host stack has (almost) always been at least theoretically portable, and relatively portable in practice, but it's obvious that it comes from the Linux world, and support elsewhere was more "might work" than guaranteed. I have been doing a lot of work on the DevOps front to ensure that everything works everywhere, always, and no platform is second-class. The libraries live on Gitlab, and have a CI setup that builds and tests on Linux (both x86 and ARM), Windows, and macOS, and cross-compiles with MinGW.

  • Frequent releases: Another consequence of the many-libraries problem is that releasing is really tedious, and I'm generally pretty bad at making releases. This makes things just feel stale. I've recently almost entirely automated this process, so that everything involved in making a release can be done by just calling a script. Also on the DevOps and stale fronts, I've been moving to automatically generating documentation on CI, so it's always published and up to date. Automating everything is important to keep a project vibrant, especially when maintenance resources are scarce.

  • Generally complex APIs: The library APIs aren't great, and the general situation is confusing. Most authors only need Lilv, but there are these "Serd" and "Sord" things in there that show up sometimes, all work with roughly the same sort of "nodes", but all have different types and APIs for them, and so on. I have been working on a new major version of serd that takes advantage of the API break to make things much simpler, and improve all kinds of things in general. This will be exposed directly in lilv where it makes sense, eliminating a lot of glue, and eliminating the sord library entirely. The lilv API itself is also dramatically bigger and more complicated than it needs to be. At the time, it felt like adding obvious helper methods for every little thing was a good idea, so people can just find lv2_port_get_specific_thing_I_want() which is nice when it's there... except it's not always there. The property-based design of LV2 means that lv2_get(port, specific_thing_I_want) could work for everything (and this ability is already there). This results in situations like people thinking they are blocked by a missing function, and spending a lot of time writing and submitting patches to add them, when the functionality was there all along. It would be easier on everyone if everything just always worked the same general way, and it would make the API surface much smaller which is always nice.

  • Validation: There has been a data validator for a while, but it wasn't great. It didn't, for example, point at the exact position in the file where the error was, you just had to figure that part out. The new version of serd fixes this, so validation errors and warnings use standard GCC format to report the exact position along with a helpful error message, which automatically integrates with almost every editor or IDE on the planet for free.
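
For illustration, a diagnostic in that format would look something like the line below (the exact message text here is a made-up example, not serd's actual output):

```
manifest.ttl:12:8: error: undefined namespace prefix "lv:"
```

Because the `file:line:column:` prefix is the same convention compilers use, editors and IDEs can jump straight to the offending position with no extra integration work.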

  • SDK: As mentioned above, I'm working on putting all the "standard" host libraries into a unified "lv2kit" which is the one package you will need to build LV2 things. There are still some details about this I haven't sorted out (e.g. should the spec be in there or not? What about non-LV2-specific libraries like serd? Optional vendoring?), but it's coming along and I think will make it far more realistic to expect people to implement LV2.

  • The spec mess: I am idly thinking about whether or not it would be possible to add a compatibility mechanism to allow us to move URIs without breaking anything. It's largely superficial, but cleaning up the specification list would really help the optics of the project if nothing else. 90% here is trivial (just aggressively map everything forwards), but all the corner cases still need to be thought out.

That's all the work in the trenches going on at the moment to improve the state of LV2. Though I wish I, or anyone else, had the time and energy to invest effort into addressing the more ambitious questions around the plugin API itself, at the moment I am more than tapped out. Regardless, I think it makes sense to get the current state of things in a form that is moving forward and easier to work with, and raise the quality bar as high as possible first. With a very high-quality implementation and extensive testing and validation, I'll feel a lot more confident in addressing some of the more interesting questions around plugin interfaces, and perhaps someday moving towards an LV3.

On that note, feedback is always welcome. Most of the obvious criticisms are well-known, but more perspectives are always useful, and silent wheels get no grease. Better yet, issues and/or merge requests are even more welcome. The bus factor of LV2 isn't quite as bad as it seems from the web, but it would help to get more activity on the project itself from anyone other than myself. The standards for API additions and such are pretty high, but there's plenty of low-hanging fruit to be picked.

by drobilla at November 12, 2019 02:35 AM

November 11, 2019 - LAD

Jalv 1.6.4

Jalv 1.6.4 has been released. Jalv is a simple but fully featured LV2 host for Jack which exposes plugin ports to Jack, essentially making any LV2 plugin function as a Jack application. For more information, see


  • Support rdfs:label for port groups
  • Use screen refresh rate with Gtk3 and Qt5

by drobilla at November 11, 2019 02:56 AM

November 09, 2019

digital audio hacks – Hackaday

Use Your Earbud’s Media Controls on Your Laptop With This Useful Dongle

[David] sends in his very nicely designed “Thumpware Media Controller” that lets your mobile phone headphones control the media playback on your PC.

We realize that some PCs have support for the extra pins on cellphone earbuds, but at least some of us have experienced the frustration (however small) of habitually reaching up to touch the media controls on our earbuds only to hear the forlorn click of an inactive button. This solves that, assuming you’re still holding on to those 3.5mm headphones, at least.

The media controls are intercepted by a PIC16, and a small board splits and interprets the signals, breaking them out into a male 3.5mm plug and a USB port. What really impressed us is the professional-looking design and enclosure. A lot of care was taken to plan out the wiring, assembly, and strain relief. Overall it’s a pleasure to look at.

All the files are available, so with a bit of soldering, hacking, and careful sanding someone could put together a professional looking dongle for their own set-up.

by Gerrit Coetzee at November 09, 2019 09:00 PM

Qtractor 0.9.11 - The Mauerfall'30 Release


Not making history or a revolution but as peaceful as thirty years ago ;)

Qtractor 0.9.11 (mauerfall'30) is out!

And the change-log goes as follows:

  • MIDI Instrument and patch, bank and program names are now correctly updated on their respective track-list (left pane) columns.
  • Avoid copying/replicating dirty MIDI clip files, for yet untitled/scratch sessions.
  • Transport/Backward commands now honoring edit-tail, loop-end and punch-out points, when playback is not rolling.
  • A session name (sub-)directory is now suggested on every new session properties dialog.
  • Avoid adding any extraneous clip replica when Ctrl+dragging on either of its edges.
  • When using autotools and ./configure --with-qt=..., it is also necessary to adjust the PKG_CONFIG_PATH environment variable (after a merge request by plcl aka. Pedro López-Cabanillas, while on qmidinet, thanks).
  • Fixing a potential crash when switching MIDI output buses on tracks that are set to show audio output monitoring meters is still ongoing.


Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.


Project page:


Git repos:

Wiki (help wanted!):


Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Enjoy && Keep the fun.

Donate to

by rncbc at November 09, 2019 07:00 PM

The Linux-audio-announce Archives

[LAA] Work on this weekend

Dear all,

This weekend the VMs will have to be moved to a
different cloud region which could lead to some downtime of services
provided by (i.e. mail and web sites). We'll try to keep
impact as low as possible and are aiming at a seamless migration. This
migration is necessary because the cloud region now lives
in is being phased out. We'll keep you posted about the progress.

Best regards,


by jeremy at (Jeremy Jongepier) at November 09, 2019 05:16 PM

[LAA] Qtractor 0.9.11 - The Mauerfall'30 Release


Not making history or a revolution but as peaceful as thirty years ago ;)

  Qtractor 0.9.11 (mauerfall'30) is out!

And the change-log goes as follows:

- Transport/Backward commands now honoring edit-tail, loop-end and
punch-out points, when playback is not rolling.
- A session name (sub-)directory is now suggested on every new session
properties dialog.
- Avoid adding any extraneous clip replica when Ctrl+dragging on either
of its edges.
- When using autotools and ./configure --with-qt=..., it is also
necessary to adjust the PKG_CONFIG_PATH environment variable (after a
merge request by plcl aka. Pedro López-Cabanillas, while on qmidinet
[8], thanks).
- Fixing a potential crash when switching MIDI output buses on tracks
that are set to show audio output monitoring meters is still ongoing.

  Qtractor [1] is an audio/MIDI multi-track sequencer application
written in C++ with the Qt framework [2]. Target platform is Linux,
where the Jack Audio Connection Kit (JACK [3]) for audio and the
Advanced Linux Sound Architecture (ALSA [4]) for MIDI are the main
infrastructures to evolve as a fairly-featured Linux desktop audio
workstation GUI, specially dedicated to the personal home-studio.


Project page:

- source tarball:
- source package (openSUSE Tumbleweed):
- binary package (openSUSE Tumbleweed):
- AppImage [7] package:

Git repos:

Wiki (help wanted!):
- static rendering:
- user manual & how-to's:

  Qtractor [1] is free, open-source Linux Audio [5] software,
distributed under the terms of the GNU General Public License (GPL [6])
version 2 or later.


 [1] Qtractor - An audio/MIDI multi-track sequencer

 [2] Qt framework, C++ class library and tools for
     cross-platform application and UI development

 [3] JACK Audio Connection Kit

 [4] ALSA, Advanced Linux Sound Architecture

 [5] Linux Audio consortium of libre software for audio-related work

 [6] GPL - GNU General Public License

 [7] AppImage, Linux apps that run anywhere

 [8] QmidiNet - A MIDI Network Gateway via UDP/IP Multicast

See also:

Enjoy && Keep the fun.
rncbc aka. Rui Nuno Capela

by rncbc at (Rui Nuno Capela) at November 09, 2019 10:06 AM

November 08, 2019


Rewriting large parts of Beast and Bse

Last Tuesday Beast 0.15.0 was released. This is most probably the last release that supports the Gtk+ Beast UI. We have most of the bits and pieces together to move towards the new EBeast UI and a new synthesis core in the upcoming months and will get rid of a lot of legacy code along the way. For a…

November 08, 2019 11:21 AM

November 06, 2019

GStreamer News

GStreamer Conference 2019 talk recordings online

Thanks to our partners at Ubicast the recordings of this year's GStreamer Conference talks are now available online.

You can view or download the GStreamer Conference 2019 videos here.


Lightning Talks:

November 06, 2019 02:00 PM

October 31, 2019

Vee One Suite 0.9.11 - A Halloween'19 Release


The Vee One Suite of old-school software instruments (synthv1, a polyphonic subtractive synthesizer; samplv1, a polyphonic sampler synthesizer; drumkv1, yet another drum-kit sampler; and padthv1, a polyphonic additive synthesizer) is here and now released for the global Halloween evening, all making up to this mythical version 0.9.11 ;)

As always, all provided in dual form:

  • a pure stand-alone JACK client with JACK-session, NSM (Non Session Management) and both JACK MIDI and ALSA MIDI input support;
  • an LV2 instrument plug-in.

Changes for this creepy release are quite revolutionary ;)

  • When using autotools and ./configure --with-qt=..., it is also necessary to adjust the PKG_CONFIG_PATH environment variable (after a merge request by plcl aka. Pedro López-Cabanillas, while on qmidinet, thanks).
  • Upstream packaging is now split into JACK stand-alone and LV2 plugin only: the former shared common core and UI package is now duplicated into each, though statically linked.

The Vee One Suite are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.


synthv1 - an old-school polyphonic synthesizer

synthv1 0.9.11 (halloween'19) released!

synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.



project page:


git repos:


samplv1 - an old-school polyphonic sampler

samplv1 0.9.11 (halloween'19) released!

samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.



project page:


git repos:


drumkv1 - an old-school drum-kit sampler

drumkv1 0.9.11 (halloween'19) released!

drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.



project page:


git repos:


padthv1 - an old-school polyphonic additive synthesizer

padthv1 0.9.11 (halloween'19) released!

padthv1 is an old-school polyphonic additive synthesizer with stereo fx

padthv1 is based on the PADsynth algorithm by Paul Nasca, as a special variant of additive synthesis.



project page:


git repos:


Donate to

Enjoy && Have (lots of) fun!

by rncbc at October 31, 2019 06:00 PM

Development Update, November 2019

@paul wrote:

Many people have asked about an update to the post from June 2018. I’m sorry things have been so quiet here regarding development, but be assured that plenty has been going on, at least from a programming perspective.

Development was definitely impacted by Paul’s move to near Santa Fe, New Mexico, but now that the move is largely done and settling in there is well underway, he is back in action. Here’s a photo of the current state of the new Ardour HQ, in Galisteo, NM (soon to be radically different due to a new self-built desk/console).

Robin, as usual, has been insanely active working on an almost countless series of features and bug fixes. This summer, he redesigned our processing code to use lock-free queues, an important improvement for our realtime code. In the recent past, he added progress notification for Lua script execution and introduced support for new LV2 extensions (backgroundColor, foregroundColor, and scaleFactor) that allow a host to inform plugins of the host color theme and UI scale factor, to play better with non-default themes and on HiDPI displays. Most recently, Robin has also been nerd-sniped into a very full virtual MIDI keyboard implementation that can be used to deliver complex MIDI to any part of Ardour (it shows up as a port just like a hardware device would, and you can connect it just like a hardware device).

Harrison Consoles sponsored the development of a new plugin manager that provides easy access to favorite plugins and favorite presets. They also collected many more MIDNAM files (used to describe various MIDI equipment and their programs) and tagged hundreds of plugins with semantic information. Ben Loftis at Harrison also made some useful changes to the information presented in the Source list (with more planned for the future).

Since the last update, there have been several significant development branches undertaken. The first two don’t have much impact for users in terms of visible functionality, but make ongoing development easier. The first added a formal design known as a “Finite State Machine” to help manage transport state. Before this, it was more or less impossible to explain or logically reason about the state of the transport (start, stop, locate, etc.). We now have a much cleaner implementation here that allows us to think more clearly about how this all works (and it is a lot more complex than you would imagine!).

The second development branch to be worked on was a similar “logic cleanup” of the entire startup process. This too was a huge mess at the code level before, and it was extremely hard to reason about where things happened and why. If you wanted to change them, even in a small way, it was a very daunting task. Even fixing a bug such as “why doesn’t the window close button work with this dialog?” was a deep headscratcher. Although the startup process should be identical to the way it has been for 4.x and 5.x, internally it’s now much cleaner and more understandable, again making future changes easier.

Another branch changed the way that Ardour handles timecode (MTC, LTC etc). This is now done by dedicated objects that run all the time during the life of the program (they can be disabled, of course). This means that you can see the current time data being delivered by an MTC or LTC source at all times, regardless of whether you are actually using it. There’s a much more powerful GUI for presenting this data and choosing (and naming) timecode sources. You can also have multiple timecode sources of the same type - not particularly useful for a typical home studio setup, but if you’re working with lots of video gear, quite handy.

Finally, the most recent development has been to completely change how we handle managing MIDI data for playback. We eventually concluded that although theoretically MIDI data could be as large as audio data, in practice it is never even close in size. Since MIDI was first added, we have used the same design as for audio to move data from disk into a track or bus and then on out of the program (when appropriate). This has turned out to be overly complex and unnecessary. We do still have a data structure model for MIDI that is designed specifically for editing. But for playback, we now “render” a MIDI track into a very simple form whenever it is changed, and use this in-memory representation directly. Although it was not the original intent, this should help various MIDI related issues, because the entire playlist for the track is rendered at once, using a single starting point (zero). We are hopeful it will fix some problems with missing and stuck notes.

Len Ovens has been doing some cool work on “foldback busses”. Foldback is a slightly obscure term for what is more typically called “monitor mixes” - sending performers in-ear or on-stage submixes for them to listen to while performing. You’ll be able to do very sophisticated monitor mix configurations in 6.0.

What You’ve Really Been Waiting For

The big news, however, is that we are now getting very close to an alpha state. There are a few architectural issues still to solve, but we don’t plan to do any more feature development before 6.0.

We expect there to be many, many subtle bugs because of all the changes we’ve made to basic architecture over the last 2 years and more. We will be asking for as much help as possible (though not too much - this is still a very small development group!) to discover, analyze and resolve these bugs. Obviously we test along the way, but our testing is guaranteed not to have the wide coverage that our user community can provide.

If things go well with the remaining architectural issues, we might get to an alpha version within a couple of weeks. From there, it’s hard to say how long until the actual release of 6.0 - that will depend on the magnitude and scope of the bugs discovered during testing. But we are modestly optimistic that at least a beta version of 6.0 might appear before the end of this year.

Posts: 42

Participants: 19

Read full topic

by @paul Paul Davis at October 31, 2019 05:08 PM

October 24, 2019


more performanceart in October

This Saturday in Aalborg:

Tina Mariane Krogh Madsen will continue her collaboration with artist Sall Lam Toro and their performance DECONSTRUCTING, DISTORTING, QUEERING DREAMS at the Fertilizer Festival at Studenterhuset in Aalborg (DK) on October 26th, 2019.
DECONSTRUCTING, DISTORTING, QUEERING DREAMS interweaves performance art, interaction, video, sound and movement, with live coding and online streaming. The project is rooted in the breakdown of stereotypical notions of place, their heritage, codes, and environment. It wishes to shift the perception of center and periphery through the creation of art with distribution as a strategy. The project has an experimental format that links multiple sites together. These interactions are streamed live into Studenterhuset’s great concert hall and merge with Madsen’s sonic universe. The performance has a duration of 60 minutes.

The event is curated by Danish Vaishyas, Bait and Kamilla Mez.

by herrsteiner at October 24, 2019 07:05 PM

October 17, 2019

News – Ubuntu Studio

Ubuntu Studio 19.10 Released

The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 19.10, code-named “Eoan Ermine”. This marks Ubuntu Studio’s 26th release. This release is a regular release and as such, it is supported for 9 months. For those requiring longer-term support, we encourage you to install Ubuntu Studio 18.04 “Bionic Beaver” and add […]

by eeickmeyer at October 17, 2019 05:26 PM

digital audio hacks – Hackaday

Worried About Bats in your Belfry? A Tale of Two Bat Detectors

As somebody who loves technology and wildlife and also needs to develop an old farmhouse, going down the bat detector rabbit hole was a journey hard to resist. Bats are ideal animals for hackers to monitor as they emit ultrasonic frequencies from their mouths and noses to communicate with each other, detect their prey and navigate their way around obstacles such as trees — all done in pitch black darkness. On a slight downside, many species just love to make their homes in derelict buildings and, being protected here in the EU, developers need to make a rigorous survey to ensure as far as possible that there are no bats roosting in the site.

Perfect habitat for bats.

Obviously, the authorities require a professional independent survey, but there’s still plenty of opportunity for hacker participation by performing a ‘pre-survey’. Finding bat roosts with DIY detectors will tell us immediately if there is a problem, and give us a head start on rethinking our plans.

As can be expected, bat detectors come in all shapes and sizes, using various electrickery techniques to make them cheaper to build or easier to use. There are four techniques most commonly used in bat detectors.


  1. Heterodyne: rather like tuning a radio, pitch is reduced without slowing the call down.
  2. Time expansion: chunks of data are slowed down to human audible frequencies.
  3. Frequency division: uses a digital counter IC to divide the frequency down in real time.
  4. Full spectrum: the full acoustic spectrum is recorded as a wav file.

Fortunately, recent advances in technology have now enabled manufacturers to produce relatively cheap full spectrum devices, which give the best resolution and the best chances of identifying the actual bat species.

DIY bat detectors tend to be of the frequency division type and are great for helping spot bats emerging from buildings. An audible noise from a speaker or headphones can prompt us to confirm that the fleeting black shape that we glimpsed was actually a bat and not a moth in the foreground. I used one of these detectors in conjunction with a video recorder to confirm that a bat was indeed NOT exiting from an old chimney pot. Phew!

The Technology

A great example of open source collaboration and iteration in action, the Ardubat was first conceived by Frank Pliquett and then expanded on by Tony Messina and more recently, simplified by Service Kring (PDF).

The Ardubat is a frequency division detector based on a TI CD4024 chip, fed by two LM386 amps. Bat detections are sent to an SD card which can be analysed afterwards to try and get some idea of the species. However, since this circuit works by pre-distorting the analog signal into a digital one and then dividing down, none of the amplitude information makes it through.
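The divide-down step is easy to picture: each zero crossing of the pre-squared signal clocks the ripple counter, and only every n-th crossing reaches the output, so the pitch drops by a factor of n. A rough simulation of a divide-by-16 tap (the CD4024 provides several such taps; the 45 kHz call here is hypothetical):

```python
def divide_frequency(edge_times, n=16):
    # A ripple counter like the CD4024 lets only every n-th input
    # cycle through: pitch drops by a factor of n, and (as noted
    # above) all amplitude information is lost along the way.
    return edge_times[::n]

# A hypothetical 45 kHz bat call: one rising edge every 1/45000 s,
# for one second of signal.
call = [i / 45000 for i in range(45000)]

audible = divide_frequency(call, 16)
# edges now arrive at 45000/16 = 2812.5 per second -- comfortably
# inside human hearing
```

Divide by 16 and a 45 kHz call lands near 2.8 kHz; the same tap puts a 115 kHz Lesser Horseshoe call near 7.2 kHz, which is why one divider ratio covers most species.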

BAT DETECTOR 2015, simplified version of Ardubat developed by Service Kring.

The Bat Detector 2015 is again based on the CD4024, but uses a compact four channel amp, the TL074CNE4. Three of the channels feed the frequency divider chip and the fourth is a headphone amplifier. It’s a very neat design and the signal LED is fed directly from the CD4024. It comes as a complete DIY soldering kit for about $10 including postage. Yes …. $10 !!!

One of the biggest limitations with these detectors is the ultrasonic sensors themselves, which typically have a frequency response similar to the curve shown here. More recently, ultra-wide range MEMS SMT microphones have been released by Knowles, which work well right up to 125,000 Hz and beyond! Some bats, most notably the Lesser Horseshoe, can emit calls of up to 115,000 Hz. However, these older style sensors are incredibly good at detecting about 90% of the bats found here in the UK and are much more sensitive than heterodyne detectors.


The ‘professional’ option that I chose was the UltraMic384 by Dodotronics, which uses the Knowles electret FG23629 microphone with an integrated 32-bit ARM Cortex M4 microcontroller, capable of recording audio frequencies up to 192,000 Hz. There are also some good DIY hacker options such as the Audio Injector Ultra 2 for the Raspberry Pi, which can record at up to 96,000 Hz — but this is not quite good enough for all bats. Be aware that the sampling rate must be twice the highest audio frequency, which can be quite confusing: an UltraMic sampling at 384 kHz will record audio up to 192 kHz.
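The sample-rate arithmetic from that paragraph, made explicit (this is just the Nyquist relation):

```python
def max_recordable_hz(sample_rate_hz):
    # Nyquist: a digital recorder can only represent frequencies
    # below half its sampling rate
    return sample_rate_hz // 2

# 384 kHz sampling (UltraMic384) captures calls up to 192 kHz,
# while 96 kHz sampling tops out at 48 kHz -- well short of a
# ~115 kHz Lesser Horseshoe call.
```

So when comparing devices, halve the advertised sampling rate and check it against the call frequency of the species you care about.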

These types of full spectrum devices can produce high resolution sonograms, or spectrograms, using Audacity software. This is very helpful for wildlife enthusiasts who want to know what the actual bat species is, although even with the best tech it’s still sometimes very difficult or impossible to determine the species, especially within the Myotis genus.

So now we are fully equipped to check for bats in the derelict building using the DIY detector in conjunction with a video camera and a few pairs of human eyeballs. The full spectrum detector will be set to record right through the night and be used to check if there’s any activity we might have missed and tell us at the very least what genus the bats are.

All we need now is some Machine Learning to automatically identify the species. ML is a new frontier for bat detection, but nobody has yet produced a reliable system due to the similarity in the calls of different species. We know neural networks are being applied to recognize elephant vocalizations and the concept should be applicable here. A future project for an intrepid hacker? As for the Ardubat – it’s crying out for a better microphone, if not the expensive FG23629 then the 50 cent Knowles SMT SPU0410LR5H, which also has a great frequency response curve.

[Main image: Myotis bechsteinii by Dietmar Nill CC-BY-SA 2.5]

by Pat Whetman at October 17, 2019 05:01 PM

October 16, 2019


essay "Body Interfaces - Becoming Environment"

Tina Mariane Krogh Madsen’s essay "Body Interfaces - Becoming Environment" is featured in the hertech issue of Women Eco Artist Dialog, guest edited by Dr. Praba Pilar.

by herrsteiner at October 16, 2019 07:02 PM

October 15, 2019

KXStudio News

KXStudio Monthly Report (October 2019)

Hello all, today is October 15th, a Linux/Libre-Audio release day.
I do not have anything to actually release (that is ready, anyway), so I thought I would instead start something new.

Every month, starting with this one, we will have a monthly report regarding the latest stuff in KXStudio.
This will cover new releases, package updates in the repositories, important bug fixes and short-term plans.
So let's begin...

First of all, in case you somehow missed it, a new JACK2 release is here!
This finally brings meta-data support into JACK2. More information about meta-data in JACK can be found here.

On the repositories side, the "helm" package had an issue where the plugin could not find its own presets.
(This was caused by the KXStudio repositories going ahead and renaming "helm" to "Helm", as the former name already exists.)

ZynAddSubFX got (re-)added, using its nice and fancy Zyn-Fusion UI.
In the old repositories there was "zynaddsubfx" for old UI, and "zynaddsubfx-git" for the new one.
The "git" package is gone, only "zynaddsubfx" is there now and it has the new UI. +1 for progress!

x42-plugins got updated to 20191013 release.

Fluajho, Patroneo and Vico were added. (nice simple tools from Nils Hilbricht)
These last ones were tricky since they use Python libraries.
In order to make them generic packages I resorted to cxfreeze, which makes them run independently of the system Python.

Coming soon is Carla 2.1-beta1.
The actual software is ready for the beta1 release, but setting up the infrastructure for an updated Qt5 build is taking longer than expected.
The current 2.0 builds use quite an old Qt version: Qt5.5 on macOS, Qt4(!) on Linux, which I do not accept for new releases going forward.
Windows builds are ready to go though, you can find test binaries on Carla's github.
Once I finish setting up the builds for Linux and macOS, I will make the announcement. Very likely in mid-November.

Finally, Sonoj is coming!
Sonoj is an annual event/convention in Cologne, Germany, about music production with free and open source software.
It features demonstrations, talks and hands-on workshops.
You can meet like-minded people, learn insider knowledge and tricks, and participate in the one-hour production challenge!
It is only a few days from now, so please get ready! :)
I will be giving a talk at Sonoj about the past, present and future of JACK.
So please come and say hi, registration is free!

by falkTX at October 15, 2019 11:47 AM

October 14, 2019

GStreamer News

GStreamer Conference 2019: Full Schedule, Talks Abstracts and Speakers Biographies now available

The GStreamer Conference team is pleased to announce that the full conference schedule including talk abstracts and speaker biographies is now available for this year's lineup of talks and speakers, covering again an exciting range of topics!

The GStreamer Conference 2019 will take place on 31 October - 1 November 2019 in Lyon, France just after the Embedded Linux Conference Europe (ELCE).

Details about the conference and how to register can be found on the conference website.

This year's topics and speakers:

Lightning Talks:

  • Raising the Importance of the V4L2 plugin and Challenges
    Nicolas Dufresne, Collabora
  • WebKit-powered HTML overlays in your pipeline with GstWPE
    Philippe Normand, Igalia
  • Detect a metal can using GStreamer/OpenFoodFacts
    Stéphane Cerveau, Collabora
  • A new GStreamer RTSP Server
    Sebastian Dröge, Centricular
  • A brand new documentation infrastructure for the GStreamer framework
    Thibault Saunier, Igalia
  • GStreamer on Windows: Everything New
    Nirbheek Chauhan, Centricular
  • An Improved Latency Tracer
    Nicolas Dufresne, Collabora
  • Using Bots to Improve the Gitlab Workflow
    Jordan Petridis, Centricular
  • GNOME Radio
    Ole Aamot, GNOME
  • SCTE-35 support in GStreamer
    Edward Hervey, Centricular
  • Closed captions, AFD, BAR
    Aaron Boxer, Collabora
  • ...and more to come
  • ...
  • Submit your lightning talk now!

Many thanks to our sponsors, Collabora, Pexip, Igalia, Fluendo, Centricular, Facebook and Zeiss, without whom the conference would not be possible in this form. And to Ubicast who will be recording the talks again.

Considering becoming a sponsor? Please check out our sponsor brief.

We hope to see you all in Lyon in October! Don't forget to register!

October 14, 2019 06:00 PM

October 11, 2019

Linux – CDM Create Digital Music

Quick! This ffmpeg cheat sheet solves your video, audio conversion needs, for free

Video, audio, convert, extract – once, these tasks were easy with QuickTime Pro, but now it’s gone. ffmpeg to the rescue – any OS, no money required.

It’s Friday, some deadlines (or the weekend) are looming, so seems as good a time as any to share this.

ffmpeg is a free, powerful tool for Mac, Windows, and Linux, with near magical abilities to convert audio and video in all sorts of ways. Even though it’s open source software with a lineage back to the year 2000, it very often bests commercial tools. It does more, better, and faster in a silly number of cases.

There’s just one problem: getting it to solve a particular task often involves knowing a particular command line invocation. You could download a graphical front end, but odds are that’ll just slow you down. So in-the-know media folks invariably make collections of little code bits they find useful.
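To give a flavour of what those little code bits look like, here is one classic recipe (extracting a video's audio track without re-encoding) wrapped in a small Python helper. The function names and filenames are placeholders; splitting the argv builder from the runner keeps the command inspectable without ffmpeg installed:

```python
import subprocess

def extract_audio_cmd(src, dst):
    # '-vn' drops the video stream; '-acodec copy' keeps the original
    # audio stream untouched instead of re-encoding it
    return ["ffmpeg", "-i", src, "-vn", "-acodec", "copy", dst]

def extract_audio(src, dst):
    # run the recipe; requires ffmpeg on the PATH
    subprocess.run(extract_audio_cmd(src, dst), check=True)
```

On the command line the same recipe is simply `ffmpeg -i clip.mp4 -vn -acodec copy clip.aac`, which is exactly the kind of one-liner the cheat sheets below collect.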

Coder Jean-Baptiste Jung has saved you the trouble, with a cheat sheet of all the most useful code. And these bear a striking resemblance to some of the stuff you used to be able to do in QuickTime Pro before Apple killed it.

19 FFmpeg Commands For All Needs [CatsWhoCode]

And on GitHub:

There are some particularly handy utilities there involving audio, which is where tools like Adobe’s subscription-only commercial options often fail. (Not to mention Adobe is proving it will cut off some localities based on politics – greetings, Venezuelan readers.)

It’s great stuff. But if you see something missing, put it here, and we’ll make our own little CDM guide.

More invaluable cheat sheets

We have a winner:

The above is great for a browse, but with everything covered and an interactive guide, you can’t beat this:

Thanks to reader Jim Bell for the tip!

One alternative resource: for community-sourced command line recipes, check commandlinefu, which has a bunch of ffmpeg-related ones (and community up-voting):

Thanks to reader Lenny Mastrototaro for the tip.

Wait, wait, there’s more!

If you’re working with audio, sox – also free and open source and command line – covers some of the areas ffmpeg misses. Thanks to comments for the reminder; I use this all the time.

It also works on Mac, Windows, and Linux – meaning you only need one tool. And in fact, someone has done a cheat sheet for it, too:

A three-platform alternative to ffmpeg is MP4Box. I’d have to do a precise breakdown to work out which capabilities are specific to each of MP4Box and ffmpeg, but since they’re both free, you can install them side by side and be ready for any situation. (It might even be worth keeping these on a USB key for emergencies.)

MP4Box isn’t normally downloaded separately but as part of the GPAC open media framework:

Now, they have mercifully integrated all their recipes directly into the documentation, so you don’t need a separate cheat sheet:

Why not use a GUI?

It seems I’m getting this question a lot. I’m not anti-GUI or some kind of command line ninja by any stretch of the imagination. But a GUI causes four problems:

Functionality. The GUI front ends for tools like ffmpeg don’t always cover the full feature set, and they may not be as up to date as the direct ffmpeg build (since they’re maintained separately and unofficially).

Portability. Some of the GUIs are not cross-platform. With the command line, a single workflow works on every OS. And you can even use them on a machine that doesn’t have a windowing environment loaded. Front ends are also more likely to encounter OS version conflicts than command lines.

Scriptability. Command line tools are almost always easier to script and automate – and again, I’m no ninja; this stuff is sort of Google/DuckDuckGo/StackExchange-able in a few minutes.

Speed. Because of the nature of transcoding, it may well be easier to copy-paste a solution from above than it is to learn how each GUI works and where it has hidden the feature you need. Again, I’m not anti-GUI, but this is a pretty particular use case that really fits the command line. Literally, I bet you could have solved your problem and transcoded in the time it took you to read this section.
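The scriptability point in concrete terms: generating one ffmpeg invocation per file is a few lines of any scripting language, something no GUI batch dialog can match. A sketch (the helper name and file names are illustrative):

```python
def batch_convert_cmds(wav_files, target_ext=".mp3"):
    # one ffmpeg invocation per input file;
    # '-y' overwrites existing outputs without prompting
    return [["ffmpeg", "-y", "-i", name,
             name.rsplit(".", 1)[0] + target_ext]
            for name in wav_files]
```

Feed each command to `subprocess.run` (or join it into a shell loop) and a folder of WAVs becomes MP3s unattended.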

There’s just one tool I recommend for cross-platform GUI operation, and that’s the excellent Handbrake. (Some mishaps with VLC have proven to me that it’s a decent player, but not a great transcoder/utility.) Shutter Encoder is new to me, but it has one major advantage as a GUI – it has some previewing capability. Mac/Windows only, though.

More tips? Keep them coming.

The post Quick! This ffmpeg cheat sheet solves your video, audio conversion needs, for free appeared first on CDM Create Digital Music.

by Peter Kirn at October 11, 2019 06:33 PM

October 06, 2019

KXStudio News

JACK2 v1.9.13 release

A new version of JACK2 has just been released.
You can grab the latest release source code at
The official changelog is:

  • Meta-data API implementation. (and a few tools updated with support for it)
  • Correct GPL licence to LGPL for files needed to build libjack.
  • Remove FreeBoB backend (superseded by FFADO).
  • define JACK_LIB_EXPORT, useful for internal clients.
  • Mark jack_midi_reset_buffer as deprecated.
  • Add example systemd unit file
  • Signal to systemd when jackd is ready.
  • Set "seq" alsa midi driver to maximum resolution possible.
  • Fix loading internal clients from another internal client.
  • Code cleanup and various fixes. (too many to mention here, see git log for details)

This release is focused on meta-data support, and this is why it took so long.
There might be odd cases here and there and a few bugs, as is often the case for all software...
So please make sure to report back any issues!

Special thanks goes to Rui Nuno Capela for the initial pull-request regarding meta-data.
There was some work needed afterwards, but that was the biggest hurdle and motivation needed for a new release. :)

There are still no updated macOS or Windows builds, those will be handled at a later date.
Current plan is to have JACK1 feature-parity first (only a2jmidid and zita internal clients missing now),
and afterwards merging examples/tools and header files to be shared between JACK1 and JACK2.

The situation regarding JACK development, with JACK1 considered legacy, has not changed since the last release 2 years ago.
See for more information.

PS: I will be in Cologne for Sonoj, giving a talk about "Past, Present and Future of JACK".
There is no registration fee, so please feel free to come by and say hello! :)

by falkTX at October 06, 2019 09:37 PM

September 27, 2019

Talk Unafraid

An evening in the hobby

I’ve gotten into quite a good routine, sequence, whatever you might call it, for my hobby. While it’s an excellent hobby when it comes to complex things to fiddle around with, once you actually get some dark, clear skies, you don’t want to waste a minute, particularly in the UK.

Not having an observatory means a lot of my focus is on a quick setup, but it also means I’ve gotten completely remote operation (on a budget) down pretty well.

I took a decision to leave my setup outdoors some time ago, and bought a good quality cover rated for 365-days-of-the-year protection from Telegizmos. So far it’s survived, despite abuse from cats and birds. The telescope, with all its imaging gear (most of the time), sits underneath on its stock tripod, on some anti-vibration pads from Celestron. I also got some specialist insurance and set a camera nearby – it’s pretty well out of the way and past a bit of security anyway, but it doesn’t hurt to be careful. Setting up outside has been the best thing I’ve done so far, and is further evidence in support of building an observatory!

The telescope, illuminated from an oversize flat frame generator, after a night of imaging.

Keeping the camera mounted means I can re-use flat frames between nights, though occasionally I will take it out to re-collimate if it’s been a while. The computer that connects to all the hardware remains, too – a Raspberry Pi 4 mounted in a Maplin project case on the telescope tube.

This means everything stays connected and all I have to do is walk out, plug a mains extension cable in, bring out a 12V power supply, and plug in two cables – one for the mount, and one for the rest. Some simple snap-fit connector blocks distribute the 12V and 5V supplies around the various bits of equipment on the telescope.

That makes for quite calm setup, which I can do hours in advance of darkness in these early season nights. The telescope’s already cooled down to ambient, so there’s no delay there, either. I’ve already taken steps to better seal up my telescope tube to protect against stray light, which also helps keep any would-be house guests out.

My latest addition to the setup is an old IP camera so I can remotely look at the telescope position. This eliminates the need for me to take my laptop outside whenever the telescope is moving – I can confirm the position of the telescope and hit the “oh no please stop” button if anything looks amiss, like the telescope swinging towards a tripod leg.

I use the KStars/Ekos ecosystem for telescope control and imaging, so this all runs on a Linux laptop which I usually VNC into from my desktop. This means I can pull data off the laptop as I go and work on e.g. calibration of data on the desktop.

A normal evening – PixInsight, in this case looking at some integration approaches for dark frames, and VNC into KStars/Ekos, with PHD2 guiding, and a webcam view of the telescope

So other than 10 minutes at the start and 10 minutes in the early hours of the following morning my observing nights are mostly spent indoors sat in front of a computer. That makes for a fairly poor hobby in terms of getting out of my seat and moving around, but a really good hobby in terms of staying warm!

I do often wander out for half an hour or so and try to get some visual observation in, using a handheld Opticron monocular. Honestly, the monocular isn’t much use – it’s hard to hold steady enough, and low-magnification. Just standing out under the stars and trying to spot some constellations and major stars is satisfying, but I’d quite like to get a visual telescope I can leave set up and use while the imaging rig is doing its thing. That’s a fair bit of time+money away though, and I’d prefer to get the observatory built first. On a dark night, lying down and staring up at the milky way is quite enough to be getting on with.

A typical night, though, involves sitting indoors with the telescope under its cover, and yelling at clouds or the moon (which throws out enough light to ruin contrast on deep space objects).

On that basis I’ve been thinking about other ways to enjoy the hobby that don’t involve dark, clear nights. Some good narrowband filters would let me image on moonlit nights, but run into the many hundreds of pounds, putting a set of Ha/OIII/SII filters around £1k.

Narrowband image, shot in the hydrogen alpha emission line using a Baader 7nm filter – cheap but cheerful – of some of the Elephant’s Trunk Nebula; ~7.5 hours of capture

Making my own telescope, though, struck me as a fun project. It’s something quite frequently done, but the bit that most interested me is mirror making. That’s quite a cheap project (£100 or so) to get started on and should take a few months of evenings, so ought to keep me busy for a while – so that’s the next thing to do. I’ve decided to start with an 8″ f/5 mirror – not only is it quite a small and simple mirror, I could place it directly into my existing telescope without having to spend any more money. I’ve been doing lots of research, reading books on the topic and watching videos from other mirror-makers.

And that is definitely one of the recurring themes throughout the hobby – there’s always something to improve on, and nothing is trivially grasped. Everything takes a bit of commitment and thought. I think that’s one of the reasons I enjoy it so much.

by James Harrison at September 27, 2019 11:14 PM