It's a patcher inside a patcher. And that patcher has more patches that you can copy, randomize, and sequence. And in that patcher is a ton of glitchy goodness. PatchSeq, hot off the grill from Jeremy Wentworth and Voxglitch, is something special.
A new version, SpectMorph 1.0.0-beta3 is available at www.spectmorph.org.
SpectMorph (CLAP/LV2/VST plugin, JACK) is able to morph between samples of musical instruments. A standard set of instruments is shipped with SpectMorph, and an instrument editor is available to create user-defined instruments from your own samples.
The new features of the 1.0.0 beta releases (compared to the latest stable version) are described in a YouTube Tutorial.
In the beta3 version, the instrument editor has a new pitch detection algorithm and support for mp3 files. Other than that, there were many smaller fixes, some of them addressing critical problems, so we recommend updating.
If you are interested in a detailed list of changes, you can look at the NEWS file.
The GStreamer team is pleased to announce another release of liborc,
the Optimized Inner Loop Runtime Compiler, which is used for SIMD acceleration
in GStreamer plugins such as audioconvert, audiomixer, compositor, videoscale,
and videoconvert, to name just a few.
This release contains both bug fixes and new features.
Highlights:
Initial 64-bit RISC-V support
Add 64-bit LoongArch support
Implement release and reuse of temporary registers for some targets
x86: Implement EVEX encoding and an opcode validation system
x86: Opcode refactor, improved constant handling and various other fixes
x86: add missing rounding operands for AVX and SSE
x86: Implement 64-bit single move constant load
includes: stop exporting the private compiler and OrcTarget definitions
Use hotdoc instead of gtk-doc to generate the documentation
ORC_DEBUG_FATAL environment variable allows abort on log messages of a certain level
Error message improvements and NEON backend clean-ups
Fix a few valgrind issues
Build: enable tools such as orcc and orc-bugreport by default
We've had a deluge of superb VCV Rack modules, carrying straight over into the first days of the new year. But let's skip to this one: Jeremy Wentworth's free Grains is the granular sampler you've been waiting for, now available for VCV Rack and 4ms MetaModule. Bonus: my guide to other granular modules in Rack... and a surprise teaser.
The GStreamer team is excited to announce the first release candidate
for the upcoming stable 1.28.0 feature release.
This 1.27.90 pre-release is for testing and development purposes
in the lead-up to the stable 1.28 series which is now frozen for
commits and scheduled for release very soon.
Depending on how things go there might be more release candidates in
the next couple of days, but in any case we're aiming to get 1.28.0 out
as soon as possible.
Highlighted changes:
Add a burn-based YOLOX inference element and a YOLOX tensor decoder in Rust
Add an audio source separation element based on Demucs in Rust
Add new GIF decoder element in Rust with looping support
Add a Rust-based icecastsink element with AAC support
analytics: Improvement to inference elements; move modelinfo to analytics lib; add script to help with modelinfo generation and upgrade
decklinkvideosink: Fix frame duration to be based on the decklink clock
flv: Fix track ID 0 semantics and extended FLV for non multitrack type packets
GstPlay: Add support for gapless looping
input-selector: now implements a two-phase sinkpad switch to avoid races when switching input pads
intersrc: new event-types property to forward upstream events to sink
isomp4mux: Support caps change and add support for raw audio as per ISO/IEC 23003-5
jpegparse: fix handling of JPEGs with HDR gain maps
jsontovtt: add property to enable per-cue line attributes
textaccumulate: implement no-timeout mode for forwarding full sentences
matroskademux: make maximum allowed block size large enough to support 4k uncompressed video
qtdemux: fix various MP4 demuxing issues and regressions
GstValue: The recently-introduced GstSet API was renamed to GstUniqueList
cerbero: add support for Python wheel packaging, fix Windows build with Python 3.14, support system recipes, ship Gtk4 and more plugins
Countless bug fixes, build fixes, memory leak fixes, and other stability and reliability improvements
Binaries for Android, iOS, Mac OS X and Windows will be made available shortly
at the usual location.
All other recommendations given by tools like rtcqs or Millisecond are for those who really need stable, ultra-low latency: buffer sizes below 64 samples, resulting in round-trip latencies below 10 milliseconds. This is the area where threaded IRQs or disabling Spectre/Meltdown mitigations might contribute to getting rid of that stray xrun.
Regarding threaded IRQs: enabling them by itself doesn’t change anything. You will need to configure the threaded IRQs after you’ve enabled them. Tools that can do this are rtcirqus and rtirq, or you can do it manually by using the chrt command on the threaded IRQ process.
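As a rough sketch of that manual route (the IRQ thread name and the priority value below are only examples and will differ per system):

    # List kernel IRQ threads with their scheduling class and real-time priority
    ps -eLo pid,cls,rtprio,comm | grep irq/

    # Hypothetical example: the audio interface hangs off the irq/128-xhci_hcd thread;
    # give that thread SCHED_FIFO scheduling with priority 85
    sudo chrt -f -p 85 "$(pgrep irq/128-xhci)"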
Modern systems use MSI(-X) (Message Signaled Interrupts) though, so shared IRQs should be a thing of the past. On those systems there’s very little to gain from prioritising threaded IRQs.
The main difference between rtcirqus and rtirq is that rtcirqus lets you set the real-time priority of a thread based on ALSA card names, while rtirq sets the real-time priority based on kernel module names. So with rtcirqus you can be sure the desired audio interface gets the desired real-time priority, whereas with rtirq you’re prioritising all the devices that use a specific kernel module (xhci_hcd, snd_hda_intel).
rtirq does allow for some finer-grained control over USB2 ports and onboard audio devices that use the snd_hda_intel driver. The USB2 ehci_hcd driver and the snd_hda_intel driver add the bus name and the card index number respectively to the IRQ thread’s process name, so you can use that designation in the rtirq configuration file. In the case of USB2 you’re still prioritising the IRQ of the whole USB bus, but then rtcirqus does the same.
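For reference, rtirq itself is driven by a small name list in its configuration file (commonly /etc/default/rtirq); the values below are only an illustrative sketch and should be adapted to the driver names on your machine:

    # /etc/default/rtirq (illustrative values only)
    # IRQ threads whose names match these entries get prioritised, first entry highest
    RTIRQ_NAME_LIST="snd_hda_intel usb i8042"
    # Priority given to the first match; each following entry gets a lower priority
    RTIRQ_PRIO_HIGH=90
    RTIRQ_PRIO_DECR=5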
Refactored the Clip/Tempo Adjust.. tempo/beat-detection function to use Breakfastquay::minibpm as a submodule, as an alternative to the now-deprecated (lib)aubio.
Description:
Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. The target platform is Linux, where the JACK Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures, evolving as a fairly featured Linux desktop audio workstation GUI, especially dedicated to the personal home studio.
A friend of mine is producing a series of HOWTO videos for an open source project, and discovered that he needed a better microphone than the one built into his laptop. Upon searching, he was faced with a bewildering array of peripherals aimed at would-be podcasters, influencers, and content creators, many of which appeared to be well-packaged versions of very cheap genericised items such as you can find on AliExpress.
If an experienced electronic engineer finds himself baffled when buying a microphone, what chance does a less-informed member of the public have? It’s time to shed some light on the matter, and to move for the first time in this series from the playback into the recording half of the audio world. Let’s consider the microphone.
Background, History, and Principles
A microphone is simply a device for converting the pressure variations in the air created by sounds into electrical impulses that can be recorded. It will always be accompanied by some kind of signal conditioning preamplifier, but in this instance we’re considering the physical microphone itself. There are a variety of different types of microphone in use, and after a short look at microphone history and a discussion of what makes a good microphone, we’ll consider a few of them in detail.
This one is from 1916, but you might have been using a carbon microphone on your telephone surprisingly recently.
The development of the microphone in the late 19th century is intimately associated with the telephone rather than the phonograph, as early recording devices were purely mechanical and had no electrical component. The first practical microphones for the telephone were carbon microphones: a container of carbon granules mechanically coupled to a metal diaphragm, forming a crude variable resistor modulated by the sound waves. They were especially suited to the standing DC current of a telephone line, and though they are too noisy for good-quality audio they continued in use on telephones into recent decades. The ancestors of the microphones we use today would arrive in the early years of the 20th century, along with the development of electronic amplification and recording.
The polar pattern of a cardioid microphone. Nicoguaro, CC BY 4.0.
The job of a microphone is to take the sounds surrounding it and convert them into electrical signals, and invariably that starts with some form of lightweight diaphragm which vibrates in response to the air around it. The idea is that the mass of the diaphragm is as low as possible such that its physical properties have a minimal effect on the quality of the audio it captures. This diaphragm will be surrounded by whatever supporting structure it needs as well as any other components such as magnets, and the structure surrounding it will be designed to minimise vibration and shape the polar pattern over which it is sensitive.
Depending on the application there are microphone designs with a variety of patterns, from the omnidirectional pattern for recording a room, through the bidirectional figure-of-eight found in some studio environments and the kidney-shaped cardioid for vocals and speech, to the extremely directional microphones used by filmmakers. Of those, the cardioid pattern is the one most likely to find itself in everyday use by someone like my friend recording voice-overs for video.
Having some idea of microphone history and principles, it’s time to look at some real microphones. We’re not going to cover every single type of microphone, instead we’re going to cover the three most common, to represent the ones you are likely to find for affordable prices. These are dynamic microphones, condenser microphones, and their electret cousins.
Dynamic Microphones
A dynamic microphone cartridge.
A dynamic microphone takes a coil of wire and suspends it from a diaphragm in a magnetic field. The diaphragm moves the coil, and thus an audio voltage is generated. The diaphragm will typically be a polymer such as Mylar, and it will usually be suspended around its edge by a folded section in a similar manner to what you may have seen on the edge of a loudspeaker cone. The output impedance depends upon the winding of the coil, but is typically in the range of a few hundred ohms. They have a low-level output in the region of millivolts, and thus it is normal for them to connect to some kind of preamplifier, which may be built into a mixing desk or similar. The microphone cartridge pictured is from a cheap plastic-bodied one bundled with a sound card. You can see the clear plastic diaphragm, as well as the coil. The magnet is the shiny metal object in the centre.
Capacitor Microphones
The diaphragm of a capacitor microphone cartridge. ElooKoN, CC BY-SA 4.0
A capacitor microphone is, as its name suggests, a capacitor in which one plate is formed by a diaphragm. This diaphragm is usually an extremely thin polymer, metalised on one side.
The sound vibrations vary the capacitance of the device, and this can be retrieved as a voltage by maintaining a constant charge across the microphone. This is typically achieved with a DC voltage in the order of a few hundred volts. Since the charge remains constant while the capacitance changes with the sound, the voltage on the microphone will change at the audio frequency. Capacitor microphones have a high impedance, and will always have an accompanying preamplifier and power supply circuit as a result.
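As a back-of-the-envelope sketch, treating the capsule as an ideal parallel-plate capacitor with plate area A, permittivity ε, and plate spacing d(t) that the diaphragm modulates:

    V(t) = \frac{Q}{C(t)}, \qquad C(t) = \frac{\varepsilon A}{d(t)} \quad\Rightarrow\quad V(t) = \frac{Q\,d(t)}{\varepsilon A}

With the charge Q held constant, the output voltage is proportional to the plate spacing and so follows the diaphragm’s motion, i.e. the audio signal, directly.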
Electret Microphones
The ubiquitous cheap electret microphone capsules. Omegatron, CC BY-SA 2.0.
Electret microphones are a special class of capacitor microphone in which the charge comes from an electret material, one which holds a permanent electric charge. They thus forgo the high voltage power supply for their DC bias, and usually have a built-in FET preamp in the cartridge needing a low voltage supply such as a small battery. The attraction is that electret cartridges can be had for very little money indeed, and that the cheap electret cartridges are of surprisingly high quality for their price.
That’s All Very Well, But Which One Should I Buy?
So yes, even knowing a bit about microphones, you’re still left just as confused when browsing the options. The questions you need to ask yourself, aside from your budget, are these: what do I want to use it for, and what do I want to plug it into? Let’s talk practicalities.
You can’t go too far wrong with a Shure SM58 (or a slightly inferior copy). Christopher Sessums CC BY-SA 2.0.
There are a variety of different physical form factors for microphones; at the cheaper end of the market the styling often mimics famous, more expensive models. The ones aimed at content creators often have a built-in desk stand, though you may prefer the flexibility of your own stand. There are also all manner of pop filters and other accessories, some of which appear to be more for show than utility.
You will need to ask yourself what polar pattern you are looking for, and the answer is cardioid if you are recording your speech — its directional pattern rejects background noise, and focuses on what comes out of your mouth. You might also think about robustness; are you taking this microphone out on the road? A stage microphone makes a better choice if it will see a hard life, while a desktop microphone might make more sense if it rarely leaves your computer.
In front of me as I write this is my microphone. I take it out on the road with me, so I needed a robust device, plus I like the look of a traditional handheld microphone. The standard stage vocal dynamic microphone is unquestionably the Shure SM58, a robust and high-performance device that has stood the test of time. At £100 it’s out of my price range, so I have a cheaper mic from another well-known professional audio manufacturer that is obviously their take on the same formula. It is plugged into a high-quality musician’s USB microphone interface, an all-in-one USB sound card and mixer. It serves me well, and if you’ve caught a Hackaday podcast with me on it, you’ll have heard it in action.
If you’re not going to invest in an audio interface, you will be looking for something with a built-in amplifier and ADC, probably something that plugs straight into USB. These are myriad, and the quality varies all over the place. For voice recording a cardioid pattern makes sense, and an amplifier with low self-noise is desirable. If the amplifier picks up the USB bus noise, move on.
So in this piece I hope I’ve answered the questions of both my friend from earlier, and you the reader. It’s no primer for equipping a high-end studio, but if you’re doing that it’s likely you’ll already know a lot about microphones anyway.
For how crucial whales have been to humanity, from their harvest for meat and oil to their future role in saving the world from a space probe, humans knew very little about them until surprisingly recently. Most people, even in Herman Melville’s time, considered whales to be fish, and it wasn’t until humans went looking for submarines in the mid-1900s that we started to understand the complexities of their songs. And you don’t have to be a submarine pilot to listen now, either; all you need is something like these homemade hydrophones.
This project was done as part of a workshop in Indonesia, and it only takes a few hours to build. It’s based on a piezo microphone enclosed in a small case. A standard 3.5 mm audio cable runs into the enclosure and powers a preamp built from a transistor and two resistors. With the piezo microphone and amplifier installed, the case itself is waterproofed with a spray and allowed to dry. Where Plasti-Dip is available, it was found to be a faster and more reliable waterproofing method. Either way, with the waterproofing layer finished, it’s ready to toss into a body of water to listen for various sounds.
Some further instructions beyond construction demonstrate how to use these to capture stereo sounds, using two microphones connected to a stereo jack. The creators also took a setup connected to a Raspberry Pi offshore to a floating dock and installed a set permanently, streaming live audio wirelessly back to the mainland for easy listening, review, and analysis. There are other ways of interacting with the ocean using sound as well, like this project, which looks to open-source a sonar system.
We’ve just tagged the current code as 9.0-rc2 - this is the second release candidate for 9.0.
Notably, we are also announcing a string freeze, which means no text that appears in the program’s interface will be changed between now and the release of 9.0. This means that translators can get to work finalizing translations for 9.0 without worrying that there will be more changes to come.
We continue to be in a feature freeze until 9.0 is released - all development work will be on bug fixes and improvements to features already present. We anticipate at least one more -rcN tag before release.
Users interested in testing 9.0 and ensuring the best possible release are invited to test it out from the builds available on nightly.ardour.org (or self-build if you prefer). We would strongly request that no Linux distributions package this or any other release candidate - please wait for us to release 9.0. Please report issues on the bug tracker, though design discussions on the forum are now acceptable (if not always ideal).
We are not yet finished with the release notes for 9.0, but to get an overview of what is in this release, you can take a look at the in-progress document. It will be revised and updated as we move through the release process.
Please note that there is still no release date scheduled for 9.0. We anticipate that a wider group of beta-testers will uncover new issues (both bugs and workflow/design issues) that merit fixing before the release.
Notable changes since 9.0-rc1 include:
plugin selector: if neither name nor tag buttons are enabled, include creator in search fields
in pianorolls, allow note-clicks to select in draw mode, just like the editor
SMF import: better handling of insane files
make it possible to do certain basic MIDI editing from a context menu in a pianoroll
fix display of MIDI regions in cue editors even when they do not start at the source start
A lot of people have asked us why Ubuntu Studio comes with a panel on top by default. The answer is simple: legacy.
When Ubuntu Studio 12.04 LTS (Precise Pangolin) was released over 13 years ago, it shipped with a top panel by default, as that was the default for our desktop environment: Xfce.
Fast-forward eight years to 20.10 and Xfce was no longer our default desktop environment: we had switched to KDE’s Plasma Desktop. Plasma has a bottom panel by default, similar to Windows. However, to ease the transition for our long-time users, we kept the panel on top by default, resizing it to be similar to the default top panel of Xfce.
A macOS-Like Layout
With 25.10’s release, we included an additional layout with two panels. One panel is on top with a global menu, and the bottom one contains some default applications, a trash can, and a full-screen application launcher. This is meant to feel familiar to users coming from another operating system for creativity with a similar layout: macOS.
Familiarity and Traditionalism: Windows-like Layout
Starting with 26.04 LTS, we’ll also include one more layout: a Windows 10-like bottom panel. This is to ease the transition for those coming from Windows, and comes in response to popular requests and reports.
Should We Change The Default?
It has been 13 years since we defaulted to a top panel, but is that still the right choice?
Right now, on the Ubuntu Discourse, we have a poll to decide if we should change the default layout starting with 26.04 LTS. This will not affect layouts for anyone upgrading from a prior release, but only new installations or new users going forward.
We’ve just tagged the current code as 9.0-rc1 - this is the first release candidate for 9.0.
We are now in a feature freeze until 9.0 is released - all development work will be on bug fixes and improvements to features already present. We anticipate at least one more -rcN tag before release (possibly several), and at some point will announce a string freeze to allow translators to finalize their work for 9.0.
Users interested in testing 9.0 and ensuring the best possible release are invited to test it out from the builds available on nightly.ardour.org (or self-build if you prefer). We would strongly request that no Linux distributions package this or any other release candidate - please wait for us to release 9.0. Please report issues on the bug tracker, though design discussions on the forum are now acceptable (if not always ideal).
We are not yet finished with the release notes for 9.0, but to get an overview of what is in this release, you can take a look at the in-progress document. It will be revised and updated as we move through the release process.
Please note that there is still no release date scheduled for 9.0. We anticipate that a wider group of beta-testers will uncover new issues (both bugs and workflow/design issues) that merit fixing before the release.
You can now watch online last week's Elektronengehirn concert at Piksel 25 in Bergen (NO). It was the maximalist version of the concert Hardware, with pieces from the same-named album: three independent video projections (like the 2024 concert in Aarhus (DK)) and quadraphonic sound. Malte Steiner programmed the main projection with the Godot game engine, while the side projections each come from a Raspberry Pi running a C program written with Raylib. Remote control was done from the Pure Data patch on the main computer via OSC over ethernet cables. An additional sound source was a custom-made modular synthesizer system Steiner has developed over the past years. This audiovisual concert comes close to his vision of the Gesamtkunstwerk.
My simple single-plugin LV2 host, Jalv, isn't quite sure whether it's a developer utility or polished user program, but in any case, it had become stale in the past few years and needed an update.
Most of those changes are internal and only interesting for those who use it as a basis for larger systems. The internals have been largely rewritten to support various things, but this post isn't about that. This post is about a more obviously stale thing: the Gtk2 interface.
In keeping with the free desktop tradition of constant breakage with reduced functionality, that toolkit is now EOLed, and soon the ability to embed GUIs whatsoever will probably go away. Luckily though, we're not quite there yet, and it's still possible/feasible to embed GUIs in Gtk3 (at least on X11), so things can continue roughly as they were for a while. Gtk2 is EOLed though, which is a problem for distributions, and I have no interest in maintaining code for a dead toolkit, so that frontend is gone entirely in the latest release. This does mean that some plugin GUIs written in Gtk2 will no longer work, but that's inherent to the situation (and why general plugin GUIs shouldn't use Gtk).
This seemed like a good time to update the UI to be a bit more “modern”, particularly since a menu bar has never really made much sense here anyway.
I replaced this with a header bar, which I think does suit plugins better. For example, here's the custom GUI for the LSP Compressor:
As always, there are also generic controls, with a few refinements but still using the same boring stock widgets:
All of the menu items have been moved into a single menu button, which is a pattern I'm sceptical of in general, but it works fine for a very simple application like this. The preset menu can be unwieldy, but that's a whole topic unto itself that I hope to tackle more comprehensively later.
Code-wise, it's long been a problem that the rudimentary (lack of) architecture couldn't easily support the more advanced features people wanted from it. So, I've reworked everything into a more serious application, with a more explicit architecture and communication patterns that make adding new features much easier. As far as the Gtk frontend goes, I've also switched to using more modern APIs like GtkApplication, GAction, and so on. To be fair, these parts are quite nice. Actions are a pretty good model for building accessible GUI applications, and these new APIs encourage doing the right thing.
There are still some areas that need work, but jalv.gtk3 (the version which has a .desktop file and all that) is much closer to being a proper application that integrates with the desktop environment now, and smells less like a hacky program that developers just use to check if their plugin works.
That aside, Jalv is still frequently used from the command-line, and there's a
major QoL improvement there as well: the positional argument now accepts files
and directories, not just plugin URIs. The code will try to figure out what to
do automatically, for example, if a bundle or data file only describes a
single plugin, then that plugin is loaded. Presets can also be passed (by path
or by URI), which will load the appropriate plugin with that preset initially applied. In short, it's more like the “do what I mean” interface many people expect.
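For example (the plugin URI below is the standard LV2 example amplifier; the bundle and preset paths are hypothetical placeholders):

    # Classic form: load a plugin by its URI
    jalv http://lv2plug.in/plugins/eg-amp

    # Point it at a bundle directory or data file that describes a single plugin
    jalv ~/.lv2/myplugin.lv2/

    # Pass a preset by path; the matching plugin is loaded with the preset applied
    jalv ~/.lv2/mypresets.lv2/warm.ttl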
It's been entirely too long since the last release, but now that the host libraries and Jalv are up to date with most issues resolved, I'm going to try to do some broader cross-project efforts to address a few things that are a mess across the LV2 ecosystem as a whole, with Jalv serving as a sort of reference implementation.
For now, though, it's just a much better implementation of the same old features.
Jalv 1.8.0 has been released. Jalv (JAck LV2) is a simple host for LV2 plugins. It runs a plugin, and exposes the plugin ports to the system, essentially making the plugin an application. For more information, see http://drobilla.net/software/jalv.
Changes:
Add "quit" console command
Add AppStream metainfo file
Add Qt6 version
Add missing short versions of command line options
Add option to install tool man pages
Add support for advanced parameters in console frontend
Add support for control inputs with time:beatsPerMinute designation
Add support for control outputs with lv2:latency designation
Avoid over-use of yielding meson options
Build Qt UI with -fPIC
Clean up and strengthen code
Clean up command line help output
Cleanly separate audio thread from the rest of the application
Fix Jack latency recomputation when plugin latency changes
Fix clashing command line options
Fix minor memory leaks
Make help and version commands exit successfully
Only send control messages to designated lv2:control ports
Only send position to ports that explicitly support it
Reduce Jack process callback overhead
Remove Gtk2 interface
Remove limits on the size of messages sent from plugin to UI
Remove transport position dumping from Jack process callback
Replace use of deprecated Gtk interfaces
Rework Gtk3 interface into a relatively modern Gtk application
Rewrite man pages in mdoc
Simplify and unify plugin and preset command-line arguments
Switch to external zix dependency
Use Gtk switches instead of checkboxes for toggle controls
Elektronengehirn is going to perform a concert at the Piksel festival in Bergen, Norway on 22 November at Østre. During Piksel, Malte Steiner also shows the installation The Tradwives at the exhibition from 20 to 23 November.
Introduction
Built on jj and fzf, jj-fzf offers a text-based user interface (TUI) that simplifies complex version control operations like rebasing, squashing, and merging commits. This post will guide you through integrating jj-fzf into your Emacs workflow, allowing you to switch between emacs and jj…
Not much of interest happened over the past few weeks, so this is a multi-week recap. Highlights: release candidates planned for GIMP, Ardour, and FreeCAD; new releases of LSP plugins; and a new technical preview of Audacity 4.0.
The team is getting ready for the first release candidate of v3.2. This means some interesting features in the works are being postponed till v3.4. One such example is vector masks. Some patches may still come through, though, such as merging paths.
Some neat minor new features merged recently:
Exporting patterns of fill and stroke in vector layers.
The project has been slowly approaching the first release candidate of version 1.1. There are currently fewer than 10 release blockers, so we may still see the final release in 2025.
At the moment, there are over 300 pull requests, both open and in draft. A large share of those is scheduled for inclusion in v1.2, which means a busy post-release time.
The Ardour team is getting really close to the first release candidate of v9.0. Upcoming changes include much-requested pianoroll windows (see the screenshot below), a bottom-panel editing area for regions and cue clips, cue recording, and various UX/UI improvements.
Most recently, Paul added MIDI note brushing (coming to v9.0), and Robin has been working on a reimplementation of mix tools from Mixbus (probably coming to v9.1 or so).
This is mainly a bugfix update for another recent release, where Vladimir Sadovnikov implemented a Ring-Modulated sidechain plugin series (regular and multiband), A/B preset switching support, integrated loudness metering for Referencer plugin series, and other great new features and improvements.
This is a very exciting and yet not very well-known project that simplifies using global audio effects on Linux, among other things. Wellington Wallace et al. released this new version with a port from GTK4 to a Qt/QML-based user interface.
Other changes include:
Built-in tray icon and menu.
Better echo cancellation.
Various preset improvements.
The last used plugin or tab is now restored when the window is reopened.
For the full list of changes, please see here. The recommended way to install it is from Flathub.