planet.linuxaudio.org


December 05, 2019

Linux – CDM Create Digital Music

Reaper 6 is here – and even more the everyday, budget DAW to beat

It’s got a $60 license for nearly everyone, you can evaluate it for free, and now Reaper – yet again – adds a ton of well-implemented power features. Reaper 6 is the newest edition of this exceptionally capable DAW.

New in this release:

Use effects plug-ins right from the tracks/mixer view. So, some DAWs already have something like a little EQ that you can see in the channel strip visually, or maybe a simple compressor. Reaper has gone further, with small versions of the UI for a bunch of popular plug-ins you can embed wherever you want. That means less jumping in and out of windows while you patch.

You get EQ, filtering, compressor, and more. (ReaEQ, ReaFIR, ReaXcomp, graphical JSFX, etc.)

Powerful routing/patching. The Routing Diagram feature gives you an overview of how audio signals are routed throughout the environment, which makes sends and effects and busing and sidechaining and so on visual. It’s like having a graphical patchbay for audio right inside the DAW. (Or it’s like the ghost of the Logic Pro Environment came back and this time, average people actually wanted to use it.)

Auto-stretch audio. Now, various DAWs have attempted this – you want sound to automatically stretch and conform as you adjust tempo or make complex tempo changes. That’s useful for film scoring, for creative purposes, and just because, well, you want things to work that way. Now Reaper’s developers say they’ve made it easy to do this with tempo-mapped and live-recorded materials (Auto-stretch Timebase). This is one we’ll have to test.

Make real envelopes for MIDI. You can draw continuous shapes for your MIDI control adjustments, complete with curve adjustment. That’s a bit like what you get in Ableton Live’s clip envelopes, as well as other DAWs. But it’s a welcome addition to Reaper, which increasingly shares the depth of other, older DAWs, without the same UI complexity (cough).

It works with high-density displays on Mac and PC. That’s Retina on Mac and the awkwardly-named HiDPI on PC. But the basic idea is, you can natively scale the default theme to 100%, 150%, and 250% on new high-def displays without squinting. Speaking of which…

There’s a new tweakable theme. The new theme is set up to be customizable with Tweaker script.

Big projects and displays work better. The developers say they’ve “vastly” optimized 200+ track-count projects. On the Mac, you also get faster screen drawing with support for Apple’s Metal API. (Yeah, everyone griped about that being Mac-only and proprietary, but it seems savvy developers are just writing for it and liking it. I’m honestly unsure what the exact performance implications are of doing the same thing on Windows, though on the other hand I’m happy with how Reaper performs everywhere.)

And more. “Dynamic Split improvements; import and render media with embedded transient information; per-track positive or negative playback offset; faster and higher quality samplerate conversion; and many other fixes and improvements.”

Honestly, I’m already won over by some of these changes, and I had been shifting conventional DAW editing work to Reaper as it was. (That is, sure, Ableton Live and Bitwig Studio and Reason and whatever else are fun for production, but sometimes you want a single DAW for editing and mixdown that is none of those others.)

Where Reaper stands out is its extraordinary budget price and its no-nonsense, dead-simple UI – when you really don’t want the DAW to be too creative, because you want to get to work. It does that, but still has the depth of functionality and customization that means you feel you’re unlikely to outgrow it. That’s not a knock on other excellent DAW choices, but those developers should seriously consider Reaper as real competition. Ask some users out there, and you’ll hear this name a lot.

Now if they just finish that “experimental” native Linux build, they’ll really win some nerd hearts.

https://www.reaper.fm

Those of you who are deeper into the tool, do let us know if you’ve got some tips to share.


by Peter Kirn at December 05, 2019 05:47 PM

digital audio hacks – Hackaday

A STM32 Tonewheel Organ Without A Single Tonewheel

The one thing you might be surprised not to find in [Laurent]’s beautiful tonewheel organ build is any tonewheel at all.

Tonewheels were an early way to produce electronic organ sounds: by spinning a toothed wheel at different frequencies and transducing the motion into a signal one way or another, it was possible to synthesize quite an array of sounds. We like to imagine that they’re all still there in [Laurent]’s organ, albeit very tiny, but the truth is that they’re being synthesized entirely on an STM32 microcontroller.

The build itself is beautiful and extremely professional looking. We were unaware that it was possible to buy keybeds for a custom synthesizer, but a model from FATAR sits at the center of the show. There’s a MIDI encoder board and a Nucleo development board inside, tied together with a custom PCB. The UI is a momentary encoder wheel and a display from Mikroelektronika.

You can see and hear this beautiful instrument in the video after the break.

by Gerrit Coetzee at December 05, 2019 04:30 PM

December 03, 2019

GStreamer News

GStreamer 1.16.2 stable bug fix release

The GStreamer team is pleased to announce the second bug fix release in the stable 1.16 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.16.x.

See /releases/1.16/ for the details.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Download tarballs directly here: gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-vaapi, or gst-omx.

December 03, 2019 08:00 PM

December 01, 2019

The Linux-audio-announce Archives

[LAA] fluidsynth 2.1.0 has been released

The stable version of fluidsynth 2.1 has been released, featuring a
new reverb engine, stereophonic chorus, support for DLS, and more.
Details can be found in the release notes:

Download: https://github.com/FluidSynth/fluidsynth/releases/tag/v2.1.0
API: http://www.fluidsynth.org/api/
Website: http://www.fluidsynth.org


FluidSynth is a real-time software synthesizer based on the
SoundFont(tm) 2 specifications. It can read MIDI events from the MIDI
input device and render them to the audio device. It can also play
MIDI files.
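Not part of the announcement, but for readers new to the library, here is a minimal C sketch of the flow described above: create the settings, synth, and audio driver, load a SoundFont, and play a note. The SoundFont path is a placeholder and error handling is kept to a minimum.

#include <fluidsynth.h>
#include <unistd.h>

int main(void)
{
    /* Create the settings, the synthesizer, and an audio driver that
       pulls audio from the synth and sends it to the default output. */
    fluid_settings_t*     settings = new_fluid_settings();
    fluid_synth_t*        synth    = new_fluid_synth(settings);
    fluid_audio_driver_t* driver   = new_fluid_audio_driver(settings, synth);

    /* Load a SoundFont (placeholder path) and play middle C for two seconds. */
    if (fluid_synth_sfload(synth, "/path/to/soundfont.sf2", 1) == FLUID_FAILED)
        return 1;
    fluid_synth_noteon(synth, 0, 60, 100);   /* channel 0, key 60, velocity 100 */
    sleep(2);
    fluid_synth_noteoff(synth, 0, 60);

    delete_fluid_audio_driver(driver);
    delete_fluid_synth(synth);
    delete_fluid_settings(settings);
    return 0;
}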

Tom Moebert
FluidSynth Developer Team

by tom.mbrt at googlemail.com (Tom M.) at December 01, 2019 03:56 PM

November 29, 2019

blog4

The Big Crash VR demo video

The Big Crash VR
a virtual reality artpiece based on real estate data and the pending burst of the housing bubble

I uploaded a screencast recorded directly from the Oculus Quest to demonstrate my new piece, created with Unity. It can be experienced for the first time at The Big Crash exhibition at Spanien19c in Aarhus in February 2020. For more details check out:



by herrsteiner (noreply@blogger.com) at November 29, 2019 07:52 PM

November 28, 2019

rncbc.org

Qtractor in the Top 10

PianoNadu is way too kind to list yours truly's Qtractor as one of the Top 10 DAW Software of 2019, no matter how many times I say Qtractor is not a DAW but just a sequencer with, yes, some amenable DAW features ;)

Maybe I'm way too humble (and stubbornly short-sighted)... nevermind.

Wholly thanks to Wendell and PianoNadu, nevertheless.

Cheers

by rncbc at November 28, 2019 07:00 PM

November 27, 2019

blog4

performance video online: DECONSTRUCTING, DISTORTING and QUEERING DREAMS



Sall Lam Toro & Tina Mariane Krogh Madsen, DECONSTRUCTING, DISTORTING and QUEERING DREAMS performed at the Fertilizer Festival at Studenterhuset in Aalborg (DK) on October 26, 2019.



by herrsteiner (noreply@blogger.com) at November 27, 2019 01:48 AM

November 26, 2019

digital audio hacks – Hackaday

Microphone Isolation Shield Is A Great IKEA Hack; Definitely Not a Xenomorph Egg

As any content creator knows, good audio is the key to maintaining an audience. Having a high quality microphone is a start, but it’s also necessary to reduce echoes and other unwanted noise. An isolation shield is key here, and [phico] has the lowdown on making your own.

The build starts with an IKEA lampshade, so it’s a great excuse to head down to the flatpack store and grab yourself some Köttbullar for lunch while you’re at it (that’s meatballs for those less versed in IKEA’s cafeteria fare). This is really more of a powder-coated steel frame than a shade, perfect as the bones of an enclosure. [Phico] hacks it open with a Dremel to make room for the microphone. Cardboard soaked in wallpaper paste is then used to create a papier-mache-like shell, which is then stuffed with acoustic foam. A small opening is left to allow the narrator’s voice to reach the microphone, while blocking sound from other directions. Finally, a stocking is wrapped around the whole assembly to act as an integral anti-pop filter.

It’s a tidy build, and while it looks a bit like a boulder to some, if you encounter a room full of ovomorphs that look just like this, tiptoe right out of there. IKEA hacks are always popular, and this laser projector lamp is a great example. If you’ve got your own nifty Swedish-inspired build, make sure you let us know!

by Lewin Day at November 26, 2019 12:01 AM

November 24, 2019

The Linux-audio-announce Archives

[LAA] B.SEQuenzr 1.2 (LV2 plugin)

Hello,

it's time for the first official version of B.SEQuenzr, a programmable 
multi-channel step sequencer.

Key features:
* Selectable pattern matrix size (8x16, 16x16, 24x16, or 32x16)
* Programmability via controls for playing direction, jump, skip, and stop options, placeable at any position within the matrix
* Autoplay or host / MIDI controlled playing
* Handles multiple MIDI input signals (keys) in one sequencer instance
* Use musical scales and / or drumkits
* Scale & drumkit editor
* Notes can be associated with four different, configurable output channels
* Output channels connectable with individual MIDI channels

What's new:
* Do not sync position with host
* 2 new General MIDI-compatible drumkits
* New scale & drumkit editor
* Show cut, copy & paste messages
* Undo, redo, reset
* Flip selection (horizontal, vertical)
* Renamed to B.SEQuenzr (GUI only)
* Show version number
* Sort MIDI output
* Bugfixes, security update

Project page: https://github.com/sjaehn/BSEQuencer/
Releases: https://github.com/sjaehn/BSEQuencer/releases
Instructions: https://github.com/sjaehn/BSEQuencer/wiki

Video: https://www.youtube.com/watch?v=J6bU4GdUVYc

Enjoy and have fun
Sven Jaehnichen

by sjaehn at jahnichen.de (Sven Jaehnichen) at November 24, 2019 11:38 AM

November 22, 2019

Linux – CDM Create Digital Music

Ubuntu Studio hits 19.10, gives you an ultra easy, config-free Linux for music and media

The volunteer-run Ubuntu Studio isn’t just a great Linux distribution for beginners wanting to make music, visuals, and media. It’s a solid alternative to Mac and Windows you can easily dual boot.

Ubuntu Studio for a while had gone semi-dormant; open source projects need that volunteer support to thrive. But starting around 2018, it saw renewed interest. (Uh, maybe frustrations with certain mainstream OSes even helped.)

And that’s important for the Linux ecosystem at large. Ubuntu remains the OS distribution most targeted by mainstream developers and most focused on easy end user operation. That’s not to say it’s the best distro for you – part of the beauty of Linux is the endless choice it affords, rather than a one-size-fits-all approach. But because some package management focuses on Ubuntu (and Debian), because it’s the platform where a lot of the action is as far as consumer desktop OS features, and just because so many beginners are on the platform, it matters. Heck, you can usually get more novice-friendly advice just by Googling a problem and adding the word “Ubuntu” on the end.

But that’s all what you’d hope Ubuntu Studio would be. Let’s talk about what it is – because the latest distro release looks really terrific.

Ubuntu Studio 19.10 dropped last month. For those unfamiliar with Ubuntu – look closely at those numbers – that’s October 2019. Ubuntu alternates between long-term support (LTS) releases and more frequent releases with newer features. Crucially, though, the Ubuntu Studio team now add “backports” so that you can use the newer packages on the LTS release – so you don’t have to constantly upgrade your OS just to get the latest features.

If you don’t mind doing the distro update, though, 19.10 has some really terrific features. I also have to say, as a musician the other appeal to me of Linux is, I can still use my main OS as the day-to-day OS, loaded down with lots of software and focusing on things like battery life, while maintaining a dual boot Linux OS both as a backup OS for live use and one I can optimize for low-latency performance. Now that Bitwig Studio, Renoise, VCV Rack, Pure Data, SuperCollider, and lots of other cool software to play live all run on Linux, that’s no small matter. (For visuals, think Blender, game engines, and custom code.)

New in this version:

OBS Studio is pre-configured right out of the box, for live streaming and screencasting.

There are tons of plug-ins ready to use. 100 plug-ins were added in this release, on top of the ones already available. There are LADSPA, LV2, and VST plug-ins, and extensive support even for Windows VSTs. For now, you even get 32-bit plug-in support, so using one of the LTS releases for backwards compatibility on a studio machine is a good idea.

Oh yeah, and while you should definitely move to 64-bit, plug-in developers – targeting Linux now makes sense, without question. And Ubuntu Studio would be a logical distro against which to test or even provide support.

RaySession now makes handling audio sessions for apps easier.

Ubuntu Studio Controls is improved. This won’t make sense to Linux newcomers, but especially for those of you who tried Ubuntu in the past and maybe even got frustrated – Ubuntu Studio has done a lot of work here. Ubuntu Studio Controls and the pre-configured OS now make things work sensibly out of the box, with powerful controls for tweaking things as you need. And yeah, this was indeed sometimes not the case in the past. The trick with Linux – ironically just as on Windows and sometimes even macOS – is that different applications have competing needs for what audio has to do. Ubuntu Studio does a good job of juggling the consumer audio needs with high-performance inter-app audio and multichannel audio we need for our music stuff.

Anyway, new in this build:

  • Now includes an indicator to show whether or not Jack is running
  • Added Jack backend selections: Firewire, ALSA, or Dummy (used for testing configurations)
  • Added multiple PulseAudio bridges
  • Added convenient buttons for starting other configuration tools

That’s just a quick look; you can read the release notes:

I’m installing 19.10 (rather than LTS and backports, though I might do that on an extra machine), as I’m in a little lull between touring. VCV Rack is part of my live rig, as is SuperCollider or Pd for more experimental gigs, so you can bet I’m interested here. I’ll be sure to share how this works and provide a beginner-friendly guide.

For more on how this works:


by Peter Kirn at November 22, 2019 08:42 AM

November 19, 2019

JACK Audio Connection Kit News

Website infrastructure changes

The jackaudio Website changed its infrastructure from being self-hosted and generated by a script, to being built and hosted by GitHub.

The main reason for this was to easily make the publishing of the static files automatic. Although it is definitely possible to do this without GitHub, why waste time setting up something that already exists?

This means that changes made by a pull request and merged into master will immediately be visible on the website, without anyone having to run scripts or click more buttons.

Note that a fallback solution is in place in case something goes wrong with GitHub (which is basically going back to what we had before).

Also, the way to create posts (like these) has changed. The README for the website has more details.

by falkTX at November 19, 2019 12:00 AM

JACK2 v1.9.14 release

A new version of JACK2 has just been released.
You can grab the latest release source code at https://github.com/jackaudio/jack2/releases.

The official changelog is:

  • Fix ARM build
  • Fix mixed mode build when meta-data is enabled
  • Fix blocking DBus device reservation, so it plays nice with others (like PipeWire)
  • Use python3 for the waf build scripts

This release is mainly for the mixed mode and DBus device reservation fixes.
A few distributions need those, and since not everyone is happy taking “random” git commits, a proper release is needed.
So there you go - a new release. :)

by falkTX at November 19, 2019 12:00 AM

November 16, 2019

KXStudio News

KXStudio Monthly Report (November 2019)

Hello everyone, it is time for another monthly report in regards to the KXStudio project.

First, and most important I think, some small repository changes have been made.
I added a "KXStudio" prefix to the repository titles, so you get entries like "KXStudio Plugins" in your package manager now.
This was requested by a user, and makes a lot of sense.
The bad news is that your package manager is likely to complain about the changes, as it thinks it is a sign of trouble.
That is not the case though, as I am here just informing you of that. :)
A quick "solution" to this is to simply delete the cached apt list information, so the package manager will not have the previous repository title, like so:
sudo rm -rf /var/lib/apt/lists/*

There were a few new packages added to the repositories.
First, for the basic infrastructure, we got meson 0.51.2 and premake5. A few projects need these in order to build, so we had to have them first.
The more exciting ones are added and updated application and plugins, the changes on that are:

  • drumgizmo updated to 0.8.1
  • fluajho updated to 1.4.1
  • moony updated to 0.30.0, enabled inline display
  • patroneo updated to 1.4.1
  • vico updated to 1.0.1
  • surge added
  • dragonfly-reverb added
  • hybridreverb2 added
  • wolf-shaper added
  • wolf-spectrum added

Lastly, preparations for the next Carla release are well under way.
I was able to update and build generic Windows and Linux binaries (with Qt 5.9), and macOS is mostly working but still needs some fixing.
In the past I used to do a bunch of beta releases until the final one was declared stable.
I am going against this now, and will directly do a "Release Candidate" where no more new stuff can be added, only bug-fixing.
The next "Linux Audio release day" is January 15, so that will be the target date.

PS: Many of the new packages were imported from the LibraZik project, for which I am extremely grateful.
The surge armhf build fails at this point, to be fixed soon.

by falkTX at November 16, 2019 11:18 AM

November 15, 2019

Talk Unafraid

Nationalise Openreach?

Disclaimer: I am Chief Engineer for Gigaclear Ltd, a rural-focused fibre-to-the-home operator with a footprint in excess of 100,000 homes in the south of the UK. So I have a slight interest in this, but also know a bit about the UK market. What I’m writing here is my own thoughts, though, and doesn’t in the least bit represent company policy or direction.

Labour has recently proposed, as an election pledge, to nationalise Openreach and make them the national monopoly operator for broadband, and to give everyone in the UK free internet by 2030.

The UK telecoms market today is quite fragmented and complex, and so this is not the obvious win that it might otherwise appear to be.

In lots of European markets there’s a franchising model, and we do this in other utility markets – power being an excellent example. National Grid is a private company that runs transmission networks, and Distribution Network Operators (DNOs) like SSE, Western Power, etc run the distribution networks in regions. All are private companies with no shares held by government – but the market is heavily regulated and things like 100% coverage at reasonable cost is built in.

The ideal outcome for the UK telecoms market would clearly have been for BT (as it was then) never to have been privatised, and for the government to simply decide on a 100% fibre-to-the-home coverage model. This nearly happened, and that it didn’t is one of the great tragedies in the story of modern Britain; if it had, we’d be up at the top of the leaderboard on European FTTH coverage. As it is, we only just made it onto the leaderboard this year.

But that didn’t happen – Thatcher privatised it, and regulation was quite light-touch. The government didn’t retain majority control, and BT’s shareholders decided to sweat the asset they had, investing strategically in R&D to keep sweating it, along with some national network build-out. FTTC/VDSL2 was the last sticking plaster that made economic sense for copper after ADSL2+; LR-VDSL and friends might have given them some more time if the end of copper was still tied to performance.

As it is, enough people have been demonstrating the value of FTTH for long enough now that the focus has successfully shifted from “fast enough” to “long-term enough”. New copper technologies won’t last the next decade, and have huge reliability issues. Fibre to the home is the only long-term option to meaningfully improve performance, coverage, etc, especially in rural areas.

So how do we go about fixing the last 5%?

First, just so we’re clear, there are layers to the UK telecoms market – you have infrastructure owners who build and operate the fibre or copper. You have wholesale operators who provide managed services like Ethernet across infrastructure – people like BT Wholesale. Then you have retail operators who provide an internet connection – these are companies like BT Retail, Plusnet, TalkTalk, Zen, Andrews & Arnold, Sky, and so on. To take one example, Zen buy wholesale services from BT Wholesale to get bits from an Openreach-provided line back to their internet edge site. Sometimes Zen might go build their own network to an Openreach exchange so they effectively do the wholesale bit themselves, too, but it’s the same basic layers. We’re largely talking about the infrastructure owners below.

The issue is always that, commercially, the last 5-10% of the network – the hardest-to-reach places – will never make sense to build, because it’s really expensive to do. Gigaclear’s model and approach is entirely designed around that last 5%, so we can make it work, but it takes a long-term view to do it. The hard-to-reach is, after all, hard-to-reach.

But let’s say we just nationalise Openreach. Now Openreach, in order to reach the hardest-to-reach, will need to overbuild everyone else. That includes live state-aid funded projects. While it’s nonsense to suggest that state aid is a reason why you couldn’t buy Openreach, it is a reason why you couldn’t get Openreach to go overbuild altnets in receipt of state aid. It’d also be a huge waste of money – billions already spent would simply be spent again to achieve the same outcome. Not good for anyone.

So let’s also say you nationalise everyone else, too – buy Virgin Media, Gigaclear, KCOM, Jersey Telecom, CityFibre, B4RN, TalkTalk’s fibre bits, Hyperoptic, and every startup telecom operator that’s built fibre to the home in new build housing estates, done their own wireless ISP, or in any other way provides an access technology to end users.

Now you get to try and make a network out of that mess. That is, frankly, a recipe for catastrophe. BT and Virgin alone have incredibly different networks in topology, design, and overall approach. Throw in a dozen altnets, each of whom is innovating by doing things differently to how BT do it, and you’ve got a dozen different networks that are diametrically opposed in approach, both at a physical and logical level. You’re going to have no network, just a bunch of islands that will likely fall into internal process black holes and be expensive to operate because they won’t look like 90% of the new operator’s infrastructure (i.e. Openreach’s network) and so require special consideration or major work to make it look consistent.

A more sensible approach is that done in some European countries – introduce a heavily regulated franchising market. Carve the market up to enable effective competition in services. Don’t encourage competition on territory so much – take that out of the equation by protecting altnets from the national operator where they’re best placed to provide services, and making it clear where the national operator will go. Mandate 100% coverage within those franchise areas, and provide government support to achieve that goal (the current Universal Service Obligation model goes some way towards this). Heavier regulation of franchise operators would be required but this is already largely accounted for under Significant Market Power regulations.

Nationalising Openreach within that framework would make some sense. It’d enable some competition in the markets, which would be a good thing, and it’d ensure that there is a national operator who would go and build the networks nobody could do on even a subsidised commercial basis. That framework would also make state aid easier to provide to all operators, which would further help. Arguably, though, you don’t need to nationalise Openreach – just tighten up regulation and consider more subsidies.

This sort of approach was costed in the same report that Labour appear to be using, which Frontier Economics did for Ofcom as part of the Future Telecoms Infrastructure Review. It came out broadly equivalent in cost and outcomes.

But I do want free broadband…

So that brings us to the actual pledge which was free broadband for everyone. The for everyone bit is what we’ve just talked about.

If you’ve got that franchise model then that’s quite a nice approach to enable this sort of thing, because the government can run its own ISP – with its own internet edge, peering, etc – and simply hook up to all the franchise operators and altnets. Those operators would still charge for the service, with government footing the bill (in the case of the state operator, the government just pays itself – no money actually changes hands). The government just doesn’t pass the bill on to end-users. You’d probably put that service in as a “basic superfast access” service around 30Mbps (symmetrical if the infrastructure supports it).

This is a really good model for retail ISPs because it means that infrastructure owners can compete on price and quality (of service and delivery) but are otherwise equivalent to use and would use a unified technical layer to deliver services. The connection between ISPs and operators would still have to be managed and maintained – that backhaul link wouldn’t come from nowhere – but this can be solved. Most large ISPs already do this or buy services from Openreach et al, and this could continue.

There’d still be a place for altnets amidst franchise operators, but they’d be specialised and narrow, not targeting 100% coverage; a model where there is equal competition for network operators would be beneficial to this and help to encourage further innovation in services and delivery. You’d still get people like Hyperoptic doing tower blocks, business-focused unbundlers going after business parks with ultrafast services, and so on. By having a central clearing house for ISPs, those infrastructure projects would suddenly be able to provide services to BT Retail, Zen, TalkTalk, and so on – widening the customer base and driving all the marketing that BT Retail and others do into commercial use of the best infrastructure for the end-user and retailer. This would be a drastic shake-up of the wholesale market.

Whether or not ISPs could effectively compete with a 30Mbps free service is I think a valid concern. It might be better to drop that free service down to 10Mbps – still enough for everyone to access digital services and to enable digital inclusion, but slow enough to give heavier users a reason to pay for a service and so support the infrastructure. That, or the government would have to pay the equivalent of a higher service tier (or more subsidy) to ensure viability in the market for ISPs.

I think that – or some variant thereof – is the only practical way to have a good outcome from nationalising or part-nationalising the current telecoms market. Buying Openreach and every other network and smashing them together in the hopes of making a coherent network that would deliver good services would be mad.

What about free WiFi?

Sure, because that’s a sensible large-scale infrastructure solution. WiFi is just another bearer at some level, and you can make the argument that free internet while you’re out and about should be no different to free internet at home.

The way most “WiFi as a service” is delivered is through a “guest WiFi” type arrangement on home routers, with priority given to the customer’s traffic so you can’t sit outside on a BTWiFi-with-FON access point and stream Netflix to the detriment of the customer whose line you’re using. Unless you nationalised the ISPs too you can’t effectively see this happening.

Free WiFi in town centres, village halls, and that sort of thing is obviously a good thing, but it still works in the franchise model.

How about Singapore-on-Thames?

Well, Singapore opted to do full fibre back in 2007 and were done by about 2012 – but they are a much smaller nation with no “hard to reach” parts. Even the most difficult, remote areas of Singapore are areas any network operator would pounce on.

But they do follow a very similar model, except for the “free access” bit. The state operator (NetLink Trust) runs the physical network, but there are lots of ISPs who compete freely (Starhub, M1, Singtel, etc). They run all the active equipment in areas they want to operate in, and use NetLink’s fibre to reach the home. Competition shifts from the ability to deploy the last mile up to the service layer. This does mean you end up with much more in the way of triple/quad-play competition, though, since you need to compete on something when services are broadly equivalent.

It’s a good example of how the market can work, but it isn’t very relevant to the UK market as it stands today.

Privacy and security concerns

One other thing I’ve heard people talk about today is the concerns around having a government-run ISP, given the UK government’s record (Labour and Tory) of quite aggressively nasty interference with telecoms, indiscriminate data collection, and other things that China and others have cribbed off us and used to help justify human rights abuses.

Realistically – any ISP in the UK is subject to this already. Having the govt run an ISP does mean that – depending on how it actually gets set up – it might be easier for them to do some of this stuff without necessarily needing the legislation to compel compliance. But the message has been clear for the last 5-10 years: if you care about privacy or security, your ISP must not be a trusted party in your threat model.

So this doesn’t really change a thing – keep encrypting everything end-to-end and promote technologies that feature privacy by design.

Is it needed? Is it desirable?

Everyone should have internet access. That’s why I keep turning up to work. It’s an absolute no-brainer for productivity (which we need to fix, as a country), and estimates from BT put the value of universal broadband in the order of £80bn.

Do we need to shake up the market right now? BT are doing about 350k homes a quarter right now and are speeding up, so if you left them to their own devices they’d be done in at worst about 16-20 years. Clearly they’re aiming for 2030 or sooner anyway and are trying to scale up to that. However, that is almost all in urban areas.

Altnets and others are also making good progress and that tends to be focused on the harder-to-reach or semi-rural areas like market towns.

I think that it’s unlikely that nationalising Openreach or others and radically changing how the market works is something you’d want to do in a hurry. Moving to a better model for inter-operator competition and increasing regulation to mandate open access across all operators would clearly help the market, but it has to be done smartly.

There are other things that would help radically in deploying new networks – fixing wayleave rules is one. Major changes to help on this front have been waiting in the “when Parliament is done with Brexit” queue for over a year now.

There is still a question about how you force Openreach or enable the markets to reach the really hard to reach last mile, and that’s where that £20bn number starts looking a bit thin. While the FTIR report from Frontier Economics isn’t mad, it does make the point that reaching the really hard to reach would probably blow their £20bn estimate. I think you’d easily add another £10-20bn on top to come to a sensible number for 100% coverage in practice, given the UK market as it is.

Openreach spend £2.1bn/yr on investment in their network, and have operating costs of £2.5bn/yr. At current run-rate that means you’d be looking at ~£70bn, not £20bn, to buy, operate and build that network using Openreach in its current form. Labour have said £230m/yr – that looks a bit short, too.

(Since I wrote this, various industry people have chimed in with numbers between £50bn and £100bn, so this seems a consistent number – the £230m/yr appears to include capital discounting, so £700m+/yr looks closer)

The real challenge in doing at-scale fibre rollout, though, is in people. Education (particularly adult education and skills development) is lacking, and for the civil engineering side of things there has historically been a reliance on workforces drawn from across the continent as well as local workforces. Brexit isn’t going to make that easier, however soft it is.

We also don’t make fibre in the UK any more. I’ve stood at the base of dusty, long-abandoned fibre draw towers in England, now replaced by more modern systems in Europe to meet the growing demand there as it dwindled here. Almost every single piece of network infrastructure being built in the UK has come from Europe, and for at least a decade now, every single hair-thick strand of glass at the heart of modern networks of the UK has been drawn like honey from a preform in a factory in continental Europe. We may weave it into a British-made cable and blow that through British-made plastic piping, but fibre networking is an industry that relies heavily on close ties with Europe for both labour and goods (and services, but that’s another post).

Labour’s best move for the telecoms market, in my view, would be to increase regulation, increase subsidy to enable operators to go after the hardest-to-reach, and altogether ditch Brexit. Providing a free ISP on top of a working and functional telecoms market is pretty straightforward once you enable the current telecoms market to go after everyone.

by James Harrison at November 15, 2019 11:06 AM

November 14, 2019

drobilla.net - LAD

Cleaning Up the LV2 Extension Mess

After reading my last post, and watching a few old talks around LV2 and so on, I got to thinking about the extension mess problem I mentioned, and it occurred to me that there might be some commonality here with the "staging" or "contrib" area question as well.

This is all based on some ideas that have been bouncing around in my head for ages, but that I haven't really developed and certainly not written down, so I'm going to try and sketch out a proposal for how to handle these things without breaking anything.

Concretely, there are two problems here: one is that the spec is just a mess. For example, the Data Access and Instance Access extensions are really just parts of the same thing and should live together, nobody cares about Morph and it's not in a state that really belongs in the "recommended standard" list (sorry, flagrant abuse of power on my part there), and so on.

The other problem is that there are sometimes contributions which solve a problem, and are a reasonable enough pragmatic step, but also not really up to par. Maybe they aren't portable, aren't defined well enough, could do more harm than good if they're presented as recommendations, and so on. People, for whatever reason, want them "in LV2". Yet, nobody has the time to spend to develop them into a more proper specification yet, and nobody is happy when things don't get merged.

It seems there is a common factor to these problems, and it's moving things without breaking anything. To clean up the current mess, we can move extensions to the contrib area. When a previously half-baked contribution is developed further, we can move it from the contrib area. This is an obvious coarse-grained use case; I think there is also a case for finer-grained URI migration, but I'll focus on the easy and most useful case for now.

How might we do this? Though moving instance-access to contrib is not a goal, it's about as simple as an extension gets, so I'll pretend we want to do that for the sake of a simple example. At the very least, it will be a nice little fantasy for me to pretend that the curse of crappy plugin UIs that mess with DSP guts has finally been vanquished for good :) This is just about the mechanism, what we should actually do to clean things up is a question for another time.

So, what's instance-access? It's a handful of URIs, and a feature. The feature is extremely simple, the payload is just some pointer. Can those URIs be moved without breaking anything? For at least this simple case, I think so:

I can't think of a reason this wouldn't work, and it doesn't even require any host changes. It's a bit bloated, but not in a way that matters, and would need a significant (but not too bad) amount of code specifically to deal with this in lilv, but such is my lot in life.

In the more general case, there is also the issue of URID mappings. Let's pretend that http://lv2plug.in/ns/ext/instance-access is mapped to a URID both by the host and the plugin, and that URID is sent between them. Though this isn't really an intended use-case for this particular extension, it's a perfectly valid thing to do:

  • The host URID-map maps both the old and new URIs to the same URID.

... that's it, actually. Regardless of which "version" either host or plugin knows about, the URID is identical. This requires hosts to actually implement something though, or for a URI map to be added to lilv, so it's not as easy. It can't just be done in LV2 and would take some time to get established.
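To illustrate the idea (this is not code from the post, and the "new" URI in the alias table is purely hypothetical), a host could wrap its urid-map feature so that a legacy URI is canonicalised before being mapped:

#include <string.h>
#include <lv2/lv2plug.in/ns/ext/urid/urid.h>

/* Hypothetical alias table: legacy URI -> canonical (moved) URI.
   The real table would be written down in the specs themselves. */
static const char* const aliases[][2] = {
    { "http://lv2plug.in/ns/ext/instance-access",
      "http://example.org/ns/new-home#instance-access" },
};

typedef struct {
    LV2_URID_Map* real;  /* the host's underlying map */
} AliasMap;

static LV2_URID
alias_map_uri(LV2_URID_Map_Handle handle, const char* uri)
{
    AliasMap* am = (AliasMap*)handle;
    for (size_t i = 0; i < sizeof(aliases) / sizeof(aliases[0]); ++i) {
        if (!strcmp(uri, aliases[i][0])) {
            uri = aliases[i][1];  /* canonicalise before mapping */
            break;
        }
    }
    /* Old and new names collapse to one URID, so host and plugin agree
       no matter which "version" each of them knows about. */
    return am->real->map(am->real->handle, uri);
}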

There is one remaining snag: extension_data. This one is a bit trickier, because we need to assume the host uses lilv_instance_get_extension_data, which is just a trivial wrapper, and probably not used by everyone. That's an easy enough fix to make, though. Then, lilv just needs to call the plugin method for the new URI, return that if it isn't NULL, and fall back to calling it with the old URI.
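A rough sketch of that fallback (again not from the post; both URI parameters are placeholders, and the real lookup would live behind lilv_instance_get_extension_data):

#include <stddef.h>
#include <lv2/lv2plug.in/ns/lv2core/lv2.h>

/* Ask the plugin for an interface under its new URI first, then fall
   back to the legacy URI if the plugin only knows the old name. */
static const void*
extension_data_compat(const LV2_Descriptor* desc,
                      const char* new_uri,
                      const char* old_uri)
{
    if (!desc->extension_data) {
        return NULL;
    }
    const void* data = desc->extension_data(new_uri);
    return data ? data : desc->extension_data(old_uri);
}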

All of this requires a map between old and new to exist, of course, but this would be written down in the specs themselves and it's easy enough to load such a thing inside lilv.

I'm sure there are other places where URIs as strings are used in the API that would need thinking about, and I'll have to scan through the spec to see, but I suspect the above is at least 90% of what matters.

So... am I missing something? Do send me (or lv2-dev) an email if so, but now that I write it down this seems more viable than I assumed it would be. There will definitely be corner cases, since plugins and hosts can use these strings for anything everywhere, but as far as the actual interface is concerned it seems possible to make this happen without too much pain. What could we do with this?

  • Merge data-access and instance-access

  • Merge buf-size and resize-port

  • Put all the "official" extensions in the same namespace ("directory"), and get rid of the annoying inconsistency of ext and extensions and so on (which doesn't really matter, except in the soft sense that ugliness matters). The header includes already look like this and it's so much nicer.

  • We could put the deprecated extensions in a special namespace so they really stand out, but this doesn't seem to really matter (though it should be done visually on the spec page regardless).

  • Move presets into lv2core itself? This isn't an extension-level move like the above, but why not? One less prefix to bother with, and in retrospect, a plugin spec without any kind of presets at all is pretty silly. Perhaps the same for port-groups.

  • Do... something with port-properties, and maybe parameters. Let's say combine them into a "control" extension that generally has all the definition of control related stuff.

  • Move morph to contrib.

  • Maybe move dyn-manifest to contrib. This is a bit more contentious, but it's a pretty ugly solution, and the caveats of using it currently aren't very clear.

That would leave a specification list like this (assuming parameters and port-properties move to "control"):

  • Atom: A generic value container and several data types
  • Buf Size: Access to, and restrictions on, block and buffer sizes
  • Instance Access: Provides access to a plugin instance
  • Log: A feature for writing log messages
  • LV2: An open and extensible audio plugin standard
  • MIDI: A normalised definition of raw MIDI
  • Options: Instantiation time options
  • Control: Common properties and parameters for audio processing
  • Patch: Messages for accessing and manipulating properties with events
  • State: An interface for LV2 plugins to save and restore state
  • Time: Properties for describing time
  • UI: LV2 plugin UIs of any type
  • Units: Meaningful units for values
  • URID: Features for mapping URIs to and from integers
  • Worker: Support for doing non-realtime work in plugins

Not everything left is immaculate, and from a user-facing documentation point of view other things like putting the data-only vocabularies in a separate section might help even more, but I think this would be a big improvement. More importantly, it would of course give us an attic to put slightly more sketchy things. Looking at LV2 as a Specification™, that feels wrong, but looking at it as a project, it seems really necessary.

by drobilla at November 14, 2019 04:05 AM

November 12, 2019

Scores of Beauty

Leopold Mozart: Violin School — (2) LilyPond Structure and Organization

In the previous post I presented the project of a digital edition of Leopold Mozart’s Violin School of 1756, for which I was allowed to create the music examples with LilyPond. Also, I mentioned that I faced many challenges along the way and promised to write about the LilyPond implementation in more detail in another post. Today I will describe some aspects of the overall project organization, along with some additions I applied to openLilyLib. A third post will describe a few selected engraving challenges the project faced, and a final part will deal with some of the additions I implemented for Frescobaldi along the way, which offer potential for further development of that editor.

“Pure Content”

This was the edition of a book, and my part in it was to provide the >600 included music examples. An essential part of such a challenge is to ensure consistency in appearance and behaviour, for which a well-structured and organized code base is required. But a specific extra challenge in this case was posed by the idea of making the music input encoding directly accessible as part of the edition. Therefore the input code had to be clean, not only pertaining to a readable code layout but also to strict separation of content and presentation; there should not be any presentation-layer code in the main input file for each example. Well, actually that wasn’t requested as a hard deliverable, but I felt strongly about the issue because it is one of the central claims about working with LilyPond I have been making over and over again.

When viewing the edition, the TEI XML source for the (whole) edition can be downloaded from a button in the left hand navigation column, and then viewed in the browser or an editor. Within that file each music example is included in a custom <lilypond> element like this:

<lilypond resp="#uliska">
\relative {
  \exampleNumber "13."
  \criticalRemark "Im Druck 3/8. Die ersten drei Takte sind dort in Sechzehnteln notiert.
                   Möglich wäre auch, dass das Beispiel bewusst im schnelleren Metrum
                   gesetzt wurde, dann wäre allerdings der letzte Takt in drei
                   Achtelwerte zu korrigieren"
  \time 3/4
  \key c \major
  \criticalRemark \with {
    message = "Im Druck Sechzehntelbalken"
    item = Beam
  } {
    a''8 \strich [ ( g f e d c ]

    |

    d8 \strich [ c b a g f ]

    |

    g8
    -\criticalRemark "Strich fehlt im Druck."
    \strich [ f e d c b ) ]
  }

  |

  c4 r r

  \doubleBar
}
</lilypond>

As you can see everything in that LilyPond code denotes what something is and not what it looks like or how it is achieved. In a way one could just read that file and know what the example is showing, without any information about the appearance getting in the way. The bold blue commands are LilyPond’s built-in keywords while regular blue points to custom definitions. In this case \criticalRemark refers to a command provided by the scholarLY package while the others are implemented in the project repository. (Right now I am working on factoring out the project’s LilyPond library to a separate repository to make it publicly available. In the next post—when discussing some items of the project library—I will post a link to that repository. The actual content files will also be made public at some point, but that repository will need some more cleaning up before we can do that.)

oll-core.load

In order to get to the point of writing input files like the one listed above, two separate challenges had to be met. Of course there is the functionality itself: functions handling input like, say, \exampleNumber “13.” had to be written. But in addition to that, I also needed a convenient infrastructure to organize that functionality. Since I knew I’d be dealing with several hundred files I didn’t want to load a full code library into every little score; rather, a more modular system was desirable.

To achieve this I created a few functions loading “something”:

  • \loadInclude, a function that implicitly loads an include file if it exists on disk
  • \loadTool, a nice syntax to include tool files, with optional configuration
  • \loadTemplate, a system to load templates for use with the current file’s music variables

These functions could be regarded as “syntactic sugar”, that is: nice-to-have but not essential – but in the course of a large project the small amount of convenience they provide for the individual example can prove invaluable for the work as a whole.

To maximize the usefulness of this effort, I created oll-core.load—a module to the oll-core openLilyLib package—and implemented the functions in that context. openLilyLib is an extension infrastructure for LilyPond with a two-fold approach: On the surface it allows the development (and easy use) of packages providing high-level support for specific problem areas, for example “scholarly annotations”, “bezier curves”, or “notation fonts”. But at the same time it can also be used directly to make use of the built-in functionality that oll-core implements to perform its own duties. Among others, this includes option handling, messaging, handling other packages, or filename handling (a module similar to Python’s os.path).

In order to use openLilyLib, oll-core has to be included the regular LilyPond way, after which packages and modules can be loaded with the custom commands \loadPackage and \loadModule:

\include "oll-core/package.ily"
\loadModule oll-core.load

(Note: while openLilyLib is substantially under-documented, there is some information available on the Wiki for oll-core‘s Github repository.)

\loadInclude

The first thing to implement was a way of “markup-less” file inclusion. It is generally a good idea to move everything related to custom functionality or appearance to separate files, for clarity and for maintainability. This is even more true in a project where the main .ly files are explicitly expected to be aseptically pure. Therefore I implemented a solution that includes a certain file related to the compiled main file – if it exists on disk.

For this oll-core.load provides the function \loadInclude which takes a format string as its argument. It uses the current main file’s output basename to format that string and tries to include the resulting filename. Therefore it was enough to add the following statement to the file init-edition.ily:

\loadInclude "~a-include.ily"

Now if the file 1756_123_4.ly is compiled and an include file 1756_123_4-include.ily exists, it will be loaded automatically. I’ll return to what went into these include files at the end of this post, after the other basic tools have been described.

For those interested in the technical details, the function is implemented here while the underlying function immediate-include is defined in the innermost oll-core internals in file-handling.scm.

\loadTool

It quickly became clear to me that the project would require a lot of commands for specific functionality, and these would be used more than once but still in a limited number of examples. I didn’t want to throw them together in one big library that is included at the top of each input file. I felt it was improper to load the whole library into each single example file when most of the library is only used occasionally. But, while this might not even have a noticeable impact on the performance, I also wanted something more obviously modular for the purpose of documentation. The example files should state explicitly which type of extra functionality they require.

The solution to this idea is the oll-core.load.tools submodule and the function \loadTool. This is related to the \loadInclude command but offers more options and additional functionality.

The command takes a “tool name” as its mandatory argument and includes a corresponding file, taking a root directory into account that has been configured before:

\loadModule oll-core.load.tools
\setOption oll-core.load.tools.directory "/path/to/my/toolbox"
\loadTool mensural-noteheads

This will load the tool “mensural-noteheads.ily” from the directory specified in the option. In the project it was slightly more complex because I used some more oll-core functionality to automatically determine the project’s root directory (in order to make the repository work in arbitrary locations):

\setOption oll-core.load.tools.directory #(os-path-join (append (this-parent) '(toolbox)))

But the function doesn’t stop there. It accepts an optional \with { } block that can be used to pass options to the tool. The options are automatically stored in a parser variable and can be accessed while loading the tool file:

\loadTool \with { right-margin = 7 } score-like-alignment

This loads the tool score-like-alignment, passing it a right-margin option of 7 (interpreted as cm). I used this for the quite common case (a search turns up 186 instances) that multiple examples are to be typeset with equal width for a nice alignment on the page. While this would have been possible by directly setting the paper width in the include file, using the \loadTool approach produces significantly cleaner and more consistent code. This tool is actually a pretty simple one (except for the use of the tool option), but of course, tools can be as complex as any Scheme-based LilyPond library can get. Some of that will be part of the next post in this series.

\paper {
  right-margin = #(* (toolOption 'right-margin 1) cm)
  ragged-last = ##f
}

(The toolOption function looks up the parser variable right-margin and uses it, or the given default value.)

\loadTemplate

The vast majority of examples in the Violin School are simple in their construction—single-voice snippets, each of which can reasonably be represented as a bare music expression. For the remaining examples that needed more complex set-up in a \score expression, I needed to ensure consistency, so I really did not want to define these score blocks within the individual examples. My solution was the oll-core.load.templates submodule with the \loadTemplate function, which is closely related to \loadTool, but with some specific tweaks. I hope that, once my horizons expand beyond the needs of the present, I will discover that this can be generalized even more to make it useful for other situations, but it may also turn out to be basically tailored to the project at hand. Time will tell …

Basically \loadTemplate works like \loadTool, taking a template name and optional options, and looking for the file relative to a given root path. A simple template two-voices might look like

\score {
  <<
    \new Staff <<
      \new Voice = "one" {
        \voiceOne
        \musicOrEmpty one
      }
      \new Voice = "two" {
        \voiceTwo
        \musicOrEmpty two
      }
    >>
    \new FiguredBass = "figures" \musicOrEmpty bassFigures
  >>
}

A template will define the music variables it expects to be populated in advance, i.e. before the template is loaded, here one, two, and bassFigures. However, the module provides the function \musicOrEmpty that will look up a music variable of the given name or return an empty music expression as a fallback. For example, there are very few examples using bass figures, so in most cases the corresponding FiguredBass context was simply given an empty expression.

I did not make use of the template options machinery in this project, but in other projects I used them to make the instrument name configurable. There are many things that could be implemented, for example passing a list of music variable names or context names, specifying a notation font or whatever seems useful.

You won’t be able to view the use of this in the edition’s XML source because it has been stripped from the exported input files, but a minimal set-up for using a template would look like this:

one = \relative {
  c''
}

two = \relative {
  c'
}

\loadTemplate two-voices

The Include File

I will finish off this post with a few remarks about the typical contents of the include files I created in this edition. This may give a good overview of the amount of manual “post-processing” that was involved in the work. (Hint: surprisingly little; in the vast majority of cases, bare entry of the content was all I needed to do. Which of course is one of the things I love about LilyPond.)

The include file for the example shown at the top of this post is really simple, it just loads a tool and applies two “mods”, in this case manually specifying the stem direction:

\loadTool example-number

\mod 2 0/4 \stemDown
\mod 3 0/4 \stemUp

The example-number tool is the unit where \exampleNumber is defined, so the input files reveal explicitly that this special functionality is used. The \mod function is a wrapper around edition-engraver‘s commands, and I’ll go into more detail about it in the next post.

Two other types of content for the include files are \layout { } or \paper { } blocks to tweak the global appearance of the example, e.g. to specify vertical distances, apply global overrides, or set a fixed number of systems (e.g. force the example on a single system). In many cases an issue arose with tweaks that proved to be needed regularly, such as forcing an example onto a single system. In these cases it would make sense to wrap them into a "tool", even if that tool file would consist only of a paper block with a single assignment. However, in a project of that dimension I openly admit that I didn't always manage to be as consistent as I might have wanted, especially since any such change would have required propagating the modification to the existing code base. On the other hand, one of the niceties of working with tools like LilyPond in a versioned context is that it is rather easy to make such changes even now, without running substantial risks of messing anything up. So I may in the future gradually tidy up the library and the code base.

This is the example I’ve been using throughout this post. \exampleNumber is rendered as the number above the clef; also note the colored annotations (in their “draft” color, which was changed for the publication simply by setting a single option in one place):

(Click to enlarge)

by Urs Liska at November 12, 2019 11:13 PM

Leopold Mozart: Violin School — (1) The Project

It has been ages since I wrote my last post in this blog – which feels strange given the more-than-weekly activity of the first years. It’s just that I couldn’t spend enough time working on and with LilyPond itself anymore, let alone writing about it. However, right now the publication of a significant work serves as a trigger to finally get back to blogging: a new digital edition of Leopold Mozart’s famous violin school of 1756, which has just been published by the Digital Mozart Edition at https://dme.mozarteum.at/digital-editions/violinschule. I was commissioned to create the music examples for the edition with LilyPond, which is a great acknowledgment of LilyPond’s capabilities and of course something I’m also pretty proud of personally. Besides, it had a few welcome spin-offs in the form of improvements to openLilyLib packages and the Frescobaldi editor, notably a brand-new Extension API and a multi-process job queue. I will split the report about this project into four posts, and this first installment is dedicated to the overall outline of the project.

Leopold Mozart: Versuch einer gründlichen Violinschule (1756)

Leopold Mozart, father of Wolfgang Amadeus Mozart and perhaps the best music teacher in Europe at the time, published an influential book in 1756 (coincidentally the year of his famous son’s birth): an “attempt at a thorough violin school” (Wikipedia (en), Wikipedia (de)).

The Digital Mozart Edition is a long-running effort to make all of (W. A.) Mozart’s works freely available to the public in a digital online edition. The first step was to “merely” make the digitized text of the »Neue Mozart-Ausgabe« accessible in a web browser, but the ultimate goal is to also provide interactive access based on digitally encoded scores, using the MEI encoding standard and the Verovio rendering engine.

As a side project, texts are being edited and published in online editions, and Leopold Mozart’s violin school was chosen to be part of that effort. With the current release the initial stage is complete, but more is to come in the form of renderings of later and translated editions of the work.

Considerations

The Violin School obviously is not sheet music but a textbook with interspersed music examples. Therefore, the premise for the online edition was a critical digital edition of the text (but not necessarily of the music) encoded in TEI. However, it was also a condition that the music be encoded in a searchable text form that could be made available within the edition.

This ruled out music examples created with one of the usual graphical notation applications. The current standard for digital editions would be to encode the music in MEI and have it rendered with Verovio, a C++/JavaScript library that can render music directly in the browser. However, since the music was not intended to be critically edited, encoding it in MEI was considered too much overhead, and alternative solutions were evaluated.

That’s where I came into play: because LilyPond’s export to MusicXML is known to be far from stable and feature-complete, I was originally asked for my opinion on whether LilyPond’s development had by now matured enough to support the project. The idea was to use LilyPond’s relatively straightforward language as the “input channel” for the edition but to actually handle the music in converted form. We did some initial tests, and I thought it might be possible to reach the goal by improving Frescobaldi’s MusicXML export along the way, but it quickly became evident that we would have to expect tons of cases where such fixes and additions would be required. Moreover, I felt wary about the semantic clarity of the result of such an export, especially since even a preliminary browse through the book gave me the impression that there would be quite a lot of non-standard notation to deal with.

Thus it seemed clear that using LilyPond as the input language wouldn’t work out, and I had already written the project off – which was a pity because it would have been great publicity for LilyPond, because I love that kind of challenge, and of course because I always need commissions …

Although I don’t recall the exact order of events, at some point I pointed out the caveat that – even with an MEI or MusicXML encoding in place – the issue of rendering the non-standard notation still wouldn’t be solved, and I didn’t think there was any tool out there with all the required characteristics. Specifically Verovio, although it is an extraordinary and exciting tool, is far from being as feature-complete and configurable as LilyPond; it doesn’t yet fully cover standard notation, let alone the needs of a project like this.

So, to cut a long story short, it was agreed that I would produce all the music examples with LilyPond, that the resulting images would be integrated into the web pages as PNG or SVG files, and that the underlying LilyPond input code would be sufficiently “searchable” to fulfill the original requirements. The result has now been released, 18 months after the initial contact!

Challenges

The project proved to include formidable challenges of all sorts, but in sum I must say it was a deeply satisfying effort with lots of learning experiences and opportunities to improve my (and the public’s) toolkit. It turned out to be advantageous that we had agreed upon a fixed-price contract, because this allowed me to put additional work into improving openLilyLib or Frescobaldi without having to worry about a strict accounting of hours. At the end of the day I’m sure I put in more work than I was paid for, but thankfully that work went into open source development, and the overall amount was such that it was OK for me.

The first challenge was to produce input files that are totally clean, with only semantic markup and no “rendering hints” interspersed. To summarize, my approach was a combination of an implicit include file, a “toolbox” implementation with substantial syntactic sugar, and – not least – the use of the edition-engraver openLilyLib package. My estimate is that I succeeded >99% of the time, but unfortunately there were a few instances where I was forced to add some non-semantic markup to the input files – sometimes because of inherent limitations, and probably sometimes because I just couldn’t find the magic spell that would have helped me out, despite all the generous assistance from the LilyPond community on the lilypond-user mailing list. However, I am more than pleased with how well LilyPond served me through to the end.

The second challenge was a substantial amount of non-standard notation, mostly in the area of textual elements (stemming from the fact that the borders between music examples and the surrounding text are often blurred). This was the type of problem that gave me the most headaches: the challenges were non-standard, they ran against LilyPond’s way of handling things (yes, there are things that LilyPond could – and should – do better), and not least they collided with the challenge mentioned first: there is hardly anything you can’t achieve with LilyPond, but it can be weirdly difficult when you have to do it without leaving marks in the input file.

A third challenge was self-inflicted and had not even been requested initially. But since we do have the possibility to annotate music with critical remarks – thanks to my scholarLY openLilyLib package – I suggested making use of it. So now editorial decisions are annotated in the input files and, so far, visualized by printing the affected element in grey. We intend to exploit the annotations in a more interesting way at some point in the future.

A totally different category of challenge was the management of the huge repertoire of more than 600 music examples. Creating the content-less starter files, navigating the directory, and keeping track of progress would have been absolutely unmanageable using traditional tools like file explorers or Frescobaldi’s “File Open” dialog. Also, (re)building the resulting image files would have been a time-consuming process using standard tools. Fortunately I found convenient ways to handle these challenges within Frescobaldi, but that’s a story for another post in this little series.

To give you at least some visual teaser in this post, I’ll finish with one of the more challenging examples. It looks pretty simple, but actually getting it right involved a lot of LilyPond-whispering (for which I’m very grateful to the mailing list community). Nevertheless, the image (as well as the input file used to achieve it) gives a pretty good impression of the degree to which I was able to keep the input files focused on “content” over “presentation”:

Example 1756_065_1 (Click to see screenshot from the edition)

upper = {
  \startCenteredHeading "Octav."
  \centered c''1
  \stopCenteredHeading
  \doubleBar

  \startMeasureBracket "Non."
  \centered des''1
  \centered d''!
  \centered dis''
  \stopMeasureBracket
  \doubleBar

  \startMeasureBracket "Decime."
  \centered es''1
  \centered e''!
  \stopMeasureBracket
  \doubleBar

  \originalPageBreak

  \startMeasureBracket "Undecime."
  \centered f''1
  \centered f''
  \centered fis''
  \stopMeasureBracket
  \doubleBar

  \startMeasureBracket "Duodecime."
  \centered g''1
  \centered g''
  \centered gis''
  \stopMeasureBracket
  \bar "||"
}

lower = {
  \centered c'1

  \annotateCenteredMusic \with {
    above = "Kleinere."
  } c'1
  \annotateCenteredMusic \with {
    above = "Grössere."
  } c'1
  \annotateCenteredMusic \with {
    above = "Vergrösserte."
  } c'1

  \annotateCenteredMusic \with {
    above = "Kleinere."
  } c'1
  \annotateCenteredMusic \with {
    above = "Grössere."
  } c'1

  \annotateCenteredMusic \with {
    above = "Verkleinerte."
  } cis'1
  \annotateCenteredMusic \with {
    above = "Reine."
  } c'!1
  \annotateCenteredMusic \with {
    above = "Grosse."
  } c'1

  \annotateCenteredMusic \with {
    above = "Falsche."
  } cis'1
  \annotateCenteredMusic \with {
    above = "Reine."
  } c'!1
  \annotateCenteredMusic \with {
    above = "Vergrösserte."
  } c'1
}

\loadTemplate two-systems

by Urs Liska at November 12, 2019 04:23 PM

Audio – Stefan Westerfeld's blog

liquidsfz 0.1.0

Years ago, I implemented SF2 (“SoundFont”) support for Beast. This was fairly easy: FluidSynth provides everything needed to play back SF2 files in an easy-to-use library, which is available under the LGPL 2.1+. Since integrating FluidSynth is really easy, many other projects like LMMS, Ardour, MusE, MuseScore, QSynth, … support SF2 via FluidSynth.

For SFZ, I didn’t find anything as easy to use as FluidSynth. Some projects ship their own implementation (MuseScore has Zerberus, Carla has its own version of SFZero). Both are effectively GPL licensed, and neither could easily be integrated into Beast. Zerberus depends on the Qt toolkit. SFZero originally used JUCE and now uses a stripped-down version of JUCE called water, which is Carla-only (and should not be used in other projects).

LinuxSampler is also GPL, with one additional restriction that disallows usage in a proprietary context without permission. I am not a lawyer, but I think this means it is no longer GPL, so you cannot combine this code with other GPL software. A small list of reasons why Carla no longer uses LinuxSampler can be found here: https://kx.studio/News/?action=view&url=carla-20-rc1-is-here

In any case, for Beast we want to keep our core libraries LGPL, which none of the projects I mentioned would allow. So liquidsfz is my attempt to provide an easy-to-integrate SFZ player that can be used in Beast and other projects, and I am releasing the very first version, 0.1.0, today: https://github.com/swesterfeld/liquidsfz#releases

This first release should be usable, although only the most important SFZ opcodes are covered so far.

by stw at November 12, 2019 11:05 AM

drobilla.net - LAD

LV2: The good, bad, and ugly

It occurred to me that I haven't really been documenting what I've been up to, a lot of which is behind the scenes in non-release branches, so I thought I would write a post about the general action around LV2 lately. I've also been asked several times about what the long-term strategy for LV2 is, if there should be an "LV3", whether LV* can start to really gain traction as a competitor to the big proprietary formats, and so on.

So, here it is, a huge brain dump on what's good, what's bad, what's ugly, and what I think should be done about it.

The Good

LV2 is different from other plugin standards in several ways. This is not always a good thing (which we'll get to shortly), but there are some things that have proven to be very good ideas, even if the execution was not always ideal:

  • Openness: Obvious, but worth mentioning anyway.

  • Extensibility: The general idea of building an extensible core, so that plugin and host authors can add functionality in a controlled way, is a great one. It allows developers to prototype new functionality to eventually be standardised, make use of additional functionality if it is available, and so on. Some problems, like ensuring things are documented and that implementations agree, get more difficult when anybody can add anything, but that is a price worth paying for not having a standardisation process block getting things done.

  • DSP and UI split: Also obvious in my opinion, but certainly not a universal thing. There are a lot of bad things to be said about the actual state of GUI support, but keeping them separate, with the option to have a pointer to the guts of a plugin instance is the right approach. Having a well-defined way to communicate between GUI and DSP makes it easy to do the right thing. Multi-threaded realtime programming is hard, and plugins dropping out because of GUI activity and so on should not be a thing.

  • Standard implementation between host and plugins (for some things): This is a huge win in reducing the burden on both host and plugin authors, and allows both to rely on certain things being done right. It also provides a place where stronger validation and so on can happen, which we should exploit more. The war between host and plugin authors, each trying to stay compatible with the arbitrary behaviour of countless implementations, is largely why everyone hates plugins. This doesn't have to be a thing. We haven't actually done well in that area with LV2 (quite the opposite), but having a place to put that code is the right move.

  • Transparent communication: Though you technically can do just about anything with LV2, a "proper" plugin has a transparent control interface that works in a standard way. This gets you all kinds of things for free, like human-readable debug tracing, network transparency, and so on, and also encourages design that's better from a user point of view, like having good host controls for parameters, automation, accessibility, and so on. This is somewhat related to having a DSP and UI split. The benefits of having plugins be controlled in a standard way are endless, as are the awful things that happen when GUIs and audio code aren't forcefully kept at arm's reach.

The Bad

Now to the more interesting part. There are some nice ideas in LV2, and I think an idealised and cleaned up version of it that adheres to the main underlying design principles would be beautiful. In reality, however, LV2 is an atrocious mess in all kinds of ways:

  • Control ports: LV2 uses LADSPA-style control ports, which contain a single float. This is a tricky one to put in the "bad" category, since pragmatically grafting extensibility onto LADSPA is why LV2 has been moderately successful. It had to be that way: we needed working plugins, not a tedious standardisation process that goes nowhere (there's already GMPI for that). That said, control ports are incredibly limiting and that they still exist is an endless source of trouble: they are static, they require buffer splitting for sample accuracy, they can only convey a float, there is no hook to detect changes and do smoothing, and so on. A control protocol (something like MIDI except... good) is the right way to control plugins. Notes and controls and all the rest should be in the same stream, synchronous with audio. It's hard to migrate to such a reality, but there should be one consistent way to control a plugin, and it should be a stream of sample-accurate events. No methods, no threading and ABI nightmares, no ambiguity, just a nice synchronous stream of inputs, and a single run function that reads those and produces outputs.

  • The connect_port method: Another LADSPA-ism. It means that before the host can use any signal, it must call a method on the plugin to connect it first. This is an awful design: it forces both the host and the plugin to maintain more state than necessary, and it's slow. I have written several plugins that would be completely stateless (essentially pure functions), except the spec requires the plugin to maintain all these pointers and implement methods to mutate them. Inputs and outputs should just be passed to the run method, so all of that goes away and everything is nicely scoped (see the sketch after this list). As far as the basics of the C API are concerned, this is, in my opinion, the most egregious mistake.

  • Turtle: Everyone loves to hate Turtle. It's mostly a nice syntax (even if the namespace prefix limitations are very annoying), but it's weird. Worse, people might search for "RDF" and find the confusing W3C trash-fire there. The underlying ideas are good, but that three-letter acronym should be absolutely eliminated from the spec and documentation. The good thing in LV2 is really just "property-centric design", which can be explained in a simple way anyone can understand. It's more or less just "JSON with URIs" anyway, and nobody ever got fired for using JSON. Speaking of which, syntax-wise, JSON-LD is probably the way to go today. JSON is annoying in different ways, but it would allow LV2 data files to look completely typical to almost any developer, while still having the same meaning and the same under-the-hood advantages of a real data model. This could actually be done without breaking anything in practice, but JSON-LD is much harder to implement, so I'm not quite there yet. It would also be some work to write the vocabulary (vocabularies?), but it's doable.

  • Lack of quality control: Again a consequence of pragmatic evolution, but the lack of standard quality control has become a real problem. There has been progress here, with things like lv2lint and lv2_validate, but it's not good enough. The biggest problem with plugins (and plugin hosts) in general is that most of them are just broken. There should be a standard test suite for both that is as strict as possible, and its use should be strongly "encouraged" at the very least. The above-mentioned standard code sitting between hosts and plugins could be useful here; for example, hosts could simply refuse to load non-conforming plugins outright.

  • Extension spam: The "standard" extensions are not all very good, or widely supported. They also aren't broken down and organized especially well in some cases. We are at least somewhat stuck with this for compatibility, but it makes things confusing. There are many reasons for this, but in general I think a better thought-out standardisation process, and a "sort of standard" staging ground to put contributions that some implementations agree on but aren't ideal or quite at "recommended standard" yet would help. I'm still not sure exactly how to do this, there's no best practice for such things out there that's easy to steal, but with the benefit of hindsight I think we could do much better.

  • Library spam: The standard host implementation is spread across quite a few libraries. This is a mostly good thing, in that they have distinct purposes, different dependencies, and so on, but in practice it's annoying for packagers, or anyone who wants to vendor them. I think the best approach here is to combine them into a meta-package or "SDK", so the libraries can still be properly split but without the maintenance burden. I am working towards this with "lv2kit". It's currently hard for outsiders to even figure out what they need; a one-stop "all the LV2 things" in a single package would help immensely, especially for people outside of the Linux world (where distributions package everything anyway, so nobody really cares).

  • C++ and other language bindings: Plugin interfaces more or less have to be in C. However, outside of POSIXland, nobody wants to actually write C. Virtually the entire audio industry uses C++. Good bindings are important. Python is also nice for some things. Rust would be great, and so on.

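To make the connect_port complaint concrete, here is a rough C sketch. It is not taken from any real plugin: plain void* stands in for LV2_Handle so the snippet stays self-contained, and the "stateless" variant at the end is purely hypothetical, not part of any LV2 specification. Even a trivial gain plugin has to carry port pointers as instance state under the current API, whereas a run function that simply received its buffers could have no state at all:

#include <stdint.h>

/* Today (LADSPA/LV2 style): the host hands over each buffer separately via
 * connect_port(), so the plugin must store the pointers and read them back
 * in run(). */
typedef struct {
  const float* input;   /* set by connect_port(instance, 0, ...) */
  float*       output;  /* set by connect_port(instance, 1, ...) */
  const float* gain;    /* set by connect_port(instance, 2, ...) */
} Gain;

static void connect_port(void* instance, uint32_t port, void* data)
{
  Gain* self = (Gain*)instance;
  switch (port) {
  case 0: self->input  = (const float*)data; break;
  case 1: self->output = (float*)data;       break;
  case 2: self->gain   = (const float*)data; break;
  }
}

static void run(void* instance, uint32_t n_samples)
{
  Gain* self = (Gain*)instance;
  for (uint32_t i = 0; i < n_samples; ++i) {
    self->output[i] = self->input[i] * *self->gain;
  }
}

/* Hypothetical alternative (NOT part of LV2): pass the buffers to run()
 * directly, and a plugin like this needs no instance state whatsoever. */
static void run_stateless(const float* in, float* out, float gain,
                          uint32_t n_samples)
{
  for (uint32_t i = 0; i < n_samples; ++i) {
    out[i] = in[i] * gain;
  }
}
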
The Ugly

These are things that are just... well, ugly. Not really "bad" in concrete ways that matter much, but make life unpleasant all the same.

  • Extensibility only through the URI-based mechanism: In general, extensibility is good. The host can pass whatever features, and plugins can publish whatever interfaces, and everything is discoverable and degrades gracefully and so on. It works. The downside is that there's some syntactic overhead to that which can be annoying. We should have put sizes or versions in structs so they were also extensible in the classical way. For example, the connect_port problem mentioned above could be fixed by adding a new run method, but we can't literally add a new run method to LV2_Descriptor. We would have to make a separate interface, and have the host access it with extension_data, and so on, which makes things ugly. Maybe this is for the best, but ugliness matters. In general there are a few places where we could have used more typical C patterns. Weirdness matters too.

  • Extension organization: The list of specifications is a complete mess. It annoys me so much. I am not really sure what to do about this: in some cases, an extension is a clearly separate thing, and having it be essentially a separate spec is great. In other cases, we've ended up with vaguely related grab-bags of things for lack of anywhere else to put them. I sometimes wonder if the KISS approach of just having one big namespace would have been the right way to go. It would mean fewer prefixes everywhere, at the very least. Maybe we could use some other way of grouping things where it makes sense?

  • Static data: This is a tough one. One of the design principles of LV2 is that hosts don't need to load and run any code to just discover plugins, and information about them. This is great. However, whenever the need for something more dynamic comes along (dynamic ports, say), we don't have any great way to deal with it, because the way everything is described is inherently static. Going fully dynamic doesn't feel great either. I think the solution here is to take advantage of the fact that the data files are really just a syntax and the same data can be expressed in other ways. We already have all the fundamental bits here, Atoms are essentially "realtime-ready RDF" and can be round-tripped to Turtle without loss. My grand, if vague, vision here is that everything could just be the same conceptually, and the source of it be made irrelevant and hidden behind a single API. For example, a data file can say things like (pseudocode alert) <volume> hasType Float; <volume> minimumValue 0.0; <volume> maximumValue 1.0 but a message from a plugin can say exactly the same thing at run time. If the host library (lilv) handled all this nicely, hosts could just do lv2_get_minimum(gain) and not really care where the information came from. I think this is a much better approach than grafting on ever-more API for every little thing, but it would have to be done nicely with good support. I think the key here is to retain the advantages we have, but put some work into making really obvious and straightforward APIs for everything.

  • Overly dynamic URIDs: URIDs are a mechanism in LV2 where things are conceptually URIs (which makes everything extensible), but integers in practice for speed. Generally a URID is made at instantiation time by calling a host-provided mapping function. This is, for the most part, wonderful, but being always dynamic causes some problems. You need dynamic state to talk about URIs at all, which makes for a lot of boilerplate (the typical boilerplate is sketched after this list), and gets in the way of things like language bindings (you couldn't make a simple standalone template that gives you an Int atom for an int32_t, for example). I think it would be a good idea to have a static set of URIDs for things in the standard, so that lv2_minimum or whatever is just statically there, but preserve the ability to extend things with dynamic mapping. This is easy enough by adding the concept of a "minimum dynamic URID value", where everything less than that is reserved by the standard. Alternatively, or perhaps in addition, maybe having a standard loader to ease the pain of loading every little thing (like with OpenGL) would help make code cleaner and boilerplate-free.

  • The Documentation Sucks: Of course, the documentation of everything always sucks, so you have to take this feedback with a grain of salt, but it's true of LV2. A lot of improvements here are blocked by the specification breakdown being set in stone, but it could be improved. I think the reference documentation is not the problem though, we really need example-driven documentation written as prose. This is a completely different thing to reference documentation and I think it's important to not confuse the two. There has been a bit of work adapting the "book" to be better in this sense, but it's not very far along. Once it's there, it needs to be brought to the forefront, and the reference documentation put in a place where it's clear it's about details. Optics matter.

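As an illustration of the URID point above, this is roughly the boilerplate every plugin goes through at instantiation time just to be able to talk about a couple of concepts. It is a sketch from memory rather than a copy of a real plugin: header paths vary between LV2 releases, the Plugin struct is invented for the example, and the closing comment describes the static-URID idea, which is not in the current spec:

#include <stdlib.h>
#include <string.h>

#include <lv2/core/lv2.h>    /* older trees: lv2/lv2plug.in/ns/lv2core/lv2.h */
#include <lv2/urid/urid.h>
#include <lv2/atom/atom.h>

typedef struct {
  LV2_URID_Map* map;        /* host-provided mapping feature */
  LV2_URID      atom_Int;   /* URIDs the plugin wants to use later */
  LV2_URID      atom_Float;
} Plugin;

static LV2_Handle
instantiate(const LV2_Descriptor* descriptor, double rate,
            const char* bundle_path, const LV2_Feature* const* features)
{
  Plugin* self = (Plugin*)calloc(1, sizeof(Plugin));

  /* Scan the feature array for urid:map. */
  for (int i = 0; features && features[i]; ++i) {
    if (!strcmp(features[i]->URI, LV2_URID__map)) {
      self->map = (LV2_URID_Map*)features[i]->data;
    }
  }
  if (!self->map) {          /* without the map we cannot even name things */
    free(self);
    return NULL;
  }

  /* Map every URI we will ever want to mention into a dynamic integer. */
  self->atom_Int   = self->map->map(self->map->handle, LV2_ATOM__Int);
  self->atom_Float = self->map->map(self->map->handle, LV2_ATOM__Float);

  return (LV2_Handle)self;
}

/* With a reserved static range (hypothetical), common URIDs like these could
 * simply be compile-time constants, and only extensions would need the
 * dynamic mapping above. */
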
The Work

I'm sure there are countless things floating around in my mind I've forgotten about at the moment, but that's all that comes to mind at a high level. There are, of course, countless little specific problems that need work (like inventing a control protocol for everything, and having it be powerful but pleasant to use), but I'm only focusing on the greater things about LV2 itself, as a specification family and a project. The big question, of course, is whether LV3 should be a thing. I am not sure, it's a hard question. My thinking is: maybe, but we should work towards it first. It's always tempting to throw out everything and Do It Right, but that never works out. The extensible nature of LV2 means that we can graft better things on over time, until all the various pieces feel right. I see no point in breaking the entire world with a grandiose LV3 project until, for example, we've figured out how we want to control plugins. I am a big believer in iterative design, and working code in general. We can build that in LV2. Maybe we can even do it and end up at more or less LV3 anyway, without causing any hard breakage. To that end, I have been improving things in general, to try and address some of the above, and generally bring the software up to a level of quality I am happy with:

  • Portability: The LV2 host stack has (almost) always been at least theoretically portable, and relatively portable in practice, but it's obvious that it comes from the Linux world and merely happens to work elsewhere. I have been doing a lot of work on the DevOps front to ensure that everything works everywhere, always, and no platform is second-class. The libraries live on Gitlab, and have a CI setup that builds and tests on Linux (both x86 and ARM), Windows, and MacOS, and cross-compiles with MinGW.

  • Frequent releases: Another consequence of the many-libraries problem is that releasing is really tedious, and I'm generally pretty bad at making releases. This makes things just feel stale. I've recently almost entirely automated this process, so that everything involved in making a release can be done by just calling a script. Also on the DevOps and stale fronts, I've been moving to automatically generating documentation on CI, so it's always published and up to date. Automating everything is important to keep a project vibrant, especially when maintenance resources are scarce.

  • Generally complex APIs: The library APIs aren't great, and the general situation is confusing. Most authors only need Lilv, but there are these "Serd" and "Sord" things in there that show up sometimes; they all work with roughly the same sort of "nodes" but have different types and APIs for them, and so on. I have been working on a new major version of serd that takes advantage of the API break to make things much simpler, and improve all kinds of things in general. This will be exposed directly in lilv where it makes sense, eliminating a lot of glue, and eliminating the sord library entirely. The lilv API itself is also dramatically bigger and more complicated than it needs to be. At the time, it felt like adding obvious helper methods for every little thing was a good idea, so people can just find lv2_port_get_specific_thing_I_want(), which is nice when it's there... except it's not always there. The property-based design of LV2 means that lv2_get(port, specific_thing_I_want) could work for everything (and this ability is already there; see the small sketch after this list). This results in situations like people thinking they are blocked by a missing function, and spending a lot of time writing and submitting patches to add them, when the functionality was there all along. It would be easier on everyone if everything just always worked the same general way, and it would make the API surface much smaller, which is always nice.

  • Validation: There has been a data validator for a while, but it wasn't great. It didn't, for example, point at the exact position in the file where the error was, you just had to figure that part out. The new version of serd fixes this, so validation errors and warnings use standard GCC format to report the exact position along with a helpful error message, which automatically integrates with almost every editor or IDE on the planet for free.

  • SDK: As mentioned above, I'm working on putting all the "standard" host libraries into a unified "lv2kit" which is the one package you will need to build LV2 things. There are still some details about this I haven't sorted out (e.g. should the spec be in there or not? What about non-LV2-specific libraries like serd? Optional vendoring?), but it's coming along and I think will make it far more realistic to expect people to implement LV2.

  • The spec mess: I am idly thinking about whether or not it would be possible to add a compatibility mechanism to allow us to move URIs without breaking anything. It's largely superficial, but cleaning up the specification list would really help the optics of the project if nothing else. 90% here is trivial (just aggressively map everything forwards), but all the corner cases still need to be thought out.

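To illustrate the "generic getter" point from the complex-APIs item above, here is a minimal sketch of host code that asks for a port property by predicate instead of hunting for a dedicated helper. It is written from memory against the lilv API, so treat the exact function names as an assumption and check lilv.h; the plugin URI is invented purely for the example:

#include <lilv/lilv.h>
#include <stdio.h>

int main(void)
{
  LilvWorld* world = lilv_world_new();
  lilv_world_load_all(world);

  /* Hypothetical plugin URI, purely for illustration. */
  LilvNode* plugin_uri = lilv_new_uri(world, "http://example.org/gain");
  const LilvPlugin* plugin =
    lilv_plugins_get_by_uri(lilv_world_get_all_plugins(world), plugin_uri);

  if (plugin) {
    const LilvPort* port = lilv_plugin_get_port_by_index(plugin, 0);

    /* Generic access: "get me lv2:minimum of this port".  The same call
     * works for any predicate, standard or extension-defined. */
    LilvNode* minimum = lilv_new_uri(world, LILV_NS_LV2 "minimum");
    LilvNode* value   = lilv_port_get(plugin, port, minimum);
    if (value) {
      printf("minimum: %f\n", lilv_node_as_float(value));
      lilv_node_free(value);
    }
    lilv_node_free(minimum);
  }

  lilv_node_free(plugin_uri);
  lilv_world_free(world);
  return 0;
}
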
That's all the work in the trenches going on at the moment to improve the state of LV2. Though I wish I, or anyone else, had the time and energy to invest effort into addressing the more ambitious questions around the plugin API itself, at the moment I am more than tapped out. Regardless, I think it makes sense to get the current state of things in a form that is moving forward and easier to work with, and raise the quality bar as high as possible first. With a very high-quality implementation and extensive testing and validation, I'll feel a lot more confident in addressing some of the more interesting questions around plugin interfaces, and perhaps someday moving towards an LV3.

On that note, feedback is always welcome. Most of the obvious criticisms are well-known, but more perspectives are always useful, and silent wheels get no grease. Better yet, issues and/or merge requests are even more welcome. The bus factor of LV2 isn't quite as bad as it seems from the web, but it would help to get more activity on the project itself from anyone other than myself. The standards for API additions and such are pretty high, but there's plenty of low-hanging fruit to be picked.

by drobilla at November 12, 2019 02:35 AM