planet.linuxaudio.org

June 24, 2019

News – Ubuntu Studio

Regarding Ubuntu’s Statement on 32-bit i386 Packages

By now you may have seen Ubuntu’s blog post (Statement on 32-bit i386 packages for Ubuntu 19.10 and 20.04 LTS) and noticed that it mentions Ubuntu Studio as one of the catalysts behind the change of plans. You may also be wondering why Ubuntu Studio would be concerned with this. While we did cease building 32-bit […]

by eeickmeyer at June 24, 2019 08:47 PM

June 19, 2019

Linux – CDM Create Digital Music

VCV Rack hits 1.0; why you need this free modular now

Software modular VCV Rack just hit a major milestone – it’s now officially version 1.0, with polyphony, full MIDI, module browsing, multi-core support, and more. And since it’s a free and open platform, you don’t want to sleep on this.

VCV and developer Andrew Belt have hit on a new formula. Rack is free and open source on Mac, Windows, and Linux, and it’s free for developers to make their own modules. It also has tons of functionality out of the box – both from VCV and third-party developers. But then to support ongoing development, those developers offer some superb paid modules. Once you’re hooked, spending a little extra seems a good investment – because, well, it is.

All those modules… now seen in the new 1.0 visual browser.

Crucially, it’s a good deal for developers as well as users. Independent software developers, VCV included, are able to communicate directly with users, who in turn feel good about supporting the platform and community by spending some money. And hardware makers have a new way of reaching new audiences, as well as offering up try-before-you-buy versions of some of their modules. (Open source hardware makers like Mutable Instruments and Music Thing were early adopters, but I hear some other names are coming.)

Maybe you’ve heard all this. But maybe you weren’t quite ready to take the plunge. With version 1.0, the case is getting pretty strong for adding Rack to your arsenal. Rack was appealing early on to tinkerers who enjoyed messing around with software. But 1.0 is starting to look like something you’d rely on in your music.

And that starts with polyphony, as shown by the developer of the VULT modules, which include many of my own personal favorites:

Rack 1.0

1.0 is really about two things – new functionality for more flexible use in your music, and a stable API for developers underneath that makes you feel like you’re using modules and not just testing them.

Mono- to polyphonic, on demand. Modules that support polyphony can now handle up to 16 voices. Cables support polyphony. And the built-in modules, of course, have added tools for polyphonic use, too.

Polyphony, now a thing – and nicely implemented, both in UI and performance under the hood.

Multi-core accelerated engine. Adding polyphony, even on newer machines, means a greater tax on your CPU. There are a number of under-the-hood improvements to enable that in Rack, including multi-core support, threading, and hardware acceleration. This is also partly built into the platform, so third-party modules supporting Rack will get a performance boost “for free,” without developers having to worry about it or reinvent the wheel.

Adjustable performance: From the menu you can now adjust CPU performance based on whether you want lower CPU usage or more modules.

Adjust priority of the CPU based on your needs (more modules with higher CPU usage, or fewer modules but lower CPU).

MIDI out. You could always get MIDI into Rack, but now you can get it out, too – so you can use sequencers, modulation, and so on to control other equipment, or, via inter-app MIDI routing, other software. There are three new modules – CV-GATE, CV-MIDI, and CV-CC. (VCV pitches those at drum machines, synths, and Eurorack hardware, respectively, but you could find a lot of different applications for this.)

Assign MIDI control easily. Previously, controlling Rack has been a bit of a chore: start with a MIDI input, figure out how to route it into some kind of modulation, assign the modulation. Many software racks work this way, but it feels a bit arcane to users of other software. Now, via the MIDI-MAP module, you can click a parameter onscreen and just move a knob or fader or what have you on your controller – you know, like you can do in other tools.

That will be essential for actually playing your patches. I can’t wait to use this with Sensel Morph and the Buchla Thunder overlay but… yeah, that’s another story. Watch for that in the coming days.

Meet the new MIDI modules, which now support output, mapping, and even MPE.

Numeric pad input as well as revised gamepad support. Now in addition to gamepads (which offer some new improvements), you can hook up numeric keypads:

MPE support: MPE (MIDI Polyphonic Expression) now works with MIDI-CV. That makes Rack a fascinating new way of controlling MPE instruments.

Enter parameters manually. You can also now right-click a parameter and type in the value you want.

Browse modules visually. All the previous options for navigating your collection of virtual modules textually are still there – type module names, use tags, search by manufacturer or type. But now you also get a pretty visual browser so you can spot the module you want at a glance, and click and drag to drop modules into place. VCV isn’t the first computer modular to offer this – Softube has an awfully pretty browser, for one – but I find the Rack 1.0 browser to be really quick and easy. And it’s especially needed here as you quickly accumulate loads of modules from the Web.

Get new modules by sorting by build. This feature is actually on the VCV website, but it’s so important to how we work in Rack that it’s worth a mention here. Now you can search by build date and find the latest stuff.

Sort by build now on the plugins interface on the Web.

Move and manage modules more easily. You can now disable modules, force-drag them into place, and use a new, more flexible rack. The rack is also now infinite in all four dimensions, which is a bit confusing at first, but in keeping with the open-ended computer software ethos of software modular. (Take that, you Eurorack people who live in … like … normal physical space!)

You can also right-click modules to get quick links to plugin websites, documentation, and even source code. And you can see changelogs before you update, instead of just updating and finding out later.

Undo/redo history. At last, experiment without worry.

Parameter tooltips. No need to guess what that knob or switch is meant to do.

You can check out the new features in detail on the changelog (plus stuff added since 1.0, in case you live in the future and me in the past!):

https://github.com/VCVRack/Rack/blob/v1/CHANGELOG.md

Or for even more explanation, Nik Jewell describes what all those changes are about:

An unofficial guide to the Rack v1 Changelog

Getting started

Rack 1.0 breaks compatibility with some modules until their developers (hopefully) update to the new API. Andrew tells us we can run the old (0.6.x) and new Rack versions side by side:

To install two versions that don’t clash, simply install Rack v1 to a different folder such as “Program Files/VCV/Rack-v1” on Windows or “/Applications/Rack-v1” on Mac. They will each use their own set of plugins, settings, etc.

You can duplicate your Rack folder, and run the two versions side by side. Then you’re free to try the new features while still opening up your old work. (I found that most of my previous patches wound up missing modules, even after updating. Rack simply makes the incompatible modules disappear, leaving the compatible ones in place.)

Right from the moment you start up VCV Rack 1.0, you’ll find some things are more approachable, with a new example patch and updated Scope. And for existing users, be prepared that the toolbar is gone, now replaced with menu options.

Here are some useful shortcuts for getting around the new release:

Now you can right-click a plug-in for an updated contextual menu with presets, and links to the developer’s site for documentation and more.

Double-click a parameter: initialize to default value

Right-click a parameter: type to enter a specific value.

Ctrl-click a connected input, and drag: clones the cable connected there to another port. (This way you can quickly route one output to multiple inputs, without having to mouse back to the output.)

Ctrl-E: Disables a module. (You can also use the context menu.)

Ctrl- / Ctrl+ to zoom, or hold down control and use a scroll wheel.

Ctrl-drag modules. This is actually my favorite new feature, weirdly. If you Ctrl-drag a module, it shoves other modules along with it into any empty space. It’s easier to see in an animation than to describe, so I’ll let Andrew show us:

Do check out the Recorder, too:

All the new internal modules to try out:
CV-MIDI
CV-CC
CV-Gate
MIDI-Map
Recorder

And developers, do go check out the migration guide.

Full information:

https://vcvrack.com/

The post VCV Rack hits 1.0; why you need this free modular now appeared first on CDM Create Digital Music.

by Peter Kirn at June 19, 2019 01:25 PM

June 17, 2019

Linux – CDM Create Digital Music

Sonarworks Reference 4.3: more headphones, features for calibrating your mixes

Sonarworks Reference 4.3 has a bunch of new features – more headphones, better performance, and it won’t blind you in a dark studio. The goal: make sure your mixes sound consistent everywhere. And with both high-end and consumer cans on the supported list, they seem to want everybody to give this a go.

I think the biggest challenge Sonarworks has here is that even I would have imagined calibration was something for engineers, but not necessarily producers. But once you hear the results, anyone can hear what this does. The thing is, headphones and studio monitors really aren’t flat. And especially outside of perfectly tuned studios, neither are working environments. Testing and calibration improve things enough that anyone can hear the difference.

I’ve been using Sonarworks Reference religiously since the fall. The biggest challenge has been that there are two modes. One sounds really great, but adds a ton of latency. That’s especially rough if you want to work with calibration switched on all the time. The other is low-latency, but doesn’t sound as good. Those differences are, again, noticeable to anyone.

The big improvement in Reference 4.3 is to let you have both – with a mixed filter mode that operates with minimal latency but still delivers accurate results. That for me makes Reference way more useful. In fact, given this involved a ground-up rewrite, I’m surprised Sonarworks didn’t call this Reference 5. (It’s a free update, though!)

You also get new headphone profiles, covering both some high-end Beyerdynamic models and the sort of consumer cans a lot of us use on the go with our smartphones. Those seem to target new users as well as ones who travel.

Beyerdynamic Custom Studio
Beyerdynamic MMX 300
Direct Sound EXTW37 Pro
Direct Sound Serenity Plus
Direct Sound Studio Plus
Marshall Major III
Marshall Major III Bluetooth
Marshall Monitor Bluetooth

More important than the individual new models, though, is on-demand profile delivery, so you can add support from inside the tool. There are already more than two hundred profiles, and they keep adding more.

There are some other improvements, too:

Dark mode – some people hate these; I love them, since I work in a lot of dim / late night environments

A better menu/tray bar, which is critical as you modify settings as you work

Integrated room measurement inside the Systemwide and plugin tools

Better virtual sound device performance (I need to test this across my Mac and Windows machines)

I already liked the little tool that gets you up and running when you first launch the software, and wrote about it previously – but they’ve enhanced it even more

The new tool for getting started. Before.

After.

Previously, I did some deep dives into this software and answered reader questions:

What it’s like calibrating headphones and monitors with Sonarworks tools

Your questions answered: Sonarworks Reference calibration tools

I need to follow up with them on how Linux support is coming, as CDM was the first to write about that and some of you I know were interested (as am I)!

Also since I last covered Reference, Sonarworks has started offering a bundle with pre-calibrated headphones. These theoretically deliver more precise calibration than you’d get from any generic profile, since they’ve tested the actual hardware. It’s pricey, because it includes the already-expensive HD650 headphones from Sennheiser.

But those are terrific headphones, and headphones are crucial to precise mixing and mastering. I imagine these would be a great investment for a producer or especially studio or engineer wanting to invest in a full calibration package at once. (Feel free to shout about whether Sennheiser or Beyerdynamic are better in comments, though!) In fact, I think if you’re thinking of buying the HD650s, you should spring for the bundle.

The bundle, with Sennheiser HD650.

I have talked to more producers about this tool than engineers (though my mastering engineer collaborator is a believer), so I would be interested to hear about that use case more.

And yes, this is another member of our music tech industry now located in Riga, Latvia, along with Gamechanger, Erica Synths, and others. It’s a surprising new hotbed.

The post Sonarworks Reference 4.3: more headphones, features for calibrating your mixes appeared first on CDM Create Digital Music.

by Peter Kirn at June 17, 2019 01:56 PM

June 12, 2019

GStreamer News

GStreamer Conference 2019 announced to take place in Lyon, France

The GStreamer project is happy to announce that this year's GStreamer Conference will take place on Thursday-Friday 31 October - 1 November 2019 in Lyon, France.

You can find more details about the conference on the GStreamer Conference 2019 web site.

A call for papers will be sent out in due course. Registration will open at a later time. We will announce those and any further updates on the gstreamer-announce mailing list, the website, and on Twitter.

Talk slots will be available in varying durations from 20 minutes up to 45 minutes. Whatever you're doing or planning to do with GStreamer, we'd like to hear from you!

We also plan to have sessions with short lightning talks / demos / showcase talks for those who just want to show what they've been working on or do a mini-talk instead of a full-length talk. Lightning talk slots will be allocated on a first-come-first-serve basis, so make sure to reserve your slot if you plan on giving a lightning talk.

There will also be a social event again on Thursday evening.

There are also plans to have a hackfest the weekend right after the conference on 2-3 November 2019.

We hope to see you in Lyon, and please spread the word!

June 12, 2019 02:00 PM

June 08, 2019

digital audio hacks – Hackaday

C++ Reverbs From A Matlab Design

The guitar ‘Toing’ sound from the ’70s was epic, and for the first-time listener it was enough to get a bunch of people hooked on the likes of Aerosmith. Reverb units were all the rage back then, and for his DSP class project, [nebk] creates a reverb filter using Matlab and ports it to C++.

Digital reverb was introduced in the early 1960s by Manfred Schroeder and Ben Logan. The system consists essentially of all-pass filters that add a delayed copy of the input signal; chain a bunch of them together and feed them into a mixer, and the output is that echoing ‘toing’ that made the ’80s love the guitar so much. [Nebk]’s take on it enlists the help of the Raspberry Pi and C++ to implement the very same thing.

In his writeup, [nebk] goes through the essentials of implementing a filter in the digital domain and how the cascaded delay units accumulate delay to make a better-sounding system. He also adds an FIR low-pass filter to cut off the ringing that resulted from adding a feedback loop, using Matlab’s filter generation tool for the LP filter, and includes the code for it. After testing the design in Simulink, he moves to writing the whole thing in C++, complete with filter classes that read audio files and spit out ‘reverbed’ audio files.
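For reference, the classic Schroeder all-pass section at the heart of such a reverb follows the difference equation y[n] = −g·x[n] + x[n−D] + g·y[n−D]. Here is a minimal C++ sketch of one section – a generic textbook implementation, not [nebk]’s actual code; the class name and buffering scheme are mine:

```cpp
#include <cstddef>
#include <vector>

// One Schroeder all-pass section: y[n] = -g*x[n] + x[n-D] + g*y[n-D].
// Cascading several of these with different (mutually prime) delays D
// builds up the dense echo pattern of a basic digital reverb.
class Allpass {
public:
    Allpass(std::size_t delay, float gain)
        : xbuf_(delay, 0.0f), ybuf_(delay, 0.0f), gain_(gain), pos_(0) {}

    float process(float x) {
        const float xd = xbuf_[pos_];           // x[n-D]
        const float yd = ybuf_[pos_];           // y[n-D]
        const float y  = -gain_ * x + xd + gain_ * yd;
        xbuf_[pos_] = x;                        // remember input
        ybuf_[pos_] = y;                        // remember output (feedback path)
        pos_ = (pos_ + 1) % xbuf_.size();       // advance circular buffer
        return y;
    }

private:
    std::vector<float> xbuf_, ybuf_;
    float gain_;
    std::size_t pos_;
};
```

Feeding an impulse through a section with g = 0.5 and D = 4 yields −0.5 immediately and 1 − g² = 0.75 four samples later – echoes that decay without colouring the spectrum, which is what makes the filter “all-pass”.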

The best thing about this project is the fact that [nebk] creates filter class templates for others to play with. It allows those playing or working with Matlab to transition to the C++ side with a learning curve that is not as steep as the Himalayas. The project has a lot to learn from and is great for beginners getting their feet wet. The code is available on [GitHub] for those who want to give it a shot, and if you are just interested in audio effects on the cheap, be sure to check out the Ikea Reverb Plate, which is big and looks awesome.

by Inderpreet Singh at June 08, 2019 08:00 AM

June 06, 2019

News – Ubuntu Studio

Updates for June 2019

We hope that Ubuntu Studio 19.04’s release has been a welcome update for our users. As such, we are continuing our work on Ubuntu Studio with our next release scheduled for October 17, 2019, codenamed “Eoan Ermine”. Bug Fix for Ubuntu Studio Controls A bug identified in which the ALSA-Jack MIDI bridge was not surviving […]

by eeickmeyer at June 06, 2019 09:08 PM

blog4

Aalborg concert Tina MK Madsen 7. June 1000fryd

Tina Mariane Krogh Madsen will premiere her sound piece ‘haecceity’ at 1000fryd in Aalborg, Denmark on Friday June 7th.

Link to facebook event: https://www.facebook.com/events/2157333597637862/
Event on 1000fryd’s website: http://1000fryd.dk/index.php?ufo=koncert&id=3942

by herrsteiner (noreply@blogger.com) at June 06, 2019 09:06 PM

May 31, 2019

GStreamer News

GStreamer 1.14.5 stable bug fix release

The GStreamer team is pleased to announce another bug fix release in the old stable 1.14 release series of your favourite cross-platform multimedia framework.

This release only contains bugfixes and it should be safe to update from 1.14.x.

The 1.14 series has now been superseded by the new stable 1.16 series, and we recommend you upgrade at your earliest convenience.

See /releases/1.14/ for the details.

Binaries for Android, iOS, Mac OS X and Windows will be made available soon.

Download tarballs directly here: gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-sharp, gstreamer-vaapi, or gst-omx.

May 31, 2019 11:00 PM

rncbc.org

Qtractor 0.9.8 - Yet Another Spring'19 Release


Hi all,

Too hot to make any boring introduction, so let's get this straight:

Qtractor 0.9.8 (spring'19) is released!

The changes for this yet another seasonal release are as follows:

  • Plugin-lists and respective plugins state may now be exported and/or imported as XML files.
  • When in Drum Mode, Key and Scale are meaningless and thus functionally disabled from the MIDI clip editor (aka. piano-roll).
  • MIDI clip editor's View > Ghost Track menu option is now finally a reality: show any existing MIDI track and its respective clips in the background as dimmed, semi-transparent aka. ghost events.
  • Minor update to Debian packaging control file.
  • Make sure partially selected clips are reset to whole when Shift/Ctrl keyboard modifiers are in effect, to prevent extraneous clip splits or cutaways afterwards.

Description:

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. Target platform is Linux, where the Jack Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures to evolve as a fairly-featured Linux desktop audio workstation GUI, specially dedicated to the personal home-studio.

Website:

http://qtractor.org
https://qtractor.sourceforge.io
http://qtractor.sourceforge.net

Project page:

https://sourceforge.net/projects/qtractor

Downloads:

https://sourceforge.net/projects/qtractor/files

Git repos:

https://git.code.sf.net/p/qtractor/code
https://github.com/rncbc/qtractor.git
https://gitlab.com/rncbc/qtractor.git
https://bitbucket.org/rncbc/qtractor.git

Wiki (help still wanted!):

https://sourceforge.net/p/qtractor/wiki/

License:

Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Enjoy && Have fun.

Donate to rncbc.org

by rncbc at May 31, 2019 06:00 PM

May 25, 2019

rncbc.org

QjackCtl 0.5.8 - A Spring'19 Release

Howdy!

QjackCtl - JACK Audio Connection Kit Qt GUI Interface

QjackCtl 0.5.8 (spring'19) is now released!

QjackCtl is a(n ageing but still, modernized) Qt application to control the JACK sound server, for the Linux Audio infrastructure.

Website:

https://qjackctl.sourceforge.io
http://qjackctl.sourceforge.net

Project page:

https://sourceforge.net/projects/qjackctl

Downloads:

https://sourceforge.net/projects/qjackctl/files

Git repos:

https://git.code.sf.net/p/qjackctl/code
https://github.com/rncbc/qjackctl.git
https://gitlab.com/rncbc/qjackctl.git
https://bitbucket.org/rncbc/qjackctl.git

Change-log:

  • When enabled, the current default preset settings are now read from the last known JACK D-BUS configuration.
  • Minor update to Debian packaging control file.
  • Removed all the remaining leftovers of old pre-FFADO 'freebob' driver support.

License:

QjackCtl is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Enjoy && Keep having fun!

by rncbc at May 25, 2019 11:00 AM


April 20, 2019

digital audio hacks – Hackaday

“Vintage” Radio Gets a Modern Makeover

Taking an old piece of gear and cramming it full of modern hardware is a very popular project. In fact, it’s one of the most common things we see here at Hackaday, especially in the Raspberry Pi era. The appeal is obvious: somebody has already done the hard work of designing and building an attractive enclosure, all you need to do is shoehorn your own gear into it. That being said, we know some of our beloved readers get upset when a vintage piece of gear gets sacrificed in the name of progress.

Thankfully, you can put your pitchforks down for this one. The vintage radio [Freshanator] cannibalized to build this Bluetooth speaker is actually a replica made to invoke the classic “cathedral” look. Granted, it may still have been older than most of the people reading this right now, but at least it wasn’t actually from the 1930s.

To start the process, [Freshanator] created a 3D model of the inside of the radio so all the components could be laid out virtually before anything was cut or fabricated. This included the design for the speaker box, which was ultimately 3D printed and then coated with a spray-on “liquid rubber” to seal it up. The upfront effort and time to design like this might be high, but it’s an excellent way to help ensure you don’t run into some roadblock halfway through the build.

Driving the speakers is a TPA3116-based amplifier board with integrated Bluetooth receiver, which has all of its buttons broken out to the front for easy access. [Freshanator] even went the extra mile and designed some labels for the front panel buttons to be made on a vinyl cutter. Unfortunately the cutter lacked the precision to make them small enough to actually go on the buttons, so they ended up getting placed above or next to them as space allowed.

The build was wrapped up with a fan installed at the peak of the front speaker grille to keep things cool. Oh, and there are lights. Because there’s always lights. In this case, some blue LEDs and strategically placed EL wire give the whole build an otherworldly glow.

If you’re interested in a having a frustrating quasi-conversation with your vintage looking audio equipment, you could always cram an Echo Dot in there instead. Though if you go that route, you can just 3D print a classic styled enclosure without incurring the wrath of the purists.

by Tom Nardi at April 20, 2019 08:00 PM

April 16, 2019

Talk Unafraid

Adventures in Differential Flexure

How’s that for a thrilling title? But this topic really does encapsulate a lot of what I love about astrophotography, despite the substantial annoyance it’s caused me lately…

Long exposure of M51 in Hydrogen Alpha – 900s

My quest for really nice photos of galaxies has, inevitably, driven me towards narrowband imaging, which can help bring out detail in galaxies and minimise light pollution. I bought a hydrogen alpha filter not long ago – a filter that removes all of the light except a deep red, narrow band around the hydrogen emission line. This filter has the unfortunate side effect of reducing the total amount of light hitting the sensor, meaning long exposures are required to lift the signal well above the noise floor. In the single frame above, the huge glow from the right is amplifier glow – an issue with the camera that grows worse as exposures get longer. Typically, this gets removed by taking dozens of dark frames with the lens cap on and subtracting the fixed amplifier glow from the frames, a process called calibration. The end result is fairly clean – but what about these unfortunate stars?

Oblong stars are a problem – they show that the telescope failed to accurately track the target for the entire exposure. Each pixel in this image (and you can see pixels here, in the hot pixels that appear as noise in the close-up) equates to 0.5″ (arc-seconds) of sky. My seeing limit (the amount of wobble introduced by the atmosphere) is about two to four times that on a really good night, meaning I’m over-sampling nicely (Nyquist says we should oversample 2× to resolve all details). My stars are oblong by a huge amount – 6-8″, if not more!
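Those numbers follow from the standard plate-scale formula, scale (″/px) ≈ 206.265 × pixel pitch (µm) / focal length (mm). A quick sketch – the 2.4 µm pixel pitch is my assumed value, chosen to match the stated 0.5″/px on the 1000 mm scope mentioned later, not a figure quoted in the post:

```cpp
// Plate scale in arc-seconds per pixel from pixel pitch and focal length.
// 206.265 = (arc-seconds per radian) / 1000, reconciling the um/mm units.
double plate_scale(double pixel_um, double focal_mm) {
    return 206.265 * pixel_um / focal_mm;
}
```

plate_scale(2.4, 1000.0) ≈ 0.495″/px, so 1-2″ seeing spans roughly 2-4 pixels – the oversampling factor described above.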

My guide system – the PHD2 software package, an ASI120MC camera and a 60mm guidescope – reported no worse than 0.5″ tracking all night, meaning I should’ve seen perfectly round stars. So what went wrong?

The most likely culprit is a slightly loose screw on my guidescope’s guiding rings, which I found after being pointed at a thing called “differential flexure” by a fantastic chap on the Stargazer’s Lounge forums (more on that later). But this is merely a quite extreme example of a real problem that can occur, and a nice insight into the tolerances and required precision of astronomical telescopes for high-resolution imaging. As I’m aiming for 0.5″ pixel accuracy, but practically won’t get better seeing than 1-2″, my guiding needs to be fairly good. The mount, with excellent guiding, is mechanically capable of 0.6-0.7″ accuracy; this is actually really great, especially for a fairly low-cost mount (<£1200). You can easily pay upwards of £10,000 for a mount, and not get much better performance.

Without guiding, though, it’s not terribly capable – mechanical tolerances aren’t perfect in a cheap mount, and periodic error from the rotation of the worm gears creeps in. While you can program the mount to correct for this, it won’t be perfect. So we have to guide the mount. While the imaging camera takes long, 5-10 minute exposures, the guiding camera takes short 3-5 second exposures and feeds software (in my case, PHD2) which tracks a star’s centre over time, using changes in that centre to generate a correction impulse which is sent to the mount’s control software (in my case, INDI and the EQmod driver). This gets us down to the required stability over time.
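The star-centre tracking that drives those corrections boils down to an intensity-weighted centroid of a small patch around the guide star; the frame-to-frame drift of that centroid becomes the correction impulse. A bare-bones sketch of the core idea – PHD2’s real implementation adds sub-pixel star fitting, noise rejection, and calibrated axis transforms:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

// Intensity-weighted centroid (centre of mass) of a row-major image patch.
// Guiding software compares this position between short exposures; any
// drift is turned into a correction pulse for the mount.
std::pair<double, double> centroid(const std::vector<double>& img,
                                   std::size_t width, std::size_t height) {
    double sum = 0.0, cx = 0.0, cy = 0.0;
    for (std::size_t y = 0; y < height; ++y) {
        for (std::size_t x = 0; x < width; ++x) {
            const double v = img[y * width + x];
            sum += v;
            cx  += v * static_cast<double>(x);
            cy  += v * static_cast<double>(y);
        }
    }
    return {cx / sum, cy / sum};  // sub-pixel star position
}
```

Because the weighting averages over many bright pixels, the recovered position is far finer than one pixel – which is why guiding can hold sub-arc-second accuracy at all.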

My Primaluce Lab 60mm guidescope and ASI120MC guide camera on the “bench”, in PLL 80mm guidescope rings on ADM dovetails

The reason why my long exposures sucked, despite all this, is simple – my guide camera was not always changing its orientation as the imaging camera was. That is to say, when the mount moved a little bit, or failed to move, the imaging camera was affected but the guiding camera was not. This is called differential flexure – the difference in movement between two optical systems. Fundamentally, this is because my guidescope is a completely separate optical system from my main telescope – if it doesn’t move when my main scope does, the guiding system doesn’t know to correct! The inverse applies, too – maybe the guidescope moves and overcorrects for an imaging system that hasn’t moved at all.

With a refractor telescope, if you just secure your guidescope really well to the main telescope, all is (generally) well. That is the only practical potential source of error, outside of focuser wobble. In a Newtonian such as the one I use, though, there’s plenty of other sources. At the end of a Newtonian telescope is a large mirror – 200mm across, in my case. This is supported by a mirror cell – pinching the mirror can cause huge deviation (dozens or hundreds of nanometers, which is unacceptable), so just clamping it up isn’t practical. This means that as the telescope moves the mirror can move a little bit – not much, but enough to move the image slightly on the sensor. While moving the mount isn’t an ideal way to fix this movement – better mirror cells reduce this movement – it’s better than doing nothing at all. The secondary mirror has similar problems. The tube itself can also expand or contract, being quite large – carbon fibre tubes minimise this but are expensive. Refractors have, broadly, all their lenses securely held in place without issue and so don’t suffer these problems.

And so the answer seems to be a solution called “Off Axis Guiding”. In this system, rather than using a separate guide scope, you use a small prism inserted in the optical train (after the focuser but before the camera) to “tap” a bit of the light off – usually the sensor is a rectangle in a circular light path meaning this is pretty easy to achieve without any impact to the light that the sensor receives. This light is bounced into a camera mounted at 90 degrees to the optical train, which performs the guiding function. There are issues with this approach – you have a narrower (and hard to move) field of view, and you need a more sensitive guide camera to find stars – but the resolution is naturally far higher (0.7″ rather than 2.5″) due to the longer focal length and so the potential accuracy of guide corrections improves. But more importantly, your guiding light shares fate with the imaging light – you use the same mirrors, tube, and so on. If your imaging light shifts, so does the guiding light, optically entwined.

The off-axis guiding route is appealing, but complex. I’ll undoubtedly explore it – I want to improve my guide camera regardless, and the OAG prism is “only” £110 or thereabouts. The guide camera is the brunt of the cost – weighing in at around £500-700 for a quality high-sensitivity guide camera.

But in the immediate future my budgets don’t allow for either of these solutions and so I’ve done what I can to minimise the flexure of the guidescope relative to the main telescope. This has focused on the screws used to hold the guidescope in place – they’re really poorly machined, along with the threads in the guidescope rings, and the plastic tips can lead to flexure.

Before and after – plastic-tipped screws

I’ve cut the tips almost back to the metal to minimise the amount of movement in compression, and used Loctite to secure two of the three screws in each ring. The coarse focus tube and helical focuser on the Primaluce guide scope also have some grub screws which I’ve adjusted – this has helped considerably in reducing the ability for the camera to move.

Hopefully that’ll help for now! I’m also going to ask a friend with access to CNC machines about machining some more solid tube rings for the guidescope; that would radically improve things, and shouldn’t cost much. Practically, though, the OAG route is likely the favourite for a Newtonian setup – so that’s the best route in the long run.

Despite all this I managed a pretty good stab at M51, the Whirlpool Galaxy. I wasn’t suffering from differential flexure so much on these exposures – probably because the scope was pointing somewhere different and so not hitting the same issue. I had two nights of really good seeing, and captured a few hours of light. This image highlights the benefits of the Newtonian setup – a 1000mm focal length with a fast focal ratio, paired with my high-resolution camera, lets me capture some great detail in a short period of time.

M51, imaged over two nights at the end of March
Detail, showing some slightly overzealous deconvolution of stars and some interesting features

Alongside my telescope debugging, I’m working on developing my observatory plans into a detailed, budgeted design – more on that later. I’ve also been tinkering with some CCDInspector-inspired Python scripts to analyse star sharpness across a large number of images, and in doing so highlight any potential issues with the optical train or telescope in terms of flatness, tilt, and so on. So far this tinkering hasn’t led anywhere interesting, which either suggests my setup is near perfect (which I’m sure it isn’t) or that I’m missing something – more tinkering to be done!
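As a flavour of the kind of analysis involved, here is a minimal sketch of a tile-based sharpness map. It uses variance of the Laplacian as the sharpness metric on a synthetic frame – an illustrative choice on my part; CCDInspector-style tools measure star FWHM instead:

```python
import numpy as np

def sharpness_map(image: np.ndarray, tiles: int = 4) -> np.ndarray:
    """Variance of the Laplacian per tile - higher means sharper detail."""
    # Discrete Laplacian via shifted differences (no SciPy required).
    lap = (-4 * image[1:-1, 1:-1]
           + image[:-2, 1:-1] + image[2:, 1:-1]
           + image[1:-1, :-2] + image[1:-1, 2:])
    h, w = lap.shape
    th, tw = h // tiles, w // tiles
    out = np.empty((tiles, tiles))
    for i in range(tiles):
        for j in range(tiles):
            out[i, j] = lap[i * th:(i + 1) * th, j * tw:(j + 1) * tw].var()
    return out

# Synthetic frame: noise everywhere, with one corner slightly smoothed
# to mimic a soft (tilted or out-of-flat) region of the field.
rng = np.random.default_rng(0)
frame = rng.normal(size=(256, 256))
corner = frame[:128, :128]
frame[:128, :128] = (corner + np.roll(corner, 1, 0) + np.roll(corner, 1, 1)) / 3
smap = sharpness_map(frame)
print(smap[0, 0] < smap[-1, -1])  # True: the smoothed corner scores lower
```

A tilted sensor or a curved field shows up as exactly this kind of systematic corner-to-corner gradient in the map.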

Map of sharpness across 50 or so luminance frames, showing a broadly even distribution and no systemic sharpness deviance

by James Harrison at April 16, 2019 10:51 PM

April 15, 2019

KXStudio News

Carla 2.0.0 is finally here!

After many years, Carla version 2.0.0 is finally here!

Carla is an audio plugin host, with support for many audio drivers and plugin formats.
It has some nice features like automation of parameters via MIDI CC (and sending the output back as MIDI too) and full OSC control.

Version 2.0 took this long because I was never truly happy with its state, often pushing new features but not fully finishing them.
So the "solution" was to put everything not yet considered stable behind an experimental flag in the settings.
This way we get a stable Carla sooner, while upcoming features are developed and tagged as experimental during testing.

Preparations for version 2.1 are well under way; a beta will be out soon.
But that is a topic for another day.

Changes since 2.0-RC4

  • Fix missing argument in note-on/off OSC example
  • Fix word-wrap in add-plugin dialog
  • Fix Windows README.txt line endings
  • Build windows binaries with -mstackrealign
  • Don't show side panel in carla-control
  • Show "Label/URI" instead of just "Label"
  • Keep application signals alive (so Ctrl+C works even while engine is closed)
  • Update copyright year

Downloads

To download Carla binaries or source code, jump on over to the KXStudio downloads section.
Carla v2.0.0 is available pre-packaged in the KXStudio repositories and Ubuntu Studio backports, plus on Arch Linux and in Ubuntu since 19.04. On those you can simply install the carla package.
Bug reports and feature requests are welcome! Jump on over to Carla's GitHub project page for those.

Videos

There is no manual or quick-start guide for Carla yet, apologies for that.
But there are some videos of presentations I did regarding Carla's features and workflows; those should give you an introduction to its features and what you can do with it:

@ Sonoj 2017

@ LAC 2018

by falkTX at April 15, 2019 06:17 AM

March 22, 2019

KXStudio News

Changes in KXStudio repos, regarding Carla and JACK2

This is a small notice to everyone using Carla and JACK2 with the KXStudio repos.

First, in preparation for Carla 2.0 release, the (really) old carla package is now the new v2.0 series, while carla-git now contains the development/latest version.
If you are not interested in testing Carla's new stuff and prefer something known to be more stable, install the carla package after the latest updates.

Second, a change in the JACK2 code means a restart of the server is required after the update
(for a good reason: JACK2 is finally getting metadata support, and this update fixes client UUIDs).
If you use jackdbus (likely with KXStudio stuff), you will need to actually kill it.
If that does not work, good old restart is your friend. :)

One important thing to note is that the lmms package now conflicts with the carla-git one.
This is because some code changes in the latest Carla make v2.0 and the development/latest version ABI-incompatible.
In simpler terms, LMMS can be compiled against either the stable or the development version of Carla, but not both at once.
The obvious choice is to use the stable version, so after the updates if you notice LMMS is missing, just install it again.
(If you have carla-git installed, carla will be installed/switched automatically)

I tried to make the transition of these updates as smooth as possible, but note that you likely need to install updates twice to complete the process.

In other news, we got a new domain! \(^-^)/
Also Carla v2.0 release date has been set - 15th of April.
Unless a major issue is found, expect a release announcement on that day.
See you soon then! ;)

by falkTX at March 22, 2019 12:58 AM

March 19, 2019

blog4

walking as philosophy and artistic practice

Tina Madsen has another lecture coming up this Wednesday in Aalborg; it's about walking as philosophy and artistic practice (in Danish):
https://bit.ly/2HMlCgG


by herrsteiner (noreply@blogger.com) at March 19, 2019 01:56 PM

February 23, 2019

The Ardour Youtube Channel is here

@paul wrote:

ardour.org is pleased to announce a new youtube channel focused on videos about Ardour.

We decided to support Tobiasz “unfa” Karon in making some new videos, based on some of the work he has done in other contexts (both online and at meetings). unfa’s first video won’t be particularly useful for new or existing users, but if you’re looking for a “promotional video” that describes what Ardour is and what it can do, this may be the right thing to point people at.

In the near-term future, unfa will be back with some tutorial videos, so please consider subscribing to the channel.

Thanks to unfa for this opening video, and we look forward to more. If people have particular areas that they’d like to see covered, mention it in the comments here (or on the YT channel).


by @paul Paul Davis at February 23, 2019 06:53 PM

February 09, 2019

Talk Unafraid

How to fail at astrophotography

This is part 1 of what I hope will become a series of posts. I’m going to focus in this post on my getting started and some mistakes I made on the way.

So, back in 2017 I got a telescope. I fancied trying some astrophotography – I saw people getting great results without a lot of kit, and realised I could dip my toe in too. I live between a few towns, so I get “class 4” skies – meaning I can happily image a great many targets from home. I’ve spent plenty of time out at night just looking up, especially on moonless nights; the Milky Way is a clear band, and plenty of eyeball-visible targets look splendid.

So I did some research, and concluded that:

  • Astrophotography has the potential to be done cheaply but some bits do demand some investment
  • Wide-field is cheapest to do, since a telescope isn’t needed; planetary is far cheaper than deep-sky to kit out for (depending on the planet), but really good planetary images are hard to achieve
  • Good telescopes are seriously expensive, but pretty good telescopes are accessibly cheap, and produce pretty good results
  • Newtonians (Dobsonians, for visual) give the absolute best aperture-to-cash return
  • Having a good mount that can track accurately is absolutely key
  • You can spend a hell of a lot of cash on this hobby if you’re not careful, and spending too little is the fastest path there…

So, having done my research, the then-quite-new Skywatcher EQ6-R Pro was the obvious winner for the mount. At about £1,800 it isn’t cheap, but it’s very affordable compared to some other amateur-targeted mounts (the Paramount ME will set you back £13,000, for instance) and provides comparable performance for a reasonable amount of payload – about 15kg without breaking a sweat. Mounts are all about mechanical precision and accuracy; drive electronics factor into it, of course, but much of the error in a mount comes from the gears. More expensive mounts use encoders and clever drive mechanisms to mitigate this, but the EQ6-R Pro settles for having a fairly high quality belt drive system and leaves it at that.

Already, as I write this, the more scientific reader will be asking “hang on, how are you measuring that, or comparing like-for-like?”. This is a common problem with various bits of equipment in the amateur astrophotography scene. Measuring precision mechanics and optics often requires expensive equipment in and of itself. Take a telescope’s mirror – measuring the flatness of the surface and the accuracy of the curvature requires an interferometer. Even the cheap ones cooked up by the make-your-own-telescope communities take a lot of expensive parts and require a lot of optics know-how. Measuring a mount’s movement accurately requires really accurate encoders or other ways to measure movement very precisely – again, expensive bits. The net result is that it’s very rare for individual amateurs to do quantitative evaluation of equipment – usually, you have to compare spec sheets and call it a day. The rest of the analysis comes down to forums and hearsay.

As an engineer tinkering with fibre optics on a regular basis, I find spec sheets great when everyone agrees on the test methodology behind each number. There’s a defined standard for how you measure the insertion loss of a bare fibre, another for mode field diameter, and so on. In astrophotography, a whole host of measurements are done in a very ad-hoc fashion and vary between products and vendors. Sometimes the best analysis and comparison is done by enthusiasts who get kit sent to them by vendors to compare! And so most purchasing decisions involve an awful lot of lurking on forums.

The other problem is knowing what to look for in your comparison. Sites that sell telescopes and other bits are very good at glossing over the full complexity of an imaging system, and assume you sort of know what you’re doing. Does pixel size matter? How about quantum efficiency? Resolution? The answer is always “maybe, depends what you’re doing…”.

Jupiter; the great red spot is just about visible. If you really squint you can see a few pixels that are, I swear, moons.

This photo is one of the first I took. I had bought, with the mount, a Skywatcher 200PDS Newtonian reflector – a 200mm or 8″ aperture telescope with a dual-speed focuser and a focal length of 1000mm. The scope has an f-ratio of 5, making it a fairly “fast” scope. Fast generally translates to forgiving – lots of light means your camera can be worse. Visual use with the scope was great, and I enjoyed slewing around and looking at various objects. My copy of Turn Left at Orion got a fair bit of use. I was feeling pretty great about this whole astrophotography lark, although my images were low-res and fuzzy; I’d bought the cheapest camera I could, near enough, a ZWO ASI120MC one-shot-colour camera.

Working out what questions to ask

The first realisation that I hadn’t quite “gotten” what I needed to be thinking about came when I tried to take a photo of our nearest galaxy and was reminded that my field of view was, in fact, quite narrow. All I could get was a blurry view of the core. Long focal length, small pixel sizes, and other factors conspired to give me a tiny sliver of the sky on my computer screen.
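How narrow? The field of view falls straight out of the sensor size and focal length. The sensor dimensions below are illustrative for a small planetary-style camera, roughly the class of camera I was using:

```python
import math

def fov_arcmin(sensor_mm: float, focal_length_mm: float) -> float:
    """Field of view along one sensor axis, in arcminutes."""
    return 2 * math.degrees(math.atan(sensor_mm / (2 * focal_length_mm))) * 60

# An assumed ~4.8 x 3.6 mm sensor behind a 1000 mm focal length scope:
print(round(fov_arcmin(4.8, 1000), 1))  # 16.5 arcmin wide
print(round(fov_arcmin(3.6, 1000), 1))  # 12.4 arcmin tall
# For scale: the full Moon spans ~31 arcmin, and M31 roughly 190 x 60.
```

A frame that size only ever sees a sliver of a target like Andromeda, no matter how carefully you point.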

M31 Andromeda; repaired a bit in PixInsight from my original, still kinda terrible

Not quite the classic galaxy snapshot I’d expected. And then I went and actually worked out how big Andromeda is – and it’s huge in the sky. Bigger than the moon, by quite a bit. Knowing how narrow a view of the moon I got with my scope, I considered other targets and my equipment. Clearly my camera’s tiny sensor wasn’t helping, but fixing that would be expensive. Many other targets were much dimmer, requiring long exposures – very long, given my sensor’s poor efficiency, longer than I thought I would get away with. I tried a few others, usually failing, but sometimes getting a glimmer of what could be if I could crack this…

Raw stack from an evening of longer-exposure imaging of NGC891; the noise is the sensor error. I hadn’t quite cracked image processing at this point.

It was fairly clear the camera would need an upgrade for deep space object imaging, and that particular avenue of astrophotography most appealed to me. It was also clear I had no idea what I was doing. I started reading more and more – diving into forums like Stargazer’s Lounge (in the UK) and Cloudy Nights (a broader view) and digesting threads on telescope construction, imaging sensor analysis, and processing.

My next break came from a family friend. When my father was visiting to catch up, the topic of cameras came up – my dad swears by big chunky Nikon DSLRs, and his Nikon D1x is still in active use despite knackered batteries. This friend happened to have an old D1x and spare batteries, no longer in use, and kindly donated the lot. With a cheap AC power adapter and F-mount adapter, I suddenly had a high-resolution camera I could attach to the scope, albeit with a nearly 20-year-old sensor.

M31/M110 Andromeda, wider field shot, Nikon D1x – first light, processed with DeepSkyStacker and StarTools

Suddenly, with a bigger sensor and field of view and more pixels (nearly six megapixels), I felt I could see what I was doing – and suddenly saw a whole host of problems. The D1x was by no means perfect; it demanded long exposures at high gain to get anything, and fixed pattern noise made processing immensely challenging.

M33 Triangulum, D1x, processed with DeepSkyStacker and PixInsight

I’d previously used a host of free software to “stack” the dozens or hundreds of images I took into a single frame, and then process it. Back in 2018 I bought a copy of StarTools, which allowed me to produce some far better images but left me wanting more control over the process. And so I bit the bullet and spent £200 on PixInsight, widely regarded as the absolute best image processing tool for astronomical imagery; aside from various Windows-specific stability issues (Linux is rock solid, happily) it’s lived up to the hype – including the hype about its learning curve, or rather cliff. It’s one of the few software packages for which I have purchased a reference book!
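For the curious, the core idea of stacking is simple: average many registered frames per pixel while rejecting outliers. A minimal sigma-clipped mean sketch follows – my own illustration; DeepSkyStacker and PixInsight offer far more sophisticated rejection schemes:

```python
import numpy as np

def sigma_clip_stack(frames: np.ndarray, kappa: float = 3.0) -> np.ndarray:
    """Mean-stack frames per pixel, rejecting samples beyond kappa sigma.

    frames: array of shape (n_frames, height, width), already registered.
    """
    mean = frames.mean(axis=0)
    std = frames.std(axis=0)
    # Keep only samples close to the per-pixel mean; this rejects cosmic
    # ray hits, satellite trails and similar one-frame artefacts.
    keep = np.abs(frames - mean) <= kappa * std
    counts = keep.sum(axis=0)
    return np.where(counts > 0,
                    (frames * keep).sum(axis=0) / np.maximum(counts, 1),
                    mean)

# 50 noisy frames of a flat 100-count signal; one frame has a bright
# "satellite trail" across a row.
rng = np.random.default_rng(1)
frames = 100 + rng.normal(0, 5, size=(50, 8, 8))
frames[7, 4, :] += 500
result = sigma_clip_stack(frames)
print(abs(result[4, 4] - 100) < 5)  # True: the trail is rejected
```

A plain mean of the same frames would be pulled visibly bright along that row; the clipped stack ignores it.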

Stepping on up to mono

And of course, I could never fully calibrate out the D1x’s pattern noise, nor magically improve the sensor quality. At this point I had a tantalisingly close-to-satisfying system – everything was working great. My Christmas present from family was a guidescope, in which I reused the ASI120MC camera, and really long exposures were starting to be feasible. And so I took a bit of money I’d saved up and bit the hefty bullet of buying a proper astrophotography camera for deep-space observation.

By this point I had a bit of clue, and had an idea of how to figure out what it was I needed and what I might do in the future, so this was the first purchase I made that involved a few spreadsheets and some data-based decisions. But I’m not one for half-arsing solutions, which became problematic shortly thereafter.

The scope and guidescope, preparing for an evening of imaging on a rare weekend clear night
M33 Triangulum; first light with the ASI183MM-PRO. A weird light leak artefact can be seen clearly in the middle of the image, near the top of the frame

Of course, this camera introduces more complexity. Normal cameras have a Bayer matrix, meaning each pixel is assigned a colour and interpolation fills in the other colours from adjacent pixels. For astrophotography, you don’t always want to image in red, green, or blue – you might want a narrowband view of the world, for instance – and for various reasons you want to avoid interpolation in capture and processing. So we introduce a monochrome sensor, add a filter wheel in front (electronic, for software control), and filters. The costs add up.
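To see why interpolation is baked into one-shot-colour imaging, consider how sparsely each colour is actually sampled. This toy sketch assumes the common RGGB mosaic layout:

```python
import numpy as np

def split_rggb(mosaic: np.ndarray):
    """Pull the raw R, G1, G2, B subsamples out of an RGGB Bayer mosaic."""
    r = mosaic[0::2, 0::2]   # red:   top-left of each 2x2 cell
    g1 = mosaic[0::2, 1::2]  # green: top-right
    g2 = mosaic[1::2, 0::2]  # green: bottom-left
    b = mosaic[1::2, 1::2]   # blue:  bottom-right
    return r, g1, g2, b

mosaic = np.arange(16).reshape(4, 4)
r, g1, g2, b = split_rggb(mosaic)
print(r.size / mosaic.size)               # 0.25 - red at 1/4 of pixels
print((g1.size + g2.size) / mosaic.size)  # 0.5  - green at 1/2 of pixels
```

Every missing red and blue site has to be interpolated from its neighbours; a mono sensor behind a red filter samples red at every single pixel instead.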

The current finished imaging train – Baader MPCC coma corrector, Baader VariLock T2 spacer, ZWO mini filter wheel, ASI183MM-PRO

But suddenly my images are clear enough to show the problems in the telescope. There’s optical coma in my system, not a surprise; a coma corrector is added to flatten the light reaching the filters and sensor.

I realise – by spending an evening failing to achieve focus – that backfocus is a thing, and that my coma corrector is too close to my sensor; a variable spacer gets added, and carefully measured out with some calipers.

I realise that my telescope tube is letting light in at the back – something I’d not seen before, either through luck or noise – so I get a cover laser cut to fix that.

It turns out focusing is really quite difficult to achieve accurately with my new setup and may need adjusting between filters, so I buy a cheap DC focus motor – the focuser comes to bits, I spend an evening improving the tolerances on all the contact surfaces, amending the bracket supplied with the motor, and put it back together.

To mitigate light bouncing around the focuser I dismantled the whole telescope tube, flocked the interior of the scope with anti-reflective material, and added a dew shield. Amongst all this, new DC power cables and connectors were made up, an increasing pile of USB cables and hubs to and from the scope was added, a new (commercial) software package was brought in to control it all, plus various other little expenses along the way – bottles of high-purity distilled water to clean mirrors, and so on.

Once you’ve got some better software in place for automating capture sessions, being able to automatically drive everything becomes more and more attractive. I had fortunately bought most of the bits to do this in dribs and drabs in the last year, so this was mostly a matter of setup and configuration.

It’s a slippery slope, all this. I think I’ve stopped on this iteration – the next step is a different telescope – but I’ve learned a hell of a lot in doing it. My budget expanded a fair bit from the initial purchase, but was manageable, and I have a working system that produces consistently useful results when clouds permit. I’ve got a lot to learn, still, about the best way to use it and what I can do with it; I also have a lot of learning to do when it comes to PixInsight and my image processing (thankfully not something I need clear skies for).

… okay, maybe I’d still like to get a proper flat field generator, but the “t-shirt at dusk” method works pretty well and only cost £10 for a white t-shirt
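However they’re generated, flats get used the same way in calibration: subtract the dark, then divide by the normalised flat. A minimal sketch of that arithmetic with synthetic numbers (real calibration pipelines also handle bias frames, flat-darks, and more):

```python
import numpy as np

def calibrate(light: np.ndarray, dark: np.ndarray, flat: np.ndarray) -> np.ndarray:
    """Dark-subtract a light frame, then divide out the flat field."""
    flat_norm = flat / flat.mean()  # normalise so overall flux is preserved
    return (light - dark) / flat_norm

# Synthetic example: a uniform 1000-count sky, 20% vignetting on one edge,
# and a 50-count dark signal.
truth = np.full((4, 4), 1000.0)
vignette = np.ones((4, 4))
vignette[:, 0] = 0.8
dark = np.full((4, 4), 50.0)
light = truth * vignette + dark
flat = 30000.0 * vignette  # the flat frame sees the same vignetting
result = calibrate(light, dark, flat)
print(result.std() < 1e-6)  # True: the vignetting has been divided out
```

The absolute scale shifts by the normalisation constant, but the point of flats is exactly this: the darkened edge comes out indistinguishable from the rest of the frame.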

Settling in to new digs

Now, of course, I have a set of parts that has brought my output quality up significantly. The images I’m capturing are good enough that I’m happy sharing them widely, and I even feel proud of some. I’ve even gotten some quality-of-life improvements out of all this work – my evenings are mostly spent indoors, working the scope by remote control.

Astrophotography is a wonderful collision of precision engineering, optics, astronomy, and art. And I think that’s why getting “into” it and building a system is so hard – because there’s no right answer. I started writing this post as an “all the things I wish someone had told me to do” post, but really, when I’m making decisions about things like the ideal pixel size of my camera, I’m taking an artistic decision underpinned by science, engineering, and maths – it has an impact on what pictures I can take, what they’ll look like, and so on.

M33 Triangulum, showing clearly now the various small nebulas and colourful objects around the main galaxy. The first image I was genuinely gleeful to produce and share as widely as I could.
The Heart Nebula, not quite centred up; the detail in the nebulosity, even with this wideband image, is helped tremendously by the pixel oversampling I achieve with my setup (0.5 arcseconds per pixel)

But there’s still value in knowing what to think about when you’re thinking about doing this stuff. This isn’t a right answer; it’s one answer. At some point I will undoubtedly go get a different telescope – not because it’s a better solution, but because it’s a different way to look at things and capture them.

So I will continue to blog about this – not least because sharing my thoughts on it is something I enjoy and it isn’t fair to continuously inflict it on my partner, patient as she is with my obsession – in the hopes that some other beginners might find it a useful journey to follow along.

by James Harrison at February 09, 2019 10:03 PM

January 10, 2019

Bug tracker updated

@paul wrote:

tracker.ardour.org has been upgraded from an ancient version of Mantis to the most current stable release. The website looks very different now, but all existing bug reports and user accounts are still there. We hope to find some way to limit the bug-report spam that has recently turned into a small but annoying time waster, and ultimately to enable single-sign on with the rest of ardour.org.


by @paul Paul Davis at January 10, 2019 06:13 PM

January 07, 2019

linux-audio « WordPress.com Tag Feed

Video: Aircraft battle Rosedale blaze from sky at night

The fire at Rosedale, which has burned more than 11,500ha across an 85km perimeter, started at an is

January 07, 2019 02:10 AM