March 19, 2019

digital audio hacks – Hackaday

The CD Is 40, The CD Is Dead

The Compact Disc is 40 years old, and for those of us who remember its introduction it still has that sparkle of a high-tech item even as it slides into oblivion at the hands of streaming music services.

There was a time when a rainbow motif was extremely futuristic. Bill Bertram (CC BY-SA 2.5)

If we could define a moment at which consumers moved from analogue technologies to digital ones, the announcement of the CD would be a good place to start. The public’s coolest tech to own in the 1970s was probably an analogue VCR or a CB radio, yet almost overnight at the start of the ’80s they switched to a CD player and a home computer. The CD player was the first place most consumers encountered a laser of their own, which gave it an impossibly futuristic slant, and the rainbow effect of the pits on a CD became a motif that wove its way into the design language of the era. Very few new technologies since have generated this level of excitement at their mere sight; instead, today’s consumers accept new developments as merely incremental to the tech they already own, while simultaneously not expecting them to have longevity.

The Origins Of The Format

It isn’t only audio that’s improved in quality in the digital age, a magazine-quality promotional shot of the Philips prototype from Elektuur magazine, from Elektuur 188, June 1979. (Public domain mark 1.0)

The format had its roots in contemporary consumer video technologies, on which both Sony and Philips drew in parallel research programmes for next-generation audio products. Sony had showcased a digital audio system using its video tape format in the early 1970s, while Philips had investigated an analogue system similar to LaserDisc video discs. By the middle of the decade both companies had produced prototype optical audio discs that were not compatible, but were similar enough for them to investigate a collaboration. The result was the 1979 prototype players with their 120 mm polycarbonate discs containing over an hour of 44.1 kHz 16-bit stereo audio, and books and magazines with a futuristic outlook featured the prototype players along with the inevitable rainbow shot of a CD as the Way of the Future.
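Those figures pin down the format's raw data rate, which is easy to sanity-check (assuming the nominal 74-minute Red Book playing time):

```python
# Back-of-the-envelope CD-DA numbers: 44.1 kHz, 16-bit, stereo.
sample_rate = 44_100        # samples per second, per channel
bits_per_sample = 16
channels = 2

bitrate = sample_rate * bits_per_sample * channels   # bits per second
print(bitrate)              # 1411200, i.e. ~1.4 Mbit/s

minutes = 74                # nominal Red Book playing time
total_bytes = bitrate // 8 * minutes * 60
print(total_bytes / 1e6)    # ~783 MB of raw audio samples
```

Note that this is the raw sample payload only; the channel coding and error correction described below add considerable overhead on the disc itself.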

TV shows such as the BBC’s Tomorrow’s World made extravagant claims about the new format’s durability compared to vinyl LPs, leading to an expertly marketed fever pitch of expectation. The Philips silver top-loading player might have looked good, but consumers would have to wait a few more years until 1982 before the first commercially available models hit the stores.

How Does A CD Player Work?

The CD player’s mode of operation might have seemed impossibly high-tech to the general public in 1980, but when it is laid out into its fundamentals it is refreshingly understandable, and considerably simpler than the analogue VCR so many of them would have sat next to in an ’80s living room. At the end of the 1980s it was the example used to teach all sorts of electronic control topics to electronic engineering students at my university, when we were all familiar with the format but probably most of us didn’t have the cash to own one of our own.

An annotated picture of the CD player laser assembly. Zim 256 [CC BY-SA 3.0]
The business end of a CD player has surprisingly few moving parts. It is contained in a combined laser and sensor module mounted on a sliding actuator, usually driven via a worm drive by a small motor. An infra-red laser diode shines into a prism which directs its light upwards at right angles through a lens towards the spinning CD. The lens has a focus mechanism, usually a set of coils and a magnet, allowing it to float on a magnetic field. Light reflected back from the CD passes straight through the prism to land on an array of four photodiodes. At ideal tracking and focus the reflected light should be concentrated in the centre of the array, so by monitoring the current produced by each photodiode the player can adjust the focus, disc speed, and linear position of the laser module to keep everything on track, retrieving a clean data stream at the right data rate.
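One common arrangement (the astigmatic-focus / push-pull-tracking scheme; real players differ in detail) derives the servo error signals as simple sums and differences of the four photodiode currents, which a sketch makes concrete:

```python
# Sketch of the servo error signals derived from a four-quadrant
# photodiode array (quadrants a, b, c, d). This is the classic
# astigmatic-focus / push-pull-tracking scheme; real players vary
# in the details and often add extra side-beam diodes.

def focus_error(a, b, c, d):
    # Astigmatic method: an out-of-focus spot elongates diagonally,
    # so the difference of the diagonal pairs gives a signed focus error.
    return (a + c) - (b + d)

def tracking_error(a, b, c, d):
    # Push-pull method: an off-track beam reflects more brightly on
    # one side of the array than the other.
    return (a + b) - (c + d)

# Perfectly focused, on-track: equal current on all four quadrants,
# so both error signals are zero and the servos hold position.
print(focus_error(1.0, 1.0, 1.0, 1.0))     # 0.0
print(tracking_error(1.0, 1.0, 1.0, 1.0))  # 0.0
```

The sign of each error tells the servo which way to drive the focus coil or the sled motor, and the magnitude how hard.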

The analogue signal from the diode array contains the data stream produced as the beam traverses the pits and lands on the CD, and a one-bit front-end simply digitizes it into bits. These bits are assembled into data frames that have been encoded in a form designed to maximise the recoverability of the stream: each 8-bit byte of data is encoded as a 14-bit word chosen to reduce the instantaneous bandwidth of the stream by avoiding lone logic ones and zeros. This decoding is performed using a look-up table, resulting in a 16-bit data stream with Reed-Solomon error correction applied. The error correction step is performed, and the result is fed to a DAC to produce the audio signal. There are many variations and enhancements to the system that have been created by various manufacturers over the years, but at its heart the CD player remains a surprisingly simple device.
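In code, the decode side of that eight-to-fourteen (EFM) scheme is just a table inversion. A toy sketch follows; the two entries shown are the commonly published EFM codewords for data bytes 0x00 and 0x01, but the full Red Book table has 256 entries, so treat this as illustrative rather than a working decoder:

```python
# Toy sketch of EFM (eight-to-fourteen modulation) decoding via a
# look-up table. Only two of the 256 entries are shown; the values
# for 0x00 and 0x01 are the commonly published codewords.

EFM_TABLE = {
    0x00: 0b01001000100000,   # 14-bit channel word for data byte 0x00
    0x01: 0b10000100000000,   # 14-bit channel word for data byte 0x01
}

# Decoding simply inverts the table: channel word -> data byte.
EFM_DECODE = {word: byte for byte, word in EFM_TABLE.items()}

def decode_word(channel_word):
    return EFM_DECODE[channel_word]

print(hex(decode_word(0b10000100000000)))  # 0x1
```

The codewords are chosen so that ones are always separated by runs of zeros, which is what keeps the instantaneous bandwidth of the stream down as the article describes.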

Whatever Happened To The CD?

The Commodore CDTV. Patric Klöter (CC BY-SA 3.0) via Wikimedia Commons.

The heyday of the CD probably came in the 1990s, when players had moved out of the realm of the wealthy audiophile and into the cheap consumer electronics stores. A portable CD player could be had for a very affordable sum, and they began to oust the Walkman-style cassette player as the choice for music on the go. Meanwhile the CD-ROM followed a similar path to affordability, and no mid-1990s beige-box PC was complete without a CD-ROM drive and a multimedia encyclopedia. There were other CD-based appliances too: multimedia platforms such as Philips’ CD-i and Commodore’s CDTV (an Amiga in a black box), Video CDs, and of course a crop of CD-based game consoles. The CD was largely responsible for the huge success of Sony’s first-generation PlayStation: while cartridge-based consoles had required developers to pay up front for a vast inventory of cartridges that might have become landfill if the product flopped, PlayStation developers merely had to pay for the CDs produced.

While the gaming public were going crazy about their PlayStations and listening to drum-n-bass on their Discmans, the writing was on the wall for the CD format. In 1998 the MPMan MP3 player made its debut, quickly followed by the first Diamond Rio, then a host of other players. The accompanying growth of file-sharing services such as Napster prompted a self-destructive legal meltdown from record companies and bands who turned on their own customers and fans in an effort to protect their CD sales, instantly making an MP3 file from the internet a far cooler choice than a CD from a corporate legal bully. The arrival of Apple’s iPod brought both an easy legal online music store and the MP3 player as a desirable lifestyle accessory, and the CD began its decline. It’s ironic in 2019 that the standalone MP3 player has experienced a steeper nosedive in the face of streaming services than the CD did, while the vinyl LP somehow always maintained a diehard following and has managed a resurgence (PDF) as it is rediscovered by a new generation.

by Jenny List at March 19, 2019 02:00 PM


walking as philosophy and artistic practice

Tina Madsen has another lecture coming up this Wednesday in Aalborg, it’s about walking as philosophy and artistic practice (in Danish):

by herrsteiner ( at March 19, 2019 01:56 PM

March 18, 2019

Vee One Suite 0.9.6 - The Pre-LAC2019 Release Frenzy continues...

Hi all,

The Vee One Suite of old-school software instruments is here again, released for the now traditional pre-LAC2019 release frenzy.

This makes it the second batch:

All still provided in dual form, as usual:

  • a pure stand-alone JACK client with JACK-session, NSM (Non Session Management) and both JACK MIDI and ALSA MIDI input support;
  • a LV2 instrument plug-in.

Changes since the previous release are simple and lean:

  • A gentler shutdown for the JACK stand-alone client.

The Vee One Suite are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Welcome to the party!


synthv1 - an old-school polyphonic synthesizer

synthv1 0.9.6 (pre-lac2019) is released!

synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.




git repos:


samplv1 - an old-school polyphonic sampler

samplv1 0.9.6 (pre-lac2019) is released!

samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.




git repos:


drumkv1 - an old-school drum-kit sampler

drumkv1 0.9.6 (pre-lac2019) is released!

drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.




git repos:


padthv1 - an old-school polyphonic additive synthesizer

padthv1 0.9.6 (pre-lac2019) is released!

padthv1 is an old-school polyphonic additive synthesizer with stereo fx.

padthv1 is based on the PADsynth algorithm by Paul Nasca, as a special variant of additive synthesis.




git repos:


Donate to

Enjoy && Keep the fun.

by rncbc at March 18, 2019 07:00 PM

March 13, 2019

News – Ubuntu Studio

Ubuntu Studio to Remain Officially Recognized Ubuntu Flavor

During a meeting of the Ubuntu Developer Membership Board on March 11, 2019, two Ubuntu Studio developers, Council Chair Erich Eickmeyer and Council Member Ross Gammon, successfully applied for and received upload rights to Ubuntu Studio’s core packages, fulfilling the requirements prescribed in We would like to thank the community for staying with us […]

by eeickmeyer at March 13, 2019 01:18 AM

March 11, 2019

The QStuff* Pre-LAC2019 Release Frenzy

Hello there!

The Qstuff* Pre-LAC2019 release frenzy is now starting up... enjoy!

Included in this batch:

All the boring details follow suit...


QjackCtl - JACK Audio Connection Kit Qt GUI Interface

QjackCtl 0.5.6 (pre-lac2019) is released!

QjackCtl is a(n ageing yet modern, not so simple anymore) Qt application to control the JACK sound server, for the Linux Audio infrastructure.


Project page:


Git repos:


  • Refactored all singleton/unique application instance setup logic away from X11/Xcb hackery.
  • At last, JACK freewheel mode is now being detected, so as to postpone any active patchbay scans as much as possible.
  • Removed old pre-FFADO 'freebob' driver support.
  • HiDPI display screen support (Qt >= 5.6).
  • Graph port temporary highlighting while hovering, if and only if connecting ports of same type and complementary modes.
  • Bumped copyright headers into the New Year (2019).

Donate to


Qsynth - A fluidsynth Qt GUI Interface

Qsynth 0.5.5 (pre-lac2019) is released!

Qsynth is a FluidSynth GUI front-end application written in C++ around the Qt framework using Qt Designer.


Project page:


Git repos:


  • Refactored all singleton/unique application instance setup logic away from X11/Xcb hackery.
  • HiDPI display screen support (Qt >= 5.6).
  • Bumped copyright headers into the New Year (2019).

Donate to


Qsampler - A LinuxSampler Qt GUI Interface

Qsampler 0.5.4 (pre-lac2019) is released!

Qsampler is a LinuxSampler GUI front-end application written in C++ around the Qt framework using Qt Designer.


Project page:


Git repos:


  • Refactored all singleton/unique application instance setup logic away from X11/Xcb hackery.
  • HiDPI display screen support (Qt >= 5.6).
  • Bumped copyright headers into the New Year (2019).

Donate to


QXGEdit - A Qt XG Editor

QXGEdit 0.5.3 (pre-lac2019) is released!

QXGEdit is a live XG instrument editor, specialized on editing MIDI System Exclusive files (.syx) for the Yamaha DB50XG and thus probably a baseline for many other XG devices.


Project page:


Git repos:


  • HiDPI display screen support (Qt >= 5.6).
  • Old deprecated Qt4 build support is no more.
  • AppStream metadata updated to be the most compliant with latest specification and recommendation.

Donate to


QmidiNet - A MIDI Network Gateway via UDP/IP Multicast

QmidiNet 0.5.3 (pre-lac2019) is released!

QmidiNet is a MIDI network gateway application that sends and receives MIDI data (ALSA-MIDI and JACK-MIDI) over the network, using UDP/IP multicast. Inspired by multimidicast and designed to be compatible with ipMIDI for Windows.


Project page:


Git repos:


  • HiDPI display screen support (Qt >= 5.6).
  • Old deprecated Qt4 build support is no more.
  • AppData/AppStream metadata is now settled under an all permissive license (FSFAP); also updated to be the most compliant with latest specification and recommendation.
  • Fixed for some g++ >= 8.1.1 warnings and quietness.

Donate to



All of the Qstuff* are free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.


Enjoy && Keep having fun!

by rncbc at March 11, 2019 07:00 PM

March 07, 2019

News – Ubuntu Studio

Statement to the Community

The following is a statement about the recent activity regarding Ubuntu Studio’s status as an official flavor of Ubuntu from council chair, Erich Eickmeyer: Hello Ubuntu Studio Community, As you have probably heard by now, Ubuntu Studio’s status as an official flavor of Ubuntu was recently called into question. You can read more here: […]

by eeickmeyer at March 07, 2019 07:46 PM

March 06, 2019

Linux – CDM Create Digital Music

How to make a multitrack recording in VCV Rack modular, free

In the original modular synth era, your only way to capture ideas was to record to tape. But that same approach can be liberating even in the digital age – and it’s a perfect match for the open VCV Rack software modular platform.

Competing modular environments like Reaktor, Softube Modular, and Cherry Audio Voltage Modular all run well as plug-ins. That functionality is coming soon to a VCV Rack update, too – see my recent write-up on that. In the meanwhile, VCV Rack is already capable of routing audio into a DAW or multitrack recorder – via the existing (though soon-to-be-deprecated) VST Bridge, or via inter-app routing schemes on each OS, including JACK.

Those are all good solutions, so why would you bother with a module inside the rack?

Well, for one, there’s workflow. There’s something nice about being able to just keep this record module handy and grab a weird sound or nice groove at will, without having to shift to another tool.

Two, the big ongoing disadvantage of software modular is that it’s still pretty CPU intensive – sometimes unpredictably so. Running Rack standalone means you don’t have to worry about overhead from the host, or its audio driver settings, or anything like that.

A free recording solution inside VCV Rack

What you’ll need to make this work is the free NYSTHI modules for VCV Rack, available via Rack’s plug-in manager. They’re free, though – get ready, there’s a hell of a lot of them.

Big thanks to chaircrusher for this tip and some other ones that informed this article – do go check his music.

Type “recorder” into the search box for modules, and you’ll see different options from NYSTHI – current at least as of this writing.

2 Channel MasterRecorder is a simple stereo recorder.
2 Channel MasterRecorder 2 adds various features: monitoring outs, autosave, a compressor, and “stereo massaging.”
Multitrack Recorder is a multitrack recorder with 4- or 8-channel modes.

The multitrack is the one I use the most. It allows you to create stems you can then mix in another host, or turn into samples (or, say, load onto a drum machine or the like), making this a great sound design tool and sound starter.

This is creatively liberating for the same reason it’s actually fun to have a multitrack tape recorder in the same studio as a modular, speaking of vintage gear. You can muck about with knobs, find something magical, and record it – and then not worry about going on to do something else later.

The AS mixer, routed into NYSTHI’s multitrack recorder.

Set up your mix. The free included Fundamental modules in Rack will cover the basics, but I would also go download Alfredo Santamaria’s excellent selection, the AS modules, also in the Plugin Manager, and also free. Alfredo has created friendly, easy-to-use 2-, 4-, and 8-channel mixers that pair perfectly with the NYSTHI recorders.

Add the mixer, route your various parts, set level (maybe with some temporary panning), and route the output of the mixer to the Audio device for monitoring. Then use the ‘O’ row to get a post-fader output with the level.

(Alternatively, if you need extra features like sends, there’s the mscHack mixer, though it’s more complex and less attractive.)

Prep that signal. You might also consider a DC Offset and Compressor between your raw sources and the recording. (Thanks to Jim Aikin for that tip.)

Configure the recorder. Right-click on the recorder for an option to set 24-bit audio if you want more headroom, or to pre-select a destination. Set 4- or 8-track mode with the switch. Set CHOOSE FILE if you want to manually select where to record.

There are trigger ins and outs, too, so apart from just pressing the START and STOP buttons, you can either trigger a sequencer or clock directly from the recorder, or vice versa.

Record away! And go to town… when you’re done, you’ll get a stereo WAV file, or a 4- or 8-track WAV file. Yes, that’s one file with all the tracks. So about that…

Splitting up the multitrack file

This module produces a single, multichannel WAV file. Some software will know what to do with that. Reaper, for instance, has excellent multichannel support throughout, so you can just drag and drop into it. Adobe’s Audition CS also opens these files, but it can’t quickly export all the stems.

Software like Ableton Live, meanwhile, will just throw up an error if you try to open the file. (Bad Ableton! No!)

It’s useful to have individual stems anyway. ffmpeg is an insanely powerful cross-platform tool capable of doing all kinds of things with media. It’s completely free and open source, it runs on every platform, and it’s fast and deep. (It converts! It streams! It records!)

Installing is easier than it used to be, thanks to a cleaned-up site and pre-built binaries for Mac and Windows (plus of course the usual easy Linux installs):

Unfortunately, it’s so deep and powerful, it can also be confusing to figure out how to do something. Case in point – this audio channel manipulation wiki page.

In this case, you can use the map channel “filter” to make this happen. So for eight channels, I do this:

ffmpeg -i input.wav -map_channel 0.0.0 0.wav -map_channel 0.0.1 1.wav -map_channel 0.0.2 2.wav -map_channel 0.0.3 3.wav -map_channel 0.0.4 4.wav -map_channel 0.0.5 5.wav -map_channel 0.0.6 6.wav -map_channel 0.0.7 7.wav

But because this is a command line tool, you could create some powerful automated workflows for your modular outputs now that you know this technique.
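As a starting point for that kind of automation, here is a minimal Python wrapper (assuming ffmpeg is on your PATH; the file names are just placeholders) that rebuilds the command above for any channel count:

```python
# Build (and optionally run) the per-channel ffmpeg split shown
# above for an arbitrary channel count. Assumes ffmpeg is on PATH.
import subprocess

def split_channels(input_wav, n_channels, run=False):
    cmd = ["ffmpeg", "-i", input_wav]
    for ch in range(n_channels):
        # -map_channel 0.0.N pulls channel N of stream 0 of input 0
        # into its own mono output file.
        cmd += ["-map_channel", f"0.0.{ch}", f"{ch}.wav"]
    if run:
        subprocess.run(cmd, check=True)
    return cmd

# Print the command for a 4-track NYSTHI recording without running it.
print(" ".join(split_channels("input.wav", 4)))
```

From there you could watch a recordings folder, name the stems after the date, or hand them straight to a sampler import script.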

Sound Devices, the folks who make excellent multichannel recorders, also have a free Mac and Windows tool called Wave Agent which handles this task if you want a GUI instead of the command line.

That’s worth keeping around, too, since it can also mix and monitor your output. (No Linux version, though.)

Record away!

Bonus tutorial here – the other thing apart from recording you’ll obviously want with VCV Rack is some hands-on control. Here’s a nice tutorial this week on working with BeatStep Pro from Arturia (also a favorite in the hardware modular world):

I really like this way of working, in that it lets you focus on the modular environment instead of juggling tools. I actually hope we’ll see a Fundamental module for the task in the future. Rack’s modular ecosystem changes fast, so if you find other useful recorders, let us know.


Step one: How to start using VCV Rack, the free modular software

How to make the free VCV Rack modular work with Ableton Link

The post How to make a multitrack recording in VCV Rack modular, free appeared first on CDM Create Digital Music.

by Peter Kirn at March 06, 2019 05:00 PM

March 02, 2019

KXStudio News

DPF-Plugins v1.2 released

Hello everyone, a new release of DPF-Plugins is here.
This is mostly a bugfix release, with a few little new things.
This is what changed compared to the last release:

  • Fix glBars and ProM plugins not being built and installed
  • Kars: Added release and volume parameters
  • Kars: Remove its useless UI
  • Nekobi: Add enum values for waveform parameter
  • Remove modguis, they are maintained in a separate repo

DPF changes

DPF (the small framework behind these plugins) saw some important changes.
They are not all relevant to DPF-Plugins directly, but worth mentioning:

  • Fix samplerate property in lv2 UIs
  • Fix (implement) parent window for about dialogs for MacOS and Windows
  • Add get/set scaling to Window
  • Add option to automatically scale plugin UIs
  • Allow plugin UIs to be user-resizable, test with info and meters example
  • Implement basic effGetParameterProperties in VST2 plugins (boolean, integer and log flags)
  • Implement midi out
  • Implement enumerator style of parameters
  • Implement LV2-trigger-type parameters
  • Implement Shift-click to reset sliders
  • Report supported LV2 options in generated ttl
  • Render VST2 parameter-text integer, boolean and enum parameters
  • Rework calculation of VST2 transport/time info
  • Set _NET_WM_WINDOW_TYPE for our X11 windows

Other things worth noting is that 2 new exciting things are currently under development: Cairo graphics support and AU plugin wrapper.
Eventually these will be part of core DPF, but for now they are being discussed and worked on with other developers.


The source code plus Linux, macOS and Windows binaries can be downloaded at
The plugins are released as LADSPA, DSSI, LV2, VST2 and JACK standalone.

by falkTX at March 02, 2019 09:23 PM

Carla 2.0 RC4 is here!

Hello again everyone!
This is a quick fix for the Carla Plugin Host (soon-to-be) stable series.


  • carla-vst: Add Ardour workaround for its 'Analysis' window
  • carla-vst: Fix typo leading to buffer size of 1 during plugin activation
  • Fix for some stupid plugins messing up with global signals (restore original signals after creating plugin)
  • Fix dry/wet for VST plugins (by creating extra buffer for inline processing)
  • Fix crash in RtAudio when ALSA soundcard is listed but not available
  • Fix crash on JACK buffer size changes in patchbay mode
  • Fix crash when directly loading vst shell plugins (carla-single or drag&drop dll file)
  • Fix loading multiple exported LV2 plugins (always copy full carla-plugin binary when exporting)
  • Fix missing transport information when Carla is not jack transport master
  • Fix opening a few VST2 UIs (we give up trying to follow VST spec, always assume UI opens nicely)
  • Fix plugin bridges not working under Fedora (RT threads were able to be setup, but not started)
  • Automatically terminate wine bridges if main carla dies
  • Calculate VST2 ppqPos in a more reliable way
  • Do not set up RtAudio in "hog device" mode
  • Do not build external plugins in debug or strict build
  • Handle more sources of VST2 automation via audioMasterAutomate
  • Handle worst-case scenario of carla-plugin buffer size being too low

I am not confident enough to call it the stable version just yet, as some of these release changes actually introduced new code.
But the target date for the stable release is now set - middle of April.
There are no release-blocker bugs for Carla v2.0 anymore, so it is just a matter of time now.


To download Carla binaries or source code, jump on over to the KXStudio downloads section.
If you're using the KXStudio repositories, you can simply install "carla-git" (plus "carla-lv2" and "carla-vst" if you're so inclined).
Bug reports and feature requests are welcome! Jump on over to the Carla's Github project page for those.

by falkTX at March 02, 2019 05:25 PM

February 28, 2019

Linux – CDM Create Digital Music

Azure Kinect promises new motion, sensing for art

Gamers’ interest may come and go, but artists are always exploring the potential of computer vision for expression. Microsoft this month has resurrected the Kinect, albeit in pricey, limited form. Let’s fit it to the family tree.

Time flies: musicians and electronic artists have now had access to readily available computer vision since the turn of this century. That initially looked like webcams, paired with libraries like the free OpenCV (still a viable option), and later repurposed gaming devices from Sony and Microsoft platforms.

And then came Kinect. Kinect was a darling of live visual projects and art installations, because of its relatively sophisticated skeletal tracking and various artist-friendly developer tools.

History time

A full ten years ago, I was writing about the Microsoft project and interactions, in its first iteration as the pre-release Project Natal. Xbox 360 support followed in 2010, Windows support in 2012 – while digital artists quickly hacked in Mac (and rudimentary Linux) support. Artists in music and digital media quickly followed.

For those of you just joining us, Kinect shines infrared light at a scene, and takes an infrared image (so it can work irrespective of other lighting) which it converts into a 3D depth map of the scene. From that depth image, Microsoft’s software can also track the skeleton image of one or two people, which lets you respond to the movement of bodies. Microsoft and partner PrimeSense weren’t the only ones to try this scheme, but they were the ones to ship the most units and attract the most developers.

We’re now on the third major revision of the camera hardware.

2010: Original Kinect for Xbox 360. The original. Proprietary connector with breakout to USB and power. These devices are far more common, as they were cheaper and shipped more widely. Despite the name, they do work with open drivers for the respective desktop systems.

2012: Kinect for Windows. Looks and works almost identically to Kinect for 360, with some minor differences (near mode).

Raw use of depth maps and the like for the above yielded countless music videos, and the skeletal tracking even more numerous and typically awkward “wave your hands around to play the music” examples.

Here’s me with a quick demo for the TED organization, preceded by some discussion of why I think gesture matter. It’s… slightly embarrassing, only in that it was produced on an extremely tight schedule, and I think the creative exploration of what I was saying about gesture just wasn’t ready yet. (Not only had I not quite caught up, but camera tech like what Microsoft is shipping this year is far better suited to the task than the original Kinect camera was.) But the points I’m making here have some fresh meaning for me now.

2013: Kinect for Xbox One. Here’s where things got more interesting – because of a major hardware upgrade, these cameras are far more effective at tracking and yield greater performance.

  • Active IR tracking for use in the dark*
  • Wider field of vision
  • 6 skeletons (people) instead of two
  • More tracking features, with additional joints and creepier features like heart rate and facial expression
  • 1080p color camera
  • Faster performance/throughput (which was key to more expressive results)

Clarification: The Kinect One uses a Time of Flight calculation in place of the Structured Light (“Light Coding”) technique of the original Kinects – a fancy way of saying that it gets its depth by measuring how long it takes for emitted light to return to the sensor, instead of projecting a bunch of dots on the subject and calculating the position. That isn’t terrifically important to the end user or developer, but it does enable the active IR technique mentioned above, and Microsoft credits it for the enhanced sensing performance in more complex and larger scenes. (You can read up on that on their 2013 blog.) This should also be true of the Azure Kinect, below.
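The time-of-flight arithmetic itself is trivial: depth is half the round trip at the speed of light. A quick sketch shows why these sensors work the way they do:

```python
# Time-of-flight depth in a nutshell: light travels to the subject
# and back, so depth is half the round trip at the speed of light.
SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def tof_depth_m(round_trip_seconds):
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A subject 1.5 m away returns light after only ~10 nanoseconds,
# which is why practical ToF cameras measure the phase shift of
# modulated light rather than timing individual pulses directly.
print(tof_depth_m(10e-9))  # ~1.5 m
```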

Kinect One, the second camera (confusing!), definitely allowed more expressive applications. One high point for me was the simple but utterly effective work of Chris Milk and team, “The Treachery of Sanctuary.”

And then it ended. Microsoft unbundled the camera from Xbox One, meaning developers couldn’t count on gamers owning the hardware, and quietly discontinued the last camera at the end of October 2017.

Everything old is new again

I have mixed feelings – as I’m sure you do – about these cameras, even with the later results on Kinect One. For gaming, the devices were abandoned – by gamers, by developers, and by Microsoft as the company ditched the Xbox strategy. (Parallel work at Sony didn’t fare much better.)

It’s hard to keep up with consumer expectations. By implying “computer vision,” any such technology has to compete with your own brain – and your own brain is really, really, really good. “Sensors” and “computation” are all merged in organic harmony, allowing you to rapidly detect the tiniest nuance. You can read a poker player’s tell in an instant, while Kinect will lose the ability to recognize that your leg is attached to your body. Microsoft launched Project Natal talking about seeing a ball and kicking a ball, but… you can do that with a real ball, and you really can’t do that with a camera, so they quite literally got off on the wrong foot.

It’s not just gaming, either. On the art side, the very potential of these cameras to make the same demos over and over again – yet another magic mirror – might well be their downfall.

So why am I even bothering to write this?

Simple: the existing, state-of-the-art Kinect One camera is now available on the used market for well under a hundred bucks – for less than the cost of a mid-range computer mouse. Microsoft’s gaming business whims are your budget buy. The computers to process that data are faster and cheaper. And the software is more mature.

So while digital art has long been driven by novelty … who cares? Actual music and art making requires practice and maturity of both tools and artist. It takes time. So oddly while creative specialists were ahead of the curve on these sorts of devices, the same communities might well innovate in the lagging cycle of the same technology.

And oh yeah – the next generation looks very powerful.

Kinect: The Next Generation

Let’s get the bad news out of the way first: the new Kinect is both more expensive ($400) and less available (launching only in the US and China… in June). Ugh. And that continues Microsoft’s trend here of starting with general purpose hardware for mass audiences and working up to … wait, working up to increasingly expensive hardware for smaller and smaller groups of developers.

That is definitely backwards from how this is normally meant to work.

But the good news here is unexpected. Kinect was lost, and now is found.

The safe bet was that Microsoft would just abandon Kinect after the gaming failure. But to the company’s credit, they’ve pressed on, with some clear interest in letting developers, researchers, and artists decide what this thing is really for. Smart move: those folks often come up with inspiration that doesn’t fit the demands of the gaming industry.

So now Kinect is back, dubbed Azure Kinect – Microsoft is also hell-bent on turning Azure “cloud services” into a catch-all solution for all things, everywhere.

And the hardware looks … well, kind of amazing. It might be described as a first post-smartphone device. Say what? Well, now that smartphones have largely finalized their sensing capabilities, they’ve oddly left the arena open to other tech defining new areas.

For a really good write-up, you’ll want to read this great run-down:

All you need to know on Azure Kinect
[The Ghost Howls, a VR/tech blog, see also a detailed run-down of HoloLens 2 which also just came out]

Here are the highlights, though. Azure Kinect is the child of Kinect and HoloLens. It’s a VR-era sensor, but standalone – which is perfect for performance and art.

Fundamentally, the formula is the same – depth camera, conventional RGB camera, some microphones, additional sensors. But now you get more sensing capabilities and substantially beefed-up image processing.

  • 1 MP depth camera (not 640×480) – straight off of HoloLens 2, Microsoft’s augmented reality platform
  • Two modes: wide and narrow field of view
  • 4K RGB camera (with standard USB camera operation)
  • 7-microphone array
  • Gyroscope + accelerometer

And it connects either by USB-C (which can also be used for power) or as a standalone camera with a “cloud connection.” (You know, I’m pretty sure that means it has a wifi radio, but oddly all the tech reporters who talked to Microsoft bought the “cloud” buzzword and no one says so outright. I’ll double-check.)

Also, Microsoft now supports both Windows and Linux (Ubuntu 18.04, with OpenGL 4.4).

Downers: 30 fps operation, limited range.

Something something, hospitals or assembly lines, Azure services, something that looks like an IBM / Cisco ad:

That in itself is interesting. Artists using the same thing as gamers sort of … didn’t work well. But artists using the same tool as an assembly line is something new.

And here’s the best part for live performance and interaction design – you can freely combine as many cameras as you want, and sync them without any weird tricks.

All in all, this looks like it might be the best networked camera, full stop, let alone best for tracking, depth sensing, and other applications. And Microsoft are planning special SDKs for the sensor, body tracking, vision, and speech.

Also, the fact that it doesn’t plug into an Xbox is, to me, a feature, not a bug – it means Microsoft are finally focusing on the more innovative, experimental uses of these cameras.

So don’t write off Kinect now. In fact, with Kinect One so cheap, it might be worth picking one up and trying Microsoft’s own SDK just for practice.

Azure Kinect DK preorder / product page

For more on this, if you’re in Berlin, Stanislav Glazov and Valentin Tszin will show how they use computer vision (via Kinect) as a link between choreography and music. They also recently explored these fields with techno legend Dasha Rush at CTM Festival.

Workshop: Singularity + Performance w/ Stanislav Glazov, Valentin Tszin [Facebook]

The post Azure Kinect promises new motion, sensing for art appeared first on CDM Create Digital Music.

by Peter Kirn at February 28, 2019 11:01 PM

February 27, 2019

GStreamer News

GStreamer 1.15.2 unstable development release

The GStreamer team is pleased to announce the second development release in the unstable 1.15 release series.

The unstable 1.15 release series adds new features on top of the current stable 1.14 series and is part of the API- and ABI-stable 1.x release series of the GStreamer multimedia framework.

The unstable 1.15 release series is for testing and development purposes in the lead-up to the stable 1.16 series, which is scheduled for release in a few weeks’ time. Any newly-added API can still change until that point, although it is rare for that to happen.

Check out the draft release notes highlighting all the new features, bugfixes, performance optimizations and other important changes.

Packagers: please note that quite a few plugins and libraries have moved between modules since 1.14, so please take extra care and make sure inter-module version dependencies are such that users can only upgrade all modules in one go, instead of seeing a mix of 1.15 and 1.14 on their system.

Binaries for Android, iOS, Mac OS X and Windows will be provided shortly.

Release tarballs can be downloaded directly here:

February 27, 2019 10:00 AM

February 23, 2019

The Ardour Youtube Channel is here

@paul wrote: The Ardour project is pleased to announce a new YouTube channel focused on videos about Ardour.

We decided to support Tobiasz “unfa” Karon in making some new videos, based on some of the work he has done in other contexts (both online and at meetings). unfa’s first video won’t be particularly useful for new or existing users, but if you’re looking for a “promotional video” that describes what Ardour is and what it can do, this may be the right thing to point people at.

In the near-term future, unfa will be back with some tutorial videos, so please consider subscribing to the channel.

Thanks to unfa for this opening video, and we look forward to more. If people have particular areas that they’d like to see covered, mention it in the comments here (or on the YT channel).

Posts: 21

Participants: 10

Read full topic

by @paul Paul Davis at February 23, 2019 06:53 PM

February 22, 2019

GStreamer News

GStreamer Rust bindings 0.13.0 release

A new version of the GStreamer Rust bindings, 0.13.0, was released.

This new release is the first to include direct support for implementing GStreamer elements and other types in Rust. Previously this was provided via a different crate.
In addition to this, the new release features many API improvements, cleanups, newly added bindings and bugfixes.

As usual this release follows the latest gtk-rs release, and a new version of GStreamer plugins written in Rust was also released.

Details can be found in the release notes for gstreamer-rs and gstreamer-rs-sys.

The code and documentation for the bindings are available on GitLab

as well as on

If you find any bugs, missing features or other issues please report them in GitLab.

February 22, 2019 03:00 PM

February 17, 2019


EU about to destroy Internet

The EU is about to change copyright law with regard to the internet. I wonder whether that will just kill it for consumers, or also for academia and research on a larger scale.

by herrsteiner ( at February 17, 2019 08:48 PM

February 15, 2019

digital audio hacks – Hackaday

Python Script Sends Each Speaker Its Own Sound File

When it comes to audio, the number of speakers you want is usually governed by the number of tracks or channels your signal has: one for mono, two for stereo, four for quadraphonic, five or more for surround sound, and so on. But all of those speakers are essentially playing different tracks from a “single” audio signal. What if you wanted a single audio device to play eight different songs simultaneously, with each song being piped to its own speaker? That’s the job [Devon Bray] was tasked with by interdisciplinary artist [Sara Dittrich] for her “Giant Talking Ear” installation project. He built a device to play multiple sound files on multiple output devices using off-the-shelf hardware and software.

But a hack like this could be useful in many applications beyond art installations. It could be used in an escape room, where you may want several audio streams to start in sync; as part of a DJ console, sending one stream to the speakers and another to the headphones; or in a game where you have to run around a room full of speakers in the right sequence and at the right speed to hear a full sentence of clues.

His blog post lists links for the various pieces of hardware required, although all of it is pretty generic, and the GitHub repository hosts the code. At the heart of the project is the sounddevice library for Python. The documentation for the library is sparse, so [Bray]’s instructions are handy. His code lets you “take a directory with .wav files named in numeric order and play them over USB sound devices attached to the host computer over and over forever, looping all files once the longest one finishes”. As a bonus, he shows how to load and play sound files automatically from an attached USB drive. This lets you swap out your playlist on the Raspberry Pi without having to use a keyboard/mouse, SSH or RDP.
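The core of that looping-playback idea is easy to sketch. This is not [Bray]’s actual code – the helper names and device indices below are made up for illustration – but it shows the file-to-device pairing; each pair would then be handed to its own `sounddevice.OutputStream` on a separate thread:

```python
import re
from pathlib import Path

def numeric_order(paths):
    """Sort file paths by the leading number in the filename: 1.wav, 2.wav, ... 10.wav."""
    def key(p):
        m = re.match(r"(\d+)", Path(p).stem)
        return int(m.group(1)) if m else float("inf")
    return sorted(paths, key=key)

def assign_files_to_devices(wav_paths, device_indices):
    """Pair the nth file (in numeric order) with the nth USB sound device."""
    return list(zip(numeric_order(wav_paths), device_indices))

# Each (file, device) pair would then be played in its own thread, e.g. via
# sounddevice.OutputStream(device=idx), restarting every file once the
# longest one finishes so that all speakers stay looped together.
```

Note that a plain alphabetical sort would put 10.wav before 2.wav, hence the numeric key.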

Check the video after the break for a quick roundup of the project.


by Anool Mahidharia at February 15, 2019 06:00 AM

February 09, 2019

Talk Unafraid

How to fail at astrophotography

This is part 1 of what I hope will become a series of posts. I’m going to focus in this post on my getting started and some mistakes I made on the way.

So, back in 2017 I got a telescope. I fancied trying to do some astrophotography – I saw people getting great results without a lot of kit, and realised I could dip my toe in too. I live between a few towns, so get “class 4” skies – meaning that I could happily image a great many targets from home. I’ve spent plenty of time out at night just looking up, especially on a moonless night; the milky way is a clear band, and plenty of eyeball-visible targets look splendid.

So I did some research, and concluded that:

  • Astrophotography has the potential to be done cheaply but some bits do demand some investment
  • Wide-field is cheapest to do, since a telescope isn’t needed; planetary is far cheaper to kit out for than deep-sky (depending on the planet), but getting really good planetary images is hard
  • Good telescopes are seriously expensive, but pretty good telescopes are accessibly cheap, and produce pretty good results
  • Newtonians (Dobsonians, for visual) give the absolute best aperture-to-cash return
  • Having a good mount that can track accurately is absolutely key
  • You can spend a hell of a lot of cash on this hobby if you’re not careful, and spending too little is the fastest path there…

So, having done my research, the then-quite-new Skywatcher EQ6-R Pro was the obvious winner for the mount. At about £1,800 it isn’t cheap, but it’s very affordable compared to some other amateur-targeted mounts (the Paramount ME will set you back £13,000, for instance) and provides comparable performance for a reasonable amount of payload – about 15kg without breaking a sweat. Mounts are all about mechanical precision and accuracy; drive electronics factor into it, of course, but much of the error in a mount comes from the gears. More expensive mounts use encoders and clever drive mechanisms to mitigate this, but the EQ6-R Pro settles for having a fairly high quality belt drive system and leaves it at that.

Already, as I write this, the more scientific reader will be asking “hang on, how are you measuring that, or comparing like-for-like?”. This is a common problem in the amateur astrophotography scene with various bits of equipment. Measurement of precision mechanics and optics often requires expensive equipment in and of itself. Take a telescope’s mirror – to measure the flatness of the surface and accuracy of the curvature requires an interferometer. Even the cheap ones cooked up by the make-your-own-telescope communities take a lot of expensive parts and require a lot of optics know-how. Measuring a mount’s movement accurately requires really accurate encoders or other ways to measure movement very precisely – again, expensive bits, etc. The net result of this is that it’s very rare that individual amateurs do quantitative evaluation of equipment – usually, you have to compare spec sheets and call it a day. The rest of the analysis comes down to forums and hearsay.

As an engineer tinkering with fibre optics on a regular basis, spec sheets are great when everyone agrees on the test methodology for the number. There’s a defined standard for how you measure insertion loss of a bare fibre, another for the mode field diameter, and so on. A whole host of different measurements in astrophotography products are done in a very ad-hoc fashion, vary between products and vendors, and so on. Sometimes the best analysis and comparison is being done by enthusiasts that get kit sent to them by vendors to compare! And so, most purchasing decisions involve an awful lot of lurking on forums.

The other problem is knowing what to look for in your comparison. Sites that sell telescopes and other bits are very good at glossing over the full complexity of an imaging system, and assume you sort of know what you’re doing. Does pixel size matter? How about quantum efficiency? Resolution? The answer is always “maybe, depends what you’re doing…”.

Jupiter; the great red spot is just about visible. If you really squint you can see a few pixels that are, I swear, moons.

This photo is one of the first I took. I had bought, with the mount, a Skywatcher 200PDS Newtonian reflector – a 200mm or 8″ aperture telescope with a dual-speed focuser and a focal length of 1000mm. The scope has an f-ratio of 5, making it a fairly “fast” scope. Fast generally translates to forgiving – lots of light means your camera can be worse. Visual use with the scope was great, and I enjoyed slewing around and looking at various objects. My copy of Turn Left at Orion got a fair bit of use. I was feeling pretty great about this whole astrophotography lark, although my images were low-res and fuzzy; I’d bought the cheapest camera I could, near enough, a ZWO ASI120MC one-shot-colour camera.

Working out what questions to ask

The first realisation that I hadn’t quite “gotten” what I needed to be thinking about came when I tried to take a photo of our nearest galaxy and was reminded that my field of view was, in fact, quite narrow. All I could get was a blurry view of the core. Long focal length, small pixel sizes, and other factors conspired to give me a tiny sliver of the sky on my computer screen.
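The arithmetic behind that narrow view is simple enough to sketch. Assuming the ASI120MC’s roughly 3.75 µm pixels on a 1280-pixel-wide sensor behind a 1000 mm focal length (my numbers for illustration – check your own sensor’s datasheet):

```python
def pixel_scale_arcsec(pixel_um, focal_mm):
    """Arcseconds of sky per pixel: 206.265 * pixel size (um) / focal length (mm)."""
    return 206.265 * pixel_um / focal_mm

def fov_arcmin(pixels, pixel_um, focal_mm):
    """Field of view along one sensor axis, in arcminutes."""
    return pixels * pixel_scale_arcsec(pixel_um, focal_mm) / 60.0

scale = pixel_scale_arcsec(3.75, 1000)   # ~0.77 arcsec per pixel
width = fov_arcmin(1280, 3.75, 1000)     # ~16.5 arcmin across
# Andromeda spans roughly 3 degrees (~190 arcmin) - an order of magnitude
# wider than this sensor's view, hence the blurry close-up of the core.
```

The same formula gives the ~0.5 arcsec/pixel oversampling I mention later: 2.4 µm pixels (the ASI183MM-PRO’s) on this scope work out to about 0.495 arcsec per pixel.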

M31 Andromeda; repaired a bit in PixInsight from my original, still kinda terrible

Not quite the classic galaxy snapshot I’d expected. And then I went and actually worked out how big Andromeda is – and it’s huge in the sky. Bigger than the moon, by quite a bit. Knowing how narrow a view of the moon I got with my scope, I considered other targets and my equipment. Clearly my camera’s tiny sensor wasn’t helping, but fixing that would be expensive. Many other targets were much dimmer, requiring long exposures – very long, given my sensor’s poor efficiency, longer than I thought I would get away with. I tried a few others, usually failing, but sometimes getting a glimmer of what could be if I could crack this…

Raw stack from an evening of longer-exposure imaging of NGC891; the noise is the sensor error. I hadn’t quite cracked image processing at this point.

It was fairly clear the camera would need an upgrade for deep space object imaging, and that particular avenue of astrophotography most appealed to me. It was also clear I had no idea what I was doing. I started reading more and more – diving into forums like Stargazer’s Lounge (in the UK) and Cloudy Nights (a broader view) and digesting threads on telescope construction, imaging sensor analysis, and processing.

My next break came from a family friend; when my father was visiting to catch up, the topic of cameras came up. My dad swears by big chunky Nikon DSLRs, and his Nikon D1x is still in active use, despite knackered batteries. This friend happened to have an old D1x, and spare batteries, no longer in use, and kindly donated the lot. With a cheap AC power adapter and F-mount adapter, I suddenly had a high resolution camera I could attach to the scope, albeit with a nearly 20-year-old sensor.

M31/M110 Andromeda, wider field shot, Nikon D1x – first light, processed with DeepSkyStacker and StarTools

Suddenly, with a bigger sensor, a wider field of view, and more pixels (nearly six megapixels), I felt I could see what I was doing – and suddenly saw a whole host of problems. The D1x was by no means perfect; it demanded long exposures at high gains to get anything, and fixed pattern noise made processing immensely challenging.

M33 Triangulum, D1x, processed with DeepSkyStacker and PixInsight

I’d previously used a host of free software to “stack” the dozens or hundreds of images I took into a single frame, and then process it. Back in 2018 I bought a copy of StarTools, which allowed me to produce some far better images but left me wanting more control over the process. And so I bit the bullet and spent £200 on PixInsight, widely regarded as being the absolute best image processing tool for astronomical imagery; aside from various Windows-specific stability issues (Linux is rock solid, happily) it’s lived up to the hype. And the hype of its learning curve/cliff – it’s one of the few software packages for which I have purchased a reference book!
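The stacking step itself is conceptually simple: averaging N aligned exposures leaves the signal unchanged while shrinking random noise by roughly √N. Here is a toy, alignment-free version of what those tools do (the real ones also register the frames and reject outliers):

```python
def mean_stack(frames):
    """Average a list of equally-sized 2D frames pixel-by-pixel."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]

# Two noisy 2x2 "exposures" of the same scene average to a cleaner frame:
stacked = mean_stack([[[1, 3], [5, 7]],
                      [[3, 5], [7, 9]]])
# stacked == [[2.0, 4.0], [6.0, 8.0]]
```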

Stepping on up to mono

And of course, I could never fully calibrate out the D1x’s pattern noise, nor magically improve the sensor quality. At this point I had a tantalisingly close-to-satisfying system – everything was working great. My Christmas present from family was a guidescope, where I reused the ASI120MC camera, and really long exposures were starting to be feasible. And so I took a bit of money I’d saved up, and bit the hefty bullet of buying a proper astrophotography camera for deep space observation.

By this point I had a bit of clue, and had an idea of how to figure out what it was I needed and what I might do in the future, so this was the first purchase I made that involved a few spreadsheets and some data-based decisions. But I’m not one for half-arsing solutions, which became problematic shortly thereafter.

The scope and guidescope, preparing for an evening of imaging on a rare weekend clear night
M33 Triangulum; first light with the ASI183MM-PRO. A weird light leak artefact can be seen clearly in the middle of the image, near the top of the frame

Of course, this camera introduces more complexity. Normal cameras have a Bayer matrix, meaning that each pixel is assigned a single colour and interpolation fills in the missing colours from adjacent pixels. For astrophotography, you don’t always want to image red, green or blue – you might want a narrowband view of the world, for instance – and for various reasons you want to avoid interpolation in capture and processing. So we introduce a monochrome sensor, add a filter wheel in front (electronic, for software control), and filters. The costs add up.
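To make the interpolation point concrete: on a typical RGGB mosaic (the layout here is an assumption; patterns vary between sensors), each pixel records only one of the three colours, so two thirds of every pixel’s colour information has to be estimated from its neighbours – exactly what narrowband imaging wants to avoid:

```python
def bayer_colour(row, col):
    """Which colour an RGGB Bayer sensor actually records at (row, col)."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# Top-left 2x2 tile of the mosaic:
#   R G
#   G B
# A monochrome sensor behind a filter wheel instead records the chosen
# band at every pixel, with no interpolation step at all.
```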

The current finished imaging train – Baader MPCC coma corrector, Baader VariLock T2 spacer, ZWO mini filter wheel, ASI183MM-PRO

But suddenly my images are clear enough to show the problems in the telescope. There’s optical coma in my system, not a surprise; a coma corrector is added to flatten the light reaching the filters and sensor.

I realise – by spending an evening failing to achieve focus – that backfocus is a thing, and that my coma corrector is too close to my sensor; a variable spacer gets added, and carefully measured out with some calipers.

I realise that my telescope tube is letting light in at the back – something I’d not seen before, either through luck or noise – so I get a cover laser cut to fix that.

It turns out focusing is really quite difficult to achieve accurately with my new setup and may need adjusting between filters, so I buy a cheap DC focus motor – the focuser comes to bits, I spend an evening improving the tolerances on all the contact surfaces, amending the bracket supplied with the motor, and put it back together.

To mitigate light bouncing around the focuser, I dismantled the whole telescope tube, flocked the interior of the scope with anti-reflective material, and added a dew shield. Amongst all this, new DC power cables and connectors were made up, an increasing pile of USB cables/hubs to and from the scope added, a new (commercial) software package added to control it all, and various other little expenses accrued along the way – bottles of high-purity distilled water to clean mirrors, and so on.

Once you’ve got some better software in place for automating capture sessions, being able to automatically drive everything becomes more and more attractive. I had fortunately bought most of the bits to do this in dribs and drabs in the last year, so this was mostly a matter of setup and configuration.

It’s a slippery slope, all this. I think I’ve stopped on this iteration – the next step is a different telescope – but I’ve learned a hell of a lot in doing it. My budget expanded a fair bit from the initial purchase, but was manageable, and I have a working system that produces consistently useful results when clouds permit. I’ve got a lot to learn, still, about the best way to use it and what I can do with it; I also have a lot of learning to do when it comes to PixInsight and my image processing (thankfully not something I need clear skies for).

… okay, maybe I’d still like to get a proper flat field generator, but the “t-shirt at dusk” method works pretty well and only cost £10 for a white t-shirt

Settling in to new digs

Now, of course, I have a set of parts that has brought my output quality significantly up. The images I’m capturing are good enough that I’m happy sharing them widely, and I even feel proud of some. I’ve even gotten some quality-of-life improvements out of all this work – my evenings are mostly spent indoors, working the scope by remote control.

Astrophotography is a wonderful collision of precision engineering, optics, astronomy, and art. And I think that’s why getting “into” it and building a system is so hard – because there’s no right answer. I started writing this post as an “all the things I wish someone had told me to do” post, but really, when I’m making decisions about things like the ideal pixel size of my camera, I’m taking an artistic decision that is underpinned by science and engineering and maths – it has an impact on what pictures I can take, what they’ll look like, and so on.

M33 Triangulum, showing clearly now the various small nebulas and colourful objects around the main galaxy. The first image I was genuinely gleeful to produce and share as widely as I could.
The Heart Nebula, not quite centred up; the detail in the nebulosity, even with this wideband image, is helped tremendously by the pixel oversampling I achieve with my setup (0.5 arcseconds per pixel)

But there’s still value in knowing what to think about when you’re thinking about doing this stuff. This isn’t a right answer; it’s one answer. At some point I will undoubtedly go get a different telescope – not because it’s a better solution, but because it’s a different way to look at things and capture them.

So I will continue to blog about this – not least because sharing my thoughts on it is something I enjoy and it isn’t fair to continuously inflict it on my partner, patient as she is with my obsession – in the hopes that some other beginners might find it a useful journey to follow along.

by James Harrison at February 09, 2019 10:03 PM

February 08, 2019

Talk Unafraid

A New Chapter

It’s been almost three years since I last wrote a real long-form blog post (past documentation of LiDAR data aside). Given that, particularly for the last two years, long-form writing has been the bulk of my day job, it’s with a wry smile I wander back to this forlorn medium. How dated it feels, in the age of Twitter and instant 140/280-character gratification! And yet such a reflection of my own mental state, in many ways.

I’ve been working at Gigaclear for about as long – three years – as my absence from blogging; this is no coincidence. My work at BBC R&D was conducted in a sufficiently calm atmosphere to permit me the occasional hobby, and the mental energy to engage with them on fair terms. I spent large chunks of it writing imageboard software; that particular project I consider a success – not only has it been taken on by others technically and organisationally, it’s now hosting almost 2 million images, 10 million comments and has around a quarter of a million users. Not too bad for something I hacked together on long coach journeys and my evenings. I tinkered with drones on the side, building a few and writing software for controlling them.

At Gigaclear – still a startup, at heart – success and survival have demanded my full attention; it is in part a function of working for an organisation that has scaled, in the span of three years, by over 150% in staff, 400% in live customers, and 600% in built network. We’ve cycled senior leadership teams almost annually and gone through an investor buyout recently. It is not a calm organisation, and I am lucky (or unlucky, depending on your view) enough to have been close enough to the pointy end of things to feel some of the brunt of it. It has been an incredible few years, but not an easy few years.

I am a workaholic, and presented with an endless stream of work, I find it difficult to move on. The drones have sat idle and gathered dust; my electronics workbench in constant disarray, PCBs scattered. Even for my personal projects, I’ve written barely any code; the largest project I’ve managed lately has been a system to manage a greenhouse heater and temperature sensors (named Boothby), amounting to a few hundred lines of C and Python. My evenings have involved scrawling design diagrams and organisational charts, endless Powerpoint drafts and revisions, hundreds of pages of documentation, too much alcohol, curry, and stress. Given that part of my motivation for moving from R&D to Gigaclear was health (6 hours a day commuting into London was fairly brutal on my mental and physical health) it’s ironic that I’ve barely moved the needle on that front. Clearly, I needed something to allow me to refocus my energy at home away from work, lest work simply consume me.

A friend having a look at the moon in daylight – first light with the new telescope and mount, May 2017

As a kid – back in the late 90s – my father bought a telescope. It was what we could afford – a cheap Celestron branded Newtonian reflector tube on a manual tripod. But it was enough to see Jupiter, Saturn’s rings, and the moon. The tube is still sat in the garage – it was left outside overnight once, wet, in freezing temperatures, and the focuser was damaged in another incident, and it sits idle now, practically unusable. But it is probably part of why today I am so obsessed with space, other than the incredible engineering and beautiful science that goes into the domain. My current bedside reading is a detailed history of the Deep Space Network; a recent book on liquid propellant development is a definite recommendation for those interested in the area. Similar books litter my bookshelves, alongside space operas and books on software and companies.

M33, the Triangulum galaxy

I always felt a bit bad about ruining the telescope (because it was of course me who left it out in the rain) and proposed that for our birthday (my father and I share a birthday, making things much more convenient) we should remedy the lack of a proper telescope in the family; I had been reading various astrophotography subreddits and forums for a while and been astounded by the images terrestrial astrophotographers managed to acquire, so pitched in the bulk of the cash to get an astrophotography-quality mount, the most important bit to spend money on (I had discovered). And so we had a new telescope in the family. Nothing spectacular – a Skywatcher 200mm Newtonian reflector – but on a solid mount, a Skywatcher EQ6-R Pro. Enough to start with a little bit of astrophotography (and get some fabulous visual views on the way).

M81, Bode’s Galaxy

Of course, once one has a telescope, the natural inclination in today’s day and age is to share; and as I shared, I was encouraged to try more. And of course, I then discovered just how expensive astrophotography is as a hobby…

An early shot of Jupiter; I later opted to focus on deep-sky objects

But here it is – a new hobby, and one that I have managed to engage with with aplomb. The images in this post are all mine; they’re not perfect, but I’m proud of them. That I have discovered a love for something that taps directly into my passion for space is perhaps no surprise. Gigaclear is calming down a little as the organisation matures, but making proper time for my hobby has been helpful to settle my own nerves a little.

We bought the scope back in April of 2017; now, in Feb 2019, I think I have what I would consider a “competent” astrophotography rig for deep space objects, albeit only small ones. That particular rabbit hole is worth a few more posts, I think – and therein lies the reason why I have penned this prose.

The Heart Nebula, slightly off-piste due to a mount aiming error

Twitter is a poor medium for detailed discussion of why. Look, here’s this fabulous new filter wheel! Here’s a cool picture of a nebula! But explaining how such things are accomplished, and why I have decided to buy specific things or do particular things and the thought processes around them are not things that Twitter can accommodate. And so, the blog re-emerges.

An early shot of the core of Andromeda, before I had really realised how big Andromeda is and how narrow my field of view was… and before I got a real camera!

I’ve got a fair bit to write about (as my partner will attest – that I can talk about her publicly is another welcome milestone since my last blog posts) and a blog feels like the right forum for it. And so I will rekindle this strange, isolated world – an entire website for one person, an absurd indulgence – to share my renewed passion for astrophotography. Hopefully I can add to the corpus the parts I feel are missing – the rich documentation of mistakes and errors, as well as celebrations of the successes.

And who knows – maybe that’ll help get my brain back on track, too. Because at the end of the day, working all day long isn’t good for your employer or for your own brain; but if you’re a workaholic, not working takes work!

by James Harrison at February 08, 2019 11:45 PM

January 10, 2019

Bug tracker updated

@paul wrote: The Ardour bug tracker has been upgraded from an ancient version of Mantis to the most current stable release. The website looks very different now, but all existing bug reports and user accounts are still there. We hope to find some way to limit the bug-report spam that has recently turned into a small but annoying time waster, and ultimately to enable single sign-on with the rest of the site.

Posts: 5

Participants: 4

Read full topic

by @paul Paul Davis at January 10, 2019 06:13 PM


December 23, 2018

Libre Music Production - Articles, Tutorials and News

Libre Music Production is taking a break

As the LMP crew right now only consists of one person (me) and we haven't published any new content for a year, I have decided to let LMP take a break.

This break might be forever. I will keep the content through 2019, at least.

If anyone would like to take over the site, please contact me:

Thank you for visiting LMP!

by admin at December 23, 2018 11:17 PM