When it comes to audio, the number of speakers you want is usually governed by the number of tracks or channels in your signal: one for mono, two for stereo, four for quadraphonic, five or more for surround sound, and so on. But all of those speakers are essentially playing different tracks from a “single” audio signal. What if you wanted a single audio device to play eight different songs simultaneously, with each song piped to its own speaker? That’s the job [Devon Bray] was tasked with by interdisciplinary artist [Sara Dittrich] for her “Giant Talking Ear” installation project. He built a device to play multiple sound files on multiple output devices using off-the-shelf hardware and software.
But a hack like this could be useful in many applications beyond art installations. It could be used in an escape room, where you may want the various audio streams to start in sync; as part of a DJ console, sending one stream to the speakers and another to the headphones; or in a game where you have to run around a room full of speakers in the right sequence and at the right speed to hear a full sentence of clues.
His blog post lists links for the various pieces of hardware required, although all of it is pretty generic, and the GitHub repository hosts the code. At the heart of the project is the sounddevice library for Python. The documentation for the library is sparse, so [Bray]’s instructions are handy. His code lets you “take a directory with .wav files named in numeric order and play them over USB sound devices attached to the host computer over and over forever, looping all files once the longest one finishes”. As a bonus, he shows how to load and play sound files automatically from an attached USB drive. This lets you swap out your playlist on the Raspberry Pi without having to use a keyboard/mouse, SSH, or RDP.
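For the curious, the core of such a setup is compact. Below is a minimal sketch of the idea – not [Bray]’s actual code – using the sounddevice and soundfile libraries: one thread per USB sound card, each looping its own file. The file names and device indices are assumptions for illustration, and unlike the real project, this sketch loops each file independently rather than re-synchronising when the longest one finishes.

```python
# Minimal sketch (not the project's actual code): play a different .wav
# file on each attached USB sound device, looping forever. Assumes the
# sounddevice and soundfile libraries, and that device indices 0..N-1
# correspond to the USB cards in use (check sd.query_devices()).
import threading
import sounddevice as sd
import soundfile as sf

def loop_file_on_device(path, device_index):
    data, samplerate = sf.read(path, dtype="float32", always_2d=True)
    with sd.OutputStream(samplerate=samplerate,
                         device=device_index,
                         channels=data.shape[1]) as stream:
        while True:                       # play this file over and over
            stream.write(data)

files = ["01.wav", "02.wav", "03.wav"]    # hypothetical numerically named files
for index, path in enumerate(files):
    threading.Thread(target=loop_file_on_device,
                     args=(path, index), daemon=True).start()

threading.Event().wait()                  # keep the main thread alive
```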
Check the video after the break for a quick roundup of the project.
This is part 1 of what I hope will become a series of posts. In this post I’m going to focus on getting started, and on some mistakes I made along the way.
So, back in 2017 I got a telescope. I fancied trying some astrophotography – I saw people getting great results without a lot of kit, and realised I could dip my toe in too. I live between a few towns, so I get “class 4” skies – meaning I can happily image a great many targets from home. I’ve spent plenty of time out at night just looking up, especially on moonless nights; the Milky Way is a clear band, and plenty of eyeball-visible targets look splendid.
So, having done my research, the then-quite-new Skywatcher EQ6-R Pro was the obvious winner for the mount. At about £1,800 it isn’t cheap, but it’s very affordable compared to some other amateur-targeted mounts (the Paramount ME will set you back £13,000, for instance) and provides comparable performance for a reasonable amount of payload – about 15kg without breaking a sweat. Mounts are all about mechanical precision and accuracy; drive electronics factor into it, of course, but much of the error in a mount comes from the gears. More expensive mounts use encoders and clever drive mechanisms to mitigate this, but the EQ6-R Pro settles for having a fairly high quality belt drive system and leaves it at that.
Already, as I write this, the more scientific reader will be asking “hang on, how are you measuring that, or comparing like-for-like?”. This is a common problem with all sorts of equipment in the amateur astrophotography scene. Measuring precision mechanics and optics often requires expensive equipment in and of itself. Take a telescope’s mirror: measuring the flatness of the surface and the accuracy of the curvature requires an interferometer. Even the cheap ones cooked up by the make-your-own-telescope communities take a lot of expensive parts and a lot of optics know-how. Measuring a mount’s movement accurately requires really accurate encoders or other ways of measuring movement very precisely – again, expensive bits. The net result is that it’s very rare for individual amateurs to do quantitative evaluation of equipment – usually, you have to compare spec sheets and call it a day. The rest of the analysis comes down to forums and hearsay.
As an engineer tinkering with fibre optics on a regular basis, I know spec sheets are great when everyone agrees on the test methodology behind the numbers. There’s a defined standard for how you measure the insertion loss of a bare fibre, another for the mode field diameter, and so on. In astrophotography products, a whole host of measurements are done in a very ad hoc fashion, varying between products and vendors. Sometimes the best analysis and comparison is done by enthusiasts who get kit sent to them by vendors to compare! And so most purchasing decisions involve an awful lot of lurking on forums.
The other problem is knowing what to look for in your comparison. Sites that sell telescopes and other bits are very good at glossing over the full complexity of an imaging system, and assume you sort of know what you’re doing. Does pixel size matter? How about quantum efficiency? Resolution? The answer is always “maybe, depends what you’re doing…”.
This photo is one of the first I took. I had bought, with the mount, a Skywatcher 200PDS Newtonian reflector – a 200mm or 8″ aperture telescope with a dual-speed focuser and a focal length of 1000mm. The scope has an f-ratio of 5, making it a fairly “fast” scope. Fast generally translates to forgiving – lots of light means your camera can be worse. Visual use with the scope was great, and I enjoyed slewing around and looking at various objects. My copy of Turn Left at Orion got a fair bit of use. I was feeling pretty great about this whole astrophotography lark, although my images were low-res and fuzzy; I’d bought the cheapest camera I could, near enough, a ZWO ASI120MC one-shot-colour camera.
The first realisation that I hadn’t quite “gotten” what I needed to be thinking about came when I tried to take a photo of Andromeda, our nearest large galactic neighbour, and was reminded that my field of view was, in fact, quite narrow. All I could get was a blurry view of the core. A long focal length, small pixels, and other factors conspired to give me a tiny sliver of the sky on my computer screen.
Not quite the classic galaxy snapshot I’d expected. And then I went and actually worked out how big Andromeda is – and it’s huge in the sky: around three degrees across, roughly six full moons side by side. Knowing how narrow a view of the moon I got with my scope, I considered other targets and my equipment. Clearly my camera’s tiny sensor wasn’t helping, but fixing that would be expensive. Many other targets were much dimmer, requiring long exposures – very long, given my sensor’s poor efficiency; longer than I thought I would get away with. I tried a few others, usually failing, but sometimes getting a glimmer of what could be if I could crack this…
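The sums behind that realisation are simple, and worth running before pointing a scope at anything. The pixel scale in arcseconds per pixel is 206.265 × pixel size (µm) ÷ focal length (mm), and multiplying by the sensor’s pixel counts gives the field of view. A quick sketch, using rough (assumed) numbers for the ASI120MC and my 1000mm scope:

```python
# Back-of-envelope field-of-view check. The numbers are my rough
# assumptions for the setup described: ZWO ASI120MC (~3.75 um pixels,
# 1280x960) on a scope with a 1000 mm focal length.
pixel_um = 3.75
focal_mm = 1000.0
width_px, height_px = 1280, 960

scale = 206.265 * pixel_um / focal_mm    # arcseconds per pixel
fov_w = scale * width_px / 3600.0        # field of view, degrees
fov_h = scale * height_px / 3600.0

print(f'{scale:.2f}"/px, {fov_w:.2f} x {fov_h:.2f} degrees')
# -> 0.77"/px, 0.28 x 0.21 degrees
# Andromeda spans roughly 3 degrees, the full moon about 0.5:
# no wonder all that fit on the sensor was the core.
```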
It was fairly clear the camera would need an upgrade for deep space object imaging, and that particular avenue of astrophotography most appealed to me. It was also clear I had no idea what I was doing. I started reading more and more – diving into forums like Stargazer’s Lounge (in the UK) and Cloudy Nights (a broader view) and digesting threads on telescope construction, imaging sensor analysis, and processing.
My next break came from a family friend: when my father was visiting to catch up, the topic of cameras came up. My dad swears by big chunky Nikon DSLRs, and his Nikon D1x is still in active use despite knackered batteries. This friend happened to have an old D1x and spare batteries, no longer in use, and kindly donated the lot. With a cheap AC power adapter and an F-mount adapter, I suddenly had a high resolution camera I could attach to the scope, albeit one with a nearly 20-year-old sensor.
Suddenly, with a bigger sensor, a wider field of view, and more pixels (nearly six megapixels), I felt I could see what I was doing – and suddenly saw a whole host of problems. The D1x was by no means perfect; it demanded long exposures at high gain to get anything, and fixed pattern noise made processing immensely challenging.
I’d previously used a host of free software to “stack” the dozens or hundreds of images I took into a single frame, and then process it. Back in 2018 I bought a copy of StarTools, which allowed me to produce some far better images but left me wanting more control over the process. And so I bit the bullet and spent £200 on PixInsight, widely regarded as being the absolute best image processing tool for astronomical imagery; aside from various Windows-specific stability issues (Linux is rock solid, happily) it’s lived up to the hype. And the hype of its learning curve/cliff – it’s one of the few software packages for which I have purchased a reference book!
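For anyone unfamiliar with the idea, stacking works because the target’s signal adds up coherently from frame to frame while random noise averages out. A minimal sigma-clipped combine – a toy version of what those tools do, assuming frames are already aligned – might look like this:

```python
# Toy version of a stack: average registered frames per pixel, ignoring
# outliers (satellite trails, cosmic rays, hot pixels) more than
# `kappa` standard deviations from the mean. Real tools add alignment,
# weighting, rejection maps and far more.
import numpy as np

def sigma_clipped_stack(frames, kappa=3.0):
    cube = np.stack(frames).astype(np.float64)   # shape (n_frames, h, w)
    mean = cube.mean(axis=0)
    sigma = cube.std(axis=0)
    keep = np.abs(cube - mean) <= kappa * sigma  # mask of well-behaved pixels
    return (cube * keep).sum(axis=0) / np.maximum(keep.sum(axis=0), 1)
```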
And of course, I could never fully calibrate out the D1x’s pattern noise, nor magically improve the sensor quality. At this point I had a tantalisingly close-to-satisfying system – everything was working great. My Christmas present from family was a guidescope, on which I reused the ASI120MC camera, and really long exposures were starting to be feasible. And so I took a bit of money I’d saved up, and bit the hefty bullet of buying a proper astrophotography camera for deep space observation.
By this point I had a bit of a clue, and an idea of how to figure out what I needed and what I might do in the future, so this was the first purchase I made that involved a few spreadsheets and some data-based decisions. But I’m not one for half-arsing solutions, which became problematic shortly thereafter.
Of course, this camera introduces more complexity. Normal cameras have a Bayer matrix, meaning that each pixel is assigned a colour and interpolation fills in the missing colours from adjacent pixels. For astrophotography, you don’t always want to image in red, green, or blue – you might want a narrowband view of the world, for instance – and for various reasons you want to avoid interpolation in processing and capture. So we introduce a monochrome sensor, add a filter wheel in front (electronic, for software control), and filters. The costs add up.
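To make the Bayer trade-off concrete: in an RGGB mosaic only a quarter of the pixels see red or blue, and half see green; everything else is interpolated. A tiny sketch of a “superpixel” split, which avoids interpolation at the cost of resolution (it assumes a raw 2D mosaic in RGGB layout; other layouts just shift the offsets):

```python
import numpy as np

def superpixel_debayer(raw):
    """Split an RGGB mosaic into colour planes at half resolution,
    with no interpolation: each output pixel is a real measurement."""
    raw = raw.astype(np.float32)
    r = raw[0::2, 0::2]                           # one red sample per 2x2 block
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2   # two green samples, averaged
    b = raw[1::2, 1::2]                           # one blue sample per 2x2 block
    return r, g, b
```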
But suddenly my images are clear enough to show up problems in the telescope itself. There’s optical coma in my system – not a surprise with a fast Newtonian – so a coma corrector is added to clean up the light reaching the filters and sensor.
I realise – by spending an evening failing to achieve focus – that backfocus is a thing, and that my coma corrector is too close to my sensor; a variable spacer gets added, and carefully measured out with some calipers.
I realise that my telescope tube is letting light in at the back – something I’d not seen before, either through luck or noise – so I get a cover laser cut to fix that.
It turns out focusing is really quite difficult to achieve accurately with my new setup, and may need adjusting between filters, so I buy a cheap DC focus motor. The focuser comes to bits; I spend an evening improving the tolerances on all the contact surfaces, amending the bracket supplied with the motor, and put it back together.
To mitigate light bouncing around the focuser, I dismantle the whole telescope tube, flock the interior of the scope with anti-reflective material, and add a dew shield. Amongst all this, new DC power cables and connectors are made up, an increasing pile of USB cables and hubs runs to and from the scope, a new (commercial) software package arrives to control it all, and various other little expenses crop up along the way – bottles of high-purity distilled water to clean mirrors, and so on.
Once you’ve got some better software in place for automating capture sessions, being able to drive everything automatically becomes more and more attractive. Fortunately I had bought most of the bits to do this in dribs and drabs over the last year, so this was mostly a matter of setup and configuration.
It’s a slippery slope, all this. I think I’ve stopped on this iteration – the next step is a different telescope – but I’ve learned a hell of a lot in doing it. My budget expanded a fair bit from the initial purchase, but was manageable, and I have a working system that produces consistently useful results when clouds permit. I’ve got a lot to learn, still, about the best way to use it and what I can do with it; I also have a lot of learning to do when it comes to PixInsight and my image processing (thankfully not something I need clear skies for).
Now, of course, I have a set of parts that has brought my output quality up significantly. The images I’m capturing are good enough that I’m happy to share them widely, and I even feel proud of some. I’ve even gotten some quality-of-life improvements out of all this work – my evenings are mostly spent indoors, working the scope by remote control.
Astrophotography is a wonderful collision of precision engineering, optics, astronomy, and art. And I think that’s why getting “into” it and building a system is so hard – because there’s no right answer. I started writing this post as an “all the things I wish someone had told me” post, but really, when I’m making decisions about things like the ideal pixel size of my camera, I’m making an artistic decision underpinned by science, engineering, and maths – it has an impact on what pictures I can take, what they’ll look like, and so on.
But there’s still value in knowing what to think about when you’re thinking about doing this stuff. This isn’t a right answer; it’s one answer. At some point I will undoubtedly go get a different telescope – not because it’s a better solution, but because it’s a different way to look at things and capture them.
So I will continue to blog about this – not least because sharing my thoughts on it is something I enjoy and it isn’t fair to continuously inflict it on my partner, patient as she is with my obsession – in the hopes that some other beginners might find it a useful journey to follow along.
It’s been almost three years since I last wrote a real long-form blog post (past documentation of LiDAR data aside). Given that, particularly for the last two years, long-form writing has been the bulk of my day job, it’s with a wry smile I wander back to this forlorn medium. How dated it feels, in the age of Twitter and instant 140/280-character gratification! And yet such a reflection of my own mental state, in many ways.
I’ve been working at Gigaclear for about as long – three years – as my absence from blogging; this is no coincidence. My work at BBC R&D was conducted in a sufficiently calm atmosphere to permit me the occasional hobby, and the mental energy to engage with it on fair terms. I spent large chunks of that time writing imageboard software; that particular project I consider a success – not only has it been taken on by others technically and organisationally, it’s now hosting almost 2 million images and 10 million comments, and has around a quarter of a million users. Not too bad for something I hacked together on long coach journeys and in my evenings. I tinkered with drones on the side, building a few and writing software for controlling them.
At Gigaclear – still a startup, at heart – success and survival have demanded my full attention; this is in part a function of working for an organisation that has, in the span of three years, grown its staff by over 150%, its live customer base by 400%, and its built network by 600%. We’ve cycled senior leadership teams almost annually and recently gone through an investor buyout. It is not a calm organisation, and I am lucky (or unlucky, depending on your view) enough to have been close enough to the pointy end of things to feel some of the brunt of it. It has been an incredible few years, but not an easy few years.
I am a workaholic, and presented with an endless stream of work, I find it difficult to move on. The drones have sat idle, gathering dust; my electronics workbench sits in constant disarray, PCBs scattered. Even for my personal projects I’ve written barely any code; the largest project I’ve managed lately has been a system to manage a greenhouse heater and temperature sensors (named Boothby), amounting to a few hundred lines of C and Python. My evenings have involved scrawling design diagrams and organisational charts, endless PowerPoint drafts and revisions, hundreds of pages of documentation, too much alcohol, curry, and stress. Given that part of my motivation for moving from R&D to Gigaclear was health (six hours a day commuting into London was fairly brutal on my mental and physical wellbeing), it’s ironic that I’ve barely moved the needle on that front. Clearly, I needed something to refocus my energy at home, away from work, lest work simply consume me.
As a kid – back in the late 90s – my father bought a telescope. It was what we could afford: a cheap Celestron-branded Newtonian reflector tube on a manual tripod. But it was enough to see Jupiter, Saturn’s rings, and the moon. The tube still sits in the garage, practically unusable – it was left outside overnight once, wet, in freezing temperatures, and the focuser was damaged in another incident. But it is probably part of why I am so obsessed with space today, beyond the incredible engineering and beautiful science that go into the domain. My current bedside reading is a detailed history of the Deep Space Network; a recent book on liquid propellant development is a definite recommendation for those interested in the area. Similar books litter my bookshelves, alongside space operas and books on software and companies.
I always felt a bit bad about ruining that telescope (it was, of course, me who left it out in the rain), so I proposed that for our birthday (my father and I share a birthday, which makes things much more convenient) we should remedy the lack of a proper telescope in the family. I had been reading various astrophotography subreddits and forums for a while, and had been astounded by the images terrestrial astrophotographers managed to acquire, so I pitched in the bulk of the cash for an astrophotography-quality mount – the most important bit to spend money on, I had discovered. And so we had a new telescope in the family. Nothing spectacular – a Skywatcher 200mm Newtonian reflector – but on a solid mount, a Skywatcher EQ6-R Pro. Enough to start with a little bit of astrophotography (and get some fabulous visual views along the way).
Of course, once one has a telescope, the natural inclination in today’s day and age is to share; and as I shared, I was encouraged to try more. And of course, I then discovered just how expensive astrophotography is as a hobby…
But here it is – a new hobby, and one that I have managed to throw myself into with aplomb. The images in this post are all mine; they’re not perfect, but I’m proud of them. That I have discovered a love for something that taps directly into my passion for space is perhaps no surprise. Gigaclear is calming down a little as the organisation matures, but making proper time for my hobby has been helpful in settling my own nerves a little.
We bought the scope back in April 2017; now, in February 2019, I think I have what I would consider a “competent” astrophotography rig for deep space objects, albeit only small ones. That particular rabbit hole is worth a few more posts, I think – and therein lies the reason I have penned this prose.
Twitter is a poor medium for detailed discussion of the why. Look, here’s this fabulous new filter wheel! Here’s a cool picture of a nebula! But explaining how such things are accomplished – why I have decided to buy specific things or do particular things, and the thought processes around them – is not something Twitter can accommodate. And so, the blog re-emerges.
I’ve got a fair bit to write about (as my partner will attest – that I can talk about her publicly is another welcome milestone since my last blog posts) and a blog feels like the right forum for it. And so I will rekindle this strange, isolated world – an entire website for one person, an absurd indulgence – to share my renewed passion for astrophotography. Hopefully I can add to the corpus the parts I feel are missing – rich documentation of mistakes and errors, as well as celebrations of the successes.
And who knows – maybe that’ll help get my brain back on track, too. Because at the end of the day, working all day long isn’t good for your employer or for your own brain; but if you’re a workaholic, not working takes work!
What if I told you that you can get rid of your headphones and still listen to music privately, just by shooting lasers at your ears?
The trick here is something called the photoacoustic effect. When certain materials absorb light — or any electromagnetic radiation — that is either pulsed or modulated in intensity, the material will give off a sound. Sometimes not much of a sound, but a sound. This effect is useful for spectroscopy, biomedical imaging, and the study of photosynthesis. MIT researchers are using this effect to beam sound directly into people’s ears. It could lead to devices that deliver an audio message to specific people with no hardware on the receiving end. But for now, ditching those AirPods for LaserPods remains science fiction.
There are a few mechanisms that explain the photoacoustic effect, but the simple explanation is that the energy causes localized heating and cooling, the material microscopically expands and contracts, and that causes pressure changes in the sample and the surrounding air. “Pressure waves in air” is just a fancy way of saying sound.
In the case of the MIT project, a 1.9 μm thulium laser produces a beam tuned to water vapor at power levels that are not harmful to your eyes. The water vapor around your ears — and there is almost always some — absorbs the laser and generates sound. The team used both a modulated beam and a pulsed beam. In the pulsed beam case, the laser actually sweeps across your ear at the desired audio frequency.
Each method has advantages and disadvantages. Modulated light creates a higher fidelity sound. However, the sweeping technique produces louder sound and only creates the correct frequency at a certain distance from the laser. The team thinks that could be used to target specific people. Presumably, people nearby might still hear sounds, but at the wrong frequencies.
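For the modulated case, the audio simply rides on the laser’s power envelope, and the heating of the water vapor tracks that envelope. A toy sketch of the encoding – illustrative numbers only, nothing here reflects the actual MIT rig:

```python
# Toy illustration of intensity modulation (illustrative numbers only):
# the laser power is varied about a bias point in step with the audio,
# so the absorbing water vapor heats and cools at audio rate.
import numpy as np

fs = 48_000                              # samples per second
t = np.arange(fs) / fs                   # one second of samples
audio = np.sin(2 * np.pi * 440 * t)      # the message: a 440 Hz tone
bias, depth = 0.5, 0.4                   # normalised power, 80% modulation depth
power = bias + depth * audio             # envelope stays within 0.1..0.9
```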
So far, this is more or less a lab demonstration. They’ve projected sound about 2.5 meters at sufficient volume to hear with the unaided ear. We are guessing there are a host of practical problems to overcome to make this a workable system. Just targeting a specific person’s ear is probably nontrivial.
It turns out the photoacoustic effect was known as early as 1880, when Alexander Graham Bell was working on his photophone, which used sunlight to transmit speech. That device had a receiver that used a light sensor, but he noticed he could produce sound waves just by hitting a solid object with pulsed sunlight, and that the frequency depended on the type of material. This eventually led to the spectrophone. However, with the crude sensors and light sources available, it was never really practical. Today, though, it is used for a variety of medical and biological tests.
This isn’t the first time I’ve written about MIT’s photoacoustic work. They’ve used it for spectroscopy that can detect gases at a distance. Of course, if you are willing to allow a receiver, sending audio with a laser isn’t hard at all.
Honestly, we were a little surprised at how simple this looks and we wondered why we haven’t seen any homebrew projects that use this effect for something. Sure, a good spectroscope probably requires a tunable laser, but it would be interesting to see what kind of hobby-level projects could use gas and a laser to create sound. If you build something, be sure to tell us about it.
Hello everyone, happy new year!
This is a quick fix for the Carla Plugin Host (soon-to-be) stable series.
Only very small fixes here, plus a change in how certain plugins load.
This release starts a "release early, release often" approach that I hope to maintain from now on.
Previously, a few plugins were hardcoded to run as plugin bridges, as they were deemed unsafe because of how they use their plugin UIs (instance-access).
Carla automatically started these plugins as bridges, so as not to crash the main process when Gtk and Qt get in the way.
Plugin state in bridges has a few issues (plugin bridges are experimental right now), which I was hoping to fix before the final 2.0 arrives.
But it seems that will not happen (it is not an easy fix), so these plugins will now run normally, in the same process, as all others do.
This means some possible breaking changes for the affected plugins.
Carla runs Gtk and Qt LV2 UIs in a separate process, but because the affected UIs require direct access to the plugin instance (LV2 instance-access), they cannot be bridged.
This is not an issue for other plugin UIs that use Qt or Gtk, as they do not use instance-access.
To download Carla binaries or source code, jump on over to the KXStudio downloads section.
If you're using the KXStudio repositories, you can simply install "carla-git" (plus "carla-lv2" and "carla-vst" if you're so inclined).
Bug reports and feature requests are welcome! Jump on over to Carla's GitHub project page for those.
The remaining issues to be fixed before 2.0 is considered "final" are listed on GitHub.
New features have already made their way into Carla, but sit on the develop branch.
When the "final" version is released, expect a 2.1-beta to come shortly afterwards.
This is a tiny bugfix for JackAss, a VST plugin that provides JACK-MIDI support for VST hosts.
You can find the JackAss source code and bug tracker on GitHub, at https://github.com/falkTX/JackAss/.
tracker.ardour.org has been upgraded from an ancient version of Mantis to the most current stable release. The website looks very different now, but all existing bug reports and user accounts are still there. We hope to find some way to limit the bug-report spam that has recently turned into a small but annoying time waster, and ultimately to enable single sign-on with the rest of ardour.org.
As the LMP crew right now only consists of one person (me) and we haven't published any new content for a year, I have decided to let LMP take a break.
This break might be forever. I will keep the content through 2019, at least.
If anyone would like to take over the site, please contact me: email@example.com.
Thank you for visiting LMP!
The GStreamer Conference team is pleased to announce that talk abstracts and speaker biographies are now available for this year's lineup of talks and speakers, once again covering an exciting range of topics!
The GStreamer Conference 2018 will take place on 25-26 October 2018 in Edinburgh (Scotland) just after the Embedded Linux Conference Europe (ELCE).
Details about the conference and how to register can be found on the conference website.
This year's full list of topics and speakers can be found on the conference website.
Many thanks to our sponsors, Collabora, Pexip, Igalia, Fluendo, Facebook, Centricular and Zeiss, without whom the conference would not be possible in this form. And to Ubicast who will be recording the talks again.
Considering becoming a sponsor? Please check out our sponsor brief.
We hope to see you all in Edinburgh in October! Don't forget to register!
The GStreamer team is pleased to announce another bug fix release in the stable 1.14 release series of your favourite cross-platform multimedia framework!
This release only contains bugfixes and it should be safe to update from 1.14.x.
See /releases/1.14/ for the details.
Binaries for Android, iOS, Mac OS X and Windows will be available shortly.
Download tarballs directly here: gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-sharp, gstreamer-vaapi, or gst-omx.