For the third time and hopefully the last in the current northern estival season comes the final batch-of-one:
These are the changes for this Summer'19 release:
Wiki (help still wanted!):
Enjoy && Lots of fun.
A new version of SpectMorph, my audio morphing software, is now available on www.spectmorph.org. SpectMorph is a VST/LV2/JACK synthesis engine which is based on the idea of analyzing audio samples and combining them using morphing.
SpectMorph could always create sounds by morphing between the musical instruments bundled with it. This release adds a new graphical instrument editor which allows loading custom samples, so SpectMorph users can now create their own instruments and morph between them.
Here is a screencast which demonstrates how to do it.
Besides this big change, the release contains a few smaller improvements. A detailed list of changes is available here.
Finally, here is some new music made with SpectMorph:
The Vee One Suite of old-school software instruments is here released for the northern estival season:
All still available in dual form:
The changes for this second batch of the Qstuff* Summer'19 release series are:
synthv1 0.9.9 (summer'19) is out!
synthv1 is an old-school all-digital 4-oscillator subtractive polyphonic synthesizer with stereo fx.
samplv1 0.9.9 (summer'19) is out!
samplv1 is an old-school polyphonic sampler synthesizer with stereo fx.
drumkv1 0.9.9 (summer'19) is out!
drumkv1 is an old-school drum-kit sampler synthesizer with stereo fx.
padthv1 0.9.9 (summer'19) is out!
padthv1 is an old-school polyphonic additive synthesizer with stereo fx.
Enjoy && Have fun!
Gotten a bit quiet here, hasn’t it? Well, here in the UK, it’s wonderfully sunny and bright. We don’t get proper darkness, and the planets are in an awful position, so imaging deep-space objects is a bit of a non-starter, or at least challenging. We’ve also had a run of crap weather, just to drive the point home.
I’ve been using the time instead to plan out my next astro-related project (though I may well push the execution out to 2020, just to make sure I have the cash to get it done right) – a fully automated roll-off roof observatory. The logic behind this is simple – the next “improvements” I can make to my imaging system are:
So the biggest “bang for buck” is definitely the observatory, but only if it is fully automated. I’ve lost track of the number of nights where the sky was beautiful and clear, the clouds nowhere to be seen, ground and ambient temperatures low enough to make seeing incredibly clear – and I’ve been packing away the telescope at midnight because I’ve got work tomorrow, despite the further 7 or 8 hours of imaging I could have had. And then there’s all the “well, it might be good enough, but…” nights – nights where the forecast says it won’t be good enough, but you might get lucky; often this involves going out frequently to stare at the sky, setting up if I feel optimistic, and usually being disappointed – though sometimes pleasantly surprised.
With a fully automated and remotely driven set-up the setup time is nil, as is the tear-down time. With the scope set up permanently, with the camera and other components mounted, there’s much more scope (no pun intended) for tweaking and tuning in advance of an imaging night, and fine tuning on cold-but-cloudy nights that just isn’t possible when you’re stripping the whole thing down each night. Being able to work in the dry and the day has a lot of appeal.
System-wise, full automation is pretty simple – you need a box with relays to drive motors and read sensors, a proper cloud/rain sensor (hard-wired to the relay box, so if any computers fail there’s a pretty dumb box responsible for shutting the roof when it rains), and a system capable of automating the selection of targets (what’s good tonight?), acquisition of images (frame the target, autofocus, guide and image), and the observatory start-up/shut-down. I’m most of the way here – I need the relay box and auto-focuser. The rest is already ready – I’ve been using INDI/Ekos/KStars for a while which can do all of this. The main INDI instance for the observatory will run on a 1U server in the observatory, with an INDI server on a Raspberry Pi 4 strapped to the telescope doing actual image acquisition and telescope equipment control. This makes the pier-to-desk cables simple – 12V for power, USB for the mount, and an Ethernet cable for the rest, with just 12V and Ethernet onto the telescope itself.
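The “pretty dumb box” fail-safe described above can be sketched in a few lines: the roof closes whenever rain is sensed or the controlling computer stops sending heartbeats. This is a minimal illustrative sketch – the names and the timeout value are assumptions, not any real controller firmware:

```python
# Fail-safe roof logic: close on rain, or when the control PC goes silent.
# HEARTBEAT_TIMEOUT_S is an illustrative value, not a real specification.

HEARTBEAT_TIMEOUT_S = 30.0

def roof_should_close(rain_sensed, seconds_since_heartbeat):
    """True if the relay box should drive the roof shut."""
    if rain_sensed:
        return True  # rain always wins, regardless of computer state
    # No heartbeat from the control PC -> assume failure, fail safe.
    return seconds_since_heartbeat > HEARTBEAT_TIMEOUT_S
```

The point of keeping this logic in the hard-wired relay box rather than in software is exactly the failure mode above: even if every computer dies, the roof still shuts when it rains.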
So, the objectives of this build are:
Beyond this – it’s basically a shed! So I’ve started by getting a bunch of books on shed design and construction and reading them. My day job at the moment is (mostly) telling people how to properly build a fibre optic network, so I know a reasonable amount about concrete, aggregates, rebar, admixtures and slab design. Making a good solid observatory is mostly about mass, just like in acoustic isolation design, and I’ll be using almost an entire ready-mix concrete truck worth of C40 low-moisture concrete to pour the base slab and the (isolated) pier. The framing and design of walls, floors and doors is all fairly simple, though benefits from careful planning to make sure all the services will work and the structure remains rot-and-rat free for a few decades.
The tricky bit is the roll-off roof – I need to keep this building rodent-proof and ideally near-airtight to aid in humidity/temperature control. I will use forced, filtered airflow for cooling, with a positive pressure maintained to minimise dust ingress. Active cooling with the roof shut will help cool-down times and avoid any kit getting too hot in summer. This means the roof needs to seal well onto the frame when shut. I also need to be able to shut the roof at any time – so any internal rafters need to be minimal or non-existent, and the telescope doesn’t have to be “pointed down” to let the roof pass; even when the mount fails or is unsure of its position, the roof can still shut safely to keep the rain out. The roof needs to roll back enough to give good visibility, so the whole thing has to roll onto rails that extend beyond the back of the warm room. To further improve visibility and keep rain off the rails, some of the side walls will be mounted on the roof so the walls “lower” as the roof rolls off. There’s a lot of complexity in this (and it has to be something I can build), so this is taking some time to work out.
I’ve started designing in detail in Autodesk Fusion 360 – while I’ve used Sketchup for this sort of thing in the past, Fusion 360 in Direct Modelling (non-parametric mode) is about as user-friendly and can produce much prettier outputs as well as decent engineering drawings.
I’ve also reconstructed my current telescope and mount with photogrammetry so I can build a digital model and check the motion all works – I haven’t gotten around to tidying up the mesh into some simpler models, but it’s a great reference for getting the dimensions and motion right.
The other question is where to put this – I dithered quite a bit and in the end took a lot of level photos around the garden at twilight with a Ricoh Theta S 360 degree camera at roughly my telescope’s aperture height. With the moon visible in each photo and knowing where and when I took the photos, I could align the photos to north with a fairly simple Python script which spat out a nice set of data for horizon plotting.
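As a rough sketch of what such a script does: an equirectangular 360-degree image maps pixel column linearly to azimuth, so a single landmark with a known azimuth (the Moon at the capture time, from an ephemeris) fixes the whole panorama’s rotation relative to north. The function and parameter names here are my own illustrative assumptions, not the actual script:

```python
# Align an equirectangular panorama to north using one known landmark.
# moon_x_px is the Moon's pixel column; moon_azimuth_deg comes from an
# ephemeris for the capture time and location (assumed known here).

def column_to_azimuth(x_px, width_px, moon_x_px, moon_azimuth_deg):
    """Azimuth in degrees (from north) of pixel column x_px."""
    deg_per_px = 360.0 / width_px
    # Rotation offset that places the Moon at its known azimuth.
    offset = moon_azimuth_deg - moon_x_px * deg_per_px
    return (x_px * deg_per_px + offset) % 360.0
```

With the mapping fixed, each photo column can be tagged with an azimuth, which is all that is needed for horizon plotting.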
It turns out there are only a few spots that don’t enjoy visibility down to 30 degrees in pretty much every direction, so I decided to plug the panorama for my favoured location into Stellarium – this turns out to just involve having a panorama with a transparent sky and a small .ini file to set north properly.
The chosen location makes power and network connectivity simple enough – with 25 metres of mains cable and single-mode fibre I can connect to proper mains and Ethernet, only one switch hop away from my storage arrays.
Security is a concern – that field is adjacent to a footpath, though set back from the road, and there have been break-ins in the area. Other than making the building fairly secure against “opportunistic” crooks – reinforcing the door, lack of windows, and a solid lock – there’s not a lot that can be done. PIR sensors externally won’t work due to the abundant wildlife, so a combination of internal sensors and an alarm to make a racket if someone does force the door or climb in through the open roof will have to do. CCTV around the perimeter might work but could work just as well as an attractant as a deterrent, and wildlife would probably again make alarming impossible. I’m also planning on using a worm geared or lead screw based roof mechanism, which should be very hard to force open.
I took the view early on in this that I wanted to build this myself. I’m still not 100% sure about this, but I think it’s a reasonable project and something I should be able to do! I am budgeting for some help, though, and will have to hire kit in regardless – a mini digger for the groundwork, compactor to pack down aggregates, concrete vibrator to settle concrete in the forms, etc.
I also need planning permission. I started with a footprint that wouldn’t normally need it, so long as the building isn’t tall – but I’m in a conservation area, which means “permitted development” doesn’t really apply. I’m not concerned about getting planning permission – it’s a small building in an otherwise empty field (except for a shed we’re going to remove) and will blend in just fine. Having to go through planning permission also means I can relax around some of the limits that I’d otherwise be avoiding.
Working through the material costs there’s easily £2k, maybe £3k of materials – labour would be another £1-2k atop that if not more. That’s quite an investment, and I’m really keen to make sure that everything about this is right – giving up power to a third party feels risky. It may be that when I get the design done I sit down with some local builders that I trust and see what they say.
The first step remains the plan and design, which is taking time – but I think time invested here is time well spent. I may not start until later in the year, or even early next year – one more winter without it wouldn’t be the end of the world. It’s going to be a fun project if I can get the plan right!
Fans of domes will be wondering why I haven’t just dropped £3k on a nice big Pulsar/insert-vendor-here dome. The answer is simple:
I’ve looked at a few other dome designs and while there are some good contenders they all have similar problems. I did consider making a “clever” geodesic dome – something I could build pretty cheaply but which would still have decent wind resistance – but automation remains the problem. Ground-level domes (where the whole structure rotates, rather than using a rotating section on a cylinder) make the construction simpler, but the bearing and rotation mechanism have to cope with increased gravity load and all of the wind loading. Cylinder-style observatories have similar problems.
The round/dodecahedral designs of these structures also make literally everything harder. Want to bolt a light to a wall? It’s not flat, so if you want it level/flat you now get to make a bracket… weatherproofing, insulation, and more all get more complicated. Having four flat walls which never move makes life simple – mounting insulation, cable entry glands, coolers, dehumidifiers, fans/filters, lights, shelves, etc is all so much simpler.
So – no dome here for now.
While we’re building a light-shielded box in a quiet location with power and networking, what else could we do? I’m also going to include infrastructure to support a small ground-level dish and motors for radioastronomy, as well as some mounts for meteor spotting cameras, an all-sky camera, and a weather station. I won’t have all this on day one, but putting a little extra concrete in now is way easier than doing it again later, and it means I can put in cable ducts to make wiring it up simpler. The cost of the pads, etc is tiny and turns those future projects from a pain into something much simpler.
A new version of the GStreamer Rust bindings, 0.14.0, was released.
Apart from updating to GStreamer 1.16, this release is mostly focused on adding more bindings for various APIs, plus general API cleanup and bugfixes.
The most notable API additions in this release are bindings for gst::Memory and gst::Allocator as well as bindings for gst_base::BaseParse and gst_video::VideoDecoder and VideoEncoder. The latter also come with support for implementing subclasses, and the gst-plugins-rs module contains a video decoder and parser (for CDG) and a video encoder (for AV1) based on this.
As usual this release follows the latest gtk-rs release, and a new version of the GStreamer plugins written in Rust was also released.
The code and documentation for the bindings are available on the freedesktop.org GitLab.
If you find any bugs, missing features or other issues please report them in GitLab.
The GStreamer project is happy to announce that this year's GStreamer Conference will take place on Thursday-Friday 31 October - 1 November 2019 in Lyon, France.
You can find more details about the conference on the GStreamer Conference 2019 web site.
A call for papers will be sent out in due course. Registration will open at a later time. We will announce those and any further updates on the gstreamer-announce mailing list, the website, and on Twitter.
Talk slots will be available in varying durations from 20 minutes up to 45 minutes. Whatever you're doing or planning to do with GStreamer, we'd like to hear from you!
We also plan to have sessions with short lightning talks / demos / showcase talks for those who just want to show what they've been working on or do a mini-talk instead of a full-length talk. Lightning talk slots will be allocated on a first-come-first-served basis, so make sure to reserve your slot if you plan on giving a lightning talk.
There will also be a social event again on Thursday evening.
There are also plans to have a hackfest the weekend right after the conference on 2-3 November 2019.
We hope to see you in Lyon, and please spread the word!
The guitar ‘Toing’ sound from the ’70s was epic, and for the first-time listener it was enough to get a bunch of people hooked on the likes of Aerosmith. Reverb units were all the rage back then, and for his DSP class project, [nebk] creates a reverb filter using Matlab and ports it to C++.
Digital reverb was introduced around the 1960s by Manfred Schroeder and Ben Logan. The system essentially consists of all-pass filters that add a delay element to the input signal; a bunch of them are cascaded together and fed into a mixer. The output is then that echoing ‘toing’ that made the ’80s love the guitar so much. [Nebk]’s take on it enlists the help of the Raspberry Pi and C++ to implement the very same thing.
In his writeup, [nebk] goes through the essentials of implementing a filter in the digital domain and how the cascaded delay units accumulate delay to become a better-sounding system. He also goes on to add an FIR low-pass filter to cut off the ringing that was a consequence of adding a feedback loop. [nebk] uses Matlab’s filter generation tool for the LP filter, and includes the code for it. After testing the design in Simulink, he moves to writing the whole thing in C++, complete with filter classes that allow reading audio files in and spitting ‘reverbed’ audio files out.
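For the curious, a single Schroeder all-pass stage – a delay line with a feedforward and feedback gain g, several of which get cascaded to build the dense echo tail – can be sketched like this. This is an illustration of the classic structure, not [nebk]’s actual code:

```python
# One Schroeder all-pass stage: y[n] = -g*x[n] + x[n-d] + g*y[n-d].
# Cascading several stages with mutually prime delays d gives the
# dense, smeared echo tail characteristic of this reverb design.

def schroeder_allpass(x, delay, g):
    """Process the sample list x through one all-pass section."""
    y = [0.0] * len(x)
    for n in range(len(x)):
        xd = x[n - delay] if n >= delay else 0.0  # delayed input
        yd = y[n - delay] if n >= delay else 0.0  # delayed output
        y[n] = -g * x[n] + xd + g * yd
    return y
```

Feeding an impulse through one stage shows the behaviour: an immediate inverted spike, then decaying echoes spaced by the delay length.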
The best thing about this project is the fact that [nebk] creates filter class templates for others to play with. It allows those who are playing/working with Matlab to transition to the C++ side with a learning curve that is not as steep as the Himalayas. The project has a lot to learn from and is great for beginners to get their feet wet. The code is available on [GitHub] for those who want to give it a shot and if you are just interested in audio effects on the cheap, be sure to check out the Ikea Reverb Plate that is big and looks awesome.
Taking an old piece of gear and cramming it full of modern hardware is a very popular project. In fact, it’s one of the most common things we see here at Hackaday, especially in the Raspberry Pi era. The appeal is obvious: somebody has already done the hard work of designing and building an attractive enclosure, all you need to do is shoehorn your own gear into it. That being said, we know some of our beloved readers get upset when a vintage piece of gear gets sacrificed in the name of progress.
Thankfully, you can put your pitchforks down for this one. The vintage radio [Freshanator] cannibalized to build this Bluetooth speaker is actually a replica made to evoke the classic “cathedral” look. Granted, it may still have been older than most of the people reading this right now, but at least it wasn’t actually from the 1930s.
To start the process, [Freshanator] created a 3D model of the inside of the radio so all the components could be laid out virtually before anything was cut or fabricated. This included the design for the speaker box, which was ultimately 3D printed and then coated with a spray-on “liquid rubber” to seal it up. The upfront effort and time to design like this might be high, but it’s an excellent way to help ensure you don’t run into some roadblock halfway through the build.
Driving the speakers is a TPA3116-based amplifier board with integrated Bluetooth receiver, which has all of its buttons broken out to the front for easy access. [Freshanator] even went the extra mile and designed some labels for the front panel buttons to be made on a vinyl cutter. Unfortunately the cutter lacked the precision to make them small enough to actually go on the buttons, so they ended up getting placed above or next to them as space allowed.
The build was wrapped up with a fan installed at the peak of the front speaker grille to keep things cool. Oh, and there are lights. Because there’s always lights. In this case, some blue LEDs and strategically placed EL wire give the whole build an otherworldly glow.
If you’re interested in having a frustrating quasi-conversation with your vintage-looking audio equipment, you could always cram an Echo Dot in there instead. Though if you go that route, you can just 3D print a classic-styled enclosure without incurring the wrath of the purists.
How’s that for a thrilling title? But this topic really does encapsulate a lot of what I love about astrophotography, despite the substantial annoyance it’s caused me lately…
My quest for really nice photos of galaxies has, inevitably, driven me towards narrowband imaging, which can help bring out detail in galaxies and minimise light pollution. I bought a hydrogen alpha filter not long ago – a filter that removes all of the light except that from a hydrogen emission line, a deep red narrow band of light. This filter has the unfortunate side effect of reducing the total amount of light hitting the sensor, meaning that long exposures are really required to drive the signal far above the noise floor. In the single frame above, the huge glow from the right is amplifier glow – an issue with the camera that grows worse the longer my exposures get. Typically, this gets removed by taking dozens of dark frames with a lens cap on and subtracting the fixed amplifier glow from the frames, a process called calibration. The end result is fairly clean – but what about these unfortunate stars?
Oblong stars are a problem – they show that the telescope failed to accurately track the target for the entire period. Each pixel in this image (and you can see pixels here, in the hot pixels that appear as noise in the close-up) equates to 0.5″ of sky (0.5 arc-seconds). This is about two to four times my seeing limit (the amount of wobble introduced by the atmosphere) on a really good night, meaning I’m over-sampling nicely (Nyquist says we should be oversampling 2x to resolve all details). My stars are oblong by a huge amount – 6-8″, if not more!
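For reference, the arcseconds-per-pixel figure comes from the standard pixel-scale formula: 206.265 × pixel size (µm) / focal length (mm). A quick sketch – the 2.4 µm pixel size is an illustrative assumption, while 1000 mm matches the Newtonian used for these images:

```python
# Standard pixel-scale formula: 206.265 arcsec/radian-per-1000 factor
# converts pixel size (microns) and focal length (mm) into arcsec/px.

def pixel_scale_arcsec(pixel_size_um, focal_length_mm):
    """Arcseconds of sky covered by one pixel."""
    return 206.265 * pixel_size_um / focal_length_mm

# Illustrative: 2.4 um pixels (assumed) on a 1000 mm Newtonian.
scale = pixel_scale_arcsec(2.4, 1000.0)  # ~0.495 arcsec/px
```

Against a 1-2″ seeing limit, roughly 0.5″/px is about the 2x oversampling Nyquist calls for, which is why star elongation of 6-8″ stands out so badly.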
My guide system – the PHD2 software package, an ASI120MC camera and a 60mm guidescope – reported no worse than 0.5″ tracking all night, meaning I should’ve seen perfectly round stars. So what went wrong?
The most likely culprit is a slightly loose screw on my guidescope’s guiding rings, which I found after being pointed at a thing called “differential flexure” by a fantastic chap on the Stargazer’s Lounge forums (more on that later). But this is merely a quite extreme example of a real problem that can occur, and a nice insight into the tolerances and required precision of astronomical telescopes for high-resolution imaging. As I’m aiming for 0.5″ pixel accuracy, but practically won’t get better seeing than 1-2″, my guiding needs to be fairly good. The mount, with excellent guiding, is mechanically capable of 0.6-0.7″ accuracy; this is actually really great, especially for a fairly low-cost mount (<£1200). You can easily pay upwards of £10,000 for a mount, and not get much better performance.
Without guiding though it’s not terribly capable – mechanical tolerances aren’t perfect in a cheap mount, and periodic error from the rotation of worm gears creeps in. While you can program the mount to correct for this it won’t be perfect. So we have to guide the mount. While the imaging camera takes long, 5-10 minute exposures, the guiding camera takes short 3-5 second exposures and feeds software (in my case, PHD2) which tracks a star’s centre over time, using the changes in that centre to generate a correction impulse which is sent to the mount’s control software (in my case, INDI and the EQmod driver). This gets us down to the required stability over time.
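The correction step boils down to simple arithmetic: the measured centroid drift (in arcseconds), divided by the mount’s guide rate (a fraction of the sidereal rate, roughly 15.04″/s), scaled by an aggressiveness factor, gives a pulse duration. This is a hedged illustration of the general idea – the parameter names and default values are assumptions, not PHD2’s actual algorithm or API:

```python
# Turn a measured star-centroid drift into a mount correction pulse.
# guide_rate is the mount's guide speed as a fraction of sidereal;
# aggressiveness damps the correction to avoid chasing seeing noise.

SIDEREAL_ARCSEC_PER_S = 15.04  # sidereal tracking rate

def correction_pulse_ms(drift_arcsec, guide_rate=0.5, aggressiveness=0.7):
    """Milliseconds to pulse the mount to cancel the measured drift."""
    rate = guide_rate * SIDEREAL_ARCSEC_PER_S  # arcsec corrected per second
    return abs(drift_arcsec) * aggressiveness / rate * 1000.0
```

So a 1″ drift at a 0.5x guide rate produces a pulse on the order of 100 ms – short enough to apply between the 3-5 second guide exposures.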
The reason why my long exposures sucked, despite all this, is simple – my guide camera was not always changing its orientation as the imaging camera was. That is to say, when the mount moved a little bit, or failed to move, while the imaging camera was affected the guiding camera was not. This is called differential flexure – the difference in movement between two optical systems. Fundamentally, this is because my guidescope is a completely separate optical system to my main telescope – if it doesn’t move when my main scope does, the guiding system doesn’t know to correct! The inverse applies, too – maybe the guidescope moves and overcorrects for an imaging system that hasn’t moved at all.
With a refractor telescope, if you just secure your guidescope really well to the main telescope, all is (generally) well. That is the only practical potential source of error, outside of focuser wobble. In a Newtonian such as the one I use, though, there’s plenty of other sources. At the end of a Newtonian telescope is a large mirror – 200mm across, in my case. This is supported by a mirror cell – pinching the mirror can cause huge deviation (dozens or hundreds of nanometers, which is unacceptable), so just clamping it up isn’t practical. This means that as the telescope moves the mirror can move a little bit – not much, but enough to move the image slightly on the sensor. While moving the mount isn’t an ideal way to fix this movement – better mirror cells reduce this movement – it’s better than doing nothing at all. The secondary mirror has similar problems. The tube itself can also expand or contract, being quite large – carbon fibre tubes minimise this but are expensive. Refractors have, broadly, all their lenses securely held in place without issue and so don’t suffer these problems.
And so the answer seems to be a solution called “Off Axis Guiding”. In this system, rather than using a separate guide scope, you use a small prism inserted in the optical train (after the focuser but before the camera) to “tap” a bit of the light off – usually the sensor is a rectangle in a circular light path, meaning this is pretty easy to achieve without any impact to the light that the sensor receives. This light is bounced into a camera mounted at 90 degrees to the optical train, which performs the guiding function. There are issues with this approach – you have a narrower (and hard to move) field of view, and you need a more sensitive guide camera to find stars – but the resolution is naturally far higher (0.7″ rather than 2.5″) due to the longer focal length, and so the potential accuracy of guide corrections improves. But more importantly, your guiding light shares fate with the imaging light – you use the same mirrors, tube, and so on. If your imaging light shifts, so does the guiding light – the two are optically entwined.
The off-axis guiding route is appealing, but complex. I’ll undoubtedly explore it – I want to improve my guide camera regardless, and the OAG prism is “only” £110 or thereabouts. The guide camera bears the brunt of the cost, weighing in at around £500-700 for a quality high-sensitivity guide camera.
But in the immediate future my budgets don’t allow for either of these solutions and so I’ve done what I can to minimise the flexure of the guidescope relative to the main telescope. This has focused on the screws used to hold the guidescope in place – they’re really poorly machined, along with the threads in the guidescope rings, and the plastic tips can lead to flexure.
I’ve cut the tips almost back to the metal to minimise the amount of movement in compression, and used Loctite to secure two of the three screws in each ring. The coarse focus tube and helical focuser on the Primaluce guide scope also have some grub screws which I’ve adjusted – this has helped considerably in reducing the ability for the camera to move.
Hopefully that’ll help for now! I’m also going to ask a friend with access to CNC machines about machining some more solid tube rings for the guidescope; that would radically improve things, and shouldn’t cost much. However, practically the OAG route is going to be favourite for a Newtonian setup – so that’s going to be the best route in the long run.
Despite all this I managed a pretty good stab at M51, the Whirlpool Galaxy. I wasn’t suffering from differential flexure so much on these exposures – it’s probably a case of the pointing of the scope being different and so not hitting the same issue. I had two good nights of really good seeing, and captured a few hours of light. This image does well to highlight the benefits of the Newtonian setup – with a 1000mm focal length with a fast focal ratio, paired with my high-resolution camera, I can achieve some great detail in a short period of time.
Alongside my telescope debugging, I’m working on developing my observatory plans into a detailed, budgeted design – more on that later. I’ve also been tinkering with some CCDinspector-inspired Python scripts to analyse star sharpness across a large number of images and in doing so highlight any potential issues with the optical train or telescope in terms of flatness, tilt, and so on. So far this tinkering hasn’t led anywhere interesting, which either suggests my setup is near perfect (which I’m sure it isn’t) or I’m missing something – more tinkering to be done!
After many years, Carla version 2.0.0 is finally here!
Carla is an audio plugin host, with support for many audio drivers and plugin formats.
It has some nice features like automation of parameters via MIDI CC (and send output back as MIDI too) and full OSC control.
Version 2.0 took this long because I was never truly happy with its state at the time, often pushing new features but not fully finishing them.
So the "solution" was to put everything that is not considered stable yet behind an experimental flag in the settings.
This way we can have our stable Carla faster, while upcoming features get developed and tagged as experimental during testing.
Preparations for version 2.1 are well under way, a beta for it will be out soon.
But that is a topic for another day.
To download Carla binaries or source code, jump on over to the
KXStudio downloads section.
Carla v2.0.0 is available pre-packaged in the KXStudio repositories and UbuntuStudio backports, plus on ArchLinux and on Ubuntu since 19.04. On those you can simply install the carla package.
Bug reports and feature requests are welcome! Jump on over to the Carla's Github project page for those.
There is no manual or quick-start guide for Carla yet, apologies for that.
But there are some videos of presentations I did regarding Carla's features and workflows; those should give you an introduction to its features and what you can do with it:
This is a small notice to everyone using Carla and JACK2 with the KXStudio repos.
First, in preparation for the Carla 2.0 release, the (really) old carla package is now the new v2.0 series, while carla-git now contains the development/latest version.
If you are not interested in testing Carla's new stuff and prefer something known to be more stable, install the carla package after the latest updates.
Second, a change in JACK2 code has made it so a restart of the server is required after the update.
(but for a good reason, as JACK2 is finally getting meta-data support; this update fixes client UUIDs)
If you use jackdbus (likely with KXStudio stuff), you will need to actually kill it.
If that does not work, good old restart is your friend. :)
One important thing to note is that the lmms package now conflicts with the carla-git one.
This is because some code has changed in latest Carla that makes v2.0 vs development/latest ABI-incompatible.
In simpler terms, LMMS can be compiled against either the stable or the development version of Carla, but not both.
The obvious choice is to use the stable version, so after the updates if you notice LMMS is missing, just install it again.
(If you have carla-git installed, carla will be installed/switched automatically)
I tried to make the transition of these updates as smooth as possible, but note that you will likely need to install updates twice to complete the process.
In other news, we got a new domain! \(^-^)/
Also Carla v2.0 release date has been set - 15th of April.
Unless a major issue is found, expect a release announcement on that day.
See you soon then! ;)
ardour.org is pleased to announce a new youtube channel focused on videos about Ardour.
We decided to support Tobiasz “unfa” Karon in making some new videos, based on some of the work he has done in other contexts (both online and at meetings). unfa’s first video won’t be particularly useful for new or existing users, but if you’re looking for a “promotional video” that describes what Ardour is and what it can do, this may be the right thing to point people at.
In the near-term future, unfa will be back with some tutorial videos, so please consider subscribing to the channel.
Thanks to unfa for this opening video, and we look forward to more. If people have particular areas that they’d like to see covered, mention it in the comments here (or on the YT channel).
tracker.ardour.org has been upgraded from an ancient version of Mantis to the most current stable release. The website looks very different now, but all existing bug reports and user accounts are still there. We hope to find some way to limit the bug-report spam that has recently turned into a small but annoying time waster, and ultimately to enable single-sign on with the rest of ardour.org.