planet.linuxaudio.org

February 17, 2019

blog4

EU about to destroy Internet

The EU is about to change copyright law with regard to the internet. I wonder whether that will just kill it for consumers, or also for academia and research on a larger scale.

https://www.eff.org/deeplinks/2019/02/final-version-eus-copyright-directive-worst-one-yet

by herrsteiner (noreply@blogger.com) at February 17, 2019 08:48 PM

codepage Toplap 15th anniversary concert

Still going on is the Toplap 15th anniversary stream, with live coding concerts from around the globe. We performed this afternoon with our project codepage; in case you missed it, you can go back in time and watch this and the other live coding concerts:


https://youtu.be/4YE9gGZq7gw?t=3496

by herrsteiner (noreply@blogger.com) at February 17, 2019 07:17 PM

February 16, 2019

News – Ubuntu Studio

Updates for February 2019

With Ubuntu 19.04’s feature freeze quickly approaching, we would like to announce the new updates coming to Ubuntu Studio 19.04.

Updated Ubuntu Studio Controls

This is really a bit of a bugfix for the version of Ubuntu Studio Controls that landed in 18.10. Ubuntu Studio Controls dramatically simplifies audio setup for the JACK Audio Connection […]

by eeickmeyer at February 16, 2019 08:31 PM

February 15, 2019

digital audio hacks – Hackaday

Python Script Sends Each Speaker Its Own Sound File

When it comes to audio, the number of speakers you want is usually governed by the number of tracks or channels your signal has. One for mono, two for stereo, four for quadraphonic, five or more for surround sound, and so on. But all of those speakers are essentially playing different tracks from a “single” audio signal. What if you wanted a single audio device to play eight different songs simultaneously, with each song piped to its own speaker? That’s the job [Devon Bray] was tasked with by interdisciplinary artist [Sara Dittrich] for her “Giant Talking Ear” installation project. He built a device to play multiple sound files on multiple output devices using off-the-shelf hardware and software.

But maybe a hack like this could be useful in many applications other than just art installations. It could be used in an escape room, where you may want the various audio streams to start in sync; or as part of a DJ console, sending one stream to the speakers and another to the headphones; or in a game where you have to run around a room full of speakers in the right sequence and at the right speed to listen to a full sentence for clues.

His blog post lists links for the various pieces of hardware required, although all of it is pretty generic, and the GitHub repository hosts the code. At the heart of the project is the sounddevice library for Python. The documentation for the library is sparse, so [Bray]’s instructions are handy. His code lets you “take a directory with .wav files named in numeric order and play them over USB sound devices attached to the host computer over and over forever, looping all files once the longest one finishes”. As a bonus, he shows how to load and play sound files automatically from an attached USB drive. This lets you swap out your playlist on the Raspberry Pi without having to use a keyboard/mouse, SSH, or RDP.
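
The gist is easy to sketch. Here’s a minimal, hypothetical take on the same idea (not [Bray]’s actual code) using the sounddevice and soundfile libraries: one thread per USB output device, each looping its own .wav file. The file names and device indices below are made up – list real ones with sounddevice.query_devices().

    # One blocking playback loop per output device; sounddevice and soundfile
    # are assumed installed (pip install sounddevice soundfile).
    import threading

    import sounddevice as sd
    import soundfile as sf

    def loop_file(path, device):
        """Play one file on one output device, looping forever."""
        data, samplerate = sf.read(path, dtype="float32", always_2d=True)
        with sd.OutputStream(samplerate=samplerate, device=device,
                             channels=data.shape[1]) as stream:
            while True:
                stream.write(data)            # blocking write, then loop

    # Hypothetical (file, device index) pairs -- one per speaker.
    for path, device in [("01.wav", 1), ("02.wav", 2)]:
        threading.Thread(target=loop_file, args=(path, device),
                         daemon=True).start()

    threading.Event().wait()                  # keep the main thread alive

Note that this simplified version loops each file independently, whereas [Bray]’s code resynchronises, looping all files once the longest one finishes.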

Check the video after the break for a quick roundup of the project.

by Anool Mahidharia at February 15, 2019 06:00 AM

February 14, 2019

rncbc.org

Qtractor 0.9.5 - A Valentines'19 Hotfix Release


Hello again,

Qtractor 0.9.5 (valentines'19 hotfix) is out!

Changes for this hot-fix release are as follows:

  • HiDPI display screen support (Qt >= 5.6; patch by Hubert Figuiere, thanks).
  • Fixed loss of configuration state for DSSI plug-ins (e.g. fluidsynth-dssi): internal config/state keys are now cleared on the release virtual method. (REGRESSION)
  • Fixed NSM (and JACK) sessions not saving the correct file references/symlinks of clips recorded or created during an initial, scratch session.

Description:

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. Its target platform is Linux, where the JACK Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures, evolving as a fairly featured Linux desktop audio workstation GUI, especially dedicated to the personal home studio.

Website:

http://qtractor.org
http://qtractor.sourceforge.net
https://qtractor.sourceforge.io

Project page:

http://sourceforge.net/projects/qtractor

Downloads:

http://sourceforge.net/projects/qtractor/files

Git repos:

https://git.code.sf.net/p/qtractor/code
https://github.com/rncbc/qtractor.git
https://gitlab.com/rncbc/qtractor.git
https://bitbucket.org/rncbc/qtractor.git

Wiki (help still wanted!):

http://sourceforge.net/p/qtractor/wiki/

License:

Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Enjoy && Keep having fun.

Donate to rncbc.org

by rncbc at February 14, 2019 07:00 PM

February 09, 2019

Talk Unafraid

How to fail at astrophotography

This is part 1 of what I hope will become a series of posts. In this post I’m going to focus on getting started and some of the mistakes I made along the way.

So, back in 2017 I got a telescope. I fancied trying to do some astrophotography – I saw people getting great results without a lot of kit, and realised I could dip my toe in too. I live between a few towns, so I get “class 4” skies – meaning that I could happily image a great many targets from home. I’ve spent plenty of time out at night just looking up, especially on a moonless night; the Milky Way is a clear band, and plenty of eyeball-visible targets look splendid.

So I did some research, and concluded that:

  • Astrophotography has the potential to be done cheaply but some bits do demand some investment
  • Wide-field is cheapest to do, since a telescope isn’t needed; planetary is far cheaper to kit out for than deep-sky (depending on the planet), but getting really good planetary images is hard
  • Good telescopes are seriously expensive, but pretty good telescopes are accessibly cheap, and produce pretty good results
  • Newtonians (Dobsonians, for visual) give the absolute best aperture-to-cash return
  • Having a good mount that can track accurately is absolutely key
  • You can spend a hell of a lot of cash on this hobby if you’re not careful, and spending too little is the fastest path there…

So, having done my research, the then-quite-new Skywatcher EQ6-R Pro was the obvious winner for the mount. At about £1,800 it isn’t cheap, but it’s very affordable compared to some other amateur-targeted mounts (the Paramount ME will set you back £13,000, for instance) and provides comparable performance for a reasonable amount of payload – about 15kg without breaking a sweat. Mounts are all about mechanical precision and accuracy; drive electronics factor into it, of course, but much of the error in a mount comes from the gears. More expensive mounts use encoders and clever drive mechanisms to mitigate this, but the EQ6-R Pro settles for having a fairly high quality belt drive system and leaves it at that.

Already, as I write this, the more scientific reader will be asking “hang on, how are you measuring that, or comparing like-for-like?”. This is a common problem in the amateur astrophotography scene with various bits of equipment. Measurement of precision mechanics and optics often requires expensive equipment in and of itself. Take a telescope’s mirror – to measure the flatness of the surface and accuracy of the curvature requires an interferometer. Even the cheap ones cooked up by the make-your-own-telescope communities take a lot of expensive parts and require a lot of optics know-how. Measuring a mount’s movement accurately requires really accurate encoders or other ways to measure movement very precisely – again, expensive bits, etc. The net result of this is that it’s very rare that individual amateurs do quantitative evaluation of equipment – usually, you have to compare spec sheets and call it a day. The rest of the analysis comes down to forums and hearsay.

As an engineer tinkering with fibre optics on a regular basis, I know spec sheets are great when everyone agrees on the test methodology for the number. There’s a defined standard for how you measure insertion loss of a bare fibre, another for the mode field diameter, and so on. A whole host of different measurements in astrophotography products are done in a very ad-hoc fashion, vary between products and vendors, and so on. Sometimes the best analysis and comparison is done by enthusiasts who get kit sent to them by vendors to compare! And so, most purchasing decisions involve an awful lot of lurking on forums.

The other problem is knowing what to look for in your comparison. Sites that sell telescopes and other bits are very good at glossing over the full complexity of an imaging system, and assume you sort of know what you’re doing. Does pixel size matter? How about quantum efficiency? Resolution? The answer is always “maybe, depends what you’re doing…”.

Jupiter; the great red spot is just about visible. If you really squint you can see a few pixels that are, I swear, moons.

This photo is one of the first I took. I had bought, with the mount, a Skywatcher 200PDS Newtonian reflector – a 200mm or 8″ aperture telescope with a dual-speed focuser and a focal length of 1000mm. The scope has an f-ratio of 5, making it a fairly “fast” scope. Fast generally translates to forgiving – lots of light means your camera can be worse. Visual use with the scope was great, and I enjoyed slewing around and looking at various objects. My copy of Turn Left at Orion got a fair bit of use. I was feeling pretty great about this whole astrophotography lark, although my images were low-res and fuzzy; I’d bought the cheapest camera I could, near enough, a ZWO ASI120MC one-shot-colour camera.

Working out what questions to ask

The first realisation that I hadn’t quite “gotten” what I needed to be thinking about came when I tried to take a photo of our nearest galaxy and was reminded that my field of view was, in fact, quite narrow. All I could get was a blurry view of the core. Long focal length, small pixel sizes, and other factors conspired to give me a tiny sliver of the sky on my computer screen.

M31 Andromeda; repaired a bit in PixInsight from my original, still kinda terrible

Not quite the classic galaxy snapshot I’d expected. And then I went and actually worked out how big Andromeda is – and it’s huge in the sky. Bigger than the moon, by quite a bit. Knowing how narrow a view of the moon I got with my scope, I considered other targets and my equipment. Clearly my camera’s tiny sensor wasn’t helping, but fixing that would be expensive. Many other targets were much dimmer, requiring long exposures – very long, given my sensor’s poor efficiency, longer than I thought I would get away with. I tried a few others, usually failing, but sometimes getting a glimmer of what could be if I could crack this…
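
The numbers behind that realisation are easy to check. A back-of-envelope sketch in Python, using my scope’s 1000mm focal length (the camera figures are approximate, from memory):

    # Rough pixel-scale and field-of-view check; sensor figures approximate.
    focal_length_mm = 1000            # Skywatcher 200PDS
    pixel_um, width_px = 3.75, 1280   # ZWO ASI120MC, approximate specs

    arcsec_per_px = 206.265 * pixel_um / focal_length_mm
    fov_deg = arcsec_per_px * width_px / 3600
    print(f"{arcsec_per_px:.2f} arcsec/px, {fov_deg:.2f} degrees wide")
    # ~0.77 arcsec/px and ~0.27 degrees across -- Andromeda spans roughly
    # 3 degrees, so only a sliver of it fits in the frame.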

Raw stack from an evening of longer-exposure imaging of NGC891; the noise is the sensor error. I hadn’t quite cracked image processing at this point.

It was fairly clear the camera would need an upgrade for deep space object imaging, and that particular avenue of astrophotography most appealed to me. It was also clear I had no idea what I was doing. I started reading more and more – diving into forums like Stargazer’s Lounge (in the UK) and Cloudy Nights (a broader view) and digesting threads on telescope construction, imaging sensor analysis, and processing.

My next break came from a family friend; when my father was visiting to catch up, the topic of cameras came up. My dad swears by big chunky Nikon DSLRs, and his Nikon D1x is still in active use, despite knackered batteries. This friend happened to have an old D1x, and spare batteries, no longer in use, and kindly donated the lot. With a cheap AC power adapter and F-mount adapter, I suddenly had a high resolution camera I could attach to the scope, albeit with a nearly 20-year-old sensor.

M31/M110 Andromeda, wider field shot, Nikon D1x – first light, processed with DeepSkyStacker and StarTools

Suddenly, with a bigger sensor and field of view, and more pixels (nearly six megapixels), I felt I could see what I was doing – and suddenly saw a whole host of problems. The D1x was by no means perfect; it demanded long exposures at high gains to get anything, and fixed pattern noise made processing immensely challenging.

M33 Triangulum, D1x, processed with DeepSkyStacker and PixInsight

I’d previously used a host of free software to “stack” the dozens or hundreds of images I took into a single frame, and then process it. Back in 2018 I bought a copy of StarTools, which allowed me to produce some far better images but left me wanting more control over the process. And so I bit the bullet and spent £200 on PixInsight, widely regarded as being the absolute best image processing tool for astronomical imagery; aside from various Windows-specific stability issues (Linux is rock solid, happily) it’s lived up to the hype. And the hype of its learning curve/cliff – it’s one of the few software packages for which I have purchased a reference book!
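
For the unfamiliar, the core of “stacking” is conceptually simple, even if the dedicated tools add a great deal on top (registration, alignment, pixel rejection, weighting). A bare-bones toy version, assuming already-aligned frames and hypothetical file names:

    # Toy calibrate-and-stack: subtract a master dark, then median-combine.
    # Real stackers also register/align frames and use smarter rejection.
    import numpy as np
    import imageio.v3 as iio

    lights = [f"light_{i:03d}.tif" for i in range(50)]   # hypothetical names
    darks = [f"dark_{i:03d}.tif" for i in range(20)]

    master_dark = np.median([iio.imread(f).astype(np.float32) for f in darks],
                            axis=0)
    calibrated = [iio.imread(f).astype(np.float32) - master_dark
                  for f in lights]

    # Median combining rejects satellites and cosmic rays; a mean gives
    # slightly lower noise at the cost of keeping outliers.
    stacked = np.median(calibrated, axis=0)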

Stepping on up to mono

And of course, I could never fully calibrate out the D1x’s pattern noise, nor magically improve the sensor quality. At this point I had a tantalisingly close-to-satisfying system – everything was working great. My Christmas present from family was a guidescope, for which I reused the ASI120MC camera, and really long exposures were starting to become feasible. And so I took a bit of money I’d saved up, and bit the hefty bullet of buying a proper astrophotography camera for deep space observation.

By this point I had a bit of a clue, and had an idea of how to figure out what it was I needed and what I might do in the future, so this was the first purchase I made that involved a few spreadsheets and some data-based decisions. But I’m not one for half-arsing solutions, which became problematic shortly thereafter.

The scope and guidescope, preparing for an evening of imaging on a rare weekend clear night
M33 Triangulum; first light with the ASI183MM-PRO. A weird light leak artefact can be seen clearly in the middle of the image, near the top of the frame

Of course, this camera introduces more complexity. Normal cameras have a Bayer matrix, meaning each pixel sits behind a single-colour filter and interpolation fills in the missing colours from adjacent pixels. For astrophotography, you don’t always want to image red, green, or blue – you might want a narrowband view of the world, for instance – and for various reasons you want to avoid interpolation in capture and processing. So we introduce a monochrome sensor, add a filter wheel in front (electronic, for software control), and filters. The costs add up.
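
A toy example makes the difference concrete. With an RGGB mosaic, every pixel records a single colour, so full-resolution colour has to be interpolated; the “superpixel” route below avoids interpolation by trading away half the resolution. A mono sensor behind a filter wheel needs neither compromise. This is purely illustrative:

    # Illustrative only: bin an RGGB Bayer mosaic into colour "superpixels".
    import numpy as np

    raw = np.arange(16, dtype=np.float32).reshape(4, 4)  # stand-in raw frame

    r = raw[0::2, 0::2]                           # red sites
    g = (raw[0::2, 1::2] + raw[1::2, 0::2]) / 2   # average both green sites
    b = raw[1::2, 1::2]                           # blue sites

    rgb = np.dstack([r, g, b])
    print(raw.shape, "->", rgb.shape)             # (4, 4) -> (2, 2, 3)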

The current finished imaging train – Baader MPCC coma corrector, Baader VariLock T2 spacer, ZWO mini filter wheel, ASI183MM-PRO

But suddenly my images are clear enough to show the problems in the telescope. There’s optical coma in my system – no surprise for a fast Newtonian – so a coma corrector is added to flatten the light reaching the filters and sensor.

I realise – by spending an evening failing to achieve focus – that backfocus is a thing, and that my coma corrector is too close to my sensor; a variable spacer gets added, and carefully measured out with some calipers.

I realise that my telescope tube is letting light in at the back – something I’d not seen before, either through luck or noise – so I get a cover laser cut to fix that.

It turns out focusing is really quite difficult to achieve accurately with my new setup, and may need adjusting between filters, so I buy a cheap DC focus motor – the focuser comes to bits; I spend an evening improving the tolerances on all the contact surfaces and amending the bracket supplied with the motor, and put it back together.

To mitigate light bouncing around the focuser I dismantled the whole telescope tube, flocked the interior of the scope with anti-reflective material, and added a dew shield. Amongst all this, new DC power cables and connectors were made up, an increasing pile of USB cables/hubs to and from the scope added, a new (commercial) software package added to control it all, and various other little expenses accrued along the way – bottles of high-purity distilled water to clean mirrors, and so on.

Once you’ve got some better software in place for automating capture sessions, being able to automatically drive everything becomes more and more attractive. I had fortunately bought most of the bits to do this in dribs and drabs in the last year, so this was mostly a matter of setup and configuration.

It’s a slippery slope, all this. I think I’ve stopped on this iteration – the next step is a different telescope – but I’ve learned a hell of a lot in doing it. My budget expanded a fair bit from the initial purchase, but was manageable, and I have a working system that produces consistently useful results when clouds permit. I’ve got a lot to learn, still, about the best way to use it and what I can do with it; I also have a lot of learning to do when it comes to PixInsight and my image processing (thankfully not something I need clear skies for).

… okay, maybe I’d still like to get a proper flat field generator, but the “t-shirt at dusk” method works pretty well and only cost £10 for a white t-shirt

Settling in to new digs

Now, of course, I have a set of parts that has brought my output quality up significantly. The images I’m capturing are good enough that I’m happy sharing them widely, and I even feel proud of some. I’ve even gotten some quality-of-life improvements out of all this work – my evenings are mostly spent indoors, working the scope by remote control.

Astrophotography is a wonderful collision of precision engineering, optics, astronomy, and art. And I think that’s why getting “into” it and building a system is so hard – because there’s no right answer. I started writing this post as an “all the things I wish someone had told me to do” post, but really, when I’m making decisions about things like the ideal pixel size of my camera, I’m taking an artistic decision that is underpinned by science, engineering, and maths – it has an impact on what pictures I can take, what they’ll look like, and so on.

M33 Triangulum, showing clearly now the various small nebulas and colourful objects around the main galaxy. The first image I was genuinely gleeful to produce and share as widely as I could.
The Heart Nebula, not quite centred up; the detail in the nebulosity, even with this wideband image, is helped tremendously by the pixel oversampling I achieve with my setup (0.5 arcseconds per pixel)

But there’s still value in knowing what to think about when you’re thinking about doing this stuff. This isn’t a right answer; it’s one answer. At some point I will undoubtedly go get a different telescope – not because it’s a better solution, but because it’s a different way to look at things and capture them.

So I will continue to blog about this – not least because sharing my thoughts on it is something I enjoy and it isn’t fair to continuously inflict it on my partner, patient as she is with my obsession – in the hopes that some other beginners might find it a useful journey to follow along.

by James Harrison at February 09, 2019 10:03 PM

February 08, 2019

Talk Unafraid

A New Chapter

It’s been almost three years since I last wrote a real long-form blog post (past documentation of LiDAR data aside). Given that, particularly for the last two years, long-form writing has been the bulk of my day job, it’s with a wry smile I wander back to this forlorn medium. How dated it feels, in the age of Twitter and instant 140/280-character gratification! And yet such a reflection of my own mental state, in many ways.

I’ve been working at Gigaclear for about as long – three years – as my absence from blogging; this is no coincidence. My work at BBC R&D was conducted in a sufficiently calm atmosphere to permit me the occasional hobby, and the mental energy to engage with hobbies on fair terms. I spent large chunks of it writing imageboard software; that particular project I consider a success – not only has it been taken on by others technically and organisationally, it’s now hosting almost 2 million images and 10 million comments, and has around a quarter of a million users. Not too bad for something I hacked together on long coach journeys and in my evenings. I tinkered with drones on the side, building a few and writing software for controlling them.

At Gigaclear – still a startup, at heart – success and survival have demanded my full attention; this is in part a function of working for an organisation that has, in the span of three years, scaled in staff by over 150%, in live customers by 400%, and in built network by 600%. We’ve cycled senior leadership teams almost annually and recently gone through an investor buyout. It is not a calm organisation, and I am lucky (or unlucky, depending on your view) enough to have been close enough to the pointy end of things to feel some of the brunt of it. It has been an incredible few years, but not an easy few years.

I am a workaholic, and presented with an endless stream of work, I find it difficult to move on. The drones have sat idle and gathered dust; my electronics workbench sits in constant disarray, PCBs scattered. Even for my personal projects, I’ve written barely any code; the largest project I’ve managed lately has been a system to manage a greenhouse heater and temperature sensors (named Boothby), amounting to a few hundred lines of C and Python. My evenings have involved scrawling design diagrams and organisational charts, endless PowerPoint drafts and revisions, hundreds of pages of documentation, too much alcohol, curry, and stress. Given that part of my motivation for moving from R&D to Gigaclear was health (six hours a day commuting into London was fairly brutal on my mental and physical health), it’s ironic that I’ve barely moved the needle on that front. Clearly, I needed something to allow me to refocus my energy at home away from work, lest work simply consume me.

A friend having a look at the moon in daylight – first light with the new telescope and mount, May 2017

As a kid – back in the late 90s – my father bought a telescope. It was what we could afford – a cheap Celestron-branded Newtonian reflector tube on a manual tripod. But it was enough to see Jupiter, Saturn’s rings, and the moon. The tube is still sat in the garage – it was left outside overnight once, wet, in freezing temperatures; the focuser was damaged in another incident; and it sits idle now, practically unusable. But it is probably part of why today I am so obsessed with space, beyond the incredible engineering and beautiful science that goes into the domain. My current bedside reading is a detailed history of the Deep Space Network; a recent book on liquid propellant development is a definite recommendation for those interested in the area. Similar books litter my bookshelves, alongside space operas and books on software and companies.

M33, the Triangulum galaxy

I always felt a bit bad about ruining the telescope (because it was of course me who left it out in the rain), and proposed that for our birthday (my father and I share a birthday, making things much more convenient) we should remedy the lack of a proper telescope in the family. I had been reading various astrophotography subreddits and forums for a while and been astounded by the images terrestrial astrophotographers managed to acquire, so I pitched in the bulk of the cash to get an astrophotography-quality mount – the most important bit to spend money on, I had discovered. And so we had a new telescope in the family. Nothing spectacular – a Skywatcher 200mm Newtonian reflector – but on a solid mount, a Skywatcher EQ6-R Pro. Enough to start with a little bit of astrophotography (and get some fabulous visual views on the way).

M81, Bode’s Galaxy

Of course, once one has a telescope, the natural inclination in this day and age is to share; and as I shared, I was encouraged to try more. And of course, I then discovered just how expensive astrophotography is as a hobby…

An early shot of Jupiter; I later opted to focus on deep-sky objects

But here it is – a new hobby, and one that I have managed to throw myself into with aplomb. The images in this post are all mine; they’re not perfect, but I’m proud of them. That I have discovered a love for something that taps directly into my passion for space is perhaps no surprise. Gigaclear is calming down a little as the organisation matures, but making proper time for my hobby has been helpful in settling my own nerves a little.

We bought the scope back in April of 2017; now, in February 2019, I think I have what I would consider a “competent” astrophotography rig for deep space objects, albeit only small ones. That particular rabbit hole is worth a few more posts, I think – and therein lies the reason why I have penned this prose.

The Heart Nebula, slightly off-piste due to a mount aiming error

Twitter is a poor medium for detailed discussion of why. Look, here’s this fabulous new filter wheel! Here’s a cool picture of a nebula! But explaining how such things are accomplished, why I have decided to buy specific things or do particular things, and the thought processes around them – these are not things that Twitter can accommodate. And so, the blog re-emerges.

An early shot of the core of Andromeda, before I had really realised how big Andromeda is and how narrow my field of view was… and before I got a real camera!

I’ve got a fair bit to write about (as my partner will attest – that I can talk about her publicly is another welcome milestone since my last blog posts) and a blog feels like the right forum for it. And so I will rekindle this strange, isolated world – an entire website for one person, an absurd indulgence – to share my renewed passion, now directed at astrophotography. Hopefully I can add to the corpus the parts I feel are missing – rich documentation of mistakes and errors, as well as celebrations of the successes.

And who knows – maybe that’ll help get my brain back on track, too. Because at the end of the day, working all day long isn’t good for your employer or for your own brain; but if you’re a workaholic, not working takes work!

by James Harrison at February 08, 2019 11:45 PM

February 07, 2019

rncbc.org

Qtractor 0.9.4 - The Winter'19 Release


Dear all,

Qtractor 0.9.4 (winter'19 beta) is released!

Changes for this season are as follows:

  • Drag-moving and copy-pasting existing clips, while over the main track-view, now shows the respective (audio wave-shapes and MIDI piano-rolls) graphical representations, as much as possible.
  • For good and bad, session name changes now trickle down to respective audio/MIDI file names as well.
  • Audio output monitoring meters may now be shown on MIDI tracks and buses as a default user preference option (View/ Options.../Plugins/Instruments/Show audio output monitoring meters) and also in plugin list context sub-menu (Audio/Meters).
  • Custom color (palette) themes can be exported to and imported from external files.
  • Native support for LV2 plug-in GTK2 and X11 UIs in Qt5 hosts is now enabled by default at configure time.
  • Fixed minimum input value as 10% (was 1%) for audio clip time-stretching in the Clip / Edit... dialog.

Description:

Qtractor is an audio/MIDI multi-track sequencer application written in C++ with the Qt framework. Its target platform is Linux, where the JACK Audio Connection Kit (JACK) for audio and the Advanced Linux Sound Architecture (ALSA) for MIDI are the main infrastructures, evolving as a fairly featured Linux desktop audio workstation GUI, especially dedicated to the personal home studio.

Website:

http://qtractor.org
http://qtractor.sourceforge.net
https://qtractor.sourceforge.io

Project page:

http://sourceforge.net/projects/qtractor

Downloads:

http://sourceforge.net/projects/qtractor/files

Git repos:

https://git.code.sf.net/p/qtractor/code
https://github.com/rncbc/qtractor.git
https://gitlab.com/rncbc/qtractor.git
https://bitbucket.org/rncbc/qtractor.git

Wiki (help wanted!):

http://sourceforge.net/p/qtractor/wiki/

License:

Qtractor is free, open-source Linux Audio software, distributed under the terms of the GNU General Public License (GPL) version 2 or later.

Enjoy && Keep the fun.

Donate to rncbc.org

by rncbc at February 07, 2019 07:00 PM

February 01, 2019

digital audio hacks – Hackaday

Those Voices in Your Head Might be Lasers

What if I told you that you can get rid of your headphones and still listen to music privately, just by shooting lasers at your ears?

The trick here is something called the photoacoustic effect. When certain materials absorb light — or any electromagnetic radiation — that is either pulsed or modulated in intensity, the material will give off a sound. Sometimes not much of a sound, but a sound. This effect is useful for spectroscopy, biomedical imaging, and the study of photosynthesis. MIT researchers are using this effect to beam sound directly into people’s ears. It could lead to devices that deliver an audio message to specific people with no hardware on the receiving end. But for now, ditching those AirPods for LaserPods remains science fiction.

There are a few mechanisms that explain the photoacoustic effect, but the simple explanation is that the energy causes localized heating and cooling, the material microscopically expands and contracts, and that causes pressure changes in the sample and the surrounding air. And pressure waves in air are just a fancy way of saying sound.
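
As a rough illustration (ours, not the MIT team’s setup), the modulated-beam case amounts to amplitude-modulating the laser’s optical power with the audio signal; the absorbed, time-varying power becomes time-varying heating, and the resulting pressure roughly tracks its rate of change. All numbers here are made up:

    # Illustrative intensity modulation of a laser "carrier" by a 440 Hz tone.
    import numpy as np

    fs = 48_000                             # samples per second
    t = np.arange(fs) / fs                  # one second of time
    audio = np.sin(2 * np.pi * 440 * t)     # tone to transmit
    depth = 0.5                             # modulation depth (0..1)

    power = 1.0 + depth * audio             # optical power, arbitrary units
    pressure = np.gradient(power, t)        # ~ acoustic response of the vapor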

Demonstrating a Proof of Concept

In the case of the MIT project, a 1.9 μm thulium laser produces a beam tuned to water vapor at power levels that are not harmful to your eyes. The water vapor around your ears — and there is almost always some — absorbs the laser and generates sound. The team used both a modulated beam and a pulsed beam. In the pulsed beam case, the laser actually sweeps across your ear at the desired audio frequency.

Each method has advantages and disadvantages. Modulated light creates a higher fidelity sound. However, the sweeping technique produces louder sound and only creates the correct frequency at a certain distance from the laser. The team thinks that could be used to target specific people. Presumably, people nearby might still hear sounds, but at the wrong frequencies.

So far, this is more or less a lab demonstration. They’ve projected sound about 2.5 meters away at sufficient volume to hear with just your ear. We are guessing there are a host of practical problems to overcome to make this a workable system. Just targeting a specific person’s ear is probably nontrivial.

Nearly As Old as the Phone

Turns out the photoacoustic effect was known as early as 1880, when Alexander Graham Bell was working on his photophone, which used sunlight to transmit speech. That device had a receiver that used a light sensor, but he noticed that he could produce sound waves just by hitting a solid object with pulsed sunlight, and that the frequency depended on the type of material. This eventually led to the spectrophone. However, with the crude sensors and light sources available, it was never really practical. Today, though, it is used for a variety of medical and biological tests.

This isn’t the first time I’ve written about MIT’s photoacoustic work. They’ve used it for spectroscopy that can detect gases at a distance. Of course, if you are willing to allow a receiver, sending audio with a laser isn’t hard at all.

Honestly, we were a little surprised at how simple this looks and we wondered why we haven’t seen any homebrew projects that use this effect for something. Sure, a good spectroscope probably requires a tunable laser, but it would be interesting to see what kind of hobby-level projects could use gas and a laser to create sound. If you build something, be sure to tell us about it.

by Al Williams at February 01, 2019 06:01 PM

January 27, 2019

News – Ubuntu Studio

Updates for January 2019

The Ubuntu Studio team has been working on some exciting things since the release of Ubuntu Studio 18.10 back in October, and we thought we should update the community on these things.

Ubuntu Studio Installer

In the past, the “Ubuntu Studio Metapackage Installer” has served to allow those that choose to install metapackages in Ubuntu […]

by eeickmeyer at January 27, 2019 10:07 PM

January 21, 2019

Linux – CDM Create Digital Music

Bitwig Studio is about to deliver on a fully modular core in a DAW

Bitwig Studio may have started in the shadow of Ableton, but one of its initial promises was building a DAW that was modular from the ground up. Bitwig Studio 3 is poised to finally deliver on that promise, with “The Grid.”

Having a truly modular system inside a DAW offers some tantalizing possibilities. It means, in theory at least, you can construct whatever you want from basic building blocks. And in the very opposite of today’s age of presets, that could make your music tool feel more your own.

Oh yeah, and if there is such an engine inside your DAW, you can also count on other people building a bunch of stuff you can reuse.

Why modularity? It doesn’t have to just be about tinkering (though that can be fun for a lot of people).

A modular setup is the very opposite of a preset mentality for music production. Experienced users of these environments (software especially, since it’s open-ended) do often find that patching exactly what they need can be more creative and inspirational. It can even save time versus the effort spent trying to whittle away at a big, monolithic tool just to get to the bit you actually want. But the traditional environments for modular development are fairly unfriendly to new users – that’s why very often people’s first encounters with Max/MSP, SuperCollider, Pd, Reaktor, and the like are in a college course. (And not everyone has access to those.) Here, you get a toolset that could prove more manageable. And then once you have a patch you like, you can still interconnect premade devices – and you can work with clips and linear arrangement to actually finish songs. With the other tools, that often means coding out the structure of your song or trying to link up to a different piece of software.

We’ve seen other DAWs go modular in different ways. There’s Apple Logic’s now rarely used Environment. FL Studio has a Patcher tool for chaining instruments and effects. There’s Reason with its rich, patchable rack and devices. There’s Sensomusic Usine, which is a fully modular DAW / audio environment, plus a DMX lighting and video tool – perhaps the most modular of these (even relative to Bitwig Studio and The Grid). And of course there’s Ableton Live with Max for Live, though that’s really a different animal – it’s a full patching development environment that runs inside Live via a runtime, with API and interface hooks that allow you to access its devices. The upside: Max for Live can do just about everything. The downside: it’s mostly foreign to Ableton Live (as it’s a different piece of software with its own history), and it could be too deep for someone just wanting to build an effect or instrument.

Updated: A commenter rightfully points out that I omitted MUX Modular, in MuLab. Indeed, that approach is similar – as you can read in the modular docs, you get building blocks integrated inside the DAW.

So, enter The Grid. This is really the first time a relatively conventional DAW has gotten its own, native modular environment that can build instruments and effects. And it looks like it could be accomplished in a way that feels comfortable to existing users. You get a toolset for patching your own stuff inside the DAW, and you can even mix and match signal to outboard hardware modular if that’s your thing.

And it really focuses on sound applications, too, with three devices. One is dedicated to monophonic synths, one to polyphonic synths, and one to effects.

From there, you get a fully modular setup with a modern-looking UI and 120+ modules to choose from.

They’ve done a whole lot to ease the learning curve normally associated with these environments – smoothing out some of the wrinkles that usually baffle beginners:

You can patch anything to anything, in to out. All signals are interchangeable – connect any out to any in. Most other software environments don’t work that way, which can mean a steeper learning curve. (We’ll have to see how this works in practice inside The Grid.)

Everything’s stereo. Here’s another way of reducing complexity: normally, you have to duplicate signals to get stereo, which can be confusing for beginners. Here, every audio cable and every control cable routes stereo, reducing cable count and cognitive effort.

There are default patchings. Funny enough, this idea has actually been seen on hardware – there are default routings so modules automatically wire themselves if you want, via what Bitwig calls “pre-cords.” That means if you’re new to the environment, you can always plug stuff in.

They’ve also promised to make phase easier to understand, which should open up creative use of time and modulation to those who may have been intimidated by these concepts before.

“Pre-cords” mean you can easily add default patchings to get stuff working straight away.

What fun is a modular tool if you can’t explore phase? Bitwig say they’ve made this concept more accessible and easier to learn.

There’s also a big advantage to this being native to the environment – again, something you could only really say about Sensomusic Usine before now (at least as far as things that could double as DAWs).

This unlocks:

  • Nesting and layering devices alongside other Bitwig devices
  • Full support from the Open Controller API. (Wow, this is a pain the moment you put something like Reaktor into another host, too.)
  • Routing modulation out of your stuff in The Grid into other Bitwig devices.
  • Complete hardware modular integration – yeah, you can mix your software with hardware as if they’re one environment. Bitwig says they’ve included “dedicated grid modules for sending any control, trigger, or pitch signal as CV Out and receiving any CV In.”

I’ve been waiting for this basically since the beginning. This is an unprecedented level of integration, where every device you see in Bitwig Studio is already based on this modular environment. Bitwig had even touted that early on, but I think they were overzealous in letting people know about their plans. It unsurprisingly took a while to make that interface user friendly, which is why it’ll be a pleasure to try this now and see how they’ve done. But Bitwig tells us this is in fact the same engine – and that the interface “melds our twin focus on modularity and swift workflows.”

There’s also a significant dedication to signal fidelity. There’s 4X oversampling throughout. That should generally sound better, but it also has implications for control and modularity. And it’ll make modulation more powerful in synthesis, Bitwig tells CDM:

With phase, sync, and pitch inputs on most every oscillator, there are many opportunities here for complex setups. Providing this additional bandwidth keeps most any patch or experiment from audible aliasing. As an open system, this type of optimization works for the most cases without overtaxing processors.

It’s stereo only, which puts it behind some of the multichannel capabilities of Reaktor, Max, SuperCollider, and others – Max/MSP especially given its recent developments. But that could see some growth in a later release, Bitwig hints. For now, I think stereo will keep us plenty busy.

They’ve also been busy optimizing, Bitwig tells us:

This is something we worked a lot on in early development, particularly optimizing performance on the oversampled, stereo paths to align with the vector units of desktop processors. In addition, the modules are compiled at runtime for the best performance on the particular CPU in use.

That’s a big deal. I’m also excited about using this on Linux – where, by the way, you can really easily use JACK to integrate other environments like SuperCollider or live coding tools.
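
For the curious, that kind of inter-application patching can even be scripted. A hypothetical sketch with the JACK-Client Python package – port names vary by setup, so list yours first:

    # Wire SuperCollider's outputs to the sound card via JACK (a JACK server
    # must be running; port names are assumptions -- see client.get_ports()).
    import jack

    client = jack.Client("patcher")
    client.activate()
    client.connect("SuperCollider:out_1", "system:playback_1")
    client.connect("SuperCollider:out_2", "system:playback_2")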

If you’re at NAMM, Bitwig will show The Grid as part of Bitwig Studio 3. They have a release coming in the second quarter, but we’ll sit down with them here in Berlin for a detailed closer look (minus NAMM noise in the background or jetlag)!

Oh yeah, and if you’ve got the Upgrade Plan, it’s free.

This is really about making a fully modular DAW – as opposed to the fixed multitrack tape/mixer models of the past. Bitwig have even written up an article about how they see modularity and how it’s evolved over various release versions:

BEHIND THE SCENES: MODULARITY IN BITWIG STUDIO

More on Bitwig Studio 3:

https://www.bitwig.com/en/19/bitwig-studio-3

Obligatory:

Oh yeah, also Tron: Legacy seems like a better movie with French subtitles…

That last line fits: “And the world was more beautiful than I ever dreamed – and also more dangerous … hop in bed now, come on.”

Yeah, personal life / sleep … in trouble.

The post Bitwig Studio is about to deliver on a fully modular core in a DAW appeared first on CDM Create Digital Music.

by Peter Kirn at January 21, 2019 04:21 PM

January 15, 2019

KXStudio News

Carla 2.0 RC3 is here!

Hello everyone, happy new year!
This is a quick fix for the Carla Plugin Host (soon-to-be) stable series.
There are only very small fixes here, plus a change in how specific plugins load.
This release starts a "release early, release often" attitude that, hopefully, I can maintain from now on.

Changelog

  • Fix bridge-lv2-x11 crash when manually started from CLI
  • LV2: Don't prefer plugin bridges for certain hardcoded plugins (Calf, ir.lv2 and v1 series)
  • VST: Do not call plugin effEditIdle on update display opcode, fixing crashes for a few plugins

Previously, a few plugins were hardcoded to run as plugin bridges, as they were deemed unsafe because of how they use their plugin UIs (instance-access).
Carla automatically started these plugins as bridges, so as not to crash the main process when Gtk and Qt get in the way.
Plugin state in bridges has a few issues (as plugin bridges are experimental right now), which I was hoping to fix before the final 2.0 is here.
But it seems that will not happen (not an easy fix), so now these plugins will run normally as all others do, in the same process.
This means the following possible breaking changes:

  • If the v1 plugin series is compiled with a Qt version different from the one Carla is using, expect a crash on load or soon afterwards
  • Calf plugin UIs will be missing their graphs by default, unless you disable running plugin UIs in bridge mode in Carla settings

This is not an issue for other plugin UIs that use Qt or Gtk, as they do not use LV2 instance-access.
Carla runs Gtk and Qt LV2 UIs in a separate process, but because these UIs require direct access to the plugin instance, they cannot be bridged.

Downloads

To download Carla binaries or source code, jump on over to the KXStudio downloads section.
If you're using the KXStudio repositories, you can simply install "carla-git" (plus "carla-lv2" and "carla-vst" if you're so inclined).
Bug reports and feature requests are welcome! Jump on over to Carla's GitHub project page for those.

Future

A "2.0-final" milestone is on GitHub, which lists the remaining issues to be fixed before 2.0 is considered "final".
New features have already made their way into Carla, but they sit on the develop branch.
When the "final" version is released, expect a 2.1-beta to come shortly afterwards.

by falkTX at January 15, 2019 05:41 PM

JackAss v1.1 release

This is a tiny bugfix for JackAss, a VST plugin that provides JACK-MIDI support for VST hosts.

The only change is that Wine 64bit builds work now, so you can finally load it inside 64bit Windows applications running on GNU/Linux via Wine.
Tested to work with FL Studio 20.

You can find JackAss source code and bug tracker on GitHub, at https://github.com/falkTX/JackAss/.

by falkTX at January 15, 2019 05:40 PM

January 10, 2019

Bug tracker updated

@paul wrote:

tracker.ardour.org has been upgraded from an ancient version of Mantis to the most current stable release. The website looks very different now, but all existing bug reports and user accounts are still there. We hope to find some way to limit the bug-report spam that has recently turned into a small but annoying time waster, and ultimately to enable single sign-on with the rest of ardour.org.


by @paul Paul Davis at January 10, 2019 06:13 PM

January 07, 2019

linux-audio « WordPress.com Tag Feed

Video: Aircraft battle Rosedale blaze from sky at night

The fire at Rosedale, which has burned more than 11,500ha across an 85km perimeter, started at an […]

January 07, 2019 02:10 AM

December 23, 2018

Libre Music Production - Articles, Tutorials and News

Libre Music Production is taking a break

As the LMP crew right now only consists of one person (me) and we haven't published any new content for a year, I have decided to let LMP take a break.

This break might be forever. I will keep the content through 2019, at least.

If anyone would like to take over the site, please contact me: staffan.melin@oscillator.se.

Thank you for visiting LMP!

by admin at December 23, 2018 11:17 PM

December 20, 2018

drobilla.net - LAD

Suil 0.10.2

suil 0.10.2 has been released. Suil is a library for loading and wrapping LV2 plugin UIs. For more information, see http://drobilla.net/software/suil.

Changes:

  • Add support for Cocoa in Qt5
  • Fix resizing and add idle and update rate support for Qt5 in Gtk2
  • Fix various issues with Qt5 in Gtk2

by drobilla at December 20, 2018 05:22 PM

Linux – CDM Create Digital Music

Build your own scratch DJ controller

If DJing originated in the creative misuse and appropriation of hardware, perhaps the next wave will come from DIYers inventing new approaches. No need to wait, anyway – you can try building this scratch controller yourself.

DJWORX has done some great ongoing coverage of Andy Tait aka Rasteri. You can read a complete overview of Andy’s SC1000, a Raspberry Pi-based project with a metal touch platter:

Step aside portablism — the tiny SC1000 is here

In turn, there’s also that project’s cousin, the 7″ Portable Scratcher aka 7PS.

If you’re wondering what portablism is, that’s DJs carrying portable record players around. But maybe more to the point, if you can invent new gear that fits in a DJ booth, you can experiment with DJing in new ways. (Think how much current technique is really circumscribed by the feature set of CDJs, turntables, and fairly identical DJ software.)

Or to look at it another way, you can really treat the DJ device as a musical instrument – one you can still carry around easily.

The SC1000 in Rasteri’s capable hands is exciting just to behold:

Everything you need to build this yourself – or to discover the basis for other ideas – is up on GitHub:

https://github.com/rasteri/SC1000/

This is not a beginner project. But it’s not overwhelmingly complicated, either. Basically…

Ingredients:

  • Custom PCB
  • System-on-module (the brains of the operation)
  • SD card
  • Enclosure
  • Jog wheel with metal capacitive touch surface and magnet
  • Mini fader

Free software powers the actual DJing. (It’s based on xwax, open source Linux digital vinyl emulation, which we’ve seen as the basis of other DIY projects.)

Process:

You need to assemble the main PCB – there’s your soldering iron action.

And you’ll flash the firmware (which requires a PIC programmer), plus transfer the OS to SD card.

Assembly of the jog wheel and enclosure requires a little drilling and gluing.

Other than that it’s a matter of testing and connection.

Build tutorial:

Fully open source under a GPLv2 license. (Andy sort of left out the hardware license – this illustrates that GNU needs a license that blankets both hardware and software, though that’s legally complex. There’s no copyright information on the hardware; to be fully open it needs something like a Creative Commons license on those elements of the designs. But that’s not a big deal.)

It looks really fantastic. I definitely want to try building one of these in Berlin – will team up and let you know how it goes.

This clearly isn’t for everyone. But the reason I mention going to custom hardware is that it means both that you can adapt your own technique to a particular instrument, and that you can modify the way the digital DJ tool responds if you so choose. It may take some time before we see that bear fruit, but it definitely holds some potential.

Via:
Rasteri’s SC1000 scratch controller — build your own today [thanks to Mark Settle over at DJWORX!]

Project page:
https://github.com/rasteri/SC1000/

Thanks, Dubby Labby!

The post Build your own scratch DJ controller appeared first on CDM Create Digital Music.

by Peter Kirn at December 20, 2018 05:13 PM

October 09, 2018

GStreamer News

GStreamer Conference 2018: Talks Abstracts and Speakers Biographies now available

The GStreamer Conference team is pleased to announce that talk abstracts and speaker biographies are now available for this year's lineup of talks and speakers, covering again an exciting range of topics!

The GStreamer Conference 2018 will take place on 25-26 October 2018 in Edinburgh (Scotland) just after the Embedded Linux Conference Europe (ELCE).

Details about the conference and how to register can be found on the conference website.

This year's topics and speakers:

Lightning Talks:

  • gst-mfx, gst-msdk and the Intel Media SDK: an update (provisional title)
    Haihao Xiang, Intel
  • Improved flexibility and stability in GStreamer V4L2 support
    Nicolas Dufresne, Collabora
  • GstQTOverlay
    Carlos Aguero, RidgeRun
  • Documenting GStreamer
    Mathieu Duponchelle, Centricular
  • GstCUDA
    Jose Jimenez-Chavarria, RidgeRun
  • GstWebRTCBin in the real world
    Mathieu Duponchelle, Centricular
  • Servo and GStreamer
    Víctor Jáquez, Igalia
  • Interoperability between GStreamer and DirectShow
    Stéphane Cerveau, Fluendo
  • Interoperability between GStreamer and FFMPEG
    Marek Olejnik, Fluendo
  • Encrypted Media Extensions with GStreamer in WebKit
    Xabier Rodríguez Calvar, Igalia
  • DataChannels in GstWebRTC
    Matthew Waters, Centricular
  • Me TV – a journey from C and Xine to Rust and GStreamer, via D
    Russel Winder
  • GStreamer pipeline on webOS OSE
    Jimmy Ohn (온용진), LG Electronics
  • ...and many more
  • ...
  • Submit your lightning talk now!

Many thanks to our sponsors, Collabora, Pexip, Igalia, Fluendo, Facebook, Centricular and Zeiss, without whom the conference would not be possible in this form. And to Ubicast who will be recording the talks again.

Considering becoming a sponsor? Please check out our sponsor brief.

We hope to see you all in Edinburgh in October! Don't forget to register!

October 09, 2018 01:30 PM

October 02, 2018

GStreamer News

GStreamer 1.14.4 stable bug fix release

The GStreamer team is pleased to announce another bug fix release in the stable 1.14 release series of your favourite cross-platform multimedia framework!

This release only contains bugfixes and it should be safe to update from 1.14.x.

See /releases/1.14/ for the details.

Binaries for Android, iOS, Mac OS X and Windows will be available shortly.

Download tarballs directly here: gstreamer, gst-plugins-base, gst-plugins-good, gst-plugins-ugly, gst-plugins-bad, gst-libav, gst-rtsp-server, gst-python, gst-editing-services, gst-validate, gstreamer-sharp, gstreamer-vaapi, or gst-omx.

October 02, 2018 11:30 PM