Monthly Archives: November 2014

Once again LINN Records are announcing a “24-Bits Of Christmas” promotion, where they are offering a free high-resolution download every day from December 1st through Christmas.  The doors are opening early this year, and the first track is already available.  Check it out!

I noticed a peculiar thing the other day.  At least, I thought it was peculiar at the time.  Let me tell you all about it.

One of the tricks you can easily use to fine-tune the positioning of your loudspeakers is to move your head instead.  Moving your head from side-to-side, from back-to-front, and up-and-down, you can listen to how dependent your system’s sonic balance and imaging are with regard to listening position.  These things are not so much governed by your system as by your system’s interaction with your listening room.  This is why identical systems can sound radically different when installed in different listening rooms.

Changes in sound balance as you move your head are usually caused by the inescapable fact that sound generated by a loudspeaker driver is not uniformly distributed into the room.  Most of it propagates straight ahead out of the loudspeaker, but as you move away from the straight-ahead position the output starts to fall off.  Complicating this behaviour is the fact that as the frequency rises, so this off-centre drop-off gets progressively worse.  We refer to this phenomenon using the term ‘dispersion’, and it is a natural consequence of the fact that the loudspeaker driver is not infinitely small.  One consequence of this dispersion is that the listener’s perceived frequency balance will depend to some extent on where in the room he is located.
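For the mathematically inclined, the standard idealization of this behaviour is the rigid circular piston in an infinite baffle – a textbook model, to be clear, not a description of any particular driver.  Its far-field output at an angle θ off-axis, relative to the on-axis level, is

$$ D(\theta) \;=\; \left|\,\frac{2\,J_1(ka\sin\theta)}{ka\sin\theta}\,\right|, \qquad k=\frac{2\pi f}{c} $$

where a is the piston radius, f the frequency, c the speed of sound, and J₁ the first-order Bessel function.  Because k rises with frequency, the product ka grows and the radiation pattern narrows, which is exactly the progressive off-centre drop-off described above: a one-inch tweeter disperses far more widely at 2kHz than it does at 15kHz.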

The perception of a good ‘holographic’ spatial image is a much more complex matter, and, if we are honest about it, is not fully understood.  The spatial image is a construct that our brains create for us, rather than a specific property of the system, and so is very much a matter that dwells within the realm of psychoacoustics.  Having said that, there are a number of things that we do know to have a positive impact on a system’s ability to generate a holographic spatial image.  Chief among those is timing coherence.  It seems that the more extreme the measures taken to improve timing coherence, the better the imaging we end up with.  The real problem arises because we cannot actually measure this ‘timing coherence’ at all.  Frankly, we can only wave our arms in attempting to define what it actually is.

The best way to rationalize timing coherence is to think of a loudspeaker.  Modern speaker design theory takes great pains to minimize cabinet resonance.  These days even the most budget-friendly designs from the better manufacturers have non-resonant cabinets that respond with a dead thud when you rap them with your knuckles, a property that was evident only on the best of high-end designs as little as 20 years ago.  A resonant cabinet will store energy and release it as sound waves a fraction of a second later.  This stored energy, after all, is what you are hearing when rapping a cabinet with your knuckles produces a distinctive sound.  By contrast, rapping the cabinets of my B&W 802 Diamonds produces nothing more than sore knuckles.

Understanding these concepts in loudspeakers is quite simple, but extending them to electronic components is less so.  Even so, some concepts are well understood.  Removing capacitors from the signal path is one such example.  Mechanically isolating the chassis, less so.  But if you get the chance to listen to Nordost’s Sort Füts and Sort Kones it can be very instructive.

Anyway, all this is to say that if your audio components are well designed they can generate that holographic sense of image that many of us crave from our systems.  But you still need to set the system up correctly in the listening room in order to make it happen.  This is because the sound that reaches the listening position is a composite of direct sound and a combination of different reflected sounds.  If you ever get to hear a high-end loudspeaker inside an anechoic chamber – which I recognize very few of you ever will – you would be amazed at how awful it sounds.  It will sound so dry you’ll need to take a bottle of water in there with you.  When you come out, you’ll feel like you have cotton wool in your ears.  So it is important to recognize the dominant effect of the room interaction on how your system actually sounds.

It also explains how the concept of a ‘sweet spot’ actually arises.  There is usually only one place in your listening room where the combination of direct and reflected sound comes together to generate the optimum image.  When you set up your listening room, your challenge is to make it such that this optimum spot coincides with where you place your listening chair.  Usually, when things are close to ideal, the optimum spot will move with the loudspeakers, so if it is two feet in front of your listening chair, you can correct the situation by moving the speakers two feet forward.  Or you could just move your chair.

I have one last observation to make here, and it is quite an important one.  Think about your listening chair.  If it has a high back, then reflections off the back of the chair will tend to dominate the sound field, and you may find that regardless of where you position it, you just don’t get a good image.  In general, you should always strive to use a listening chair with a low back.  In my own listening room, therefore, I have a rather stylish Italian white leather sectional sofa with a low back that comes below my shoulder line.

When a new component comes along which makes a significant change to your system, such as my new DirectStream DAC, its contribution may be such as to require a reassessment of where that optimal listening position is located.  It is quite an easy process – or at least it should be.  Sitting in your favourite listening position, you move your head from side-to-side, then back-and-forth, and finally up-and-down, until you locate the new optimal position.  You then adjust your speaker position, and/or move your listening chair, to correct for the offset.

It should be easy, but in my case it has proven not to be so.  You see, regardless of the adjustments I make, the optimum position is always about 10-12 inches higher than where I am sitting.  I have come to realize that the culprit is my much-loved sofa.  Even though its back doesn’t even come up to my shoulders, it appears that it still manages to contribute significant reflections up from its seat cushions.  Also, as I sit on it with my palms lightly touching the seat cushions, I can plainly pick up vibrations from the leather surface.  These are not at all evident if I instead place my hands on fabric cushions.  Right now I have co-opted a pair of seat cushions from another of my sofas to raise my listening position by about 10 inches.  It will do for some listening tests, but of course I now have no back support whatsoever. I have a bad back, so that is not the basis of a long-term solution.

So is my problem down to reflections from the leather surfaces, or re-radiation from the vibrating surface?  I am working on the notion of the former, because reflections tend to disrupt imaging, whereas vibrations tend to disrupt tonal neutrality, and in any case are surely too heavily damped.  For reasons of practicality (and in the interests of sustaining a 36-year marriage which is worth more than my stereo) the sofa needs to stay.  I am contemplating a solution to damp the source of these reflections by judicious placement of an absorptive panel on the ceiling above the sofa.  Last year I placed one on the ceiling above and between my speakers to great effect, so I am thinking along the lines of something similar.

Meanwhile, I plan to experiment with covering the sofa’s leather surfaces with some absorptive material just to see what that does.  Such are the joys of the high end.  Your system and your room are like two top drivers on the same Formula One team.  Getting them to cooperate can be a challenge.

BitPerfect user Stefan Leckel has come up with a useful solution to the Yosemite Console Log problem.  In case you are unaware, under Yosemite, when you use BitPerfect, iTunes fills the Console Log with a stream of entries – several per second – which rapidly fills the Console Log to capacity.  At that point, the oldest messages are deleted.  In effect, this renders the Console Log pretty useless as a diagnostic tool.

Stefan’s ingenious solution is a simple script file which, in effect, sets up the Console App so that it ignores these specific messages.  However, because the script works at the system level, using it requires a level of comfort with working on OS X using tools that are capable of wreaking havoc, although hopefully the instructions below are easy enough for most people to use with a degree of comfort.  As with anything that involves tinkering at the system level, YOU USE THIS TOOL ENTIRELY AT YOUR OWN RISK, AND WITH NO EXPRESS OR IMPLIED WARRANTY.  If in doubt, channel Nancy Reagan, and “Just Say No”:)

First, download the special script file by clicking here.  It doesn’t matter where you place this file.  Your Downloads folder would do fine.  If you are concerned about the authenticity of this file, or about what it might do to your Mac, its contents are reproduced below for you to inspect and compare.

To use the script file, you need to first open a Terminal window.  Inside the terminal window type the following: “sudo bash ” – don’t type the quote marks, and be sure to leave a space after the bash – and DON’T press the ENTER key.  Next, drag and drop the file that you just downloaded into the Terminal window.  This action will complete the “sudo bash ” line with the full path of the file.  Now you can press ENTER.  You will be prompted to enter your system password.  Enter it (nothing will show in the Terminal as you type), and hit ENTER.

That’s it.  The Console Log should now work fine.  If you want to reset it back to how it was, just re-run the same sequence.  The same command is designed to toggle the modification on and off.
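For the curious, the core of Stefan’s toggle is nothing more than a grep test followed by one of two rewrites.  Here is a minimal sketch of that logic, run against a scratch file rather than the live /etc/asl.conf – the marker text and paths here are illustrative stand-ins, not the exact lines from Stefan’s script:

```shell
#!/bin/bash
# Illustrative marker line; the real script uses the full ASL rule text.
MARKER='? [= Sender BitPerfect] ignore'

CONF=$(mktemp)                       # scratch file standing in for /etc/asl.conf
echo '# pre-existing asl.conf contents' > "$CONF"

toggle() {
    if grep -qF "$MARKER" "$CONF"; then
        # Marker present: filter it out (the "remove modification" branch)
        grep -vF "$MARKER" "$CONF" > "$CONF.tmp"
    else
        # Marker absent: prepend it (the "add modification" branch)
        { echo "$MARKER"; cat "$CONF"; } > "$CONF.tmp"
    fi
    mv "$CONF.tmp" "$CONF"           # swap the rewritten config into place
}

toggle
grep -qF "$MARKER" "$CONF" && echo "modification active"
toggle
grep -qF "$MARKER" "$CONF" || echo "modification removed"
```

The real script does the same dance directly on /etc/asl.conf, which is why it needs sudo, and why it restarts syslogd afterwards so that the new rules take effect.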

Thank you Stefan!

Below, for reference, I have reproduced the content of the script in full:

# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# use at your own risk, no warranty
# !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
# checks if asl.conf is already modified
set -x
cat /etc/asl.conf | grep -F "? [= Facility] [= Sender BitPerfect] [S= Message] ignore" > /dev/null

if [ $? -eq 0 ]
then
    echo "removing bitperfect modification from /etc/asl.conf file"
    cat /etc/asl.conf | grep -v -F "? [= Facility] [= Sender BitPerfect] [S= Message] ignore" > /etc/asl.bitperfect
else
    echo "adding bitperfect modifications to /etc/asl.conf file"
    echo "? [= Facility] [= Sender BitPerfect] [S= Message] ignore" > /etc/asl.bitperfect
    cat /etc/asl.conf >> /etc/asl.bitperfect
fi

echo "backup /etc/asl.conf to /etc/asl.conf.bitperfect"
cp /etc/asl.conf /etc/asl.conf.bitperfect

echo "activating new config"
mv /etc/asl.bitperfect /etc/asl.conf

echo "restarting syslogd daemon"
killall syslogd

echo "done."

Roger has been somewhat shunted unceremoniously to one side in the modern world.  We seem to have forgotten why he was ever there in the first place, and the important role he used to play.  Without Roger, our world today is a less friendly place, one in which misunderstandings are easy to come by.  Personally, I miss him, but then again I suppose I am just another old fart.

In the early days of person-to-person radio communications, Roger played a critically important role.  If you are flying an aeroplane, and you want to announce to the control tower that you’re commencing your takeoff roll, you want to be sure that the control tower is aware of that, otherwise all sorts of unpredictable outcomes could potentially result, some of them dire.  That’s where Roger comes in.  The control tower responds “Roger” and now you know your message has been received and, by extension, that the control tower knows you are rolling.  It is part of what we today recognize as a handshaking protocol, something that ensures the effectiveness of a two-way communication.  Handshaking is a tool to ensure that a message has been received, that it has been understood, and that both parties know either who is expected to speak next, or that they are agreed that the conversation is over.

When speaking to someone face-to-face, or over the telephone, there are implied cues to which we tend to adhere in order to provide this handshaking element.  These can be turns of phrase, vocal inflections, gestures, and the like.  They often vary among cultures.  How we communicate with a person has important ramifications as to how the other person perceives us, and how we in turn perceive the other person.  We may perceive that person to be brusque, friendly, rude, gregarious, or to have any of a number of attributes.  If, as a person, your inter-personal communications cause others in the world to perceive you wrongly, it is well-understood that you could have problems in your life.

Generally, it is important in our day-to-day inter-personal communications that we understand how the subtext of our communication is being received.  If you ask someone if they want to have a beer with you after work, there is a world of difference between “No” and “Gee, I’m sorry, but my daughter has soccer practice”.  Most of us, when we speak with someone face-to-face or on the telephone, understand the subtext, even as we recognize that the understanding itself is sometimes in error.

Roger’s absence first became a problem with the widespread introduction of e-mail, at first mostly into business correspondence.  If I send an e-mail inviting a colleague out for dinner when I’m in town next week, many people will find it acceptable to reply “No” in an e-mail when they really mean “Gee, I’m sorry, but I’m out of town that day”, even when they would never dream of responding with the terse “No” in a face-to-face situation.  It is part of a complex issue, one on which I don’t propose to write a treatise, but a major contributory factor is that, for most of us, it takes far longer to compose an e-mail message that properly encapsulates the subtext with which we wish to endow our response, and often we just don’t feel we have that time.  Personally, I find that excuse to be a lazy one, and, if not lazy, then disrespectful towards the recipient.

In today’s world, for many people the text message has replaced the e-mail, particularly for one-on-one conversations.  Partly by their nature, and partly due to the hardware typically used to send them, text messages tend to be terse by default.  Additionally, text message conversations tend to replace telephone conversations for many people.  They want to multi-task.  They will fit your text conversation in as they find time during the course of their day.  And so will you.  Consequently, the alternative of a phone call takes on something of the aura of an intrusion.  Which is rather frustrating, since, in the grand scheme of things, a phone conversation is always many, many times more effective.  But that too is another discussion.

This is where Roger comes in.  The lingua franca of texting is the curt message.  I don’t know about you, but I really feel the need to know that a message has been received and understood.  If I send a text that says something to the effect of “I need to see your report by the end of the day”, I feel unhappy if I don’t get a response.  It’s like if I said the same thing to that person in my office, and he just walked out without responding.  It is very clear to me how I would interpret such an action.  And shortly thereafter it would be equally clear to the other person.  What is missing is Roger.  All you need is a text response that says “Will do”.  Or even “K”, if that’s your thing.  Sadly, and frustratingly, I am finding that Roger is very much the exception rather than the rule these days.

I said earlier that inter-personal “handshaking” protocols are to a large degree cultural, and maybe that’s what’s going on here.  A texting culture is arising – or has arisen – in which subtext is no longer being conveyed at all – even emoticons, as best as I can tell, are en route to being passé.  If so, that is a counter-productive development.  I do converse with a few people routinely whose preferred mode of communication is the text message, and have done for some time.  I still find it a frustrating medium, mainly because I dearly miss Roger, which I still give, but rarely receive.

It has been several months now since I concluded my review of the PS Audio DirectStream DAC, and a pretty positive review it was.  Since that time the unit has continued to very gradually break in, and there have also been a couple of firmware updates (all of which are available on PS Audio’s web site at no charge), each an improvement, but not sufficient to justify adding substantially to the gist of my review.  But recently there has been a major firmware update – version 1.2.1 is its designation – which is a sufficient game-changer that it warrants a whole update of its own.

So why should something as perfunctory as firmware change the sound of a DAC?  Normally when we think of firmware updates we think of functionality rather than performance.  And indeed there are functionality issues which are addressed here – the DirectStream now fully supports 24/352.8 PCM, which it did not do with the original firmware.  But in a DAC in particular, a large part of what it does performance-wise lies in the processing of the incoming data in the digital domain, and those processes are often under the control of the firmware.  Particularly in the DirectStream, where all that processing happens on in-house PS Audio firmware rather than within the proprietary workings of a third-party chipset.  What goes on under the aegis of its firmware is to a large degree the heart and soul of what the DirectStream is all about.

I have communicated at length with Ted Smith, designer of the DirectStream, about the nature and effect of the changes he has made.  I’m not sure how open those discussions were intended to be, and so I will not share them in detail with you, but there are two areas in which his attention has been primarily focussed.  The first is on the digital filters, and how their optimal implementation is found to affect jitter, something which initially surprised me.  The second is on the Delta-Sigma Modulators which generate the single-bit output stage, always an area ripe for improvement, in which Ted has reined in the attack dogs which stand guard to protect against the onslaught of instabilities.  Together, the effect of these significant updates has been transformative, and that is not a word I use lightly.

The simple description of the sound of the 1.2.1 firmware update is that it has opened up the sound.  Everything has more space and air around it.  Sonic textures have acquired a more tactile quality.  The music just communicates more freely.  It would be easy to sit back and characterize the sound as more “Analog-” or “Tube-” like.  These are words the audiophile community likes to use as currency for sound that is quite simply easy to listen to.  It is interesting that we audiophiles admire and value attributes such as sonic transparency, detailed resolving power, and dynamic response, and yet how often is it that when we are able to bring them together the result is painfully unlistenable?  It is these Yin and Yang elements that are foremost in my mind as I listen to the 1.2.1 version of the DirectStream.

So, without further ado, what am I listening to, and how is it sounding?

First up is “Bending New Corners” by the French jazz trumpeter Erik Truffaz.  This is a curious infusion of early-‘70s Miles Davis, ambient groove jazz, and trip-hop, which brings to mind the sort of music that might have deafened you in the über-trendy restaurant scene of the 1990’s.  I first heard it on LP at the Montreal high-end dealer Coup de Foudre, and today I’m playing the CD version.  The mix is a relatively simple one involving trumpet, bass, keyboards and drums, plus the occasional vocal stylings of a rapper called ‘Nya’.  The music is set in an atmospheric ambient, and is quite simple in its sonic palette, but nevertheless I have always had trouble separating out the individual instruments.  I was keen to know what the additional resolving power of the 1.2.1 DS would make of it.

What the additional clarity brought was the realization that I have been hearing the limits of this recording all along.  The trumpet has a very rich spectrum of harmonics which overlay most of the audio spectrum, and when it plays as a prominent solo instrument those harmonics can intermodulate with the sounds of many other instruments, making it difficult to hear through the trumpet and follow the rest of the mix.  If the intermodulation is baked into the recording, then no degree of fidelity in the playback chain is going to solve that problem.  This is what I am plainly hearing with the 1.2.1 DS.  This recording, far from being a clean and atmospheric gem waiting for an extraordinary DAC to liberate its charms, is a bit of a digital creation.  The extraordinary DAC instead reveals its ambience as a digital artifact.  The lead trumpet and vocals can be heard to have a processed presence about them.

Once you have heard something, you can never “un-hear” it again.  It’s a bit like skiing, in that once you’ve mastered it, it becomes impossible to ski like you did when you were still learning.  At best, all you’ll manage is a caricature of a person skiing like a novice.  I can now go back to the CD of “Bending New Corners” on a lesser system and will recognize its flaws for what they are, even though previously I would have interpreted what I was hearing differently.

My experience with Bending New Corners was to be repeated many times.  As I type, I am listening to Ravel’s Bolero with the Minnesota Orchestra conducted by Eiji Oue on Reference Recordings, ripped from CD.  It begins with a pianissimo snare drum some 20 feet behind the speakers and slightly to the right of center.  This recording has always been one of which I have thought highly.  The solo pianissimo snare is a good test for system imaging.  However, I now hear the snare as living in a slightly smeared space.  I perceive its sonic texture differently – more plausibly accurate if you will (a layer of sonic mush hovering around the instrument itself has evaporated away like the early mist on a spring morning) – but I somehow cannot place the image more accurately than a few feet.  I surmise that, because my brain is more confident that it is hearing the sound of a pianissimo snare drum, it therefore also expects to hear that sound more accurately localized in space.  But it is unable to do that.  As a consequence, although I never previously thought that the stereo image was wanting, I now appreciate that in fact it is, and I wonder how a higher-resolution version of this recording would compare.

Here is a song my wife likes.  It is “Hollow Talk” from the CD “This is for the White in your Eyes” by the Danish band Choir of Young Believers.  My wife had me track it down because it is the theme tune on a Danish/Swedish TV show we have been watching on Netflix called The Bridge (Bron/Broen).  It is another example of how the DS 1.2.1 can render a studio’s clumsy machinations clearly manifest.  The echo applied to the vocal adds atmospherics but is just unnatural.  As the track proceeds, the production gets layered on and layered on – and then layered on some more.  The effect is all very nice when heard on TV, but on my reference system driven by the DS 1.2.1 it just calls out for a lighter touch.  For example, at the beginning I heard a faint sound off to the left like someone getting into or out of a car and closing the door.  I don’t see why they wanted to include that – I can’t imagine it is particularly audible unless you have a highly resolving system such as a DS 1.2.1, one which makes clear the dog’s breakfast nature of the recording.

Next up is another old favourite of mine, “Unorthodox Behaviour” by 1970’s fusion combo Brand X.  I saw the band live at Ronnie Scott’s club in London back in 1975 (or thereabouts) and bought the album on LP as soon as it came out.  Today, I’m playing a 24/96 needle-drop.  I just love the opening track, Nuclear Burn.  Percy Jones’ bass lick is original and memorable, and extremely demanding of technique.  DirectStream 1.2.1 lets me hear the bass line more clearly than I have ever heard it before.  I had always thought it to have a slightly muddy texture – not surprising, given that playing it would tie most people’s fingers into inextricable knots – but now I hear just how extraordinarily skilled Jones’ bass chops really were.  And below it, Phil Collins’ kick drum has acquired real weight.  Not that it sounds any louder, or deeper.  It is more like the pedal mechanism has had an extra 5lb of lead affixed to it.

Now to a lonely corner of your local music store, where the Jazz, Folk, and Country aisles peter out.  This is where you’ll find Bill Frisell’s 2000 CD “Ghost Town”, a finely recorded ensemble of mostly acoustic guitar and banjo music with Frisell playing all the instruments.  Despite the album’s soulful and contemplative mood, due at least in part to the sparse arrangements and absence of a drum track, I keep expecting it to break out suddenly into ‘Duelling Banjos’.  The track list comprises mostly Frisell original compositions together with a handful of well-chosen covers.  Apart from enjoying the music, the idea here is to play Spot The Guitar.  On a rudimentary system this involves telling which are the guitars and which the banjos.  As the system gets better, you start to be able to tell how many different models of each instrument are being played.  With the DS 1.2.1 I suspect you could go further and identify the brands (Klein, Anderson, Martin, etc.).  Me, I’m not a guitar head, and can’t do that (although, back in the day, I used to be able to reliably tell a Strat from a Les Paul, even on the most rudimentary systems), but I do hear the different tonalities and sonorities very clearly.

Gil Scott-Heron is credited in some circles as being the father of rap.  He was a soulful yet extremely cerebral poet-musician with a strong sense of a social message.  His 1994 CD “Spirits” was a bit of a swan song, and contains a track “Work for Peace” which is a political rant against the ‘military and the monetary‘, who ‘get together whenever it’s necessary‘.  I kinda like it – it is, I imagine, great doper music … yeah, man.  But the mostly spoken voice is very soulfully and plausibly captured.  You can imagine the man himself, in the room with you.  I would just love to hear the original master tape transferred to DSD.

“I Remember Miles” is a 1998 CD from Shirley Horn.  It’s a terrific recording, and won the Grammy for Best Jazz Vocal Performance.  But really, it is an all-round wonderful album.  And the standout track is an absolute classic 10-minute workout of Gershwin’s “My Man’s Gone Now” from Porgy and Bess.  It begins with Ron Carter’s stunning, ripely textured, ostinato-like bass riff which underpins the track.  It has always sounded to me like two basses – one electric and one acoustic – but with the latest DS 1.2.1 the electric bass tones now sound more and more like an expertly played and finely recorded acoustic bass, and in addition I’m beginning to think there’s just the one bass – perhaps even double-tracked.  I’d love to know what you think.  Aside from the tasty bass, the rest of the recording is revealed to have a smooth but slightly congested, slightly coloured sound, a bit like what I hear when I listen to SETs played through horn speakers (I know, I know, heresy.  Kill me now.).  The immediacy and sheer presence of a fine DSD recording is just not there.  Unfortunately, this has not been released on SACD either.  Perhaps a DSD remaster will finally put the bass conundrum to bed?

Which brings me to the nub of this review.  Finally, the DirectStream is delivering on its huge promise as a DSD reference.  With the 1.2.1 firmware, it is opening up a clear gap between its performance with DSD and PCM source material, along the exact same lines as my previous experience with the Light Harmonic Da Vinci Dual DAC.  The DSD playback just adds that extra edge of organic reality to the sound.  It just sounds that little bit more like the actual performer in the room with you.  Sure, CD sounds great on it – probably as good as I’ve ever heard it sound – but the DS 1.2.1 consistently shows CD at its limits.  Great sound requires more than CD can deliver across the board, and in my view the DS 1.2.1 – through its excellent performance – makes this about as clear as it’s ever going to be.

In Part II of my review I mentioned the CD of Acoustic Live by Nils Lofgren.  I recently came across a SACD of music from the TV series “The Sopranos”, and it contains “Black Books” from the Lofgren album.  The CD is a pretty special recording, but the DSD ripped from the SACD just blows it clean out of the water, if you can imagine such a thing.  The vocal has incredible in-the-room-in-front-of-you presence.  All of the acoustics, which were already pretty open, really open up.  The pair of tom-toms I mentioned take on individual tonality, texture, and weight.  And the guitar work, which I previously characterized as being ‘aggressively picked’ comes across with a much more natural and plausible sound.  You just cannot go back to the CD and hear it the same way.  DAMN!  Someone needs to release this whole album on SACD, and preferably as a DSD download.

Another great SACD is MFSL’s remastering of Stevie Ray Vaughan’s “Couldn’t Stand The Weather”, with its perennial audiophile favourite “Tin Pan Alley”.  Beginning with a solid kick drum thwack, it launches into a cool, laid-back, 12-bar blues.  Vaughan’s guitar has just the right combination of restraint and blistering finger work, and his vocal is very present and stable, just to the left of centre.  The rhythm section lays down a fine metronomic beat, playing the appropriate foundational role upon which SRV builds his performance.  By contrast, in their uncomplicated take on Hendrix’s “Voodoo Chile”, the drums are given full rein to pound out a tight and impactful rhythm, and SRV gives his guitar hero chops a good airing.  If you’re unfamiliar with SRV and want to know what the man was about, this would be the place to start.  It is a fantastic recording, and one that has been expertly transferred to SACD.

The Japanese Universal Music Group has remastered and released many classic albums in their SHM-SACD series, all of which are both hard to come by outside of Japan, and ruinously expensive.  Their work on Dire Straits’ “Brothers in Arms” is interesting.  To the best of my knowledge the original recording was on 20-bit 44.1kHz digital tape (though there are people around who know more than I do about these things).  Anyway, the fact is that there is no obvious reason why a remastered SACD should sound significantly better than the original CD, unless, of course, the latter was not well mastered.  However, the conventional wisdom is that Mark Knopfler was particularly anal about the recording and mastering quality, and so maybe that argument doesn’t hold water.  Additionally, the Universal SHM-SACD can be compared with a contemporary remastering by MFSL, and both can be compared to the original CD.

Right away, both SACDs come across as superior to the CD in all the important ways.  The title track, Brothers in Arms, is one of my all-time go-to tracks.  On both remasterings, with the DS 1.2.1 the vocal has that signature SACD presence, and Knopfler’s guitar work sounds more organic, more like a real instrument in the room with you – just like with the Nils Lofgren.  I puzzled over how and why two SACD remasters from impeccable digital sources could sound different.  But they do, and maybe someone could enlighten me about that.  The two remasters sound almost stereotypical (there’s gotta be a pun in there somewhere) of how we think of Japanese and American musical tastes.  The Japanese SHM-SACD is massively detailed, but with slightly flat tonal and spatial perspectives compared to the American MFSL.  The latter’s tonal bloom fills the acoustic space in a more immediately appealing manner, but at the apparent cost of some of that delicious detail.  If one is right, then the other must be wrong, so they say.  You pays your money, and you takes your choice.  But the bottom-line is that with a DAC of the resolving power of the DS 1.2.1 considerations such as these are going to weigh more heavily than might otherwise be the case.

So there you have it.  The 1.2.1 firmware update will transform your DirectStream from a great product into a game-changing product.  I concluded my last review by comparing the DirectStream, with its original firmware, to my all-time reference, the Light Harmonic Da Vinci Dual DAC.  I felt, based on my aural memory – since I no longer have the Da Vinci to hand – that the DirectStream was not quite up to the latter’s lofty standards.  With the 1.2.1 firmware I am no longer so sure about that.  I would need to have both DACs side-by-side in order to be certain.  But this time around my aural memory tells me that the DirectStream in its 1.2.1 incarnation could very well give the Da Vinci a good run for its money.  And in some areas, such as its bass performance, I even wonder if the DirectStream might not come out on top.  Let’s bear in mind the price difference – $6k vs $31k.  That’s an extraordinary achievement.

Now that Tim’s Vermeer is out of the way, here, finally, is my promised AirPlay update.  You will recall that, following my system-wide upgrade to Yosemite and iTunes 12.x, when I set about evaluating BitPerfect’s AirPlay behaviour under that configuration it started out looking very bleak.  Nothing seemed to want to work, and there seemed to be a number of different and quite independent failure modes.  At the same time I was running short of patience with my AppleTV for non-audio reasons, and eventually discovered that it was one of a number of early ATV3 units which was eligible for free replacement under an Apple program.  With the AppleTV removed from my network, AirPlay suddenly started working very well, very predictably, and very stably.  Now that I have my replacement ATV3 from Apple, the question was what would happen when I re-installed it.

The answer is good news.  It has had no adverse impact whatsoever on my AirPlay setup.  So I set about devising a torture test, one I have never tried before.  I have three Macs in my test network, all running Yosemite and iTunes 12 – the first is a 2013 Mac Mini, the second a 2014 RMBP, and the third a 2009 MBP.  I also have three AirPlay devices in my network – the first is my Classe CP-800 (which has an ethernet-connected AirPlay interface built in) which is connected to my main reference system, the second is an Airport Express connected to a set of computer speakers, and the third is my new AppleTV3, connected to a TV set.

All three Macs are running BitPerfect 2.0.1 straight from the App Store, and all three are playing through AirPlay.  The Mac Mini is playing Sade’s “Diamond Life” to the CP-800, the MBP is playing Laura Mvula’s “Live with Metropole Orkest” to the AirPort Express, and the RMBP is playing AC/DC’s “Back in Black” to the AppleTV.  All three are playing music quite happily, simultaneously, and the music choices are ones which my brain can at least separate out from the cacophony.  There have been no dropouts or other problems as far as I can tell (it is not easy listening to three systems simultaneously!).  So what had the potential to be a metaphorical headache is now instead a physical one, but for the time being I am not too unhappy about it 🙂

Next, I decided to stress the system to breaking point.  Although BitPerfect “hogs” its audio output device, thereby preventing other Apps – including OS X itself – from accessing it, this is not so simple with AirPlay.  BitPerfect can only hog AirPlay’s standard audio interface, but OS X does not control AirPlay through the standard audio interface.  So, even while BitPerfect is hogging it, you can still access the AirPlay subsystem via OS X’s Audio MIDI Setup.  So what would happen if I messed with the AirPlay settings in Audio MIDI Setup while BitPerfect was busy playing through it?  And suppose I did that simultaneously with all three systems, while each one was busy playing?  Ugh.

So, on my Mac Mini I changed the AirPlay device in Audio MIDI Setup from the CP-800 to the AppleTV.  Nothing happened.  BitPerfect continued playing to the CP-800, and the RMBP continued playing to the AppleTV.  So then I changed the RMBP’s AirPlay device from AppleTV to Airport Express, and the MBP’s from AirPort Express to CP-800.  Now, each of my Macs has its AirPlay device set to a different one from the one which BitPerfect is playing through, but BitPerfect’s playback has continued unchanged.  It is as though BitPerfect’s “hog” on the AirPlay device has a lot more teeth to it than was previously the case under Mavericks and Mountain Lion.  This is a good thing, although the net result would be seriously confusing to someone who came in off the street right now and set about inspecting my setup.

Finally, the playlist playing on each of the systems has moved on to the next album in the queue.  In each case I have selected a 24/176.4 album so that (i) BitPerfect is doing some extra work to downsample the incoming signal, and (ii) the Wi-Fi network is now being more seriously challenged.  The Wi-Fi network now has to stream 24/176.4 music into two of the three computers (the audio files live on a NAS), and then stream a 16/44.1 AirPlay stream out of the computers and then into the AirPlay devices.  That’s two 24/176.4 streams and four 16/44.1 streams simultaneously.  The third computer, and the third AirPlay device, are both connected via ethernet.  Everything continues to play just fine.  Credit here, to be fair, must go to my trusty Cisco E4200 router.
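For a rough sense of the load, one can tot up the raw PCM payload rates in a few lines of Python.  This is strictly back-of-envelope arithmetic: AirPlay actually transports Apple Lossless, so the real 16/44.1 figures will be somewhat lower than the uncompressed rates shown here, and network overhead is ignored.

```python
def pcm_bitrate_mbps(bits, rate_hz, channels=2):
    """Raw (uncompressed) stereo PCM payload rate in megabits per second."""
    return bits * rate_hz * channels / 1e6

hires = pcm_bitrate_mbps(24, 176_400)   # one 24/176.4 source stream
airplay = pcm_bitrate_mbps(16, 44_100)  # one 16/44.1 AirPlay stream

# Two hi-res streams coming off the NAS into the two Wi-Fi-connected
# computers, plus four AirPlay-rate legs (each AirPlay stream crosses
# the Wi-Fi twice: computer-to-router, then router-to-device):
total = 2 * hires + 4 * airplay

print(round(hires, 2))    # per-stream hi-res rate, Mbps
print(round(airplay, 2))  # per-stream AirPlay rate, Mbps
print(round(total, 1))    # aggregate Wi-Fi load, Mbps
```

At a little over 20 Mbps in aggregate, this is well within what an 802.11n router like the E4200 should handle, which squares with the trouble-free playback.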

And still my headache continues.  Eric Clapton’s “Slowhand”, Deep Purple’s “Machine Head”, and Arcadi Volodos playing Chopin, now permeate the soundscape.  Excuse me while I take a couple of Tylenol….

Yesterday I wrote about Tim Jenison and his cool research on the topic of Vermeer and the photo-realism of the Dutch School, captured in a wonderful documentary called “Tim’s Vermeer” from Sony Classics.  I mentioned the extraordinary story of how Jenison constructed a plausible apparatus by which, he posited, Vermeer may have actually produced his revolutionary paintings.  Jenison, a graphics-software designer with no training as a painter, went further and used it to produce his own version of Vermeer’s “The Music Lesson”, something which, had he done it in 1650, might conceivably have elevated the name Jenison into the pantheon of the greats.

Jenison’s technique in effect had him work his way across the canvas, comparing each fragment of the painting with the corresponding fragment of an image of his subject produced by a camera obscura.  In essence, you consider a spot on your canvas, and compare what the obscura image suggests should go there with what is already there.  If you perceive a discrepancy, you can easily correct it.  Or modify it later if, maybe as a result of what you put in an adjacent area, you come up with something better – in contrast to, for example, how an ink-jet printer might approach the same job.  “The Music Lesson” is about two feet square, and it took Jenison something like 130 days to complete the painting.  The unbroken concentration required was enormous, and in turn nearly broke him.

While watching all this I was immediately struck by an interesting comparison with the Sigma Delta Modulator (SDM) used to produce a DSD data stream.  The SDM takes some input data, which may be an analog signal or a digital data stream, and sets about producing an output data stream.  Each time it needs to create an output value it compares two things: the current input value (together with the input values that preceded it), and the previous output values.  It uses those to calculate what the new output value should be.
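The feedback idea can be sketched in a few lines of Python.  This is a toy first-order modulator – the simplest textbook form, not the design used in any actual DSD product – in which the “memory” of past inputs and outputs is a single running integrator of the accumulated error:

```python
def sdm_first_order(samples):
    """Toy first-order sigma-delta modulator producing a 1-bit stream.

    Each input sample is expected in the range [-1.0, +1.0].  The
    integrator accumulates the error between the input and the fed-back
    output (mapped to +/-1), and each output bit is chosen to pull that
    accumulated error back towards zero.
    """
    integrator = 0.0
    feedback = 0.0
    bits = []
    for x in samples:
        integrator += x - feedback          # accumulate input-minus-output error
        bit = 1 if integrator >= 0 else 0   # quantize to a single bit
        feedback = 1.0 if bit else -1.0     # feed the decision back
        bits.append(bit)
    return bits
```

Feed it a constant input of 0.5 and the output settles into a pattern that is a ‘1’ three-quarters of the time – the local density of 1s tracks the input level, which is the essence of the scheme.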

This is like Jenison’s apparatus.  He goes to a place on the canvas, looks what’s there, and compares that to what’s in the equivalent place on the original image.  He then uses his judgement to decide what he needs to paint in that particular place.  In a practical sense, it’s not that he seeks something right or wrong in absolute terms about the smudge of paint that needs to be applied, more a question of what looks best given the available comparables.  “Using his judgement” is a convenient phrase which undervalues the colossal amount of visual processing power that the brain is able to bring to bear on the task.

This illuminates one of the limitations of a true SDM in a DSD application.  DSD requires that the output of the SDM has to be either 1 or 0.  If, on balance, the SDM figures out that the best output value is actually 0.5, DSD doesn’t have that as an option.  It has to choose either 1 or 0.  The SDM architecture only allows us to look historically at both input and output data and use that to make our choice.  If both 1 and 0 are equally wrong, then it doesn’t matter which one we choose.  We just hope that the SDM can take the error fully into account when it comes to choosing the next output value, and the ones after that.  In fact the situation is always like that.  The SDM, in reality, always figures out a best output value somewhere between 0 and 1, and never comes up with an output value which is either exactly 1 or exactly 0.  And, unlike Jenison, the SDM doesn’t get to go back and do a make-over once it’s made its choice.  In that regard, the SDM is a bit like the ink-jet printer analogy.

Given that the ideal output value is never 1 or 0, and that we have to pick one or the other and hope for the best, what do we do if it turns out that we’d have been better off choosing the other one?  The answer is that, in the grand scheme of things, we end up with a combination of higher background noise and increased distortion.  But in the end, by designing our SDMs optimally, we do get those parameters down to the point where the overall performance is pretty darned good.

Actually, there is a way to get around that problem.  Let’s take an ordinary SDM whose output value is going to be either 1 or 0.  We can do some kind of “what if” calculation, and say “What if we chose a 1?” and calculate what the output value after that was going to be.  We can do the same thing for “What if we chose a 0?”.  In each case the SDM will choose either 1 or 0 for the subsequent output value.  What we are doing is, instead of selecting between two possible output values, 1 and 0, we are selecting between four possible output sequences: 00, 01, 10, and 11.  We get to choose which of the four gives us the best result, but this comes at the expense of a doubling of the amount of processing that we have to do.  Note that by choosing between those four possible sequences, we are only selecting the first bit, and not both bits of the sequence.  In other words, if we prefer 11 or 10, then all we are doing is selecting the single output value of 1, and if we prefer 01 or 00 all we are doing is selecting the single output value 0.  This process is called “Look-Ahead” for obvious reasons.
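The “what if” game generalizes naturally to any depth.  This sketch extends the toy first-order modulator above to a brute-force look-ahead of a few bits: at each step it simulates every candidate bit sequence of the chosen depth, scores each branch by the accumulated squared integrator error, and commits only the first bit of the winner.  It is purely illustrative – real look-ahead modulators use higher-order loops and far cleverer search strategies.

```python
from itertools import product

def lookahead_sdm(samples, depth=4):
    """Toy brute-force look-ahead sigma-delta modulator (illustrative only).

    At each step, every 0/1 sequence of length `depth` is simulated
    through a first-order loop; the branch with the smallest accumulated
    squared error wins, and only its FIRST bit is committed.
    """
    bits = []
    integrator = 0.0
    for i in range(len(samples)):
        window = samples[i:i + depth]       # look-ahead window (shorter at the end)
        best_cost, best_bit = None, 0
        for seq in product((0, 1), repeat=len(window)):
            integ, cost = integrator, 0.0
            for x, b in zip(window, seq):   # simulate this branch
                integ += x - (1.0 if b else -1.0)
                cost += integ * integ       # penalize accumulated error
            if best_cost is None or cost < best_cost:
                best_cost, best_bit = cost, seq[0]
        bits.append(best_bit)               # commit only the first bit
        integrator += samples[i] - (1.0 if best_bit else -1.0)
    return bits
```

For a constant 0.5 input the output again carries a 1-density of about three-quarters, but now each bit is chosen with knowledge of where the next few decisions would lead.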

Look-Ahead can give seriously improved performance in both noise floor and distortion, but in order to achieve that, it turns out you need to be able to look a long way ahead, not just two or three bits.  In reality, 10 or 16 bits of look-ahead are required, and, at first sight, each additional bit of look-ahead doubles the amount of processing time required.  At that rate, 16 (or even 10) bits of look-ahead comes at a prohibitive processing cost.  However, as with most things mathematical, when smart people are motivated to look into it, solutions can often be found.  By analyzing how the mathematics of look-ahead works, you realize that the same calculations are being repeated in multiple branches of the look-ahead tree, and can find ways of only doing them once.  Additionally, some of the branches can be identified early on as being bad candidates for the final decision and can be pruned at an early stage.  Finally, it is possible to expand the basis upon which we decide how ‘good’ or ‘bad’ a branch is, and thereby do a better job of eliminating the ‘bad’ ones early on.

Taken together, along with the relentless rate of progress of computer power, these “look-ahead” SDM architectures are on the verge of being implementable.  They may not have an immediate impact on consumer DACs, but they could make their presence felt in DSD studio equipment, where the lower noise floor and distortion may open the door to effective mixing and other rudimentary signal processing.

All stuff that Vermeer would never have thought of.  Or, for that matter, Tim Jenison.

I finally got my AppleTV replaced by Apple.  It’s not that it took them a long time – in fact they replaced it on the spot with no hassle at all – it’s more that it took me a long time to get round to hauling my ass to the Apple Store.  So now I have a brand new AppleTV 3, presumably without the known bugs that afflicted my original unit.  What we now need to find out is what difference that has made to my AirPlay network, given that the latter has performed flawlessly since I removed the AppleTV from it.

But that’s going to have to wait a short while, because there is something else I want to write about first.  You see, as I mentioned in a previous post, my AppleTV’s main role in life is to drive the TV set in my gym, to provide the boredom relief necessary to get me through my daily workout.  So this morning, I watched a documentary from Sony Classics called Tim’s Vermeer.  It was a profoundly interesting program, and I felt it necessary to write about it, and about the parallels I drew from it (which will be the subject of a separate post tomorrow).

Johannes (Jan) Vermeer was one of the Dutch Masters who painted in the latter part of the 17th Century.  His paintings, typical in general of the Dutch Golden Age, possess a quality we today refer to as ‘photo-realism’.  They possess an accuracy of perspective, and of illumination, that we today take for granted in photographs, but which was quite unknown in Vermeer’s time.

Tim Jenison is an American entrepreneur who built a successful career in software for the TV and video industries.  Although he is a graphic artist, he is not trained in any way as a painter.  Jenison, like many people before him, was deeply intrigued by how the Dutch Masters, and Vermeer in particular, were able to make the leap in perception which they did.

As an expert in the field of video, he came to appreciate that Vermeer’s paintings differed from many more modern artworks in a key aspect.  The way he saw it, the works of the Dutch School looked more to him like video stills than photographs.  This would only come about if they were ‘copied’ from life, rather than created independently, where you can continuously modify the result if it isn’t exactly what you want – even to the point where what you want is no longer a strictly accurate representation.  Many experts in the field have postulated that the Dutch School arose due to the concurrent development of the “camera obscura”.  This would throw an image of a real-life scene onto a screen or wall in a darkened room, and the artist could paint from that.  Vermeer’s “video-still” brand of photo-realism could arise if he painted by ‘copying’ what he saw on a camera obscura image.

Books have been published on this topic (the so-called Hockney-Falco thesis, named for the British artist David Hockney and the American physicist Charles Falco), which made an impression on Jenison.  The interesting thing is that none of Vermeer’s works show any evidence of the sort of procedures an artist would presumably have had to follow if that were the case.  A camera obscura (Latin for “dark room”) is a low-light environment, and not one at all conducive to painting a masterpiece.  Therefore, if an artist were to use one as the tool to throw an image directly onto a canvas that he would then paint over, it is likely that he would record the basic framework of the image in the camera obscura, and finish it off elsewhere.  However, X-ray analysis of Vermeer’s works shows no evidence of any such structures beneath the final layers of paint.  His works appear to have been deposited in their final form directly upon the canvas, and with extraordinary precision in some critical aspects.

Intrigued by these findings, Jenison set about his own experiments, to see what would happen in practice if you tried to paint Vermeer-style art from camera obscura images, and from there to imagine how Vermeer might have responded to these challenges.  One of the obvious problems is that the image in a camera obscura is upside-down and back-to-front.  Although the latter is not too much of a hindrance, the human brain – and the artist’s eye – have a lot more trouble interpreting an inverted image.  Jenison realized that using a mirror would be the simplest way to correct for that problem and set about experimenting with one.

He immediately found an intriguing solution.  He placed a small canvas flat on a table which he positioned directly in front of an inverted photograph.  He then placed a small mirror directly above the canvas, at the same distance from the canvas as from the photograph.  Peering at the canvas from above, a viewer would see the canvas, except for the area where the mirror impinged, where instead he would see a reflection of a small portion of the photograph.  Both photograph and canvas would be in focus.  You could then use the setup as a tool to draw a replica of the photograph, a bit like dividing a picture into a grid of squares like they taught you in high school.

As a graphics designer, Jenison saw this configuration as a 17th century version of an editing window in which you could place the original and the copy side-by-side for comparison purposes.  In particular, it would enable very precise colour matching, which is otherwise rather more challenging than you might imagine, since the human eye/brain combination has very poor absolute colour memory.  Jenison then put this technique to the test by attempting to copy a simple B&W portrait photo using oils.  As a non-painter, this would be his first ever attempt at an oil painting.  You really need to see the film itself to appreciate what an incredible job he was able to do.

Essentially, what his technique does is to nibble away at the whole image, by adjusting his viewpoint so that, bit by bit, the entire image passes by the interface between the mirror and the canvas, allowing him to compare and replicate the exact tint of the applied paint at each point.

Armed with a primitive (but authentic) camera obscura device, his mirror, and his B&W portrait in oil, Jenison visited David Hockney in London, plus a couple of other authorities, to see if there was any interest in the notion that this might have been the technique that Vermeer himself had used.  The reception he received was quite encouraging.  Also, while in London, he was granted special dispensation to visit Buckingham Palace and spend a half hour looking at his personal favourite Vermeer painting, “The Music Lesson”.

He came away convinced of what the next step should be.  He would attempt to use the selfsame technique to try to replicate Vermeer’s “The Music Lesson”.  Actually, he would not so much try to replicate the painting itself, rather he would replicate the method by which it was created in the first place.  Given that he was not in any way a painter, and Vermeer is a revered master, this would be a major challenge.

Jenison was quite thorough in his approach.  Like Vermeer, he chose to make his own paints by grinding his own pigments and mixing them with oils.  He made his own furniture when he could not buy authentic originals.  He cast and ground his own lenses to use in the camera obscura.  He recreated as exactly as possible the room in which the original painting was set, the clothes worn by the subjects, the decoration and the furnishings – which included a Viola da Gamba, on which he gave a rustic and rather baroque rendition of the iconic riff from “Smoke On The Water”.

I won’t elaborate on the outcome, save to say that it all comes to a fitting conclusion.  Along the way, a couple of extraordinary things emerge.  One of the first things Jenison observes when he begins his marathon paint job is the appearance of chromatic aberration, caused by the fact that he has obliged himself to use authentic glass and lens designs in his camera obscura.  If he is going to be true to his aim of objective authenticity, he must include the faint blue blurs which are visible at certain high contrast edges.  But, looking at high-magnification images of the original Vermeer, he is astonished to find that it, too, has rendered the same blue blur, in the same places.  There is no reason to believe that Vermeer understood chromatic aberration.

More dramatically, though, part way through the painting, Jenison discovers that his lens also shows some mild pincushioning, a fact that only becomes evident due to the unerring optical accuracy of his method.  Again, and quite astonishingly, the original Vermeer is shown to exhibit the very same pincushion distortion, in such a way as to suggest that not only did he follow Jenison’s method, but also the precise (and highly practical) order in which Jenison implemented it.  Unfortunately, the narrator did not address the question of whether or not this pincushioning had ever been detected by experts prior to Jenison’s work.  That would have been interesting to know.

I found the whole thing to be wonderfully entertaining and informative.  Since the program was produced by Penn and Teller – with Penn Jillette doubling as presenter and narrator, and Teller directing – one can comfortably eliminate the notion that the wool is being pulled over our eyes in the service of a good yarn.  Some limited follow-up research on my part shows that while Jenison’s theory does indeed receive a great deal of credence – seriously unusual in itself for the work of a rank amateur in the rarefied world of fine art – there is little to support it in terms of the historical record.  Vermeer is not known to have had any particular interest in optics, and his personal effects after his death were not found to include a camera obscura or anything similar.

Tomorrow I will explain what any of this has to do with audio.