Roger has been somewhat shunted unceremoniously to one side in the modern world. We seem to have forgotten why he was ever there in the first place, and the important role he used to play. Without Roger, our world today is a less friendly place, one in which misunderstandings are easy to come by. Personally, I miss him, but then again I suppose I am just another old fart.
In the early days of person-to-person radio communications, Roger played a critically important role. If you are flying an aeroplane, and you want to announce to the control tower that you’re commencing your takeoff roll, you want to be sure that the control tower is aware of that, otherwise all sorts of unpredictable outcomes could potentially result, some of them dire. That’s where Roger comes in. The control tower responds “Roger” and now you know your message has been received and, by extension, that the control tower knows you are rolling. It is part of what we today recognize as a handshaking protocol, something that ensures the effectiveness of a two-way communication. Handshaking is a tool to ensure that a message has been received, that it has been understood, and that both parties know either who is expected to speak next, or that they are agreed that the conversation is over.
When speaking to someone face-to-face, or over the telephone, there are implied cues to which we tend to adhere in order to provide this handshaking element. These can be turns of phrase, vocal inflections, gestures, and the like. They often vary among cultures. How we communicate with a person has important ramifications as to how the other person perceives us, and how we in turn perceive the other person. We may perceive that person to be brusque, friendly, rude, gregarious, or to have any of a number of attributes. If, as a person, your inter-personal communications cause others in the world to perceive you wrongly, it is well-understood that you could have problems in your life.
Generally, it is important in our day-to-day inter-personal communications that we understand how the subtext of our communication is being received. If you ask someone if they want to have a beer with you after work, there is a world of difference between “No” and “Gee, I’m sorry, but my daughter has soccer practice”. Most of us, when we speak with someone face-to-face or on the telephone, understand the subtext, even as we recognize that the understanding itself is sometimes in error.
Roger’s absence first became a problem with the widespread introduction of e-mail into mostly business correspondence. If I send an e-mail inviting a colleague out for dinner when I’m in town next week, many people will find it acceptable to reply “No” in an e-mail when they really mean “Gee, I’m sorry, but I’m out of town that day”, even when they would never dream of responding with the terse “No” in a face-to-face situation. It is part of a complex issue, one on which I don’t propose to write a treatise, but a major contributory factor is that, for most of us, it takes far longer to compose an e-mail message that properly encapsulates the subtext with which we wish to endow our response, and often we just don’t feel we have that time. Personally, I find that excuse a lazy one – and if not lazy, then disrespectful towards the recipient.
In today’s world, for many people the text message has replaced the e-mail, particularly for one-on-one conversations. Partly by their nature, and partly due to the hardware typically used to send them, text messages tend to be terse by default. Additionally, text message conversations tend to replace telephone conversations for many people. They want to multi-task. They will fit your text conversation in as they find time during the course of their day. And so will you. Consequently, the alternative of a phone call takes on something of the aura of an intrusion. Which is rather frustrating, since, in the grand scheme of things, a phone conversation is always many, many times more effective. But that too is another discussion.
This is where Roger comes in. The lingua franca of texting is the curt message. I don’t know about you, but I really feel the need to know that a message has been received and understood. If I send a text that says something to the effect of “I need to see your report by the end of the day”, I feel unhappy if I don’t get a response. It’s like if I said the same thing to that person in my office, and he just walked out without responding. It is very clear to me how I would interpret such an action. And shortly thereafter it would be equally clear to the other person. What is missing is Roger. All you need is a text response that says “Will do”. Or even “K”, if that’s your thing. Sadly, and frustratingly, I am finding that Roger is very much the exception rather than the rule these days.
I said earlier that inter-personal “handshaking” protocols are to a large degree cultural, and maybe that’s what’s going on here. A texting culture is arising – or has arisen – in which subtext is no longer being conveyed at all – even emoticons, as best as I can tell, are en route to being passé. If so, that is a counter-productive development. I do converse with a few people routinely whose preferred mode of communication is the text message, and have done for some time. I still find it a frustrating medium, and mainly because I dearly miss Roger, which I still give, but rarely receive.
It has been several months now since I concluded my review of the PS Audio DirectStream DAC, and a pretty positive review it was. Since that time the unit has continued to very gradually break in, and there have also been a couple of firmware updates (all of which are available on PS Audio’s web site at no charge), each an improvement, but not sufficient to justify adding substantially to the gist of my review. But recently there has been a major firmware update – version 1.2.1 is its designation – which is a sufficient game-changer that it warrants a whole update of its own.
So why should something as perfunctory as firmware change the sound of a DAC? Normally when we think of firmware updates we think of functionality rather than performance. And indeed there are functionality issues which are addressed here – the DirectStream now fully supports 24/352.8 PCM, which it did not do with the original firmware. But in a DAC in particular, a large part of what it does performance-wise lies in the processing of the incoming data in the digital domain, and those processes are often under the control of the firmware. Particularly in the DirectStream, where all that processing happens on in-house PS Audio firmware rather than within the proprietary workings of a third-party chipset. What goes on under the aegis of its firmware is to a large degree the heart and soul of what the DirectStream is all about.
I have communicated at length with Ted Smith, designer of the DirectStream, about the nature and effect of the changes he has made. I’m not sure how open those discussions were intended to be, and so I will not share them in detail with you, but there are two areas in which his attention has been primarily focussed. The first is on the digital filters, and how their optimal implementation is found to affect jitter, something which initially surprised me. The second is on the Delta-Sigma Modulators which generate the single-bit output stage, always an area ripe for improvement, in which Ted has reined in the attack dogs which stand guard to protect against the onslaught of instabilities. Together, the effect of these significant updates has been transformative, and that is not a word I use lightly.
The simple description of the sound of the 1.2.1 firmware update is that it has opened up the sound. Everything has more space and air around it. Sonic textures have acquired a more tactile quality. The music just communicates more freely. It would be easy to sit back and characterize the sound as more “Analog-” or “Tube-” like. These are words the audiophile community likes to use as currency for sound that is quite simply easy to listen to. It is interesting that we audiophiles admire and value attributes such as sonic transparency, detailed resolving power, and dynamic response, and yet how often is it that when we are able to bring them together the result is painfully unlistenable? It is these Yin and Yang elements that are foremost in my mind as I listen to the 1.2.1 version of the DirectStream.
So, without further ado, what am I listening to, and how is it sounding?
First up is “Bending New Corners” by the French jazz trumpeter Erik Truffaz. This is a curious infusion of early-‘70s Miles Davis, ambient groove jazz, and trip-hop, which brings to mind the sort of music that might have deafened you in the über-trendy restaurant scene of the 1990’s. I first heard it on LP at the Montreal high-end dealer Coup de Foudre, and today I’m playing the CD version. The mix is a relatively simple one involving trumpet, bass, keyboards and drums, plus the occasional vocal stylings of a rapper called ‘Nya’. The music is set in an atmospheric ambience, and is quite simple in its sonic palette, but nevertheless I have always had trouble separating out the individual instruments. I was keen to know what the additional resolving power of the 1.2.1 DS would make of it.
What the additional clarity brought was the realization that I have been hearing the limits of this recording all along. The trumpet has a very rich spectrum of harmonics which overlay most of the audio spectrum, and when it plays as a prominent solo instrument those harmonics can intermodulate with the sounds of many other instruments, making it difficult to hear through the trumpet and follow the rest of the mix. If the intermodulation is baked into the recording, then no degree of fidelity in the playback chain is going to solve that problem. This is what I am plainly hearing with the 1.2.1 DS. This recording, far from being a clean and atmospheric gem waiting for an extraordinary DAC to liberate its charms, is a bit of a digital creation. The extraordinary DAC instead reveals its ambience as a digital artifact. The lead trumpet and vocals can be heard to have a processed presence about them.
Once you have heard something, you can never “un-hear” it again. It’s a bit like skiing, in that once you’ve mastered it, it becomes impossible to ski like you did when you were still learning. At best, all you’ll manage is a caricature of a person skiing like a novice. I can now go back to the CD of “Bending New Corners” on a lesser system and will recognize its flaws for what they are, even though previously I would have interpreted what I was hearing differently.
My experience with Bending New Corners was to be repeated many times. As I type, I am listening to Ravel’s Bolero with the Minnesota Orchestra conducted by Eiji Oue on Reference Recordings, ripped from CD. It begins with a pianissimo snare drum some 20 feet behind the speakers and slightly to the right of center. This recording has always been one of which I have thought highly. The solo pianissimo snare is a good test for system imaging. However, I now hear the snare as living in a slightly smeared space. I perceive its sonic texture differently – more plausibly accurate if you will (a layer of sonic mush hovering around the instrument itself has evaporated away like the early mist on a spring morning) – but I somehow cannot place the image more accurately than a few feet. I surmise that, because my brain is more confident that it is hearing the sound of a pianissimo snare drum, it therefore also expects to hear that sound more accurately localized in space. But it is unable to do that. As a consequence, although I never previously thought that the stereo image was wanting, I now appreciate that in fact it is, and I wonder how a higher-resolution version of this recording would compare.
Here is a song my wife likes. It is “Hollow Talk” from the CD “This is for the White in your Eyes” by the Danish band Choir of Young Believers. My wife had me track it down because it is the theme tune on a Danish/Swedish TV show we have been watching on Netflix called The Bridge (Bron/Broen). It is another example of how the DS 1.2.1 can render a studio’s clumsy machinations clearly manifest. The echo applied to the vocal adds atmospherics but is just unnatural. As the track proceeds, the production gets layered on and layered on – and then layered on some more. The effect is all very nice when heard on TV, but on my reference system driven by the DS 1.2.1 it just calls out for a lighter touch. For example, at the beginning I heard a faint sound off to the left like someone getting into or out of a car and closing the door. I don’t see why they wanted to include that – I can’t imagine it is particularly audible unless you have a highly resolving system such as a DS 1.2.1, one which makes clear the dog’s breakfast nature of the recording.
Next up is another old favourite of mine, “Unorthodox Behaviour” by 1970’s fusion combo Brand X. I saw the band live at Ronnie Scott’s club in London back in 1975 (or thereabouts) and bought the album on LP as soon as it came out. Today, I’m playing a 24/96 needle-drop. I just love the opening track, Nuclear Burn. Percy Jones’ bass lick is original and memorable, and extremely demanding of technique. DirectStream 1.2.1 lets me hear the bass line more clearly than I have ever heard it before. I had always thought it to have a slightly muddy texture – not surprising, given that playing it would tie most people’s fingers into inextricable knots – but now I hear just how extraordinarily skilled Jones’ bass chops really were. And below it, Phil Collins’ kick drum has acquired real weight. Not that it sounds any louder, or deeper. It is more like the pedal mechanism has had an extra 5lb of lead affixed to it.
Now to a lonely corner of your local music store, where the Jazz, Folk, and Country aisles peter out. This is where you’ll find Bill Frisell’s 2000 CD “Ghost Town”, a finely recorded ensemble of mostly acoustic guitar and banjo music with Frisell playing all the instruments. Despite the album’s soulful and contemplative mood, due at least in part to the sparse arrangements and absence of a drum track, I keep expecting it to break out suddenly into ‘Duelling Banjos’. The track list comprises mostly Frisell original compositions together with a handful of well-chosen covers. Apart from enjoying the music, the idea here is to play Spot The Guitar. On a rudimentary system this involves telling which are the guitars and which the banjos. As the system gets better, you start to be able to tell how many different models of each instrument are being played. With the DS 1.2.1 I suspect you could go further and identify the brands (Klein, Anderson, Martin, etc.). Me, I’m not a guitar head, and can’t do that (although, back in the day, I used to be able to reliably tell a Strat from a Les Paul, even on the most rudimentary systems), but I do hear the different tonalities and sonorities very clearly.
Gil Scott-Heron is credited in some circles as being the father of rap. He was a soulful yet extremely cerebral poet-musician with a strong sense of a social message. His 1994 CD “Spirits” was a bit of a swan song, and contains a track “Work for Peace” which is a political rant against the ‘military and the monetary‘, who ‘get together whenever it’s necessary‘. I kinda like it – it is, I imagine, great doper music … yeah, man. But the mostly spoken voice is very soulfully and plausibly captured. You can imagine the man himself, in the room with you. I would just love to hear the original master tape transferred to DSD.
“I Remember Miles” is a 1998 CD from Shirley Horn. It’s a terrific recording, and won the Grammy for Best Jazz Vocal Performance. But really, it is an all-round wonderful album. And the standout track is an absolute classic 10-minute workout of Gershwin’s “My Man’s Gone Now” from Porgy and Bess. It begins with Ron Carter’s stunning, ripely textured, ostinato-like bass riff which underpins the track. It has always sounded to me like two basses – one electric and one acoustic – but with the latest DS 1.2.1 the electric bass tones now sound more and more like an expertly played and finely recorded acoustic bass, and in addition I’m beginning to think there’s just the one bass – perhaps even double-tracked. I’d love to know what you think. Aside from the tasty bass, the rest of the recording is revealed to have a smooth but slightly congested, slightly coloured sound, a bit like what I hear when I listen to SETs played through horn speakers (I know, I know, heresy. Kill me now.). The immediacy and sheer presence of a fine DSD recording is just not there. Unfortunately, this has not been released on SACD either. Perhaps a DSD remaster will finally put the bass conundrum to bed?
Which brings me to the nub of this review. Finally, the DirectStream is delivering on its huge promise as a DSD reference. With the 1.2.1 firmware, it is opening up a clear gap between its performance with DSD and PCM source material, along the exact same lines as my previous experience with the Light Harmonic Da Vinci Dual DAC. The DSD playback just adds that extra edge of organic reality to the sound. It just sounds that little bit more like the actual performer in the room with you. Sure, CD sounds great on it – probably as good as I’ve ever heard it sound – but the DS 1.2.1 consistently shows CD at its limits. Great sound requires more than CD can deliver across the board, and in my view the DS 1.2.1 – through its excellent performance – makes this about as clear as it’s ever going to be.
In Part II of my review I mentioned the CD of Acoustic Live by Nils Lofgren. I recently came across a SACD of music from the TV series “The Sopranos”, and it contains “Black Books” from the Lofgren album. The CD is a pretty special recording, but the DSD ripped from the SACD just blows it clean out of the water, if you can imagine such a thing. The vocal has incredible in-the-room-in-front-of-you presence. All of the acoustics, which were already pretty open, really open up. The pair of tom-toms I mentioned take on individual tonality, texture, and weight. And the guitar work, which I previously characterized as being ‘aggressively picked’ comes across with a much more natural and plausible sound. You just cannot go back to the CD and hear it the same way. DAMN! Someone needs to release this whole album on SACD, and preferably as a DSD download.
Another great SACD is MFSL’s remastering of Stevie Ray Vaughan’s “Couldn’t Stand The Weather”, with its perennial audiophile favourite “Tin Pan Alley”. Beginning with a solid kick drum thwack, it launches into a cool, laid-back, 12-bar blues. Vaughan’s guitar has just the right combination of restraint and blistering finger work, and his vocal is very present and stable, just to the left of centre. The rhythm section lays down a fine metronomic beat, playing the appropriate foundational role upon which SRV builds his performance. By contrast, in their uncomplicated take on Hendrix’s “Voodoo Chile”, the drums are given full rein to pound out a tight and impactful rhythm, and SRV gives his guitar hero chops a good airing. If you’re unfamiliar with SRV and want to know what the man was about, this would be the place to start. It is a fantastic recording, and one that has been expertly transferred to SACD.
The Japanese Universal Music Group has remastered and released many classic albums in their SHM-SACD series, all of which are both hard to come by outside of Japan, and ruinously expensive. Their work on Dire Straits’ “Brothers in Arms” is interesting. To the best of my knowledge the original recording was on 20-bit 44.1kHz digital tape (but there are people around that know more than me about those things). Anyway, the fact is that there is no obvious reason why a remastered SACD should sound significantly better than the original CD, unless, of course, the latter was not well mastered. However, the conventional wisdom is that Mark Knopfler was particularly anal about the recording and mastering quality, and so maybe that argument doesn’t hold water. Additionally, the Universal SHM-SACD can be compared with a contemporary remastering by MFSL, and both can be compared to the original CD.
Right away, both SACDs come across as superior to the CD in all the important ways. The title track, Brothers in Arms, is one of my all-time go-to tracks. On both remasterings, with the DS 1.2.1 the vocal has that signature SACD presence, and Knopfler’s guitar work sounds more organic, more like a real instrument in the room with you – just like with the Nils Lofgren. I puzzled over how and why two SACD remasters from impeccable digital sources could sound different. But they do, and maybe someone could enlighten me about that. The two remasters sound almost stereotypical (there’s gotta be a pun in there somewhere) of how we think of Japanese and American musical tastes. The Japanese SHM-SACD is massively detailed, but with slightly flat tonal and spatial perspectives compared to the American MFSL. The latter’s tonal bloom fills the acoustic space in a more immediately appealing manner, but at the apparent cost of some of that delicious detail. If one is right, then the other must be wrong, so they say. You pays your money, and you takes your choice. But the bottom-line is that with a DAC of the resolving power of the DS 1.2.1 considerations such as these are going to weigh more heavily than might otherwise be the case.
So there you have it. The 1.2.1 firmware update will transform your DirectStream from a great product into a game-changing product. I concluded my last review by comparing the DirectStream, with its original firmware, to my all-time reference, the Light Harmonic Da Vinci Dual DAC. I felt, based on my aural memory (since I no longer have the Da Vinci to hand), that the DirectStream was not quite up to the latter’s lofty standards. With the 1.2.1 firmware I am no longer so sure about that. I would need to have both DACs side-by-side in order to be certain. But this time around my aural memory tells me that the DirectStream in its 1.2.1 incarnation could very well give the Da Vinci a good run for its money. And in some areas, such as its bass performance, I even wonder if the DirectStream might not come out on top. Let’s bear in mind the price difference – $6k vs $31k. That’s an extraordinary achievement.
Now that Tim’s Vermeer is out of the way, here, finally, is my promised AirPlay update. You will recall that, following my system-wide upgrade to Yosemite and iTunes 12.x, when I set about evaluating BitPerfect’s AirPlay behaviour under that configuration it started out looking very bleak. Nothing seemed to want to work, and there seemed to be a number of different and quite independent failure modes. At the same time I was running short of patience with my AppleTV for non-audio reasons, and eventually discovered that it was one of a number of early ATV3 units which was eligible for free replacement under an Apple program. With the AppleTV removed from my network, AirPlay suddenly started working very well, very predictably, and very stably. Now that I have my replacement ATV3 from Apple, the question was what would happen when I re-installed it.
The answer is good news. It has had no adverse impact whatsoever on my AirPlay setup. So I set about devising a torture test, one I have never tried before. I have three Macs in my test network, all running Yosemite and iTunes 12 – the first is a 2013 Mac Mini, the second a 2014 RMBP, and the third a 2009 MBP. I also have three AirPlay devices in my network – the first is my Classe CP-800 (which has an ethernet-connected AirPlay interface built in), connected to my main reference system; the second is an Airport Express connected to a set of computer speakers; and the third is my new AppleTV3, connected to a TV set.
All three Macs are running BitPerfect 2.0.1 straight from the App Store, and all three are playing through AirPlay. The Mac Mini is playing Sade’s “Diamond Life” to the CP-800, the MBP is playing Laura Mvula’s “Live with Metropole Orkest” to the AirPort Express, and the RMBP is playing AC/DC’s “Back in Black” to the AppleTV. All three are playing music quite happily, simultaneously, and the music choices are ones which my brain can at least separate out from the cacophony. There have been no dropouts or other problems as far as I can tell (it is not easy listening to three systems simultaneously!). So what had the potential to be a metaphorical headache is now instead a physical one, but for the time being I am not too unhappy about it.
Next, I decided to stress the system to breaking point. Although BitPerfect “hogs” its audio output device, thereby preventing other Apps – including OS X itself – from accessing it, this is not so simple with AirPlay. BitPerfect can only hog AirPlay’s standard audio interface, but OS X does not control AirPlay through the standard audio interface. So, even while BitPerfect is hogging it, you can still access the AirPlay subsystem via OS X’s Audio MIDI Setup. So what would happen if I messed with the AirPlay settings in Audio MIDI Setup while BitPerfect was busy playing through it? And suppose I did that simultaneously with all three systems, while each one was busy playing? Ugh.
So, on my Mac Mini I changed the AirPlay device in Audio MIDI Setup from the CP-800 to the AppleTV. Nothing happened. BitPerfect continued playing to the CP-800, and the RMBP continued playing to the AppleTV. So then I changed the RMBP’s AirPlay device from AppleTV to Airport Express, and the MBP’s from AirPort Express to CP-800. Now, each of my Macs has its AirPlay device set to a different one from the one which BitPerfect is playing through, but BitPerfect’s playback has continued unchanged. It is as though BitPerfect’s “hog” on the AirPlay device has a lot more teeth to it than was previously the case under Mavericks and Mountain Lion. This is a good thing, although the net result would be seriously confusing to someone who came in off the street right now and set about inspecting my setup.
Finally, the playlist playing on each of the systems has moved on to the next album in the queue. In each case I have selected a 24/176.4 album so that (i) BitPerfect is doing some extra work to downsample the incoming signal, and (ii) the WiFi network is now being more seriously challenged. The Wi-Fi network now has to stream 24/176.4 music into two of the three computers (the audio files live on a NAS), and then stream a 16/44.1 AirPlay stream out of the computers and then into the AirPlay devices. That’s two 24/176.4 streams and four 16/44.1 streams simultaneously. The third computer, and the third AirPlay device, are both connected via ethernet. Everything continues to play just fine. Credit here, to be fair, must go to my trusty Cisco E4200 router.
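For the curious, the raw numbers behind that claim are easy to work out. The sketch below is just back-of-envelope PCM arithmetic – in practice AirPlay compresses its 16/44.1 stream losslessly (Apple Lossless), and network overhead adds some back, so the real Wi-Fi load will differ somewhat:

```python
# Back-of-envelope PCM bandwidth for the torture test described above.
# Note: AirPlay losslessly compresses its 16/44.1 stream, so the actual
# network load is lower than this raw PCM figure suggests.

def pcm_mbps(bits, rate_hz, channels=2):
    """Raw PCM bit rate in megabits per second."""
    return bits * rate_hz * channels / 1e6

hires = pcm_mbps(24, 176_400)   # one 24/176.4 stereo stream from the NAS
cd    = pcm_mbps(16, 44_100)    # one 16/44.1 stereo AirPlay stream

# Two hi-res streams into the Wi-Fi-connected Macs, plus four 16/44.1
# traversals (out of two Macs, into two AirPlay devices).
total = 2 * hires + 4 * cd
print(f"hi-res stream:  {hires:.2f} Mbps")
print(f"CD-rate stream: {cd:.2f} Mbps")
print(f"total raw PCM over Wi-Fi: {total:.2f} Mbps")
```

Even uncompressed, the total comes to well under 25 Mbps, which a modern router should handle without breaking a sweat – consistent with the E4200 sailing through it.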
And still my headache continues. Eric Clapton’s “Slowhand”, Deep Purple’s “Machine Head”, and Arkady Volodos playing Chopin, now permeate the soundscape. Excuse me while I take a couple of Tylenol….
Yesterday I wrote about Tim Jenison and his cool research on the topic of Vermeer and the photo-realism of the Dutch School, captured in a wonderful documentary called “Tim’s Vermeer” from Sony Classics. I mentioned the extraordinary story of how Jenison constructed a plausible apparatus by which, he posited, Vermeer may have actually produced his revolutionary paintings. Jenison, a graphics designer and non-artist, went further and used it to produce his own version of Vermeer’s “The Music Lesson”, something which, had he done it in 1650, might conceivably have elevated the name Jenison into the pantheon of the greats.
Jenison’s technique in effect had him work his way across the canvas, comparing each fragment of the painting with the corresponding fragment of an image of his subject produced by a camera obscura. In essence, you consider a spot on your canvas, and compare what the obscura image suggests should go there with what is already there. If you perceive a discrepancy, you can easily correct it. Or modify it later if, maybe as a result of what you put in an adjacent area, you come up with something better – in contrast to, for example, how an ink-jet printer might approach the same job. “The Music Lesson” is about two feet square, and it took Jenison something like 130 days to complete the painting. The unbroken concentration required was enormous, and in turn it nearly broke him.
While watching all this I was immediately struck by an interesting comparison with the Sigma Delta Modulator (SDM) used to produce a DSD data stream. The SDM takes some input data, which may be an analog signal or a digital data stream, and sets about producing an output data stream. Each time it needs to create an output value it sets about comparing two things – the input value, plus the input values that preceded it, and the previous output values. It uses those to calculate what the new output value should be.
This is like Jenison’s apparatus. He goes to a place on the canvas, looks what’s there, and compares that to what’s in the equivalent place on the original image. He then uses his judgement to decide what he needs to paint in that particular place. In a practical sense, it’s not that he seeks something right or wrong in absolute terms about the smudge of paint that needs to be applied, more a question of what looks best given the available comparables. “Using his judgement” is a convenient phrase which undervalues the colossal amount of visual processing power that the brain is able to bring to bear on the task.
This illuminates one of the limitations of a true SDM in a DSD application. DSD requires that the output of the SDM has to be either 1 or 0. If, on balance, the SDM figures out that the best output value is actually 0.5, DSD doesn’t have that as an option. It has to choose either 1 or 0. The SDM architecture only allows us to look historically at both input and output data and use that to make our choice. If both 1 and 0 are equally wrong, then it doesn’t matter which one we choose. We just hope that the SDM can take the error fully into account when it comes to choosing the next output value, and the ones after that. In fact the situation is always like that. The SDM, in reality, always figures out a best output value somewhere between 0 and 1, and never comes up with an output value which is either exactly 1 or exactly 0. And, unlike Jenison, the SDM doesn’t get to go back and do a make-over once it’s made its choice. In that regard, the SDM is a bit like the ink-jet printer analogy.
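The error-carried-forward idea is easy to see in a toy model. Here is a minimal first-order modulator sketch – real DSD modulators are fifth-order or higher, with far more sophisticated loop filters, but the principle is the same: the quantizer is forced to pick 1 or 0, and the error of that forced choice is fed back into the next decision:

```python
# Toy first-order sigma-delta modulator (real DSD modulators are much
# higher-order, but the principle is identical): the quantizer must output
# 1 or 0, and the resulting error is fed back into the next decision.

def sdm_1st_order(samples):
    """Quantize a stream of samples in [0, 1] to single bits."""
    integrator = 0.0
    bits = []
    for x in samples:
        integrator += x            # accumulate the input
        bit = 1 if integrator >= 0.5 else 0
        bits.append(bit)
        integrator -= bit          # subtract what we output: the residual
                                   # error carries forward to the next bit
    return bits

# A constant input of 0.5 -- a value the 1-bit output can never hit
# exactly -- comes out as an alternating pattern whose average is 0.5.
out = sdm_1st_order([0.5] * 8)
print(out)
```

No individual bit is “right”, but the running average of the bit stream tracks the input – which is exactly the trade the SDM makes: the per-sample error is pushed off to be corrected by later samples.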
Given that the ideal output value is never 1 or 0, and that we have to pick one or the other and hope for the best, what do we do if it turns out that we’d have been better off choosing the other one? The answer is that, in the grand scheme of things, we end up with a combination of higher background noise and increased distortion. But in the end, by designing our SDMs optimally, we do get those parameters down to the point where the overall performance is pretty darned good.
Actually, there is a way to get around that problem. Let’s take an ordinary SDM whose output value is going to be either 1 or 0. We can do some kind of “what if” calculation, and say “What if we chose a 1?” and calculate what the output value after that would be. We can do the same thing for “What if we chose a 0?”. In each case the SDM will choose either 1 or 0 for the subsequent output value. What we are doing is, instead of selecting between two possible output values, 1 and 0, we are selecting between 4 possible output sequences, 10, 11, 00, and 01, which begin with either 0 or 1. We get to choose which of the 4 gives us the best result, but this comes at the expense of a doubling of the amount of processing that we have to do. Note that by choosing between those four possible values, we are only selecting the first bit, and not both bits of the sequence. In other words, if we prefer 11 or 10, then all we are doing is selecting the single output value of 1, and if we prefer 01 or 00 all we are doing is selecting the single output value 0. This process is called “Look-Ahead” for obvious reasons.
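A sketch may help make the mechanics concrete. The following builds one step of look-ahead onto the same kind of toy first-order modulator, scoring each of the four candidate sequences by accumulated error magnitude – note that this cost metric is my own simplistic stand-in, not the weighted cost functions real look-ahead designs use – and committing only the first bit of the winner:

```python
# Sketch of one-step look-ahead on a toy first-order modulator. For each
# output bit we evaluate all four candidate sequences (00, 01, 10, 11),
# score them by accumulated error magnitude (a simplistic stand-in for
# real designs' cost functions), and commit only the first bit of the
# winner. Real look-ahead SDMs extend this to 10-16 bits with pruning.

def step(integrator, x, bit):
    """Advance the modulator state one sample with a forced output bit."""
    integrator += x
    error = abs(integrator - bit)   # how wrong this forced choice is
    return integrator - bit, error

def lookahead_sdm(samples):
    integrator = 0.0
    bits = []
    for i, x in enumerate(samples):
        x_next = samples[i + 1] if i + 1 < len(samples) else x
        best_bit, best_cost = 0, float("inf")
        for b1 in (0, 1):                      # first bit of the sequence
            s1, e1 = step(integrator, x, b1)
            for b2 in (0, 1):                  # second, "what if" bit
                _, e2 = step(s1, x_next, b2)
                if e1 + e2 < best_cost:
                    best_cost, best_bit = e1 + e2, b1
        integrator, _ = step(integrator, x, best_bit)  # commit first bit only
        bits.append(best_bit)
    return bits

out = lookahead_sdm([0.5] * 8)
print(out)
```

For a constant 0.5 input the output again alternates with an average of 0.5, as it must; the benefit of look-ahead only shows up on real signals, where choosing bits with an eye on the future lowers the noise and distortion left behind.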
Look-Ahead can give seriously improved performance in both noise floor and distortion, but in order to achieve that, it turns out you need to be able to look a long way ahead, not just two or three bits. In reality, 10 or 16 bits of look-ahead are required, and, at first sight, each additional bit of look-ahead doubles the amount of processing time required. At that rate, 16 (or even 10) bits of look-ahead comes at a prohibitive processing cost. However, as with most things mathematical, when smart people are motivated to look into it, solutions can often be found. By analyzing how the mathematics of look-ahead works, you realize that the same calculations are being repeated in multiple branches of the look-ahead tree, and can find ways of doing them only once. Additionally, some of the branches can be identified early on as being bad candidates for the final decision and can be pruned at an early stage. Finally, it is possible to expand the basis upon which we decide how ‘good’ or ‘bad’ a branch is, and thereby do a better job of eliminating the ‘bad’ ones early on.
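The branch-and-prune idea can be sketched in a few lines of Python. To be clear, this is a simplified beam-search illustration of the principle, not the algorithm used in any real DSD modulator: at each step we extend every surviving branch with both a 0 and a 1, score each branch by its accumulated error against the input, prune all but the best few, and finally commit only the first bit of the winning branch.

```python
def lookahead_bit(samples, depth=8, beam=4):
    """Choose the next output bit by looking up to `depth` samples ahead.

    Each branch is a tuple of (accumulated squared error, integrator
    state, first bit of the branch). Pruning to `beam` survivors keeps
    the cost linear in depth instead of doubling with every extra bit.
    """
    branches = [(0.0, 0.0, None)]
    for i in range(min(depth, len(samples))):
        extended = []
        for err, integ, first in branches:
            for bit in (0, 1):
                # Same first-order error feedback as a plain SDM.
                new_integ = integ + samples[i] - bit
                extended.append((err + new_integ ** 2,
                                 new_integ,
                                 bit if first is None else first))
        # Prune: keep only the lowest-error branches.
        branches = sorted(extended, key=lambda b: b[0])[:beam]
    # Commit just the FIRST bit of the best surviving branch.
    return branches[0][2]

print(lookahead_bit([0.9, 0.9, 0.9, 0.9]))  # → 1
```

Without the pruning step, the number of branches would double with every bit of look-ahead, which is exactly the prohibitive cost described above; the beam keeps it manageable at the price of occasionally discarding a branch that would have won later.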
Taken together with the relentless progress of computing power, these “look-ahead” SDM architectures are on the verge of being implementable. They may not have an immediate impact on consumer DACs, but they could make their presence felt in DSD studio equipment, where the lower noise floor and distortion may open the door to effective mixing and other rudimentary signal processing.
All stuff that Vermeer would never have thought of. Or, for that matter, Tim Jenison.
I finally got my AppleTV replaced by Apple. It’s not that it took them a long time – in fact they replaced it on the spot with no hassle at all – it’s more that it took me a long time to get round to hauling my ass to the Apple Store. So now I have a brand new AppleTV 3, presumably without the known bugs that afflicted my original unit. What we now need to find out is what difference that has made to my AirPlay network, given that the latter has performed flawlessly since I removed the AppleTV from it.
But that’s going to have to wait a short while, because there is something else I want to write about first. You see, as I mentioned in a previous post, my AppleTV’s main role in life is to drive the TV set in my gym, to provide the boredom relief necessary to get me through my daily workout. So this morning, I watched a documentary from Sony Classics called Tim’s Vermeer. It was a profoundly interesting program, and I felt it necessary to write about it, and about the parallels I drew from it (which will be the subject of a separate post tomorrow).
Johannes (Jan) Vermeer was one of the Dutch Masters who painted in the latter part of the 17th Century. His paintings, typical in general of the Dutch Golden Age, possess a quality we today refer to as ‘photo-realism’. They possess an accuracy of perspective, and of illumination, that we today take for granted in photographs, but which was quite unknown in Vermeer’s time.
Tim Jenison is an American entrepreneur who built a successful career in software for the TV and video industries. Although he is a graphic artist, he is not trained in any way as a painter. Jenison, like many people before him, was deeply intrigued by how the Dutch Masters, and Vermeer in particular, were able to make the leap in perception which they did.
As an expert in the field of video, he came to appreciate that Vermeer’s paintings differed from many more modern artworks in a key aspect. The way he saw it, the works of the Dutch School looked more to him like video stills than photographs. This would only come about if they were ‘copied’ from life, rather than created independently where you can continuously modify the result if it isn’t exactly what you want, even to the extent that what you want is no longer a strictly accurate representation. Many experts in the field have postulated that the Dutch School arose due to the concurrent development of the “camera obscura”. This would throw an image of a real-life scene onto a screen or wall in a darkened room, and the artist could paint from that. Vermeer’s “video-still” brand of photo-realism could arise if he painted by ‘copying’ what he saw on a camera obscura image.
Books have been published on this topic (the so-called Hockney-Falco thesis, named for the British artist David Hockney and the American physicist Charles Falco), which made an impression on Jenison. The interesting thing is that none of Vermeer’s works show any evidence of the sort of procedures an artist would presumably have had to follow if that were the case. A camera obscura (Latin for “dark room”) is a low-light environment, and not one at all conducive to painting a masterpiece. Therefore, if an artist were to use one as the tool to throw an image directly onto a canvas that he would then paint over, it is likely that he would record the basic framework of the image in the camera obscura, and finish it off elsewhere. However, X-ray analysis of Vermeer’s works shows no evidence of any such structures beneath the final layers of paint. His works appear to have been deposited in their final form directly upon the canvas, and with extraordinary precision in some critical aspects.
Intrigued by these findings, Jenison set about his own experiments, to see what would happen in practice if you tried to paint Vermeer-style art from camera obscura images, and from there to imagine how Vermeer might have responded to these challenges. One of the obvious problems is that the image in a camera obscura is upside-down and back-to-front. Although the latter is not too much of a hindrance, the human brain – and the artist’s eye – have a lot more trouble interpreting an inverted image. Jenison realized that using a mirror would be the simplest way to correct for that problem and set about experimenting with one.
He immediately found an intriguing solution. He placed a small canvas flat on a table which he positioned directly in front of an inverted photograph. He then placed a small mirror directly above the canvas, at the same distance from the canvas as from the photograph. Peering at the canvas from above, a viewer would see the canvas, except for the area where the mirror impinged, where instead he would see a reflection of a small portion of the photograph. Both photograph and canvas would be in focus. You could then use the setup as a tool to draw a replica of the photograph, a bit like dividing a picture into a grid of squares like they taught you in high school.
As a graphics designer, Jenison saw this configuration as a 17th century version of an editing window in which you could place the original and the copy side-by-side for comparison purposes. In particular, it would enable very precise colour matching, which is otherwise rather more challenging than you might imagine, since the human eye/brain combination has very poor absolute colour memory. Jenison then used this theory to attempt for himself to copy a simple B&W portrait photo using oils. As a non-painter, this would be his first ever attempt at an oil painting. You really need to see the film itself to appreciate what an incredible job he was able to do.
Essentially, what his technique does is to nibble away at the whole image, by adjusting his viewpoint so that, bit by bit, the entire image passes by the interface between the mirror and the canvas, allowing him to compare and replicate the exact tint of the applied paint at each point.
Armed with a primitive (but authentic) camera obscura device, his mirror, and his B&W portrait in oil, Jenison visited David Hockney in London, plus a couple of other authorities, to see if there was any interest in the notion that this might have been the technique that Vermeer himself had used. The reception he received was quite encouraging. Also, while in London, he was granted special dispensation to visit Buckingham Palace and spend a half hour looking at his personal favourite Vermeer painting, “The Music Lesson”.
He came away convinced of what the next step should be. He would attempt to use the selfsame techniques to try to replicate Vermeer’s “The Music Lesson”. Actually, he would not so much try to replicate the painting itself; rather, he would replicate the method by which it was created in the first place. Given that he was not in any way a painter, and Vermeer is a revered master, this would be a major challenge.
Jenison was quite thorough in his approach. Like Vermeer, he chose to make his own paints by grinding his own pigments and mixing them with oils. He made his own furniture when he could not buy authentic originals. He cast and ground his own lenses to use in the camera obscura. He recreated as exactly as possible the room in which the original painting was set, the clothes worn by the subjects, the decoration and the furnishings – which included a Viola da Gamba, on which he gave a rustic and rather baroque rendition of the iconic riff from “Smoke On The Water”.
I won’t elaborate on the outcome, save to say that it all comes to a fitting conclusion. Along the way, a couple of extraordinary things emerge. One of the first things Jenison observes when he begins his marathon paint job is the appearance of chromatic aberration, caused by the fact that he has obliged himself to use authentic glass and lens designs in his camera obscura. If he is going to be true to his aim of objective authenticity, he must include the faint blue blurs which are visible at certain high contrast edges. But, looking at high-magnification images of the original Vermeer, he is astonished to find that it, too, has rendered the same blue blur, in the same places. There is no reason to believe that Vermeer understood chromatic aberration.
More dramatically, though, part way through the painting, Jenison discovers that his lens also shows some mild pincushioning, a fact that only becomes evident due to the unerring optical accuracy of his method. Again, and quite astonishingly, the original Vermeer is shown to exhibit the very same pincushion distortion, in such a way as to suggest that not only did he follow Jenison’s method, but also the precise (and highly practical) order in which Jenison implemented it. Unfortunately, the narrator did not address the question of whether or not this pincushioning had ever been detected by experts prior to Jenison’s work. That would have been interesting to know.
I found the whole thing to be wonderfully entertaining and informative. Since the program was produced by Penn and Teller – with Penn Jillette doubling as presenter and narrator, and Teller directing – one can comfortably eliminate the notion that the wool is being pulled over our eyes in the service of a good yarn. Some limited follow-up research on my part shows that while Jenison’s theory does indeed receive a great deal of credence – seriously unusual in itself for the work of a rank amateur in the rarefied world of fine art – there is little to support it in terms of the historical record. Vermeer is not known to have had any particular interest in optics, and his personal effects after his death were not found to include a camera obscura or anything similar.
It has been a week now since I put my AppleTV in its box and took it back to the Apple Store. Unfortunately, I was told I needed an appointment to receive an audience with a “Genius” in order to get it seen to, and nobody was available. Since this involves a 45-minute drive through the West Island’s lethal road construction, I haven’t been back yet.
The upside is that I have had a week without an AppleTV in my system, and during that week AirPlay playback has been flawless, provided I followed the (revised for Yosemite) procedure I described last week. That’s three systems – a 2014 RMBP, a 2013 bare-bones Mac Mini, and a 2009 MBP. All running Yosemite, all working just fine, first time, every time, with AirPlay.
I thought that was worth reporting.
I now have my new AppleTV from the Apple Store. To find out what happened when I installed it in the system, read on here.
I have an AppleTV 3. It is an incredibly buggy device. As an audio device it has been a source of frustration for me since day one, to the point where I now no longer use it – ever – as part of my BitPerfect test regimen. It is relegated to use in my Gym, where I watch YouTube or Netflix with it while working out.
The AppleTV has an annoying habit of dropping its WiFi connection periodically. Actually, it doesn’t so much drop its connection – it is more like its entire WiFi system shuts off. This happens after between a few minutes and a few hours of use, and has persisted across several firmware updates. The solution is to re-boot it. Sometimes it takes three or four re-boots. Rarely do I get through a solid hour without it failing.
Today it appears to have given up for good. I can’t get it to come back up at all. I have just found out that there is an Apple recall program in place and that my AppleTV is one of the affected units, so it is now boxed up and ready to go back to Apple. Let’s see what happens.
So, while dealing with that, I got round to thinking a little. If you have read my recent posts on AirPlay, you will have noted that I spent a few days exhaustively testing AirPlay with BitPerfect under Yosemite and iTunes 12.0.1 with mixed results. I was using both my AirPort Express and the AirPlay receiver in my Classe CP-800 as the target AirPlay device. It may not have come across in my post, but my test experience seemed to go through two phases. The first was an initial three-hour phase during which nothing seemed to work at all. This was followed by a lengthy period during which AirPlay seemed to function with at least some semblance of predictability, as reported in my post, a situation which still persists this morning.
Here is what is going through my mind. Is it possible that when my AppleTV was active on the network I was having uncontrollable AirPlay problems? And that as soon as its WiFi transceiver ‘died’ (causing it to drop off the network) things started to play more predictably? As I write this, it occurs to me that whenever the AppleTV is active on my network, my RMBP seems to want to select it as its ‘default’ AirPlay device whenever it can, even though I never want to use it in that role and therefore never – ever – select it. Hmmmm….
I have been working hard on AirPlay to try to understand what it takes to get BitPerfect to work smoothly with it under the combination of Yosemite and iTunes 12.0.1. Unfortunately I don’t have a definitive answer for you, but I am at least starting to get a handle on its behaviour. I thought you might be interested to read some of this.
The problem is, it either works or it doesn’t, and I can’t figure out why. There are two main modes of “doesn’t work”. One is where BitPerfect’s menu bar icon stays black. This one happens rarely and generally only at the first attempt. It means that BitPerfect cannot access the AirPlay Device. The other is where the icon starts green, goes briefly black, and then stays green but with no music audible. This means that BitPerfect is streaming music to the AirPlay Device, which as far as BitPerfect is concerned is responding in the way it normally would. I have been wrestling with every combination of the various settings and sequences that might impact AirPlay behaviour but despite some successes, nothing has proven to be the magic bullet.
My first potential “Aha!” moment was when I got to the point where iTunes would throw up a message to the effect that “I can’t find the Airport Express” and offers me two options, “Cancel” or “Continue using the Computer Speaker”. The secret seems to be to select “Continue using the Computer Speaker”. “Cancel” is the wrong choice. I spent some time trying to determine what would cause this message to appear, but after a while I just stopped seeing it, and I haven’t actually seen it now since early yesterday. So that remains a puzzle.
The next interesting observation is a significant deviation from the setup that we have been recommending since Mountain Lion and Mavericks. iTunes has its own little AirPlay icon (next to its volume control) where you can select between the various AirPlay devices and “Computer”. It used to be that it was necessary to select the desired AirPlay device, but now I am finding that AirPlay never works unless “Computer” is selected, and not the other way around.
Yesterday, using my Mac Mini, it appeared that the required solution was to select AirPlay as the default system output device using Audio Midi Setup, then launch BitPerfect, and have BitPerfect launch iTunes (whether automatically or manually), then select “Computer” as the output device from the iTunes AirPlay control. But when I came to confirm my findings this morning, I found that it didn’t seem to matter whether I set the default system output device to AirPlay or to something else. All that matters is that I set the iTunes AirPlay control to “Computer”. You do, however, still need to select the desired AirPlay device (if you have more than one) in Audio Midi Setup. I even experimented with connecting the Mac Mini to the network by Ethernet (its normal configuration) or by WiFi. It didn’t make any difference.
While all this was happening, on my RMBP (which had Yosemite and iTunes 12.0.1 installed) it seemed that AirPlay would always work first time. I have a second, older MBP and so I installed Yosemite and iTunes 12.0.1 on that machine also. This morning I have added that to the mix. It seems that both MBPs have no problems at all getting BitPerfect and AirPlay to work together, provided I set the iTunes AirPlay control to “Computer”. For the most part the Mac Mini works too. However, it took three or four attempts, restarting BitPerfect and iTunes each time in between, before it started working consistently. With each of these Macs, once AirPlay starts working, it seems to stay working until you stop playback for a while, or quit iTunes/BitPerfect.
So there you have a summary of a couple of days of intensive AirPlay experimentation. Set the iTunes AirPlay control to “Computer” and it will either work or it won’t. If it doesn’t, then quit BitPerfect and iTunes and start again. Rinse and repeat as necessary. You may be lucky in that you have a Mac which is pre-disposed to want to work well with AirPlay (like my two MBPs) or you may be unlucky that your Mac does not prefer to play ball (like my Mac Mini). It’s all I have at the moment, I’m afraid. I have no idea whether or not you will see the same behaviour. I will continue my experiments, albeit at a less intense level, as I am (a) running short of good ideas, and (b) have other things piling up on my plate.
… As a brief follow-up, here are some thoughts and observations regarding my AppleTV.
We have been working on evaluating BitPerfect on the latest version of Yosemite / iTunes 12.0.1, and we are coming up with a mixed bag of results. For the most part it is working quite well, but there are two areas of concern for us for the moment.
The first is with AirPlay. I have two Macs right now that have been updated to the new configuration. The first is a RMBP and the second is a headless Mac Mini. I seem to have no problems getting AirPlay to work on the RMBP, but thus far not with the headless Mac Mini. I have no idea what the problem is. I am currently updating a second, older MBP, and will see what happens with that one in due course.
The second issue is with the Console App. We use the Console Log as a valuable debugging tool, but unfortunately, under Yosemite, BitPerfect is flooding the Console with a raft of unhelpful messages. In effect, this is amounting to a Denial-of-Service attack on the Console App!! While this seems to have no obvious impact on BitPerfect’s performance, it is rendering our primary diagnostic tool almost ineffective.
More on all this as developments arise ….
One of the useful things about Apple’s App Store is that they give you some very detailed breakdowns of product sales, including by Country. To date, BitPerfect has been sold in 71 different countries, which is pretty amazing when you think about it. And it was just this week that our first customer from Pakistan joined the BitPerfect community, extending the list now to 72. [Ask yourself – can you even name 72 Countries off the top of your head?] So, whoever you are – if you are reading this – I would like to extend a warm welcome to the sole representative of Pakistan to the BitPerfect Community!
If you are interested, here are the 72 Countries:
… and …