SMPTE Timecode and Pro Tools

The original MTC specification was created by two engineers from Digidesign, just four years after the original MIDI specification was standardised. SMPTE synchronisation, by contrast, traditionally relied on dedicated hardware such as the Timeline Lynx-2 synchronisers commonly used for synchronising professional audio and video tape machines. However, the situation is not entirely black-and-burst white.

Whilst it is true that a congested MIDI path could cause some variation in the timing of MTC signals, a sufficiently well-developed MTC interpretation engine should be able to smooth these out without significant issue.

Additionally, the proliferation of fast computer interfaces available for transmitting MIDI from one system to another affords an easily available, non-congested MIDI pipe, allowing MTC messages to be transmitted with high timing resolution. Likewise, whilst a SMPTE signal potentially offers 80 points in every frame for ensuring that the receiving unit is in sync, MTC only guarantees a reliable sync point every two frames.
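As an illustration of that two-frame granularity, here is a sketch (mine, not from the article) of how one full timecode is split across the eight MTC quarter-frame messages, four of which are sent per frame:

```python
def mtc_quarter_frames(h, m, s, f, rate=3):
    """Split one h:m:s:f timecode across the eight MTC quarter-frame
    messages (status byte 0xF1 plus one data byte each).
    rate: 0 = 24fps, 1 = 25fps, 2 = 30fps drop-frame, 3 = 30fps non-drop.
    With four messages per frame, a complete timecode only arrives
    every two frames - hence MTC's guaranteed sync point spacing."""
    nibbles = [
        f & 0x0F, (f >> 4) & 0x01,                   # frames: low / high
        s & 0x0F, (s >> 4) & 0x03,                   # seconds: low / high
        m & 0x0F, (m >> 4) & 0x03,                   # minutes: low / high
        h & 0x0F, ((h >> 4) & 0x01) | (rate << 1),   # hours: low / high + rate
    ]
    # Data byte: piece number in the high nibble, value in the low nibble.
    return [bytes([0xF1, (piece << 4) | v]) for piece, v in enumerate(nibbles)]
```

For 01:00:00:00 at 30fps non-drop, for example, the final message's data byte is 0x76: piece 7 in the high nibble, the rate bits plus the hour high bit in the low nibble.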

There are also a few proprietary systems available for improving the sync between two DAWs, but they are limited in their application: Avid's allows the synchronisation of multiple Pro Tools systems only, and Steinberg's the synchronisation of multiple Steinberg systems only. Given that, in our experience, the composition DAW is rarely the same as the delivery DAW (usually Pro Tools), we will not be testing these proprietary setups. The factors that we are going to consider here are as follows:

- Positional Accuracy: Is the audio output being delivered at the expected time in relation to the timecode output, or is, for example, the audio output being played late with respect to the timecode output?
- Positional Offset Variation: Is the relationship between audio output and timecode output consistent between each pass, or does the relationship vary, all other factors being equal?
- Plug-in Delay Compensation: Does the relationship between audio output and timecode output vary according to the amount of plug-in delay present in the mixer of the composition DAW?
- Audio Buffer Compensation: Does the relationship between audio output and timecode output vary according to the size of the audio buffer set in the composition DAW?
- Timecode Master vs Timecode Slave: Does the composition DAW behave any differently depending on whether it is the timecode master or the timecode slave?

Whilst drift can also be a concern (where two systems start off in sync, but then drift apart as they continue over time), this will not be covered here: with two digital audio systems correctly clocked, drift should not be an issue, as the common audio clock ensures they are always running at the same speed and thus cannot drift apart.

The composition DAW has a constructed audio file which plays a spike every second. Running from the session start, we should expect to see a spike occur at every seconds marker (as in hours:minutes:seconds). The composition DAW plays out this audio file through a high-quality audio interface, into a second high-quality audio interface connected to a second computer running Pro Tools. The connection is made digitally, to avoid the (admittedly small) timing measurement issues caused by smearing of the spike by the reconstruction filter in a D-A converter, and to allow us to easily measure down to the sample level.
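A test file like this is easy to construct. A minimal sketch (my own; the article does not state the sample rate used, so 48kHz is an assumption here):

```python
import struct
import wave

def write_spike_file(path, seconds=60, sr=48000):
    """Write a mono 16-bit WAV containing a single full-scale,
    one-sample spike at the top of every second, silence elsewhere."""
    frames = bytearray()
    for n in range(seconds * sr):
        # Full-scale positive sample exactly on each seconds boundary.
        frames += struct.pack('<h', 32767 if n % sr == 0 else 0)
    with wave.open(path, 'wb') as w:
        w.setnchannels(1)
        w.setsampwidth(2)   # 16-bit
        w.setframerate(sr)
        w.writeframes(bytes(frames))
```

A one-sample spike gives an unambiguous landmark to measure against once the file has been recorded on the Pro Tools machine.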

[Figure: digital vs analogue recording of the spike. The analogue recording is slightly smeared, making it more difficult to measure its alignment.]

The DAWs under test are:

- Apple Logic Pro X
- Steinberg Cubase Pro
- MOTU Digital Performer
- Avid Pro Tools
- Cockos Reaper 6

We were also going to include PreSonus Studio One 4 Professional: although it can send MTC, it did not do so with the correct hour offset in our tests.

This test will judge how far off the target position a recording has been made, by looking at the discrepancy between the seconds markers in the Pro Tools grid and where the recorded spikes land in the timeline. In an ideal world, this figure should be as close to zero as possible. As to why there might be variations in this offset between the different DAWs: putting any possibility of bugs in the DAWs aside for the moment, this could be accounted for by differing buffering practices between the applications.
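Measuring that discrepancy can be automated. A sketch of the idea (my own; not the article's actual measurement method): find the largest spike near each seconds marker in the recorded pass and report how far it landed from the marker.

```python
def spike_offsets(samples, sr):
    """For each seconds marker (excluding time zero), find the
    largest-magnitude sample within half a second either side of the
    marker and return its offset from the marker in samples
    (positive = spike recorded late)."""
    offsets = []
    for sec in range(1, len(samples) // sr):
        lo, hi = sec * sr - sr // 2, sec * sr + sr // 2
        window = samples[lo:hi]
        peak = lo + max(range(len(window)), key=lambda i: abs(window[i]))
        offsets.append(peak - sec * sr)
    return offsets
```

A consistent non-zero result indicates a fixed positional offset; a spread of values across passes indicates the offset variation examined in the next test.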

It is notable that the DAW with the closest offset is Pro Tools itself: it rather makes sense that Pro Tools would have consistent handling of playback and recording, such that it ends up in time with itself. Whichever setup is used, though, it would need to be calibrated for a given set of audio hardware and software - but this calibration should not need to be repeated for every project file used…

Digging deeper into the measurements from the previous test, this looks at the spread of offsets across the six passes for each DAW. Even in the presence of an offset in the recording, if this offset is consistent then it is easier to correct for - but a wide range of offsets can result in the timing between recorded passes being very loose. Already it can be seen that one popular DAW performs significantly more loosely than the others… This can be explained with the likely presumption that, given its MIDI sequencer heritage (and persistently ancient MIDI engine architecture), Logic is running the MTC generation from its MIDI engine, whose limited tick resolution per quarter note at 60bpm (at which our project was running) equates to a maximum resolution of 50 samples - just as we have measured.
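The arithmetic behind that figure is straightforward. A sketch, assuming for illustration a 960-PPQN MIDI engine and a 48kHz sample rate (the article states neither figure explicitly):

```python
def tick_resolution_samples(ppq, bpm, sr):
    """Worst-case timing quantisation, in samples, for timecode
    generated by a MIDI engine that schedules events on sequencer
    ticks: one tick lasts 60 / (bpm * ppq) seconds."""
    return 60.0 * sr / (bpm * ppq)
```

Under those assumed figures, `tick_resolution_samples(960, 60, 48000)` comes out at exactly 50 samples - matching the granularity measured above.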

It is interesting to note that the tightest lock - from Pro Tools, at 7 samples of variation - is within the bounds of lock of a high-quality SMPTE-based synchroniser. So much for the supposed inferiority of MTC…

This test is to be compared against Test 1, but with a delay-inducing plug-in present in the composition DAW. What we would expect is that any delay in the audio path is mirrored by an equal delay in the timecode output, so that the two still emerge from the composition DAW synchronously.

For this test, we inserted a Waves L3 Ultramaximizer plug-in into the main stereo output path of the composition DAW; this adds latency to the audio output, but we should still expect to see the positional offset mirror that which we found in Test 1.

Did you spot the difference? This is in spite of a setting new in Logic Pro X.

Again, we would hope that the composition DAW correctly compensates for delays through its audio output in order to ensure that the timecode generation is still in sync.
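What "correctly compensates" means here can be expressed with a toy model (purely illustrative; it does not describe any DAW's documented internals):

```python
def audio_vs_timecode_offset(buffer_len, plugin_delay, compensation):
    """Offset, in samples, between audio output and generated timecode
    when the audio path is delayed by one buffer plus plug-in latency,
    and the DAW delays its MTC output by that amount `compensation`
    times. compensation=1 is correct (offset 0); 0 means no
    compensation, so the audio emerges late relative to the timecode;
    2 means compensating twice over, so the timecode emerges late."""
    audio_delay = buffer_len + plugin_delay
    return audio_delay - compensation * audio_delay
```

So with, say, a 256-sample buffer and 64 samples of plug-in latency, no compensation leaves the audio 320 samples behind the timecode, and double compensation pushes the timecode 320 samples behind the audio.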

Logic and Pro Tools appear not to have attempted to compensate at all; Digital Performer appears to have compensated twice over, and is delaying the timecode output by two buffer lengths instead of one…

Whilst the other four reflect their original spread (within measurement error), Logic appears to have gone to the pub and left the work experience student just throwing out a timecode stream with no respect to how the audio buffer frames line up against the timecode frames.

This results in a variation between passes of 9ms. Given that an incoming timecode stream is a consistent signal (once it has been correctly interpreted and locked against), it should be possible for the composition DAW to align its playback chasing such that the audio output is in sync with the incoming timecode feed, accounting for mixer delays and plug-in buffers.

And with the Waves L3 Ultramaximizer added onto the stereo output:

Compared with the tests run with the DAW as the timecode master, the big difference apparent here is that Digital Performer is now not compensating for the delay on the stereo output channel - just like Logic. They also still have the sloppiest syncs. Again, Digital Performer appears to be overcompensating for the buffers, and Pro Tools does not appear to be compensating at all.

But now, at least Logic appears to be significantly more on-the-beat - although its consistency is still lacking: in those terms, Logic is now falling off the barstool, whilst Digital Performer has just arrived and is now getting the next round in.

So - with the notable exceptions of Cubase and Reaper - all the DAWs under test here exhibit inconsistencies when running either as timecode master or timecode slave, apparently often due to processing buffer delays in the audio engine not being equally applied to the MTC engine.

Moving the timecode generation into the audio engine could in theory remove all the variables from the composition DAW: assuming multitrack playback is accurate from the DAW and the audio output paths are correctly timed, the click should arrive at the audio interface outputs at the same time as the timecode.

If you end up using this, be sure to throw him a monetary contribution for server costs from time to time! Once the SMPTE file has been generated and placed in the project, we can see that the waveform corresponds exactly to the timeline.

Hello. As mentioned, Nuendo has one, and it works well. Something we've done many times for PT users is generate a series of TC audio files using the Nuendo plug-in, starting them at specific times; the Pro Tools folks can then spot them to the correct TC location.


