Georgia Hilton MPE CAS MPSE Prod/Dir/Editor

Film Services Professional ( Producing / Editing / Delivery )

Blog



FILM DOCTORS

Posted on June 21, 2011 at 2:31 PM

Check out Film Doctors, an amazing team of filmmakers available to help anyone make a film!

FILM DOCTORS

And don't forget to check out our production company:

HILTON MEDIA MANAGEMENT


cheers

geo


fair use, copyright, license thoughts

Posted on February 28, 2010 at 2:32 PM

Here's a note on fair use, copyright, licensing, and sync from another thread that got started. I thought it might be good to post it here as well.

There are discussions within fair use guidelines for educational and documentary use suggesting that, in specific circumstances, you can use 10% of a work or 30 seconds. EXAMPLE:

"Use 10% of a song, not to exceed 30 seconds, and donot show the finished video out of the classroom. Do not duplicate,distribute, broadcast, webcast or sell it. Proper attribution must begiven when using copyrighted materials. i.e. "I Am Your Child" writtenby Barry Manilow/Martin Panzer. BMG Music/SwanneeBravo Music. Theopening screen of the project must include a notice that "certainmaterials are included under the fair use exemption and have been usedaccording to the multimedia fair use guidelines". Your fair use ofmaterial ends when the project creator (student or teacher) losescontrol of the project's use: e.g. when it is distributed, copied orbroadcast"

But this is a very specific use. NOT for a commercial project.

If you are manufacturing and distributing copies of a song which you did not write, and you have not already reached an agreement with the song's publisher, you need to obtain a mechanical license. This is required under U.S. Copyright Law, regardless of whether or not you are selling the copies that you made. You do not need a mechanical license if you are recording and distributing a song you wrote yourself, or if the song is in the public domain.

Also, for film/broadcast/new media (anything locked to picture), you need a sync license. Start with one of the following to track down the rights holders:

ASCAP: http://www.ascap.com

BMI: http://www.bmi.com

SESAC: http://www.sesac.com

U.S. Copyright Office: http://www.copyright.gov

A music synchronization license - or sync license, for short - is a music license that allows the license holder to "sync" music to some kind of media output. Often sync licenses are used for TV shows and movies, but any kind of visual paired with sound requires a sync license. A sync license gives you the right to use a song and sync it with a visual; when you hold a sync license, you are allowed to re-record that song for use in your project. If you want to use a specific version of the song by a specific artist, you also need to get a master recording license. Typically, a sync license is obtained from a music publisher while the master recording license is obtained from the record label or owner of the master. A sync license covers a specific period of time, and the license will stipulate how the song can be used. There is one flat fee involved in obtaining a sync license, and once the license is in place, the song can be used as stipulated as many times within the license period as the license holder likes. In other words, if you obtain a sync license and use the song in a film, you do not have to pay a fee on the sync every time the film is viewed.

Also, master use rights are required for previously recorded material that you do not own or control.

A sample is typically the use of an excerpt of a sound recording embodying a copyrighted composition inserted in another sound recording. This process is often referred to as digital sampling and requires licenses for the use of the portion of the composition and the sound recording that was re-used in the new sound recording. In some instances, artists re-record the portion of the composition used in the new recording and, therefore, only need to obtain a license for the use of the sampled composition.

There are occasions where FAIR USE comes into play for documentary and educational films... here's a PDF with lots of good fair use details: Fair Use & Copyright, Center for Social Media at American University.

Your project, as described, does not fall under the fair use doctrine.

cheers

geo

BTW, even Weird Al gets permission for his parody material.

Also, on this comment (here I go, into the storm):

And what happens when you want to write music that is itself a form of criticism? What if you want to make a literal quotation but the copyright holder does not want to be a party to such criticism? Should such criticism be possible only with textual products and not musical ones? Or should music, too, be a viable basis of cultural criticism?

Here the courts said that there is fair use when quoting music, despite the protestations of Yoko Ono over the use of "Imagine".

If we must, by default, seek maximal permission for our musical creations and enterprises, then we should expect "dangerous" music to go underground, and only "safe" music to be mainstream. I cannot think of a more insidious way to destroy the cultural value of music.

The court ruled on this based on the movie being a social commentary about intelligent design (therefore a documentary/news/educational piece, if you will). The ruling has nothing to do with the tune or music unto itself whatsoever... safe, dangerous or otherwise.

It's all about how something was used and with what, not the something itself. The whole fight could have been over a picture, or a poem, or a video, or a document... any copyrightable widget. It's got nothing to do directly with the song.

cheers

geo __________________

ms georgia hilton mpse cas


Film / Video post speed/frame rate issues...

Posted on February 28, 2010 at 1:05 AM

PLEASE: before you start a sound project, find out the following:

1. frame rate

2. project speed

3. post production life cycle

A. how the project was shot

B. how the project was captured

C. how the project was edited

D. how the project was exported for audio edit

E. the format the client wants you to work in

4. DELIVERY SPEC!!!!

The first question to ask when dealing with projects: is the picture frame rate in sync with 48kHz? If so, there will be no need to do a sample rate conversion or digitize via analog sources to change the sample rate of the incoming audio signal. Otherwise, check out these various project paths....

1. Feature film – double system at 24fps and 48kHz audio recording for 24fps postproduction.

2. Film-based television providing sync dailies on DigiBeta (23.976 and 48kHz) for 23.976 postproduction.

3. Film or HD production at 23.976 with single- or double-system audio recording for 23.976 postproduction.

4. Feature film and film-based television production at 24fps with hard disk recording at 48.048kHz for 23.976 postproduction.

1. Feature Film Double System

Most, if not all, feature film production intended for theatrical release shoots film at 24fps while recording audio digitally at 48kHz. The film runs at 23.976fps during the telecine process in order to create a known 2:3 pulldown cadence to the 29.97fps video rate. Once digitized into a 24p project, the frames are “stamped” as 24fps in order to play back in sync with audio captured directly via AES/EBU or Broadcast WAV files recorded at 48kHz. Because the audio was captured digitally – either synced to word clock or imported as 48kHz – it expects to be in sync with the picture as it was originally captured – 24fps. The native sample rate of a 24p project is 48kHz and all other rates are resolved to that during capture. When playing back at 48kHz, the audio plays back .1% faster, creating a true 24fps playback from 23.976 sync sources. When capturing digitally at 48kHz, no samples are converted. It is a digital clone.
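
A quick way to see why those numbers line up: everything in this workflow hangs off the NTSC 1000/1001 ratio. Here's a minimal Python sketch of the speed relationships described above (the rates are the standard ones, nothing project-specific):

# NTSC pulldown math: film is slowed by 1000/1001 (about 0.1%) in telecine,
# and audio pulled up by the same ratio lands exactly on 48048 Hz.
PULLDOWN = 1000 / 1001

print(round(24 * PULLDOWN, 3))    # 23.976 fps telecine rate
print(round(30 * PULLDOWN, 2))    # 29.97 fps NTSC video after 2:3 pulldown
print(round(48000 / PULLDOWN))    # 48048 Hz, the "48.048kHz" field recording rate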

2. Film-Based Television with Sync Dailies

The transfer facility has already resolved the original shooting rate of 24fps to 23.976 and has sample-rate-converted the digital audio sources to be in sync on the digital source tapes. The audio must be sample rate converted when going from 24fps to 23.976 on the video. The path looks like this: Picture: 24 -> 23.976 -> 29.97 video, creating 2:3 pulldown. Audio: 48kHz -> 47.952kHz slowdown (.1%), sample-rate corrected -> 48kHz against 29.97 video. If the editors are cutting in a 30i project (29.97 NTSC video), the audio sample rate is unchanged when capturing – it is a digital clone.

If it’s decided that postproduction will work in a 24p project, the digitized samples are slowed to bring everything back to a true 24fps = 48kHz environment.

In this case, the postproduction should be done in a 23.976 project type, since it assumes that the 48kHz audio sample rate is in sync with picture playing back at 23.976fps from the DigiBeta captured sources. It has the same result as that of a film-to-tape transfer. But since there is no need to speed up to true 24fps in this project, audio samples remain untouched at 48kHz throughout the postproduction process, through the audio mix and back to the NTSC broadcast master. Using this project type for this workflow means only one sample rate conversion, during the film-to-tape transfer.

3. Film or HD Production at 23.976

The shooting rate is 23.976fps because of the audio consideration when down-converting to NTSC. No one wanted to deal with a sample rate conversion in the audio when working in a fully digital environment. In a double system environment, the DAT or hard disk recorder records at 48kHz, so shooting at 23.976fps eliminates the need to do a sample rate conversion. The resulting NTSC down-convert is now the same as in the previous example, where 23.976 video with 2:3 pulldown is on a digital tape with sync 48kHz audio.

If working double system with DAT or BWF files from the hard disk recorder, the 48kHz recording will come straight in with no sample rate conversion or speed change to sync with the 23.976 picture.

4. Feature Film with 48.048kHz Audio Recording

This is an audio workflow at 23.976 with the film running at 24fps. This workflow is only for a picture capture frame rate of true 24fps and an NTSC postproduction workflow. DAT recorders and, more common to this workflow, hard disk recorders can record at 48.048kHz – which is really just 48kHz with a .1% speed-up as part of the capture.

Film/24p Settings

Editing systems with 23.976 project types support a 48.048kHz BWF import workflow. If no sample rate conversion is chosen, the imported files are stamped as 48kHz, thus slowing them down by .1%; the same amount that the film is slowed down during the film-to-tape transfer. This way no sample rate conversion is performed, and a digital audio pipeline is maintained for the postproduction process.

Capture, Edit, Digital Cut

Capture: The project type determines the native capture rate of the project, either 23.976 or 24p. It also determines the native audio sample rate of the project, which will not undergo a sample rate conversion or analog process when capturing, playing, or making a digital cut.

Edit: In the Film/24p settings you will see the “Edit Play Rate” as either 23.976 or 24. This control sets the play rate of the timeline. It does not affect any of the digital cut output settings. This control lets you set a default frame rate for outputs that are made directly to tape, such as a crash record.

Digital Cut: Here you can output the timeline as 23.976, 24, or 29.97. The important thing to remember is that this is the playback speed of the Avid timeline, not the destination tape. The NTSC frame rate of 29.97 cannot be changed. What is changing is the frame rate of the picture within the NTSC signal.

1. 23.976: This creates a continuous 2:3 cadence from beginning to end of a sequence and is the expected frame rate of a broadcast NTSC master from 24-frame sources.

2. 24: This is used for feature film production to create a true “film projected” speed from an Avid timeline on NTSC video. It is also the output type to use when using picture reference in a Digidesign Pro Tools system using OMF media from a 24p project type. Note that this is not a continuous 2:3 cadence. Adjustments are made over 1000 frames with the pulldown cadence. No frames are dropped; just the field ordering within the 2:3 cadence changes.

3. 29.97: The timeline will play back 25% faster to create a 1:1 film frame to video frame relationship. This can be considered a 2:2:2:2 pulldown cadence. This output is useful for animation workflows or low-cost kinescope transfers where a 2:3 pulldown cannot be properly handled.

Convert 60i to 24P

Use this option for standard interlaced NTSC shot at 1/60th sec shutter speed, where you wish to edit at 24P for the purpose of transfer to film or to author a 24P DVD. If this option is selected, all film effects (widescreen, grain, red boost) will be disabled. These effects can be added after editing.

Convert 3:2 Pulldown to 24P

Use this option for NTSC which was shot in 24P normal mode with a standard 3:2 pulldown, or with video that originated on 24 frames/sec film, where you wish to edit at 24P for the purpose of transfer to film or to author a 24P DVD. If this option is selected, all film effects (widescreen, grain, red boost) will be disabled. These effects can be added after editing.

Convert 2:3:3:2 Pulldown to 24P

Use this option for NTSC video that was shot in 24P with a 2:3:3:2 pulldown, or 24P-NTSC archival material created with a 2:3:3:2 pulldown. Convert 2:3:3:2 Pulldown to 24P is the only option that works without recompression of the video data.

Output 23.976 (23.98)

Use this option to output 23.976 frames/sec Quicktime with 48000 Hz audio, instead of 24.000 frames/sec Quicktime and 48048 Hz audio. This option works best with editing programs that can set the timeline to exactly 23.976 frames/sec. If this option is not used, then the Quicktime's playback rate is 24.000 fps and the audio playback rate is set to 48048 Hz to keep perfect sync, and the 24.000 frames/sec timeline must be set up for 48048 Hz audio.

So find out exactly what path the production team used, find out how it was edited, and finally find out what speed/frame rate they want you to work in and deliver to. __________________

ms georgia hilton mpse cas

NY NY


More on cable selection...

Posted on February 28, 2010 at 1:02 AM

These are some of ESP's comments that I find interesting and agree with, based on my past work in communications, RF, and engineering.

Interconnects

All well designed interconnects will sound the same. This is a contentious claim, but is regrettably true - regrettable for those who have paid vast sums of money for theirs, at least. I will now explain this claim more fully.

The range (and the associated claims) of interconnects is enormous. We have cables available that are directional - the signal passes with less intrusion, impedance or modification in one direction versus the other. I find this curious, since an audio signal is AC, which means that electrons simply rush back and forth in sympathy with the applied signal. A directional device is a semiconductor, and will act as a rectifier, so if these claims are even a tiny bit correct, I certainly don't want any of them between my preamp and amp, because I don't want my audio rectified by a directional cable.

Oxygen free copper (or OFC) supposedly means that there is no oxygen and therefore no copper oxide (which is a rectifier) in the cable, forming a myriad of micro-diodes that affect sound quality. The use of OFC cable is therefore supposed to improve the sound.

Try as I might (and many others before me), I have never been able to measure any distortion in any wire or cable. Even a length of solder (an alloy of tin and lead) introduces no distortion, despite the resin flux in the centre (and I do realise that this has nothing to do with anything - I just thought I'd include it :-). How about fencing wire - no, no distortion there either. The concept of degradation caused by micro-diodes in metallic contacts has been bandied about for years, without a shred of evidence to support the claim that it is audible.

At most, a signal lead will have to carry a peak current of perhaps 200uA with a voltage of maybe 2V or so. With any lead, this current, combined with the lead's resistance, will never allow enough signal difference between conductors to allow the copper oxide rectifiers (assuming they exist at all) to conduct, so rectification cannot (and does not) happen.

What about frequency response? I have equipment that happily goes to several MHz, and at low power, no appreciable attenuation can be measured. Again, characteristic impedance has rated a mention, and just as with speaker cables it is utterly unimportant at audio frequencies. Preamps normally have a very low output impedance (typically about 100 ohms), and power amps will normally have an input impedance of 10k ohms or more. Any cable is therefore mismatched, since it is not sensible (nor is it desirable) to match the impedance of the preamp, cable and power amp at audio frequencies.

Note: There is one application for interconnects where the sound can change radically. This is when connecting between a turntable and associated phono cartridge and your preamp. Use of the lowest possible capacitance you can find is very important, because the inductance of the cartridge coupled with the capacitance of the cable can cause a resonant circuit within the audio band.

Should you end up with just the right (or wrong) capacitance, you may find that an otherwise respected cartridge sounds dreadful, with grossly accentuated high frequency performance. The only way to minimise this is to ensure that the interconnects have very low capacitance, and they must be shielded to prevent hum and noise from being picked up.

At radio frequencies, Litz wire is often used to eliminate the skin effect. This occurs because of the tendency for RF to try to escape from the wire, so it concentrates on the outside (or skin) of the wire. The effect actually occurs as soon as the frequency is above DC, but becomes noticeable only at higher frequencies. Litz wire will not affect your hi-fi, unless you can hear signals above 100kHz or so (assuming of course that you can find music with harmonics that go that high, and a recording medium that will deliver them to you). Even then, the difference will be minimal.

In areas where there is significant electromagnetic pollution (interference), the use of esoteric cables may have an effect, since they will (if carefully designed) provide excellent shielding at very high radio frequencies. This does not affect the audio per se, but prevents unwanted signals from getting into the inputs or outputs of amps and preamps.

Cable capacitance can have a dramatic effect on sound quality, and more so if you have long interconnects. Generally speaking, most preamps will have no problem with small amounts of capacitance (less than 1nF is desirable and achievable). With high output impedance equipment (such as valve preamps), cable capacitance becomes more of an issue.

For example, 1nF of cable capacitance with a preamp with an output impedance of 1k will be -3dB at 160kHz, which should be acceptable to most. Should the preamp have an output impedance of 10k, the -3dB frequency is now only 16kHz - this is unacceptable.

I tested a couple of cable samples, and (normalised to a 1 metre length) this is what I found:

Parameter     Single Core   Twin - One Lead   Twin - Both Leads   Twin - Between Leads

Capacitance   77pF          191pF             377pF               92pF

Inductance    0.7uH         1.2uH             0.6uH               NT

Resistance    0.12 Ohm      0.38 Ohm          0.25 Ohm            NT

NT - Not Tested

These cables are representative of medium quality general purpose shielded (co-axial) cables, of the type that you might use for making interconnects. The resistance and inductance may be considered negligible at audio frequencies, leaving capacitance as the dominant influence. The single core cable is obviously better in this respect, with only 77pF per metre. Even with a 10k output impedance, this will be 3dB down at 207kHz for a 1 metre length.

Even the highest inductance I measured (1.2uH) will introduce an additional 0.75 ohm impedance at 100kHz - this may be completely ignored, as it is insignificant.
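
If you want to check those corner frequencies yourself, here's a minimal Python sketch using the standard first-order RC formula f(-3dB) = 1/(2*pi*R*C) and the inductive reactance X = 2*pi*f*L; the component values are the ones quoted above:

from math import pi

def corner_freq_hz(r_ohms, c_farads):
    # -3dB point of the low-pass formed by source impedance and cable capacitance
    return 1 / (2 * pi * r_ohms * c_farads)

print(corner_freq_hz(1_000, 1e-9))      # ~159 kHz: 1k source, 1nF cable
print(corner_freq_hz(10_000, 1e-9))     # ~15.9 kHz: 10k source, 1nF cable
print(corner_freq_hz(10_000, 77e-12))   # ~207 kHz: 10k source, 77pF single-core metre

# The worst measured inductance barely matters at audio rates:
print(2 * pi * 100_000 * 1.2e-6)        # ~0.75 ohm reactance at 100kHz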

The only other thing that is important is that the cables are properly terminated so they don't become noisy, and that the shield is of good quality and provides complete protection from external interfering signals. Terminations will normally be either soldered or crimped, and either is fine as long as it is well made. For the constructor, soldering is usually better, since proper crimping tools are expensive.

The use of silver wire is a complete waste, since the only benefit of silver is its lower resistance. Since this will make a few micro-ohms difference for a typical 1m length, the difference in signal amplitude is immeasurably small with typical pre and power amp impedances. On the down side, silver tarnishes easily (especially in areas where there is hydrogen sulphide pollution in the atmosphere), and this can become an insulator if thick enough. I have heard of some audiophiles who don't like the sound of silver wire, and others who claim that solid conductors sound better than stranded. Make of this what you will :-D

The use of gold plated connectors is common, and provides one significant benefit - gold does not tarnish readily, and the connections are less likely to become noisy. Gold is also a better conductor than the nickel plating normally used on "standard" interconnects. The difference is negligible in sonic terms.

There is no reason at all to pay exorbitant amounts of hard earned cash for "audiophile" interconnects. These manufacturers are ripping people off, making outlandish claims as to how much better these cables will make your system sound - rubbish! Buy some good quality audio coaxial cable and connectors from your local electronics parts retailer, and make your own interconnects. Not only will you save a bundle, but they can be made to the exact length you want.

Using the cheap shielded figure-8 cable (which generally has terrible shields) is not recommended, because crosstalk is noticeably increased, especially at high frequencies. That notwithstanding, for a signal from an FM tuner even these cheapies will be fine (provided they manage to stay together - most of them fall to bits when used more than a few times), since the crosstalk in the tuner is already worse than the cable. With typical preamp and tuner combinations, you might get some interference using these cheap and nasty interconnects, but the frequency response exceeds anything that we can hear, and distortion is not measurable.

Hope this stuff helps debunk the snake oil salespeople!


cheers

geo


More on cable selection and design... HERE'S ANOTHER!

Posted on February 28, 2010 at 1:02 AM


How big should the conductors be?

The required size (or gauge) of the conductors depends on three factors: (1) the load impedance; (2) the length of cable required; and (3) the amount of power loss that can be tolerated. Each of these involves relationships between voltage (volts), resistance (ohms), current (amperes) and power (watts). These relationships are defined by Ohm's Law. The job of a speaker cable is to move a substantial amount of electrical current from the output of a power amplifier to a speaker system. Current flow is measured in amperes. Unlike instrument and microphone cables, which typically carry currents of only a few milliamperes (thousandths of an ampere), the current required to drive a speaker is much higher; for instance, an 8-ohm speaker driven with a 100-watt amplifier will pull about 3-1/2 amperes of current. By comparison, a 600-ohm input driven by a line-level output only pulls about 2 milliamps. The amplifier's output voltage, divided by the load impedance (in ohms), determines the amount of current "pulled" by the load. Resistance limits current flow, and decreasing it increases current flow. If the amplifier's output voltage remains constant, it will deliver twice as much current to an 8-ohm load as it will to a 16-ohm load, and four times as much to a 4-ohm load. Halving the load impedance doubles the load current. For instance, two 8-ohm speakers in parallel will draw twice the current of one speaker because the parallel connection reduces the load impedance to 4 ohms.
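
As a sanity check on those current figures, here's a minimal Python sketch using I = sqrt(P/R) for the speaker case and I = V/R for the line-level case (the ~1.2V line level is an assumed round figure for a 600-ohm input, not a value from the text):

from math import sqrt

def load_current_amps(power_watts, impedance_ohms):
    # From P = I^2 * R: the current a resistive load pulls at a given power
    return sqrt(power_watts / impedance_ohms)

print(load_current_amps(100, 8))   # ~3.54 A: 100W into an 8-ohm speaker
print(load_current_amps(100, 4))   # ~5 A: halving the impedance raises the current

print(1.2 / 600)                   # ~0.002 A (2 mA) for a line-level feed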

(For simplicity's sake we are using the terms resistance and impedance interchangeably; in practice, a speaker whose nominal impedance is 8 ohms may have a voice coil DC resistance of about 5 ohms and an AC impedance curve that ranges from 5 ohms to 100 ohms, depending on the frequency, type of enclosure, and the acoustical loading of its environment.)

How does current draw affect the conductor requirements of the speaker cable?

A simple fact to remember: current needs copper, voltage needs insulation. To make an analogy, if electrons were water, voltage would be the "pressure" in the system, while current would be the amount of water flowing. You have water pressure even with the faucet closed and no water flowing; similarly, you have voltage regardless of whether you have current flowing. Current flow is literally electrons moving between two points at differing electrical potentials, so the more electrons you need to move, the larger the conductors (our "electron pipe") must be. In the AWG (American Wire Gauge) system, conductor area doubles with each reduction of three in AWG; a 13 AWG conductor has twice the copper of a 16 AWG conductor, a 10 AWG twice the copper of a 13 AWG, and so on.
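
That rule of thumb falls straight out of the standard AWG geometry, where the diameter for gauge n is d = 0.127 mm * 92^((36-n)/39). A minimal Python sketch:

from math import pi

def awg_diameter_mm(n):
    # Standard AWG definition: diameter in mm for gauge n
    return 0.127 * 92 ** ((36 - n) / 39)

def awg_area_mm2(n):
    return pi * (awg_diameter_mm(n) / 2) ** 2

print(awg_area_mm2(13) / awg_area_mm2(16))   # ~2.0: 13 AWG has twice the copper of 16 AWG
print(awg_area_mm2(10) / awg_area_mm2(13))   # ~2.0: and 10 AWG twice that of 13 AWG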

But power amp outputs are rated in watts. How are amperes related to watts?

Ohm's Law says that current (amperes) times voltage (volts) equals power (watts), so if the voltage is unchanged, the power is directly proportional to the current, which is determined by the impedance of the load. (This is why most power amplifiers will deliver approximately double their 8-ohm rated output when the load impedance is reduced to 4 ohms.) In short, a 4-ohm load should require conductors with twice the copper of an 8-ohm load, assuming the length of the run to the speaker is the same, while a 2-ohm load requires four times the copper of an 8-ohm load. Explaining this point leads to the following oft-asked question:

How long can a speaker cable be before it affects performance?

The ugly truth: any length of speaker cable degrades performance and efficiency. Like the effects of shunt capacitance in instrument cables and series inductance in microphone cables, the signal degradation caused by speaker cabling is always present to some degree, and is worsened by increasing the length of the cable. The most obvious ill effect of speaker cables is the amount of amplifier power wasted.

Why do cables waste power?

Copper is a very good conductor of electricity, but it isn't perfect. It has a certain amount of resistance, determined primarily by its cross-sectional area (but also by its purity and temperature). This wiring resistance is "seen" by the amplifier output as part of the load; if a cable with a resistance of one ohm is connected to an 8-ohm speaker, the load seen by the amplifier is 9 ohms. Since increasing the load impedance decreases current flow, decreasing power delivery, we have lost some of the amplifier's power capability merely by adding the series resistance of the cable to the load. Furthermore, since the cable is seen as part of the load, part of the power which is delivered to the load is dissipated in the cable itself as heat. (This is the way electrical space-heaters work!) Since Ohm's Law allows us to calculate the current flow created by a given voltage across a given load impedance, it can also give us the voltage drop across the load, or part of the load, for a given current. This can be conveniently expressed as a percentage of the total power.
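
Since the cable and speaker form a simple series circuit, the fraction of power burned in the cable is just its share of the total resistance. A minimal Python sketch of the 1-ohm-cable example above (the 0.1-ohm case is an assumed comparison value):

def cable_loss_fraction(r_cable_ohms, r_speaker_ohms):
    # Series resistances dissipate power in proportion to their resistance
    return r_cable_ohms / (r_cable_ohms + r_speaker_ohms)

print(cable_loss_fraction(1.0, 8.0))   # ~0.11: a 1-ohm cable eats about 11% of the power
print(cable_loss_fraction(0.1, 8.0))   # ~0.012: a 0.1-ohm cable wastes only ~1.2%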

How can the power loss be minimized?

There are three ways to decrease the power lost in speaker cabling:

First, minimize the resistance of the cabling. Use larger conductors, avoid unnecessary connectors, and make sure that mechanical connections are clean and tight and solder joints are smooth and bright.

Second, minimize the length of the cabling. The resistance of the cable is proportional to its length, so less cable means less resistance to expend those watts. Place the power amplifier as close as practical to the speaker. (Chances are excellent that the signal loss in the line-level connection to the amplifier input will be negligible.) Don't use a 50-foot cable for a 20-foot run.

Third, maximize the load impedance. As the load impedance increases it becomes a larger percentage of the total load, which proportionately reduces the amount lost by wiring resistance. Avoid "daisy-chaining" speakers, because the parallel connection reduces the total load impedance, thus increasing the percentage lost. The ideal situation (for reasons beyond mere power loss) is to run a separate pair of conductors to each speaker from the amplifier.

Is the actual performance of the amplifier degraded by long speaker cables?

There is a definite impact on the amplifier damping factor caused by cabling resistance/impedance. Damping, the ability of the amplifier to control the movement of the speaker, is especially noticeable in percussive low-frequency program material like kick drum, bass guitar and tympani. Clean, "tight" bass is a sign of good damping at work. Boomy, mushy bass is the result of poor damping; the speaker is being set into motion but the amplifier can't stop it fast enough to accurately track the waveform. Ultimately, poor damping can result in actual oscillation and speaker destruction.

Damping factor is expressed as the quotient of load impedance divided by the amplifier's actual source impedance. Ultra-low source impedances on the order of 40 milliohms (that's less than one-twentieth of an ohm) are common in modern direct-coupled solid-state amplifiers, so damping factors with an 8-ohm load are generally specified in the range of 100-200. However, those specifications are taken on a test bench, with a non-inductive dummy load attached directly to the output terminals. In the real world, the speaker sees the cabling resistance as part of the source impedance, increasing it. This lowers the damping factor drastically, even when considering only the DC resistance of the cable. If the reactive components that constitute the AC impedance of the cable are considered, the loss of damping is even greater.
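
Here's the same damping-factor arithmetic as a minimal Python sketch, considering only DC resistance; the 0.5-ohm cable is an assumed example value, not a measurement from the text:

def damping_factor(z_load_ohms, z_source_ohms, r_cable_ohms=0.0):
    # The speaker sees cable resistance as part of the amplifier's source impedance
    return z_load_ohms / (z_source_ohms + r_cable_ohms)

print(damping_factor(8.0, 0.04))        # 200: bench spec with a 40-milliohm source
print(damping_factor(8.0, 0.04, 0.5))   # ~14.8: the same amp through a 0.5-ohm cable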

Although tube amplifiers generally fall far short of solid-state types in damping performance, their sound can still be improved by the use of larger speaker cables. Damping even comes into play in the performance of mixing consoles with remote DC power supplies; reducing the length of the cable linking the power supply to the console can noticeably improve the low-frequency performance of the electronics.

What other cable problems affect performance?

The twin gremlins covered in "Understanding the Microphone Cable," namely series inductance and skin effect, are also factors in speaker cables. Series inductance and the resulting inductive reactance add to the DC resistance, increasing the AC impedance of the cable. An inductor can be thought of as a resistor whose resistance increases as frequency increases. Thus, series inductance has a low-pass filter characteristic, progressively attenuating high frequencies. The inductance of a round conductor is largely independent of its diameter or gauge, and is not directly proportional to its length, either.

Skin effect is a phenomenon that causes current flow in a round conductor to be concentrated more to the surface of the conductor at higher frequencies, almost as if it were a hollow tube. This increases the apparent resistance of the conductor at high frequencies, and also brings significant phase shift.

Taken together, these ugly realities introduce various dynamic and time-related forms of signal distortion which are very difficult to quantify with simple sine-wave measurements. When complex waveforms have their harmonic structures altered, the sense of immediacy and realism is reduced. The ear/brain combination is incredibly sensitive to the effects of this type of phase distortion, but generally needs direct, A/B comparisons in real time to recognize them.

How can these problems be addressed?

The number of strange designs for speaker cable is amazing. Among them are coaxial, with two insulated spiral "shields" serving as conductors; quad, using two conductors for "positive" and two for "negative"; zip-cord with ultra-fine "rope lay" conductors and transparent jacket; multi-conductor, allegedly using large conductors for lows, medium conductors for mids, and tiny conductors for highs; 4 AWG welding cable; braided flat cable constructed of many individually insulated conductors; and many others. Most of these address the inductance question by using multiple conductors and the skin effect problem by keeping them relatively small. Many of these "esoteric" cables are extraordinarily expensive; all of them probably offer some improvement in performance over ordinary twisted-pair type cables, especially in critical monitoring applications and high-quality music systems. In most cases, the cost of such cable and its termination, combined with the extremely fragile construction common to them, severely limits their practical use, especially in portable situations. In short, they cost too much, they're too hard to work with, and they just aren't made for rough treatment. But, sonically, they all bear listening to with an open mind; the differences can be surprisingly apparent.

Is capacitance a problem in speaker cables?

The extremely low impedance nature of speaker circuits makes cable capacitance a very minor factor in overall performance. In the early days of solid state amplifiers, highly capacitive loads (such as large electrostatic speaker systems) caused blown output transistors and other problems, but so did heat, short circuits, highly inductive loads and underdesigned power supplies.

Because of this, the dielectric properties of the insulation used are nowhere near as critical as those used for high-impedance instrument cables. The most important consideration for speaker cable insulation is probably heat resistance, especially because the physical size constraints imposed by popular connectors like the ubiquitous 1/4" phone plug severely limit the diameter of the cable. This requires insulation and jacketing to be thin, but tough, while withstanding the heat required to bring a relatively large amount of copper up to soldering temperature. Polyethylene tends to melt too easily, while thermoset materials like rubber and neoprene are expensive and unpredictable with regard to wall thickness. PVC is cheap and can be mixed in a variety of ways to enhance its shrink-resistance and flexibility, making it a good choice for most applications. Some varieties of TPR (thermoplastic rubber) are also finding use.

Why don't speaker cables require shielding?

Actually, there are a few circumstances that may require the shielding of speaker cables. In areas with extremely strong radio frequency interference (RFI) problems, the speaker cables can act as antennae for unwanted signal reception, which can enter the system through the output transistors. When circumstances require that speaker-level and microphone-level signals be in close proximity for long distances, such as cue feeds to recording studios, it is a good idea to use shielded speaker cabling (generally foil-shielded, twisted-pair or twisted-triple cable) as "insurance" against possible crosstalk from the cue system entering the microphone lines. In large installations, pulling the speaker cabling in metallic conduit provides excellent shielding from both RFI and EMI (electromagnetic interference). But, for the most part, the extremely low impedance and high level of speaker signals minimizes the significance of local interference.

Why can't I use a shielded instrument cable for hooking an amplifier to a speaker, assuming it has the right plugs?

You can, in desperation, use an instrument cable for hooking up an amplifier to a speaker. However, the small gauge (generally 20 AWG at most) center conductor offers substantial resistance to current flow, and in extreme circumstances could heat up until it melts its insulation and short-circuits to the shield, or melts and goes open-circuit, which can destroy some tube amplifiers. Long runs of coaxial-type cable will have large amounts of capacitance, possibly enough to upset the protection circuitry of some amplifiers, causing untimely shut-downs. And of course there is enormous power loss and damping degradation because of the high impedance of the cable.


cheers

geo __________________

ms georgia hilton mpse cas




Audio Cable Design

Posted on February 28, 2010 at 1:01 AM

Audio cable design and other selection information. Here's some research on how electricity really works... things like the skin effect, and fun stuff like that. This battle over cable designs within the audio realm makes me laugh out loud sometimes...

Skin depth, Litz wire, braided conductors, and resistance

Transmission Line Theory

Here's some more trivia for your Sunday reading....

What Makes a Good Audio Cable?

Criteria for what supposedly made one cable perform better or worse than another are remarkably inconsistent. One manufacturer's claims countered and negated the claims made by a different manufacturer. None of the manufacturers offered documented, measurable evidence that they were producing a superior cable. Instead, we find claims of allegedly superior components or materials used in cable construction. For example, a few leading manufacturers claimed that the most important factor for a cable was low capacitance, using the justification that cable capacitance shunts upper frequencies to ground. In order to lower the capacitance, these companies increased conductor spacing, which simultaneously increased inductance. This approach had drastic side effects, however. Merely decreasing capacitance without taking other realities of signal transmission into consideration increased the noise pickup and introduced a blocking filter. Both of these effects would obviously degrade sonic performance rather than improving it.

Another cable manufacturer advertised that its cable "employs two polymer shafts to dampen conductor resistance", but offered no evidence to prove it. Still another audiophile company claimed that because its cable was flat, "with no twist, it has no inductance". In general, inductance can indeed be reduced by making conductors larger or bringing them closer together. However, physics shows that, in reality, no cable can be built without some level of inductance, so this claim is without scientific merit.

Cylindrical Cable Conductors and Skin Effect

Most of the popular loudspeaker and musical instrument cables on the market employ cylindrical (a.k.a. round-diameter) cables as conductors. Unfortunately, cylindrical cable designs have a number of serious drawbacks, including current bunching, the skin effect phenomenon, and frequency effects that lower the performance of the cable.

It's a common misconception to think about electrical transmission in cables in terms of direct current (DC) alone. Even experienced electrical engineers frequently ignore the ramifications of frequency on cable performance. In the case of DC, current is indeed uniformly distributed across the entire cross-section of the wire conductor, and the resistance is a simple function of the cross-sectional area. Adding the frequency of an electrical signal to the equation complicates the situation, however. As frequency increases, the resistance of a conductor also increases due to skin effect.

Skin effect describes a condition in which, due to the magnetic fields produced by current flowing through a conductor, the current tends to concentrate near the conductor surface. As the frequency increases, more of the current is concentrated closer to the surface. This effectively decreases the cross-section through which the current flows, and therefore increases the effective resistance. The current can be assumed to concentrate in an annulus at the wire surface at a thickness equal to the skin depth. For copper wire the skin depth vs. frequency is as follows:

60 Hz = 8.5 mm, 1 kHz = 2.09 mm, 10 kHz = 0.66 mm, 100 kHz = 0.21 mm.
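
For the curious, here's a quick Python sketch (my addition, not from the original article) that reproduces those figures from the standard skin-depth formula delta = sqrt(rho / (pi * f * mu)); the copper resistivity constant is my assumption, which is why the results land within a couple of percent of the values quoted above:

import math

RHO_CU = 1.68e-8          # assumed copper resistivity, ohm-metres
MU0 = 4 * math.pi * 1e-7  # permeability of free space (copper is essentially non-magnetic)

def skin_depth_mm(freq_hz):
    # delta = sqrt(rho / (pi * f * mu)), converted to millimetres
    return math.sqrt(RHO_CU / (math.pi * freq_hz * MU0)) * 1000.0

for f in (60, 1_000, 10_000, 100_000):
    print(f"{f:>7} Hz: {skin_depth_mm(f):.2f} mm")  # ~8.42, 2.06, 0.65, 0.21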

Note that the skin depth becomes very small as the frequency increases. Consequently, the center area of the wire is to a large extent bypassed by the signal as the frequency increases. In other words, most of the conductor material effectively goes to waste, since little of it is used to transmit the signal. The result is a loss of cable performance that can be measured as well as heard.

Current Bunching

Current bunching (also called the proximity effect) occurs in the majority of cables on the market that follow the conventional cylindrical two-conductor design (i.e., two cylindrical conductors placed side-by-side and separated by a dielectric).

When a pair of these cylindrical conductors supplies current to a load, the return current (flowing away from the load) tends to flow as closely as possible to the supply current (flowing toward the load). As the frequency increases, the return current decreases its distance from the supply current in an attempt to minimize the loop area. Current flow will therefore not be uniform at high frequencies, but will tend to bunch in. The current bunching phenomenon causes the resistance of the wires to increase as frequency increases, since less and less of the wire is being used to transmit current. The resistance of the wire is related to its cross-sectional area, and as the frequency increases, the effective cross-sectional area of the wires decreases. In order to convey the widest-frequency audio signal to a loudspeaker, you want to use as much of the conductor cross-section as possible, so excessive current bunching is extremely inefficient.

Disadvantages of Rectangular Conductors

As a means of bypassing the skin effect and current bunching problems associated with cylindrical conductor designs, some cable manufacturers have developed rectangular conductors as an alternative. These designs typically use a one-piece, solid-core conductor.

[Figure: computer simulation showing the magnitude (volts/meter) of the electric field between two solid rectangular conductors. The conductors have a cross-section area equivalent to a 10-gauge conductor, with 2 mm spacing, +1 volt applied to the top conductor and -1 volt applied to the bottom conductor.]

[Figure: the same simulation for two hollow oval conductors of equivalent 10-gauge cross-section, with identical spacing and drive voltages.]

A solid rectangular conductor of this type is undesirable because the sharp corners produce high electric field values that over time can break down the dielectric, causing a failure of the cable. In general, cables with solid conductors are prone to shape distortions and kinking due to their poor flexibility. This becomes an especially important issue with rectangular cable designs. The sharp corners of rectangular conductors tend to chafe the cable dielectric if the cable is repeatedly flexed or put under stress, and this chafing can lead to a short that could conceivably damage your loudspeakers.

Characteristic Impedance Complexity

Another parameter that is critical in cable design is characteristic impedance. But because of its complexity, this important factor is often misunderstood.

The characteristic impedance of a cable is given by Z = [(R + jωL)/(G + jωC)]^(1/2), where R is the series resistance, L is the series inductance, G is the shunt conductance, C is the shunt capacitance, and ω is the angular frequency (ω = 2πf).

Note that this is not a simple number for a cable, but one which changes with frequency. It is also important to note that R, L, G, and C themselves change with frequency, making the impedance of a cable even more frequency dependent.

Z is a complex number, and it is common practice in the cable industry to simplify the situation by assuming a lossless transmission line, in turn assuming that R and G are zero. While this may be a valid approximation at high frequencies, it is not valid at low audio frequencies if you plan to construct an accurate model of a cable.
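
To see how strongly Z depends on frequency at audio rates, here is a small Python sketch (my addition); the per-metre R, L, G and C values are invented for illustration only and do not describe any real cable:

import cmath, math

def z_char(f_hz, R=0.05, L=600e-9, G=1e-9, C=50e-12):
    # R (ohm/m), L (H/m), G (S/m), C (F/m) are assumed example constants
    w = 2 * math.pi * f_hz
    return cmath.sqrt((R + 1j * w * L) / (G + 1j * w * C))

for f in (20, 1_000, 20_000, 1_000_000):
    print(f"{f:>8} Hz: |Z| = {abs(z_char(f)):8.1f} ohms")

At 20 Hz the magnitude is in the kilohm range; only well above the audio band does it settle toward the lossless sqrt(L/C) figure of roughly 110 ohms, which is exactly why the lossless simplification fails at low audio frequencies.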

For example, stating that a speaker cable has a constant characteristic impedance of 10 ohms across the entire frequency range of 20 to 20,000 Hz is a drastic oversimplification that, in the end, is simply untrue. The same type of statement is also inaccurate when applied to loudspeakers: a speaker only has an impedance of exactly 8 ohms at a single fixed frequency. To state otherwise is to ignore the complexity of impedance changes as signal frequency changes.

Frequency Blurring

To minimize frequency blurring, it is important that the speaker cable parameters do not change with frequency. Ideally, the resistance and inductance would remain constant as the frequency of the signal changes.

The faintest sound wave a normal human ear can hear is 10^-12 W/m^2. At the other extreme of the spectrum, the threshold of pain is 1 W/m^2. This is a very impressive auditory range of 120 dB. The ear, together with the brain, constantly performs amazing feats of sound processing that our fastest and most powerful computers cannot even approach.

As long ago as 1935, Wilska [2] succeeded in measuring the magnitude of movement of the eardrum at the threshold of audio sensitivity across various frequencies. At 3,000 Hz, it takes a minimal amount of eardrum displacement (somewhat less than 10^-9 cm, or about 0.01 times the diameter of a hydrogen atom) to produce a minimally perceptible sound. This is an amazingly small number! The extremely small amount of acoustic pressure necessary to produce the threshold sensation of sound brings up an interesting question. Does the limiting factor in hearing minimal-level sounds lie in the anatomy and physiology of hearing, or in the physical properties of air as a transmitting medium? We know that air molecules are in constant random motion, a motion related to temperature. This phenomenon is known as Brownian movement and produces a spectrum of thermal-acoustic noise.

In 1933, Sivian and White [3] experimentally evaluated the pressure magnitudes of these thermal sounds between 1 kHz and 6 kHz. They observed that throughout the measured spectrum the root-mean-square thermal noise pressure was about 86 decibels below one dyne per square centimeter. The minimum root-mean-square pressure that can produce an audible sensation between 1 kHz and 6 kHz in a human being with average hearing is about 76 decibels below one dyne per square centimeter, but in some people with exceptionally acute hearing it may approach 85 decibels. These figures indicate that the acuity of persons possessing a high sensitivity of hearing closely approaches the thermal noise level; a particularly good auditory system actually does approach this level. Furthermore, it is not likely that animals possess greater acuity of hearing in this spectrum, as their hearing would also be limited by thermal noise. What this means is that the human audio system is extremely sensitive.

References

[1] Henry W. Ott: Noise Reduction Techniques in Electronic Systems (New York, NY: John Wiley and Sons, 1988), p. 150.

[2] Wilska, A.: Eine Methode zur Bestimmung der Hörschwellenamplituden des Trommelfells bei verschiedenen Frequenzen, Skandinav. Arch. Physiol., 72:161, 1935.

[3] Sivian, L.J., and White, S.D.: On minimum audible sound fields, J. Acoust. Soc. Am., 4:288, 1933. __________________

ms georgia hilton mpse cas


tri-level / bi-level sync: SD and HD are 2 different standards.

Posted on February 28, 2010 at 1:01 AM

Tri-level / bi-level sync: SD and HD are 2 different standards. They require two different types of sync.

SD bi-sync and HD tri-sync are different signals.

SD bi-sync supports computer video, composite video, S-video, and component video by using 2 voltage levels (high and low)... systems using bi-sync are triggered by the differential voltage at the leading (negative) edge of the signal. Bi-sync basically sits at 0 voltage (black), then drops to a negative voltage and then comes back to zero (2 states, or BI-sync).

Tri-sync provides a more exacting sync for the 3 component signals. HD has sync info on all three channels (Y, Pb, and Pr). Tri-sync starts at 0 volts (video black), then goes negative, then to a positive voltage, then back to 0 volts (3 states, Tri-Sync). This also fixed the voltage differential that bi-sync signals introduced into the actual video signal. The tri-sync signal is triggered first on the negative transition, then on the positive transition: thus 0, -300 mV, +300 mV, 0, giving 3 sync points... This all happens in the horizontal blanking interval. There is something called the white reference level, which is set at 700 mV. The 700 mV drops to 0 at the start of the blanking period; the signal drops to -300 mV, then to +300 mV, then to 0, then back to reference white at 700 mV. This is the HD analog horizontal timing where HD tri-level sync is used. (I don't remember what the white level is for bi-sync... sorry, pulling most of this out of my head...) There are also a couple of issues revolving around progressive vs. interlaced HD frame timing that come into play here.

Bi-sync is on the Y signal of SD video; tri-sync is on all 3 channels for HD. (Check out SMPTE 274M and SMPTE 296M.)
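
As a purely illustrative Python sketch (my addition; the millivolt levels follow the description above, but the step sequence says nothing about real SMPTE 274M/296M pulse widths or timing):

# Idealised sync excursions in mV, ignoring real-world pulse durations
bi_level = [0, -300, 0]        # SD: blanking -> sync tip -> blanking (2 states)
tri_level = [0, -300, 300, 0]  # HD: blanking -> negative -> positive -> blanking (3 states)

print("bi-level (mV): ", bi_level)   # triggered on the single negative edge
print("tri-level (mV):", tri_level)  # triggered on the negative, then positive transition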

So the two are incompatible for sync... there are a lot of options out there for devices that support both bi-sync and tri-sync.

The Sync IO accepts SD bi-sync or "traditional" video black; the HD Sync IO accepts tri-sync signals. As far as gear, it all depends on what you are syncing... if it's Pro Tools to an HD video deck and you are laying back audio, you can use a tri-sync box that offers bi-sync. The bi-sync can be routed to the Pro Tools rig and the tri-sync can be used for the HD deck. If you are dealing with laying back an HD video signal from a video editing system with an AJA Kona3 card directly connected to an HD video deck, you don't need a tri-sync gen, because the AJA Kona3 offers this to the deck via the video signal (read the Kona3 docs for details; I don't remember exactly how they do it, but they do). If you are using a Euphonix System 5 HDMI IO and an HD video deck, you need a tri-sync generator. So it all depends on what you are trying to sync.

I hope this helps a bit with some of the tech background.

cheers

geo __________________

ms georgia hilton mpse cas


Stereo downmixes (or fold-downs)

Posted on February 28, 2010 at 1:00 AM

Stereo downmixes (or fold-downs)

Left total/Right total (Lt/Rt)

Lt/Rt is a downmix suitable for decoding with a Dolby Surround (Pro Logic) decoder

Lt = L + 0.707*C - 0.707*(Ls + Rs)

Rt = R + 0.707*C + 0.707*(Ls + Rs)

(0.707 = -3 dB; Ls and Rs are 90-degree phase shifted)

Left only/Right only (Lo/Ro)

Lo/Ro is a downmix suitable for stereo playback

Lo = L + 0.707*C + att*Ls

Ro = R + 0.707*C + att*Rs

(att = -3 dB, -6 dB, -9 dB, or 0 dB)

5.1 downmix

5.1 to L/R: Ls to L drop 6 dB (x0.5), Rs to R drop 6 dB (x0.5), C to L and R drop 3 dB (x0.707), lose the sub.

or

Lt = FL + s*SL + c*C

Rt = FR + s*SR + c*C

where s (=surround mix) is usually something between 0.5 and 1

and c (=center mix) is usually 0.7. This would be a "normal" stereo downmix.

or

Lt = FL + s*(SL+SR) + c*C

Rt = FR - s*(SL+SR) + c*C // 180° phase shift SL+SR

with s=0.5 and c=0.7.
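
As a minimal Python sketch of the Lo/Ro fold-down above (my addition; it assumes the channels arrive as equal-length lists of samples, and it deliberately omits the 90-degree phase shift a true Lt/Rt encode needs on the surrounds):

def lo_ro(L, R, C, Ls, Rs, att_db=-3.0):
    c = 10 ** (-3.0 / 20.0)      # centre at -3 dB, i.e. ~0.707
    att = 10 ** (att_db / 20.0)  # surround attenuation: -3, -6, -9 dB, or 0
    Lo = [l + c * x + att * s for l, x, s in zip(L, C, Ls)]
    Ro = [r + c * x + att * s for r, x, s in zip(R, C, Rs)]
    return Lo, Ro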

cheers

geo __________________

ms georgia hilton mpse cas


Some info on TV broadcast. Tim Carroll wrote this...

Posted on February 28, 2010 at 12:56 AM

Some info on TV broadcast. Tim Carroll wrote this...

BEYOND DOLBY DIGITAL

There is little sense in having an emission coding system such as Dolby Digital (AC-3) that can carry 5.1 channels of audio if you can't get 5.1 channels to the encoder. In 1996, all commonly used VTRs had only four audio channels; servers could in theory do more but were not generally configured that way, and digital plants rarely had more than two AES pairs available for routing. Once the Dolby Digital (AC-3) system was in place as part of the ATSC standard, Craig Todd, Louis Fielder, Kent Terry and others at Dolby began to investigate ways to efficiently distribute the multiple channels of audio; they foresaw issues that were still a few years away from becoming a really big problem.

So what is Dolby E? Contrary to some rumors, it is not high-rate Dolby Digital (AC-3). This approach was considered, but there were too many benefits to be had from starting over with a different set of goals. What I mean by that is that the goal of the Dolby Digital (AC-3) system is to deliver up to 5.1 channels of audio to consumers using the fewest bits possible while still preserving excellent audio quality. As we will see, this is not the goal of Dolby E. To meet this goal, the Dolby Digital (AC-3) encoder is rather complex and takes about 187 milliseconds from the time it receives audio until the time it produces a Dolby Digital (AC-3) output. This is analogous to the video encoding process: high quality at a low bit rate means the encoder is going to need processing time. Although this encoding latency is small (far less than video encoding latency) and is taken into account in the multiplexer, this amount of delay is difficult to deal with in production and distribution.

Also, while the audio quality of Dolby Digital (AC-3) is very good, it would not be appropriate to use it for multiple encode/decode cycles. This might enable coding artifacts to become audible. I say might because the artifacts are unpredictable: sometimes you might hear them with certain material, sometimes you might not. Again, high-rate Dolby Digital (AC-3) minimizes the chance of this occurring, but it could.

Another drawback of using Dolby Digital (AC-3) for distribution is that although its data is packetized into frames, these frames do not regularly line up with video frames (see Fig. 1).

You might be thinking, "PCM audio is packetized into AES frames that do not line up exactly with video frames either, so what is the problem?" Good point, but Dolby Digital (AC-3) frames carry bit-rate-reduced (i.e., compressed) audio, not baseband audio. Although a video edit would cause little problem for baseband audio, cutting in the middle of a Dolby Digital (AC-3) frame will cause major problems. After decoding, the results will be audio mutes if you are lucky, clicks and pops if you are not. Dolby Digital (AC-3) is simply not intended to be used this way.

This did not stop some early adopters, however, and at least one major DBS provider used Dolby Digital (AC-3) recorded on one of the AES pairs of a Digital Betacam recorder to carry the 5.1-channel audio of movies. Did it work? Absolutely, and even when the Digital Betacam tapes were not long enough to hold an entire film and the Dolby Digital (AC-3) stream had to be switched mid-movie, it hardly ever caused a glitch. They were lucky! It can be done, but it is not easy and is not recommended. Clearly, it was time for a new system designed specifically for the task.

A TALL ORDER

Some of the original goals for this new system were that it had to be video frame-bound so that it could be easily edited or switched, had to be able to handle multiple encode/decode cycles (at least 10) while causing no audible degradation, had to carry eight channels of audio and metadata, had to fit into a standard-size AES pair of channels, and had to do its encoding and decoding in less than a video frame.

Dolby E satisfies this tall order. The system will accept up to eight channels of baseband PCM audio and metadata and fit them onto a single 20-bit, 48 kHz AES pair (i.e., 1.92 Mbps), or it will fit six channels plus metadata into a 16-bit, 48 kHz AES pair (i.e., 1.536 Mbps). After decode, PCM audio and metadata are output to feed a Dolby Digital (AC-3) encoder.
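
Those quoted payloads are just the raw capacities of an AES pair; a one-line Python check (my addition):

def aes_pair_bps(bits_per_sample, sample_rate=48_000):
    return bits_per_sample * sample_rate * 2  # two channels per AES3 pair

print(aes_pair_bps(20))  # 1920000 -> the 1.92 Mbps pair carrying 8 channels + metadata
print(aes_pair_bps(16))  # 1536000 -> the 1.536 Mbps pair carrying 6 channels + metadata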

[Fig. 2: Dolby E frames aligned with video frames.]

In Fig. 2 you can see how Dolby E frames match video frames. Although only NTSC and PAL rates are shown, the system will also work with 23.976 and 24 fps material.

Notice the small gap between the Dolby E frames. This is called the Guard Band and is a measurable gap in the bitstream. Switching here allows a seamless splice of the audio signals. In fact, upon decode, the audio is actually crossfaded at the switch point, which is a remarkable feat for compressed audio.

As the goal for the Dolby E bit-rate reduction algorithm was to enable multiple encode/decode cycles, or concatenations, the audio quality is maintained for a minimum of 10 generations. This does not mean that at 11 generations the audio falls apart, but rather that the absence of any artifacts is no longer guaranteed. I was one of the listening test participants at Dolby during the development of the "E" algorithm. I spent two rather unpleasant afternoons listening to all kinds of audio samples both pre- and post-ten-generation Dolby E. I consider myself a critical listener, but those were some of the hardest listening tests I have ever participated in. This was not like comparing apples and oranges, more like comparing two perfect apples: one was ever so slightly different from the other. In a word: maddening!

WATCH OUT

Dolby E, like Dolby Digital (AC-3), is carried on a standard AES pair. It can be recorded, routed and switched just like standard PCM audio in an AES pair (see Fig. 3).

[Fig. 3: Dolby E recorded, routed and switched as a standard AES pair.]

However, there are some strict requirements. The path for the AES pair must be bit-for-bit accurate. This means that there can be no level changes, sample rate converters, channel swaps or error concealment in the path. Remember that although the Dolby E data is in an AES pair, it is not audio until it is eventually decoded. Any processing that causes changes in the data will destroy the information. These "gotchas" are hidden everywhere, especially sample rate converters, so be prepared to really evaluate your facility. An invaluable resource is the Dolby E partner program, run by Jeff Nelson at Dolby. You can find a bunch of very useful information at www.dolby.com/tech/dolbyE_prtnr_prgrm.html. Manufacturers and individual products that have been tested to pass Dolby E are listed there.

When baseband audio is not possible, Dolby E has become the de facto standard. Since it began shipping in September 1999, more than 1,000 encoders and decoders have been sold, and the system is the source for virtually all 5.1-channel Dolby Digital (AC-3) broadcasts here and abroad.

One last point. The question I was probably asked most often was: "Why is it called Dolby E?" Simple: "E" comes after "D." So, as Steve Lyman likes to say, "Dolby E is for Distribution and Dolby D (i.e., AC-3) is for Emission."




Cheers

geo __________________

ms georgia hilton mpse cas


Some mixing info from Danijel... cool stuff. Standard Mixing Levels for Movie Theater, DVD, TV, Radio and Games

Posted on February 28, 2010 at 12:56 AM

Some mixing info from Danijel... cool stuff. Standard Mixing Levels for Movie Theater, DVD, TV, Radio and Games

This post should serve as a little guide to the resources available on-line on the topic of audio levels in different media. It has been compiled due to the sheer frequency of questions on the topic, and thanks to the great number of answers in this forum!

Since audio in media is an ever-changing field, this post will be updated as I stumble upon new and interesting info or links. If you have insight into data that you think should be included or corrected, please PM me, or post it here.

For further, specific questions on mixing levels, you can post in this thread, or start a new one.

Movie theater

There are no guidelines in terms of average loudness, peak or any other level measurement. You achieve proper levels by properly calibrating your listening environment, so that it resembles the environment of the theater.

To calibrate your room, read this:

DUC: Room Calibration for Film and TV Post

(or, in a nutshell)

Then mix by ear. "If it sounds good, it is good" - Joe Meek.

Here's a useful discussion:

FILM & Broadcast - Levels

However, there is a maximum loudness level for theatrical trailers and commercials, which is measured with the Dolby Model 737 Soundtrack Loudness Meter.

Trailer loudness should not exceed 85 dB Leq(m), as regulated by TASA.

Commercial loudness should not exceed 82 dB Leq(m), as regulated by SAWA.

DVD

Here, the same rules apply as with the theatrical mix, except that the monitoring is different (near-field, no X-curve), the room is smaller, it is calibrated lower, AND there is the dialnorm parameter if your sound is AC3 encoded.

Read about dialnorm here:

Geo's sound post corner (section about Dialogue Level)

and here:

Home Theater Hi-Fi: Dialogue Normalization

You have to determine your target dialnorm BEFORE you start mixing, so you can adjust your listening level accordingly. Most DVDs are mixed for dialnorm -27dB (because that setting is the most compatible with the theatrical mix), but some use the full dynamic range (-31dB).

TV (everything BUT commercials)

Every broadcaster has its own specs. You have to get the specs of your target TV channel.

Detailed Specs

They can be very detailed, like the Discovery specs or the PBS specs (section 3). They will tell you exactly what your max peak level, average dialogue level and average overall level are, what measurement instrument is to be used, etc. The meter that the networks usually specify is the Dolby LM100.

Here are two threads about mixing against LM100:

Mixing with the Dolby LM100

Anyone have experience mixing while adhering to specs monitored by the Dolby LM 100?

This is great! A post by Mark Edmondson, Audio Post Production Supervisor at Discovery:

Dolby LM100 and Discovery deliverables - Digi User Conference

Basic Specs

The other extreme is on the minimalistic side, like the RTL or BBC specs, which give you only the maximum peak level and the reference level. This is what it's like in most of Europe, AFAIK (if you have some bogus specs to share, please post some links).

- REFERENCE LEVEL - it is used for equipment alignment, and doesn't have a direct relation to actual mixing levels.

In EBU countries it is -18dBFS and corresponds to electrical level of 0dBu (per EBU R68).

In SMPTE countries it is -20dBFS and corresponds to electrical level of +4dBu (per SMPTE RP155).

Sometimes referred to as: zero level, line-up level, 0 VU. (A small conversion sketch appears at the end of this section.)

Broadcast Audio Operating Levels for Sound Engineers

Reference Levels on Common Metering Scales

The Ins and Outs Of (Sound on Sound)

- MAXIMUM PEAK LEVEL - this is where you set your brickwall limiter on the master buss, or otherwise not go over it (although in some of the specs, short peaks of 3 to 5 dB over this value are allowed - go figure!).

What can cause confusion here is that the average dialogue level is not exactly specified.

In a perfect world, you would calibrate your listening environment to the ITU-R BS.775-1 standard (-20 dBFS pink noise at 79 dB SPL, C-weighted, slow) [or EBU 3276 and EBU 3276-S if you are in Europe] and then mix by ear. In that case you would get average dialogue levels at around -27 dBFS RMS.

However, this way your mix could turn out too quiet, as there's a loudness war in broadcasting, probably in part due to the loudness of commercials and the loudness war in music (e.g., PBS upped their dialnorm from -27 dBFS to -24 dBFS in 2007).

Average dialogue loudness that works for me (dramatic program, regional stations in the Balkan peninsula) is -22 dBFS RMS. To achieve that, I calibrate my monitoring to 74 dB, and thus reduce the headroom by 5 dB when compared to the ITU's 79 dB reference.

However, your best bet is to talk to someone who regularly delivers for the given broadcaster or in a given market, and ask him about his average dialogue level, or how his listening is calibrated. Chances are someone at this forum will be able to help, too.
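
Tying the reference-level figures above together, here is the promised tiny Python converter (my addition; the fixed offsets simply restate the EBU R68 and SMPTE RP155 alignments quoted earlier):

def dbfs_to_dbu(dbfs, standard="EBU"):
    # EBU R68: -18 dBFS == 0 dBu (offset +18); SMPTE RP155: -20 dBFS == +4 dBu (offset +24)
    offset = {"EBU": 18.0, "SMPTE": 24.0}[standard]
    return dbfs + offset

print(dbfs_to_dbu(-18, "EBU"))    # 0.0 dBu
print(dbfs_to_dbu(-20, "SMPTE"))  # 4.0 dBu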

Further Reading

More about broadcast delivery specs:

Geo's sound post corner

A great intro to broadcast audio:

Audio for Digital Television

All this and much more:

CAS Seminars - 'What Happened to My Mix?' - The Work Flow From Production Through Post Production - Cinema Audio Society

Dialnorm was to be implemented in broadcast too (as Dolby imagined), but it isn't, so far:

DTV Audio: Understanding Dialnorm

Food for thought on setting up variable monitoring level:

Bob Katz - Level Practices

TV commercials

Again, you have to get the specs of your target TV channel, but you will most likely only use the max peak value they provide. Below that, you can compress as much as you wish - it's a loudness war, similar to the one in popular music production.

There are efforts at regulating this problem:

US: H.R. 6209: Commercial Advertisement Loudness Mitigation Act (GovTrack.us)

UK: UK commercials for TV - perceived loudness issue - Digi User Conference

Radio

I can't say much about radio levels, so perhaps someone who is experienced with radio could chime in.

Here's a BBC technical specification, but I don't know how much it applies to different radio stations:

BBC Radio Resources // Programme Delivery // Glossary

Less is more (straight from the horse's mouth) - Bob Orban talks about what goes on with your mix in the radio station:

Radio Ready: The Truth

Games

Absence of standards:

Video Game Reference Level/Standards

THX: Establishing a Reference Playback Level for Video Games

A thread at SDO with some advice and some official information from Sony and Microsoft:

Niveau Sonore en jeux vidéo :: Sound Designers.Org (Babelfish English translation)

(Note: the Xbox360 document is in English)

cheers

geo __________________

ms georgia hilton mpse cas


Dolby Surround LtRt info

Posted on February 28, 2010 at 12:55 AM

Dolby Surround LtRt info. LtRt (Left total, Right total) vs. LoRo (Left only, Right only)... it started as a consumer version of Dolby's multichannel film format.

LtRt is an L C R S (left, center, right and mono surround) mix that is encoded down to a, and I'm going to use this very loosely, "stereo" mix. The "stereo" mix is really the four tracks encoded to 2 tracks. Then upon playback the LtRt can be heard in just plain mono or stereo, or decoded back to L C R S. LtRt, or Dolby Surround, was the first of Dolby's multichannel film formats (1980-something). The encoded file fits nicely on VHS or another 2-track device, and then you get surround off of these 2-track systems.

The encoding process relies heavily on phase to encode and decode the positional information within the mix.

You can use Dolby SEU4 or Dolby DP563 hardware devices to encode LtRt, or you can create an LtRt optical track starting with a Dolby DMU or LtRt files... You can use the Dolby Surround Tools in Pro Tools to create LtRt (both encode and decode)...

Remember that an LtRt track is just analog audio, vs. an AC3 or Dolby E encode, which is a digital bit stream. You can create an AC3 encode using only L C R S, very similar to an LtRt, for DVDs with A-Pack or another AC3 encoder, but again it will create a digital bit stream, not an analog audio file.

To mix... just mix in L C R S, paying close attention to any coupling or phase issues with similar material in multiple speakers... music, ambience, etc. Then downmix to LtRt and LoRo and mono, checking each for positioning of material. When mixing, you can use the Dolby Surround Tools to create 2 stereo aux tracks with the encoder on them, Master (L,R) and Slave (C,S), and then place the decoder (master, slave) after the encoder so that you can monitor through the encoder/decoder and simply hit a button to hear what's going on in LCRS, LR, mono... the details will become obvious when you look at the software plugins...

cheers

geo

PS: I think today the whole thing is called DOLBY ANALOG by Dolby... sorry, I haven't had my morning coffee, so this reply is a little disjointed.. ;)

cheers

geo

Some additional info from Neil Wilkes across the pond...

There is Dolby ProLogic and Dolby ProLogic II, which are utterly different systems despite the incremental naming protocol - DPL has dual mono rear channels and feeds the same information to both Ls and Rs, while DPL II uses proper separate Ls/Rs where available. In addition, all DPL II en/decoders are fully backwards compatible with the earlier systems.

SRS Circle Surround is also a Matrix Lt/Rt technology.

Lt/Rt is basically a generic term taken to mean any multichannel mix that is matrixed down into a stereo-compliant stream and will play back on a stereo system, but if the stream is fed through the correct decoder you will get the surround mix back out of it.

There are Pros & Cons to this.

Pros -

1 - you get to supply a single stream that will play back in stereo if no surround setup is present.

2 - It is widely used in TV/Broadcast.

3 - You can even take the Lt/Rt stream and further reduce it to AC3 (Dolby Digital), as long as you set the metadata flag (in a soft encoder) or push the switch/button (in a hardware encoder) for "Dolby Surround Encoded"

Cons -

1 - It is a compromise for both types. If you start with a fully discrete 5.1 mix, then it will *not* sound the same when decoded again. Neither will the stereo mix be as good as one that was specially mixed for stereo.

2 - It's a matrix system - not discrete - and as Georgia points out there are all manner of possible pitfalls involved.

3 - It makes delivering an M&E mix much more complex, as you would need to completely reset the encoder, resulting in a vastly different-sounding mix on a dubbed language version.

When I have to deliver an Lt/Rt stream I use a VST plugin en/decoder so that switching between the various modes is extremely easy (source/encoded/decoded).

A good place to start research on this is at the Dolby Labs technical library where you will find a lot of helpful material. __________________

ms georgia hilton mpse cas


Some Misc troubleshooting for APPLE and PROTOOLS

Posted on February 28, 2010 at 12:55 AM

Some misc troubleshooting for Apple and Pro Tools. Some troubleshooting tips for Apple-based Pro Tools systems that I've collected:

The Pro Tools Tech Support Folder, with tools and utilities to assist in diagnosing and troubleshooting, is here: Digidesign | Support | Tech Support Folder - Utilities and Troubleshooting Tools

Other housekeeping items:

Delete Pro Tools preferences.

Go to Users > “your user name” > Library > Preferences

Delete 'com.digidesign.protoolsLE.plist', 'DAE Prefs' (folder), 'DigiSetup.OSX' and 'Pro Tools preferences'.

Empty trash, then restart the computer.

You can also use the Pro Tools Preference and Database Helper to trash preferences and databases automatically; it provides a few other useful tools as well. (This application is provided by a 3rd party and is not tested or supported by Digidesign.)

Repair Permissions

-Quit Pro Tools and launch Apple's "Disk Utility" application, located in:

MacHD>Applications & Utilities.

-Select your boot drive (the whole drive, not the volume underneath the drive)

-Go to the 'First Aid' tab and select "Repair Disk Permissions"

Apple recommends doing this any time you install new software, update your OS or reinstall any software.

Databases and Volumes

This step can be useful when receiving random 'assertion' or 'neoaccess' errors, especially when recording or saving.

-Delete the "Digidesign Databases" folders on the first level of all mounted hard drives.

-Delete the "Volumes" folder:

For Pro Tools versions 7.3 and earlier, it's located in MacHD > Library > Application Support > Digidesign > Databases.

For Pro Tools 7.4 and higher, you will find it in MacHD > Library > Application Support > Digidesign > Databases > Unicode.

-Empty trash, then restart the computer.

Compatibility

First verify that you are using a supported Mac and your system meets all of the minimum requirements. You can find compatibility information (computers, hard drives, operating systems, requirements, etc.) in the Support section of the website:

Digidesign | Support

Choose from the product list for compatibility information, and click on the links in the compatibility section for each product for additional information.

- Verify that you have the minimum or suggested amount of RAM loaded in your system

For current Pro Tools systems, that is the following:

1 GB (1024 MB) or more is highly recommended. The best user experience has been reported using 2 GB or more.

- Verify that your version of the Mac OS is supported with your version of Pro Tools:

Pro Tools LE Version Compatibility Grid for Mac OS X

More Information:

Mac OS X 10.4 Requirements with Pro Tools 6 & 7

Mac OS X 10.2 & 10.3 Requirements with Pro Tools 6

- Make sure all drives are formatted Mac OS Extended (Journaled) with OS X's Disk Utility. Pro Tools cannot use UFS or HFS volumes. If the drive was originally formatted in OS 9 or with any other application, back up the drive and reinitialize it with Disk Utility. More information:

Hard Drive Requirements - Pro Tools LE for Mac OS X

IMPORTANT! You MUST use a secondary hard drive (not your main OS drive) for recording and playback of audio in Pro Tools. Recording or playback from the OS drive is known to be problematic and the cause of many different error types. If you are using your system drive and encountering errors, the first thing you should do is get a compatible drive.

Pro Tools supports recording and playback to a secondary drive that meets the following requirements:

7200 rpm or faster

9ms seek time or faster

Firewire drives must have the Oxford 911 (FW400 port), Oxford 912 (FW400 & FW800 ports) or Oxford 924 (FW800 ports) bridge chip. USB drives are not supported and are known to be problematic. Here are some useful links to determine if the drive you have, or are considering, has the proper chipset:

Chipsets used in Maxtor external drives

Chipsets used in Seagate external drives

Pro Tools FireWire Drive Requirements on Mac OS X

Pro Tools EIDE/ATA Hard Drive Requirements on Mac OS X

Pro Tools SATA Hard Drive Requirements on Mac OS X

Pro Tools SCSI Hard Drive Requirements on Mac OS X

- If you are running a supported ATTO card, make sure you install the ATTO Configuration Tool; this is a different application from Express. It can be found on the Pro Tools CD installer or in the downloads section. This application installs necessary extensions for all supported ATTO cards. More information:

Qualified SCSI HBA Cards — Pro Tools Systems for Mac OS X

General Setup

Pro Tools Technical Support, Registration and Setup Videos

These videos have information on the following topics:

Product Registration

Technical Support - Searching Answerbase and getting answers to your questions

Pro Tools System Compatibility - How to determine if your system is compatible

Mbox 2 family - Information on what's in the box

Authorizing Ignition Pack 2

- In System Preferences > Display, set your monitor resolution to a minimum of 1024 X 768.

- In System Preferences > Classic > Start/Stop tab, Uncheck Start Classic when you log in.

- In System Preferences > Date & Time, verify that the date is set correctly and that you are not using 24 hour time.

- If you are using a 2005 or newer PowerBook G4 with Sudden Motion Sensor, please disable SMS according to the Apple information located here:

Sudden Motion Sensor and video editing performance

Sudden Motion Sensor: Advanced Tips

Energy Saver

-Open System Preferences (Located in Apple Menu, Dock, or Applications Folder)

-Click on Energy Saver (in the Hardware section)

-Set the "Sleep Sliders" to Never (Computer Sleep) and Never (Display Sleep)

-Make sure the box next to "Put the hard disk(s) to sleep when possible" is unchecked

-Click on the options tab at the top

-If you have a "Processor Performance" drop down menu select "Highest"

AirPort

-Click on the AirPort Icon in the Menu Bar in the upper right corner of the screen (Left of the time)

-Select "Turn AirPort Off" from the menu

-If you don't see the AirPort Icon in the Menu Bar then:

-Open System Preferences (Located in Apple Menu, Dock, or Applications Folder)

-Click on "Network"

-Click on the "Show" drop down menu and select "AirPort" (if there isno AirPort option, then AirPort is not installed on your computer)

-Under the "AirPort" tab, towards the bottom of the window, check the box next to "Show AirPort status in menu bar"

-Follow the first two steps after that is done

Bluetooth

-Open System Preferences (Located in Apple Menu, Dock, or Applications Folder)

-Click on "Bluetooth" under the Hardware section

-Click on the "Settings" tab at the top of the screen

-Make sure "Bluetooth Power: Off" if not click the "Turn Bluetooth Off" Button

Firewire Networking

- Open System Prefs (Located in Apple Menu, Dock, or Applications Folder)

- Click on 'Network'

- In the drop-down box next to 'Show', select 'Network Port Configurations'

- Uncheck the box next to 'Built-in Firewire'

- Click on 'Apply Now' button in the lower right hand corner of the window.

Pace Drivers

You want to make sure you have the most current version of these drivers.

-Visit the PACE website at:

Welcome to PACE Anti-Piracy

-Below where it says "END USERS" on the right side of the page, there is a 'Download Drivers' drop down menu

-Click on the menu and select 'Mac OS X Extensions'

-This should start downloading 'macextsx.dmg'

-Once this file is downloaded, double click on 'macextsx.dmg'

-This should bring up a temporary disk called 'InterLok Extensions Installer'

-Open the 'InterLok Extensions Installer' disk, and double-click 'InterLok Extensions Install'

-Proceed through the installation

-Restart your computer when installation has finished

Troubleshooting tips

Please download and install the Pro Tools Tech Support Folder, which has tools and utilities to assist in diagnosing and troubleshooting potential problems you may encounter.

Then work through the same housekeeping steps described earlier in this post: delete the Pro Tools preferences, repair disk permissions, and delete the Digidesign Databases and Volumes folders.

Uninstall and reinstall Pro Tools

-Run the Pro Tools installer from your CD-ROM or web download.

-Click on the Custom Install menu and choose "Uninstall".

-Click continue and then choose the option for "Clean" uninstall

-Click OK when it reports that the uninstall was successful and then reinstall Pro Tools.

New User Account

Try creating a new user with admin privileges in System Preferences > Accounts.

-Click the Lock to authenticate

-Enter password

-Click the "+" (plus sign) under the list of users

-Type in "Pro Tools" for the name

-Enter a password and verify (optional)

-Check the box 'Allow user to administer this computer'

-Click 'Create Account'

Then log in to this new account and run Pro Tools:

-Go to the Apple menu and choose 'Log Out (Username)'

-Log in to the new Pro Tools account

-If there is no icon in the Dock, navigate to MacHD>Applications>Digidesign>Pro Tools

-Double-click on 'Pro Tools LE'

Disable Virus Protection

Disable any virus protection software. Check for evidence of anti-virus software in Library > StartupItems. Move these items out of this folder and restart the computer (do NOT remove the DigidesignLoader or PACESupport folders).

Other Troubleshooting

- Remove any unnecessary USB or Firewire devices or any other extraneous hardware and then restart the computer.

- In the case of possible hardware issues with your Mac, the next step is to test the Mac hardware. To do this, boot off of the Apple Hardware Test CD or Software Restore DVD (depending on Mac model) that shipped with your Mac. If you are booting off of the Software Restore DVD, hold the option key while the Mac starts up, then choose Apple Hardware Test. Once Apple Hardware Test loads, press Control+L. This puts Apple Hardware Test in loop mode. Click Extended Test, and let it go (preferably overnight). When you return, the hardware test will have found an error or it will be continuing. If it finds an error, it will give you an error code and you will be able to see which iteration of the loop it found the error on. If it has not found an error, it will still be looping and it will indicate which iteration of the loop is currently in progress. Either way, the only way to restart your Mac is to hold in the power key to shut it down first.

Login Items

Certain items that always start up when you log in to your computer can conflict with Pro Tools operations. The way to check what is starting up every time you log in is:

-Open System Prefs (Located in Apple Menu, Dock, or Applications Folder)

-Click "Accounts"

-Select your account from the list on the left hand side of the window

-Click on the "Login Items" tab on the right hand side of the window

-Go through the list and select each item and click the "-" (minus) button below the list to remove the item

StartupItems

Certain items that always start up when you turn on your computer can conflict with Pro Tools operations. To check what is starting up every time you start your computer, navigate to:

MacHD>Library>StartupItems

Any Digidesign Files/Folders located in this folder should be okay for use with Pro Tools. Common files/folders to have are:

Digidesign Loader

Digidesign Mbox 2

PACE Support

Any other files/folders should be backed up and removed for optimal Pro Tools operation

FileVault

FileVault is an Apple utility to help protect your files. When FileVault is turned "ON", it can conflict with Pro Tools and the installation of Pro Tools. Make sure FileVault is disabled by following these steps:

-Open System Prefs (Located in Apple Menu, Dock, or Applications Folder)

-Click "Security"

-In this window look where it says:

FileVault protection is (on/off) for this account

-If FileVault is on click the button to turn Off FileVault

Spotlight Indexing

If Spotlight indexing is running in the background, it can cause errors in Pro Tools. There are two methods to disable Spotlight indexing; both require administrator permissions:

Manually:

Start up the Terminal (/Applications/Utilities/)

Type the following:

cd /etc

sudo pico hostconfig (enter your password and press return)

An editor will open with the following entry: SPOTLIGHT=-YES-. Replace 'YES' with 'NO'.

Press ctrl-x, then 'Y', then press return. Close the Terminal and restart the computer.

Use Spotless:

You may also use Spotless, but it is a paid option ($12.95). Download Spotless at the Spotless information page. Installation and usage instructions are included with the download.

Only the indexing will be disabled; you will still be able to search with Spotlight after using this utility.

Dashboard

If you are having performance issues with Pro Tools, you might also try disabling the Dashboard in Tiger.

How to disable Dashboard

1) Open Terminal and type:

defaults write com.apple.dashboard mcx-disabled -boolean YES

2) once that is done you need to "restart" the dock by typing:

killall Dock

(make sure you get the capital 'D')

How to enable Dashboard

1)Open terminal and type:

defaults write com.apple.dashboard mcx-disabled -boolean NO

2)once that is done you need to "restart" the dock by typing:

killall Dock

(make sure you get the capital 'D')

Plug-In Compatibility

Make sure that all Plug-Ins are compatible with Pro Tools 7.

-Open MacHD>Library>Application Support>Digidesign>Plug-Ins

-Check the Version of each Plug-In

You can check the version by selecting the plug-in, going to the "File" menu and choosing 'Get Info' (or hitting Command-I), then looking at the version information under the 'General' section

-Digidesign Plug-Ins should be version 7.0 (Dynamics III version 6.9) or higher

-Check with third-party plug-in manufacturers to confirm that the plug-in version you have is compatible with Pro Tools 7, and here:

Pro Tools 7 Plug-In Compatibility

Linked from the Pro Tools Plug-Ins section of the website:

Pro Tools Plug-Ins

General NeoAccess or Assertion errors

If you receive one of these errors, usually after recording or when trying to save a session, it can usually be resolved by following the steps for trashing preference and database files listed above.

How to Remove Expired (Demo) Plug-Ins & Software:

- See Answerbase 19348

If you're experiencing noise from the Mbox 2, there are several things to try:

- Make sure you're using balanced cables on all balanced inputs and outputs. Remember that the Mbox 2 outputs are unbalanced, and should use unbalanced connections.

- Try using a power conditioner for all your electronic gear (Furman, Monster).

- Run your outputs directly to shielded monitors and not through a mixer to see if the noise goes away.

- Some noise complaints arise from noisy power supply units inside computers - try installing and running on another computer (or, if you're using a laptop, unplug the power supply and run off of batteries to see if the noise goes away).

- Unplug and turn off all unnecessary gear and run only the computer, the Mbox 2, and monitors.

- Listen through headphones.

- Make sure you're not running through a USB hub or other device - the Mbox 2 always needs to be plugged directly into the computer.

- Try all USB ports on the computer to see if the noise lessens.

- Trace your cable path for long cable runs, and isolate your cables from possible interference (other gear, fluorescent lighting, CRT monitors, etc.).

Lastly, if you feel a repair is necessary, please contact Digidesign Tech Support to set up a Return Authorization:

Digidesign | About Us | Contact Digidesign | USA & Canada Contact Information

Digi 002/002R users experiencing any combination of the following symptoms, and whose serial number ends in the letter A-F, should contact Digidesign Tech Support by phone for information on getting your unit serviced:

The unit keeps on clicking when powering up, or when launching Pro Tools.

"Unable to locate Digidesign Hardware" error message when launching Pro Tools.

Dots or blank scribble strips (002 only)

Firewire light in back is blinking or turned off completely

Sample Rate light is off or blinking

Unit won't power up, or only powers up intermittently

Only the Mute light is lit when powering up and you are unable to disable Mute __________________

ms georgia hilton mpse cas


CONVERSION of Audio for PAL / NTSC / FILM

Posted on February 28, 2010 at 12:54 AM

Some notes on CONVERSION of audio for PAL / NTSC / FILM. If the PAL version was a frame-to-frame transfer, and is thus running faster than the original film (25 fps vs. 24, or even PAL video at 25 and HD video at 23.98), the key is to first get your stems back up to film speed, which would involve an SRC using 47952 as the source sample rate. THEN, when it's back at film speed (48k), you could do the NTSC>PAL SRC, the 4.1% thing. Maybe? Try just the center channel using the fastest setting as a test, and if it works you can do the whole thing at "Best" quality. Of course, the pitch will be higher (about a 1/4 step) as well, in sync with the faster speed of the film. To get it in sync but at the original pitch you have to do a whole time-compression/pitch-shift thing, which is a whole different process.

25 to 23.976: divide those two numbers and you get 1.0427093760427... When truncated and expressed as a percentage, it becomes 104.27094%, which is the number that you feed to time-conversion software such as Prosoniq TimeFactory or Nuendo timestretch.

NTSC to PAL would be 23.976/25 = 0.95904 = 95.904%

NTSC to PAL SR = 48000(25/23.976) = 50050 rounded.

PAL to NTSC SR = 48000(23.976/25) = 46034 rounded.
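
A quick Python sketch (my addition) that reproduces all four numbers above from the exact frame-rate ratios:

NTSC, PAL = 23.976, 25.0

print(f"PAL -> NTSC stretch: {PAL / NTSC * 100:.5f}%")  # 104.27094%
print(f"NTSC -> PAL stretch: {NTSC / PAL * 100:.3f}%")  # 95.904%
print(f"NTSC -> PAL SR: {48000 * PAL / NTSC:.0f}")      # 50050
print(f"PAL -> NTSC SR: {48000 * NTSC / PAL:.0f}")      # 46034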

A time-stretch version is recommended, vs SRC conversion.

MPEX2 in Nuendo is apparently a good tool.

cheers

geo __________________

ms georgia hilton mpse cas


X-Curve History by Tomlinson Holman: A History of the X Curve

Posted on February 28, 2010 at 12:54 AM

X-Curve History by Tomlinson Holman: A History of the X Curve

The X curve celebrates nearly a quarter century of helping interchange in the industry.

In the history of multichannel sound, the standardization of the electroacoustic frequency response for monitoring film stands as one of the most significant developments. It was standardizing the monitor frequency response at the ear of listeners that provided for better interchangeability of program material, from studio to studio, studio to theater, and film to film. Work started on formal standardization of the monitor frequency response for large rooms for film in 1975 on both the national and international levels. The work resulted in the standards ANSI-SMPTE 202 in the U.S., the first edition of which was officially published in 1984, and ISO 2969 on the international level. Actually, the standardized response was in use for some years before the formal standards were adopted.

The X Curve: the measured electroacoustic frequency response presented to the ears of listeners in a dubbing stage or motion picture theater. The curve is to be measured under specific conditions, and is to be adjusted for room volume as specified in the standards referenced in the text.

The background behind this work began with Texas acousticians C. P. and C. R. Boner, who established in the 1960s that a "house curve" was a needed concept. They showed that a flat electroacoustic frequency response in a large room sounds too bright on well-balanced program material. This was subsequently found to be correct by other researchers, such as Robert Schulein and Henrik Staffeldt, as well. While the Boners' practice was for speech reinforcement systems that did not require theater-to-theater uniformity in the same way that film does, nonetheless the concept of a house curve traces back to them. This development paralleled the introduction of 1/3-octave room equalization, since there would be little point in establishing a house curve if sound systems could not be adjusted to it.

Ioan Allen of Dolby Laboratories realized that the idea of a house curve was a valuable one after applying Dolby A-type noise reduction to optical soundtracks and extending the bandwidth of the track. While we think of Dolby A as principally noise reduction of between 10 and 15 dB depending on frequency when used in a tape context, in the case of the application to optical soundtracks, most of the advantage in dynamic range was taken to extend the bandwidth. The ordinary mono Academy-type soundtrack had sufficiently low noise for its time only by imposition of a strong high-frequency roll-off that made the effective bandwidth of a soundtrack reproduced in theaters about 4 kHz. If such a track was reproduced with a wide-range monitor, the noise was excessive. By extending the high-frequency bandwidth of the monitor, and applying Dolby A NR to tame the noise, a very useful extension of bandwidth from about 4 to 12 kHz was achieved, while lowering the noise a fairly small amount.

Then came the question of the best frequency response for the monitor. In an English dubbing stage, Allen did an experiment with a nearfield, flat hi-fi loudspeaker vs. the farfield film monitor loudspeaker, a VitaVox. He adjusted the frequency response by equalizing the film monitor until the balance was similar, although the monitor loudspeakers of the day only extended to about 8 kHz before giving up the ghost. The electroacoustic response curve Allen found, measured with a microphone, was flat to 2 kHz, then down 1 dB per one-third octave, to -6 dB at 8 kHz, and falling beyond. This was named the X curve, for eXtended response, whereas the older Academy curve got dubbed the N curve, for Normal response (although one wouldn't consider it normal today).
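
That original target is easy to restate numerically: flat to 2 kHz, then down 1 dB per third octave (i.e., 3 dB per octave), reaching -6 dB at 8 kHz. A small Python sketch (my addition; it models only that original segment, not the later extended-band roll-off above 8 kHz):

import math

def x_curve_db(freq_hz):
    # Flat to 2 kHz, then -3 dB per octave above it
    if freq_hz <= 2000.0:
        return 0.0
    return -3.0 * math.log2(freq_hz / 2000.0)

for f in (1_000, 2_000, 4_000, 8_000):
    print(f, round(x_curve_db(f), 1))  # 8000 Hz -> -6.0 dB, as in the text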

When extended-range compression drivers and constant-directivity horns became available around 1980, the question became, "How should the X curve be applied to this new development?" The new systems had a full octave of high-frequency bandwidth over older systems, but delivered nearly the same output response across a range of angles, rather than concentrating the response on axis as frequency went up, as the older driver-horn combinations did.

One theory floated in the middle '70s was that the need for a house curve was based on an artifact of the method of measurement rather than a real need for sound to be rolled off at high frequencies in large spaces. This was because the quasi-steady-state pink noise stimulus measured by a real-time analyzer in a room is time blind, lumping the direct sound, reflections, and reverberation together indistinguishably. If the different soundfields had different responses, the pink noise stimulus plus RTA could not sort out the differences and would basically average all the responses. Since the microphone is in the farfield of the loudspeaker, where reverberation is dominant, the response with a collapsing-directivity horn vs. frequency could be expected to be rolled off at high frequencies, since the contribution of all the off-axis angles would dominate over the direct sound. Nevertheless, in this condition, the direct sound could be flat, and we might respond to the flat direct sound and ignore the later-arriving response as listeners.

If we then were to change to a constant-directivity horn, with its output more constant over all angles within its coverage, and the system is tuned to a "house curve," then it might be expected to sound duller than the older horns, at least on axis at a distance. That's because, under these conditions, both the direct sound and the reverberant sound would be rolled off and on the same curve. So one of the first experiments I did on this combination was to play conventionally mixed program material over constant-directivity horns equalized to the X curve to see if the sound was too dull. It was not; in fact, with the bandwidth extension from 8 to 16 kHz, it actually sounded somewhat brighter, but this was due to the extended compression driver response rather than to the equalization curve.

So what's going on here? This was later explained by Dr. Brian C. J. Moore, author of numerous refereed journal articles on psychoacoustics and the book An Introduction to the Psychology of Hearing. The rolled-off house curve has a good basis in psychoacoustics, because a soundfield originating at a distance is "expected" to be more rolled off than one originating nearby. It is a little like optical illusions in vision that show that, despite occupying the same area on the retina, pictures look bigger on a larger screen, even when a small screen is closer and takes up the same horizontal and vertical angles. As it turns out, both spectrum and level are affected by the perception of the size of the space you are in, and "getting it to match" perfectly from large to small room in physical sound pressure level and response does not result in sounding the same.

With the additional octave of high-frequency extended range of more modern drivers and horns came the need to calibrate the X curve to the highest audible frequencies. Later editions of the SMPTE and ISO standards show the roll-off to 8 kHz as originally standardized, but added roll-off from the extended curve in the bands above 8 kHz. Some users don't employ this additional roll-off, staying on the original X curve to 16 kHz, but in an experiment I did at USC, I found that following the letter of the standard was an improvement in high-frequency balance and interchangeability of program material. This was done in a very sensitive experiment, reported earlier in Surround Professional, that involved playing trailers in a large theater exactly as they sounded in the dubbing stage, with agreement from the people who had supervised their mixes that they sounded correct; this involved eight trailers mixed in a variety of studios. Both level and response standards had to be perfect to accomplish this, and just a 1 dB error over several octaves that crept in during setup was heard, and had to be corrected.

Another development of the X curve is how it should vary with room volume. Although a variation in the response with room volume was written into the original standard, further work shows that the response should be "hinged" at 2 kHz, and turned up at high frequencies in smaller rooms. Curves that extend the range out to higher frequencies before breaking away from flat do not seem to interchange as well.
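For readers who want to play with the idea, here's a minimal Python sketch of a hinged target of this general shape: flat up to a 2 kHz hinge, then falling at a fixed rate, with an optional tilt that "turns the top back up" for smaller rooms. The 3 dB/octave slope and the tilt mechanism are my illustrative assumptions, not values from the standard; use the published SMPTE/ISO tables for real alignment work.

import math

def house_curve_db(freq_hz, hinge_hz=2000.0, slope_db_per_octave=3.0, small_room_tilt_db=0.0):
    # Illustrative rolled-off target: flat below the hinge, then a fixed
    # dB/octave rolloff. small_room_tilt_db > 0 turns the top end back up,
    # as described above for smaller rooms. All parameters are assumed.
    if freq_hz <= hinge_hz:
        return 0.0
    octaves_above = math.log2(freq_hz / hinge_hz)
    return (-slope_db_per_octave + small_room_tilt_db) * octaves_above

for f in (1000, 2000, 4000, 8000, 16000):
    print(f, round(house_curve_db(f), 1))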

Today, the major factors affecting interchangeability no longer have to do with the target curve, since the X curve is very well accepted, but rather with how the curve is to be measured and adjusted electroacoustically. The standard calls for the items needed to make good measurements of quasi-steady-state noise: spatial averaging, temporal averaging, and the proper use of measurement microphones. The largest variations among different practitioners are in the use of microphones. The problem is that the soundfield seen by a microphone in a large room is a mixture of direct sound, early reflections, and reverberation. Standard 1/2-inch measurement microphones demonstrate very different high-frequency response when measured anechoically on axis and in a diffuse field; the differences are on the order of 6 dB in the top octave, and response in rooms is highly affected by them. Only by the use of small, low-diffraction microphones, such as 1/4-inch or smaller diaphragm mics, are the differences kept small.

The best usage of measurement microphones today is to calibrate small ones for grazing incidence across the diaphragm rather than perpendicular to it, because this way the microphone will demonstrate the most similar response for the direct sound (across the diaphragm) and reverberation (a diffuse field). One of the primary ways in which problems show up in this area is in the difference exhibited between sound originating from a more-or-less point-source screen channel vs. a surround array: 1/2-inch microphones make serious errors between these two because the soundfields generated under the two conditions are so different.

The X curve now has nearly a quarter century of use and has unquestionably helped interchange in the industry. Combined with level standards, and de facto industry standards such as speaker directivities, the whole film industry has benefited without a doubt. Problems linger in applying the standards uniformly due to different methods of measurement. Also, when heard over a modern flat loudspeaker in a small room, program material balanced on an X curve monitor sounds overly bright. That's because the original experiment that set the curve was made many years ago, without the frequency range available from today's components. This is not too important because, so long as everyone agrees to use the same curve, the response sounds the same to the mixer on the dubbing stage as to the audience member in any auditorium. Interchangeability of X curve material with home video can be handled with a simple re-equalization. The ATSC television standard recognizes the differences, sending a flag that tells receiving equipment whether the program material was balanced on an X curve monitor or on a flat monitor in a small room, and home equipment can take appropriate action to re-equalize the program accordingly.

Here's a PDF that goes into more detail...

cheers

geo


Frame Rate Info

Posted on February 28, 2010 at 12:53 AM

Frame rate is the rate at which video plays back frames. Black-and-white video ran at a true 30 frames per second (fps). When the color portion of the signal was added, video engineers were forced, for various technical reasons related to the physical circuits, to slow the rate down to 29.97 fps. This slight slowdown of video playback leads to distortions in the measurement of video vs. real time. Video is measured in indivisible units called frames. Real time is measured in hours, minutes, and seconds. Unfortunately, a second is not evenly divisible by 29.97 fps. Let's look at the mathematical relationships involved here:

A frame rate of 29.97 fps is 99.9% as fast as 30 fps. In other words, it is 0.1% (or one-thousandth) slower:

29.97 fps / 30 fps = .999 (or 99.9%)

1 - 0.999 = 0.001 (or 0.1%) slower

Conversely, a frame rate of 30 fps is 0.1% (or one-thousandth) faster than 29.97:

30 fps / 29.97 fps = 1.001 (or 100.1%)

(The actual value is 1.001001001..., with 001 repeating infinitely. 1.001 is enough precision for our calculation, given that the next significant digit is in the one-millionths place. No video program is long enough that the stray millionths of a second per hour will add up enough to throw the frame count off again.)

One hour's worth of "true 30 fps" video contains exactly 108,000 frames:

(30 frames/sec) * (3600 sec/hour) = 108,000 frames

However, if you play back 108,000 frames at 29.97 fps, it will take longer than 1 hour to play:

(108,000 frames) / (29.97 frames/sec) = 3,603.6 seconds = 1 hour and 3.6 seconds

(The actual value is 3,603.603603..., with 603 repeating infinitely. Again, 3,603.6 is sufficient for video timecode, given that the next loss of precision is three one-thousandths of a second per hour. You would have to make a video over 11 hours long before you were off again by a single frame.)

This is notated in timecode as 01:00:03:18. Thus, after an hour, playback runs 108 frames (3 seconds and 18 frames) long. Once again, we see the relationship of 108 frames out of 108,000, or one-thousandth.
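As a quick sanity check, that arithmetic is easy to reproduce. A minimal Python sketch (variable names are mine) that recomputes the figures above:

frames = 30 * 3600                  # one hour of "true 30 fps" video
seconds_at_ntsc = frames / 29.97    # played back at the NTSC rate
print(frames)                       # 108000
print(round(seconds_at_ntsc, 1))    # 3603.6 -> 1 hour and 3.6 seconds
surplus = seconds_at_ntsc - 3600
print(round(surplus * 29.97))       # 108 frames of extra running time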

Now let's apply that discrepancy to 1 minute of video. One minute, or 60 seconds, of 30 fps video contains 1800 frames. One-thousandth of that is 1.8. Therefore, by the end of 1 minute you are off by 1.8 frames.

Remember, however, that frames are indivisible; you cannot adjust by a fraction of a frame. You cannot adjust by 1.8 frames per minute, but you can adjust by 18 full frames per 10 minutes.

Because 18 frames cannot be spread evenly across 10 minutes, drop-frame timecode drops two frame numbers every minute; by the ninth minute, you have dropped all 18 frame numbers, and no numbers need to be dropped in the tenth minute. That is how drop-frame timecode works. When you use drop-frame timecode, Premiere 5.x renumbers the first two frames of every minute, except for every tenth minute.

NTSC and the drop-frame numbering system

There are three fundamentally important things to remember about NTSC and drop-frame timecode:

• NTSC video always runs at 29.97 frames/second.

• 29.97 video can be notated in either drop-frame or non-drop-frame format.

• Drop-frame timecode only drops numbers that refer to the frames, and not the actual frames.

We will examine the ramifications of these rules below.

NTSC video always runs at 29.97 frames/second

Unlike "true 30 fps" video, an hour's worth of NTSC video does not have108,000 frames in it. It has 99.9% as many frames, or 107,892 frames,as described earlier. Again, at the rate of 1.8 less per minute, anhour of NTSC video has 108 frames less than an hour of "true 30 fps"video:

108,000 * 99.9% = 107,892 frames in an hour of NTSC video

108,000 - 107,892 = 108 frames difference

If we were to sequentially number each of these frames using the SMPTE timecode format, the last frame of the video would be numbered 00:59:56:12:

108 frames = 00:00:03:18 in timecode format

01:00:00:00 - 00:00:03:18 = 00:59:56:12

That is 3 seconds and 18 frames shorter than an hour-long video. Drop-frame timecode is a SMPTE standard that maintains time accuracy by eliminating the fractional difference between the 29.97 fps frame rate and the 30 fps numbering.

When you use drop-frame timecode, Premiere 5.x adjusts the frame numbering so that an hour-long video has its last frame labeled 01:00:00:00.

Timecode measures time in Hours:Minutes:Seconds:Fractions-of-seconds called frames. However, in NTSC video, a frame is not an even fraction of a second! Thus, NTSC timecode is always subtly off from real time, by exactly 1.8 frames per minute. Drop-frame timecode numbering adjusts for this discrepancy by dropping two numbers in the numbering sequence, once every minute except for every tenth minute (see the mathematics of 29.97 video above for details). The numbers that are dropped are frames 00 and 01 of each minute; thus, drop-frame numbering across the minute boundary looks like this:

..., 00:00:59:27, 00:00:59:28, 00:00:59:29, 00:01:00:02, 00:01:00:03, ...

Note, however, that you are off by only 1.8 frames per minute. If you adjust by two full frames every minute, you are still off by a little. Let's go through a sequence of minutes (see the table below) to see how far off we are each minute, and where each adjustment leaves us. The upshot: 00:10:00:00 in drop-frame is the same as 00:10:00:00 in real time! Also, 10 minutes of NTSC video contains an exact number of frames (17,982 frames), so every tenth minute ends on an exact frame boundary. This is how we can get exactly 1 hour of video to read as exactly 1 hour of timecode.

29.97 Video can be notated in either drop-frame or non-drop-frame format

You can notate 29.97 video using drop-frame or non-drop-frame format. The difference between the two is that with drop-frame format the frame address is periodically adjusted (once every minute) so that it exactly matches real time at the 10-minute mark, while with non-drop-frame format the frame address is never adjusted and gets progressively further away from real time.

Minute   Start Position   Frames Lost            Drop-Frame Adjustment   Adjusted Position
  01     0.0 (even)       1.8 lost this minute   drop 2 to correct       0.2 ahead
  02     0.2 ahead        1.8 lost this minute   drop 2 to correct       0.4 ahead
  03     0.4 ahead        1.8 lost this minute   drop 2 to correct       0.6 ahead
  04     0.6 ahead        1.8 lost this minute   drop 2 to correct       0.8 ahead
  05     0.8 ahead        1.8 lost this minute   drop 2 to correct       1.0 ahead
  06     1.0 ahead        1.8 lost this minute   drop 2 to correct       1.2 ahead
  07     1.2 ahead        1.8 lost this minute   drop 2 to correct       1.4 ahead
  08     1.4 ahead        1.8 lost this minute   drop 2 to correct       1.6 ahead
  09     1.6 ahead        1.8 lost this minute   drop 2 to correct       1.8 ahead
  10     1.8 ahead        1.8 lost this minute   drop 0                  0.0 (even)

At the end of an hour-long video, the frame address for drop-frame format will be 01:00:00:00, while the frame address for non-drop-frame format will be 108 frames lower (remember, 108 frames out of 108,000, or 0.1%) at 00:59:56:12.

Conversely, at the point where the frame address for non-drop-frame format reads 01:00:00:00, the frame address for drop-frame format would be 01:00:03:18. Remember, this is longer than 1 hour of real time: 3.6 seconds out of 3600, or 0.1%.
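All of this bookkeeping is mechanical enough to script. Here's a minimal Python sketch, assuming only the counts derived above (17,982 frames per 10 minutes, two numbers dropped every minute except the tenth); the function name and layout are mine, and it reproduces the addresses discussed in this section:

def frames_to_dropframe(frame):
    # Convert an absolute 29.97 fps frame count into a drop-frame label.
    ten_min = 17982              # frames in 10 minutes of NTSC video
    one_min = 1798               # numbered frames in a "drop" minute (1800 - 2)
    d, m = divmod(frame, ten_min)
    if m > 1:
        # 18 numbers skipped per full 10-minute block, plus 2 per drop minute
        frame += 18 * d + 2 * ((m - 2) // one_min)
    else:
        frame += 18 * d
    ff = frame % 30
    ss = (frame // 30) % 60
    mm = (frame // 1800) % 60
    hh = frame // 108000
    return "%02d:%02d:%02d:%02d" % (hh, mm, ss, ff)

print(frames_to_dropframe(1799))    # 00:00:59:29
print(frames_to_dropframe(1800))    # 00:01:00:02 -- numbers 00 and 01 skipped
print(frames_to_dropframe(107892))  # 01:00:00:00 -- one hour of real time
print(frames_to_dropframe(108000))  # 01:00:03:18 -- 108,000th frame, as above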

Either numbering system could have been used for this theoretical video program. No matter which timecode format you use, the frame rate (29.97 fps) would be the same, and the total duration of the program, in real time, would be the same. The only difference is which address code gets stamped on which frame.

Drop-frame timecode only drops numbers that refer to the frames, and not the actual frames

This is nothing complicated; just remember to keep your terminology straight. Much analog video equipment uses drop-frame SMPTE timecode. Just imagine if analog video were to drop the actual frames! First, it would be visually disturbing to literally drop two frames every minute. Second, and more importantly, analog video equipment is governed by a certain amount of tape moving past the heads at a certain speed. Even if the equipment didn't display two frames, there is no way for the physical mechanism to make up for the lost time. This is not the same as with digital video, where a capture or playback device will drop frames because it simply can't keep up with the amount of data being streamed through it. Also, when we talk about being 1.8 frames ahead or behind, we are referring to the frame numbering scheme being ahead of or behind real time. It does not refer to the video track being ahead of or behind the audio track; audio that drifts away from its video is a different issue.

In summary, "dropped frames" refers to a playback or capture issue related to data rates and hardware capabilities; drop-frame timecode refers to a frame-numbering convention.

cheers

geo

ms georgia hilton mpse cas


Dolby Facility Requirements

Posted on February 28, 2010 at 12:52 AM

Dolby requirements post... too good to not put up here. Jacobfarron posted this, and it's a great rundown of some of the Dolby requirements:

Theatrical Sound Production Facility Requirements

1. Introduction

Dolby Production Services contracts services and encoding equipment to content owners and distributors wishing to release their theatrical program in a Dolby format. To ensure the highest quality and reliability, Dolby requires that these services take place in an audio production facility that meets the minimum requirements outlined below.

Facilities wishing to be considered for Dolby approval should contact Dolby Production Services.

2. Room Design

2.1. The room must be large enough to accommodate at least "Mid Field" monitoring. The minimum acceptable room dimensions are 20' long (screen to rear wall) by 13' wide with a 9' ceiling height. The optimum mix position is located 2/3 the length of the room away from the screen. In the minimum-sized 20'x13' room, this position is 13'-4" from the screen.

Refer to the chart below for acceptable room-dimensioning ratios. The shaded area represents acceptable conditions, whereas the straight line represents the optimum ratio.

3. Speakers

3.1. The screen speakers (Left, Center, and Right) must be the same make and model and must be behind a perforated projection screen. The screen speakers should be able to reproduce frequencies +/-3 dB from 40 Hz to 16 kHz without assistance (satellite systems utilizing a subwoofer to achieve full range are not acceptable for use as the screen speakers). The screen speakers must be able to produce "clean" sound pressure levels (peaking) up to 105 dBC SPL. The location of the Left and Right speakers should not subtend an angle greater than 45 degrees from the mix position. The speaker cabinets should also be mounted at the same vertical height, which should be mid-screen, for all screen channels.


3.2. There must be at least two (2) pairs of surround speakers mounted along the sidewalls to create an effective surround "array". Larger mixing rooms will have several surround pairs that cover listening areas in front of and behind the mix position. In smaller rooms, the first pair of surrounds must be slightly in front of the mix position. The second surround pair should be slightly behind the mix position.

Mix stages that are to be equipped for Dolby Digital Surround EX must also have at least one (1) pair of surround speakers mounted on the rear wall. A separate two-channel amplifier must also power the rear surround speakers to allow proper Surround EX monitoring.

For smaller mix rooms, surround speakers should never be directly "on axis" with the mix position. The surround speaker array must be able to produce "clean" sound pressure levels (peaking) up to 105 dB SPL.

3.3. There must be a separate subwoofer capable of producing an equalized response of 25 Hz to 120 Hz +/-3 dB. The subwoofer must also be able to produce "clean" sound pressure levels (peaking) up to 115 dBC SPL.

4. Equalization & Delay

4.1. The speaker system must be equalized to the ISO 2969 "X" curve. There must be 1/3-octave or parametric equalization inserted before the screen channel amplification to accomplish this equalization. For the surround channels, single-octave EQ is acceptable but not recommended.

4.2. If the distance from the mixer to the screen is more than 1.5 times the distance from the mixer to the surrounds, a suitable delay line should be inserted (pre-EQ) into each surround channel monitoring path. It is recommended that the delay line is patchable so that it can be inserted in the recording chain should a separate picture-and-track screening master be required.
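The point of that delay is to keep the surrounds from arriving at the mix position ahead of the screen channels. As a rough illustration only (the speed of sound and the optional margin are my assumptions, not Dolby figures), the needed delay can be estimated from the path-length difference:

SPEED_OF_SOUND = 343.0  # m/s at room temperature (assumed)

def surround_delay_ms(mixer_to_screen_m, mixer_to_surround_m, margin_ms=0.0):
    # Estimate the surround delay needed so screen sound arrives first.
    # margin_ms is an optional extra cushion (my assumption).
    path_difference = mixer_to_screen_m - mixer_to_surround_m
    return max(0.0, path_difference / SPEED_OF_SOUND * 1000.0 + margin_ms)

# Example: 9 m to the screen, 4 m to the nearest surround
print(round(surround_delay_ms(9.0, 4.0), 1))  # ~14.6 ms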

4.3. A parametric EQ of at least one but preferably more bands and a 120 Hz low-pass filter (pre-EQ) should be inserted in the LFE (subwoofer) monitor path. The LFE filter should be a 3rd-order Butterworth filter set with a crossover point at 120 Hz. Higher-order filters are acceptable, but lower-order filters can cause incorrect perception of the LFE channel. Also, it is recommended that the 120 Hz low-pass filter is patchable so that it can be inserted in the recording chain should a separate picture-and-track screening master be required.

5. Level

5.1. After proper equalization, the monitor levels need to be calibrated to 85 dBC SPL for each screen channel (L, C, R), 82 dBC for each surround channel, and +10 dB in-band gain (RTA method) referenced from the center channel for the subwoofer. A compliance check of EQ and levels by a Dolby engineer must be performed prior to commencement of each contracted mix.

5.2. The sound system must be designed to provide a minimum headroom specification of +20 dB above normal reference level for each channel.

5.3. The console monitor section must have a multi-channel assignable fader with at least six inputs and outputs. The monitor section must also provide a 'fixed reference level' mode for proper listening levels when mixing and print mastering.
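If you're scripting a quick check against the 5.1 targets in section 5.1, a trivial sketch like this can flag channels that are out of calibration (targets from the text above; the pass/fail tolerance is my assumption):

TARGETS_DBC = {"L": 85.0, "C": 85.0, "R": 85.0, "Ls": 82.0, "Rs": 82.0}

def check_levels(measured, tolerance_db=0.5):
    # Compare measured SPL (dBC) per channel to the targets above.
    # tolerance_db is an assumed window, not a Dolby figure.
    for ch, target in TARGETS_DBC.items():
        error = measured.get(ch, float("nan")) - target
        status = "OK" if abs(error) <= tolerance_db else "ADJUST"
        print(f"{ch}: target {target} dBC, error {error:+.1f} dB -> {status}")

check_levels({"L": 85.2, "C": 84.9, "R": 85.6, "Ls": 82.0, "Rs": 81.3})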


6. Equipment

6.1. Dolby will supply a Digital Mastering Unit (DMU) to approved 5.1 mixing studios IF the length of the film is 40 minutes or more. For short subjects or trailers, the film must be mastered to a digital multitrack format and transferred at an approved Dolby Digital transfer facility.

6.2. Studios that are approved to use the Dolby DMU mastering system must also meet certain business requirements (films per year) to be considered for a permanent installation. For studios not meeting these business requirements, Dolby supplies a traveling DMU on a "per-mix" basis.

6.3. The "Dolby Surround Tools" plug-in for Pro Tools cannot be used to create an Lt/Rt during the final film print master. This plug-in does not facilitate the proper metering and processing needed during mastering. Although the plug-in cannot be used for print mastering, it can be used for pre-mixing. Also, any analog tape machines being used for the mix should be equipped with Dolby SR noise reduction.

Note: Dolby Laboratories, Inc. Model CP650 is a recommended cinema processor for decoding many formats such as: Dolby Digital Film Soundtrack, SR/A Optical Film Soundtrack, and Digital 5.1 and Lt/Rt Studio Masters.

Dolby Multichannel Music Mixing PDF. I do not know if these numbers carry over into TV post sound. However, in the appendix it seems that Dolby has simply copied these numbers from AES, EBU, and ITU recommendations. These are also almost identical to THX recommendations I have seen.

http://i37.photobucket.com/albums/e5...ron/Table1.jpg

http://i37.photobucket.com/albums/e5...ron/Table2.jpg

3.1.2 Acoustics

Early Reflections

Any early reflections (within 15 ms) should be at least 10 dB below the level of the direct sound for all frequencies in the range 1 kHz to 8 kHz [6].

Reverberation Field

Reverberation time is frequency-dependent. The nominal value, Tm, is the average of the measured reverberation times in the 1/3-octave bands from 200 Hz to 4 kHz and should lie in the range 0.2 s < Tm < 0.4 s. Tm should increase with the size of the room; the formula in Table 3-2 is a guide.
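Computing Tm is just an arithmetic mean over those bands. A minimal sketch, assuming you already have per-band RT60 measurements in seconds (the measurement values here are made up for illustration):

# 1/3-octave band centers from 200 Hz to 4 kHz (14 bands)
BANDS_HZ = [200, 250, 315, 400, 500, 630, 800, 1000, 1250, 1600, 2000, 2500, 3150, 4000]

def nominal_rt(rt60_by_band):
    # Average the measured RT60 values over the 200 Hz - 4 kHz bands.
    values = [rt60_by_band[b] for b in BANDS_HZ]
    return sum(values) / len(values)

# Illustrative measurements only
measurements = {b: 0.30 for b in BANDS_HZ}
tm = nominal_rt(measurements)
print(round(tm, 2), "s;", "within range" if 0.2 < tm < 0.4 else "out of range")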

Reflective and Absorbent Surfaces

Large flat reflective surfaces should be avoided in the mixing environment. Placement of doors, control room windows, and equipment should be considered with speaker placement and aiming in mind. A combination of diffuse reflectors and absorptive materials should be used to achieve a smooth RT decay time within the specified range shown in Figure 3-1.

Again, it is recognized that these values may not be achievable in some installations, but it is recommended that the room be measured using a real-time analyzer and that architectural solutions (wall treatments, bass traps, room reorientation, and so on) be utilized first to achieve the recommended values. A mixture of diffuse reflective and absorbent surfaces, applied evenly to the whole room, aids in creating an acceptable reference listening condition [12].

Only after considerable effort has been made using architectural solutions to smooth the room response should equalizers be introduced into the monitor chain. See Section 4.2 for more information on room equalization.

Background Noise

The listening area should ideally achieve an NC rating of 10 or below with the equipment off, measured at the reference position. A studio with equipment such as video projectors, video monitors, and other ancillary equipment powered on should achieve a rating of NC 15 or below.

Any background noise should not be perceptibly impulsive, cyclical, or tonal in nature.

http://i37.photobucket.com/albums/e5...ron/Table3.jpg

NR 10 or NR 15 may be hard to realize in a practical manner in some installations, in which case every effort should be made to identify the loudest noise sources and correct as appropriate. The most common noise sources and possible remedies include:

• HVAC systems: Increase the surface area of the supply air vent. Separate or float all mechanical connections between high-velocity or rumbling motors and ducts and the listening room.

• Equipment: Contain computers and other equipment with loud fan noise in noise-attenuating, ventilated cabinets.

• Doors and windows: Make sure all the doors and windows are aligned properly and form a seal when closed. Adding a second window or door, with air space between it and the original, can reduce unwanted noise considerably.

Other sources of problem noise may need to be addressed. Every effort should be made to approach the recommended values shown in Figure 3-2.

Once again, THESE ARE NOT REQUIREMENTS FOR APPROVAL. They are the only recommendations I have found Dolby to make. Furthermore, they are general guidelines based on AES, EBU, and ITU recommendations.

If someone knows that these figures are not applicable for Cinema/TV, etc., please let me know.

JBL also lists acoustic considerations specifically for Cinema, based on Lucasfilm recommendations. http://jblpro.com/pub/cinema/cinedsgn.pdf

ms georgia hilton mpse cas


Blu-ray disc UDF 2.6 specs and stuff

Posted on February 28, 2010 at 12:52 AM

Blu-ray disc UDF 2.6 specs and stuff... here's some links and some Blu-ray information. I needed to do my homework since I realized how little I actually knew about Blu-ray...

Blu-ray Disc is a next-generation optical disc format that enables the ultimate high-def entertainment experience. Blu-ray Disc provides these key features and advantages:

Maximum picture resolution. Blu-ray Disc delivers full 1080p* video resolution to provide pristine picture quality.

Largest capacity available anywhere (25 GB single layer / 50 GB dual layer). Blu-ray Disc offers up to 5X the capacity of today's DVDs.

Best audio possible. Blu-ray Disc provides as many as 7.1 channels of native, uncompressed surround sound for crystal-clear audio entertainment.

Enhanced interactivity. Enjoy such capabilities as seamless menu navigation, exciting new bonus features, and network/Internet connectivity.

Broadest industry support from brands you trust. More than 90% of major Hollywood studios, virtually all leading consumer electronics companies, four of the top computer brands, the world's two largest music companies, PLAYSTATION® 3 and the leading gaming companies all support Blu-ray Disc.

The largest selection of high-def playback devices. Blu-ray Disc is supported by many of the leading consumer electronics and computing manufacturers. That means you can maximize the use of your HDTV and your home entertainment system with the widest selection of high-def playback devices, including players, recorders, computers, aftermarket drives and the PLAYSTATION® 3 game console.

Backward compatibility**. Blu-ray Disc players enable you to continue to view and enjoy your existing DVD libraries.

Disc robustness. Breakthroughs in hard-coating technologies enable Blu-ray Disc to offer the strongest resistance to scratches and fingerprints.

Public Specifications

http://www.blu-raydisc.com/assets/do...0307-13404.pdf

http://www.blu-raydisc.com/assets/do...sual-12838.pdf

http://www.blu-raydisc.com/assets/do...0305-12955.pdf

http://www.blu-raydisc.com/assets/do...rmat-12834.pdf

http://www.blu-raydisc.com/assets/do...gies-12835.pdf

Dolby Authoring and Mastering Solutions for High-Definition Disc Media, Blu-ray DVD, HD DVD, and DTV

Blu-ray.com - Blu-ray Movies, Players, Recorders, Media and Software

codecs for Blu-ray

Linear PCM (LPCM) - up to 8 channels of uncompressed audio. (mandatory)

Dolby Digital (DD) - format used for DVDs, 5.1-channel surround sound. (mandatory)

Dolby Digital Plus (DD+) - extension of Dolby Digital, 7.1-channel surround sound. (optional)

Dolby TrueHD - lossless encoding of up to 8 channels of audio. (optional)

DTS Digital Surround - format used for DVDs, 5.1-channel surround sound. (mandatory)

DTS-HD High Resolution Audio - extension of DTS, 7.1-channel surround sound. (optional)

DTS-HD Master Audio - lossless encoding of up to 8 channels of audio. (optional)

Blu-ray Disc for Movie Distribution

Introduction

Most people know about Blu-ray Disc's basic features: It can store 25 GB (single layer) or 50 GB (dual layer) on a single-sided disc, about 5 to 10 times the capacity of DVD. As a result, Blu-ray Disc supports the highest quality HD video available in the industry (up to 1920 x 1080 at 40 Mbit/sec). Large capacity means no compromise on video quality. Furthermore, a Blu-ray Disc has the same familiar size and look as DVD, allowing for compatibility with existing discs.

Compatibility across full family

Blu-ray Disc Rewritable (BD-RE) and related video specifications were first defined in 2003. The Blu-ray Disc ROM format for movie distribution, defined in 2004, is fully based on this specification. As a result, users can play home-recorded discs on all of their Blu-ray Disc equipment; there are no playback compatibility issues as with rewritable DVD formats. The Video Distribution format was widely expanded to offer content producers a full range of additional features unavailable in the home recording format.

Video highlights

The BD-ROM format for movie distribution supports three highly advanced video codecs, including MPEG-2, so an author can choose the most suitable one for a particular application. All codecs are industry standards, meaning easy integration with existing authoring tools and a choice from a wide range of encoding solutions. All consumer video resolutions are available:

- 1920 x 1080 HD (50i, 60i and 24p)

- 1280 x 720 HD (50p, 60p and 24p)

- 720 x 576/480 SD (50i or 60i)

Audio highlights

The BD-ROM format for movie distribution supports various advanced audio codecs, so an author can choose the most suitable for a particular application. The high capacity and data rate of Blu-ray Disc allow for extremely high quality audio in up to 8 channels to accompany High Definition video. Final audio specifications include DTS (core format), Dolby Digital AC-3 and LPCM (up to 96/24). Optionally, the format might support DTS++ and LPCM 192/24 7.1.

Exceed DVD feature set

The Blu-ray Disc movie distribution format was designed to offer all of the features and the familiar user interface model of DVD-Video. However, content producers have a wide array of new and extended features to include in a Blu-ray Disc title. For this, two profiles are available:

"HDMV" mode

Offers all features of DVD-Video and more. The authoring process is in line with DVD-Video creation.

"BD-J" mode

Offers unparalleled flexibility and features, because it is based onthe Java runtime environment. It allows for extensive interactiveapplications, and offers Internet connectivity.

"HDMV" mode

Introduction

"HDMV" mode was designed to offer exciting new features, while keepingthe authoring process as simple as possible. It streamlines theproduction of both Blu-ray Disc as well as DVD-Video titles, as theproduction process incorporates many identical phases. It offersimproved navigational and menu features, improved graphics andanimation, improved subtitling support and new features like browsableslideshows.

"Out-of-mux" reading

Unlike DVD-Video, the Blu-ray Disc format allows for data to be read from a different location on the disc while uninterruptedly decoding and playing back video. This allows the system to call up menus, overlay graphics, pictures, button sounds, etc. at user request without stopping playback. Some examples of possibilities will be explained later.

Graphic planes

Two individual, full HD resolution (1920x1080) graphics planes are available, on top of the HD video plane. One plane is assigned to video-related, frame-accurate graphics (like subtitles), and the other plane is assigned to interactive graphical elements, such as buttons or menus. For both planes, various wipes, fades and scroll effects are available, for example to present a menu.

Button graphics

Menu buttons can have three different states: Normal, Active and Selected. They support 256-color full-resolution graphics and animation, thereby greatly surpassing the capabilities of DVD-Video. Buttons can be called and removed during video playback; there is no need to return to a "menu screen".

Button sounds

Button sounds can be loaded into the memory of the Blu-ray Disc player. When a user highlights or selects a menu option, the sound can be played (such as a voice-over explaining the highlighted menu choice, or button clicks). These button sounds can even be mixed with the running audio from the movie or menu.

Multi-page menus

In DVD-Video, playback was interrupted each time a new menu screen was called. Due to Blu-ray Disc's ability to read data from the disc without interrupting the current audio/video stream, a menu can consist of several pages. Users will be able to browse through the menu pages or select different menu paths, while the audio and video keep playing in the background.

User-browsable slideshows

In DVD-Video, user-browsable slideshows were not possible with uninterrupted audio. As a result of Blu-ray Disc's ability to read data from the disc without interrupting the current audio/video stream, users can browse through various still pictures while the audio keeps playing. This applies not only to forward and backward selecting: a user can make different selections on which picture to view (or select from a screen presented with thumbnail images) while the audio keeps playing.

Subtitles

In DVD-Video, subtitles were stored in the audio/video stream, and therefore they had limitations on the number of languages and display styles. Again, it is due to Blu-ray Disc's ability to read data from the disc without interrupting the current audio/video stream that subtitles can be stored independently on the disc. A user may select different font styles, sizes and colors for the subtitles, or their location on screen, depending on the disc's offerings. Subtitles can be animated, scrolled or faded in and out.

"BD-J" mode

Introduction

"BD-J" mode was designed to offer the content provider almost unlimitedfunctionality when creating interactive titles. It is based on Java 2Micro Edition, so programmers will quickly be familiar with theprogramming environment for BD-J. Every Blu-ray Disc player will beequipped with a Java interpreter, so that it is capable of runningdiscs authored in BD-J mode.

Graphical User Interface

In BD-J mode, the author has complete freedom in designing the user interface. The interface is controllable by using standard navigational buttons on the remote. It can display up to 32-bit dynamically generated graphics (millions of colors), and it supports the display of pictures in standard file formats like JPEG, PNG, etc.

Playback control

The BD-J application can act as the sole interface to the disc's contents (thus replacing the player's on-screen controls, as with discs authored in HDMV mode). The BD-J environment offers all of the playback features of HDMV mode, including the selection of subtitles, trick play modes, angles, etc. Video can even be scaled dynamically, so that it can be played at a small size in the corner of a menu, and resume full screen when a selection is made.

Storage

A Blu-ray Disc player might contain a small amount of non-volatile system storage (flash memory). This system storage can be used to store game scores, bookmarks, favorites from a disc, training course results, etc. As a manufacturer's option, a Blu-ray Disc player may also be equipped with Local Storage (a hard disk, to allow large amounts of data like audio/video to be stored).

Internet connection

The BD-J system supports basic Internet protocols like TCP/IP and HTTP. The player may connect to the disc publisher's web site to unlock certain content on the disc (after certain conditions, like payment, are met), or dynamically display certain info (like theater playing schedules for a movie) on the screen. The disc's program may be extended with JPEG pictures or audio fragments downloaded from the Internet, or it can even stream full new audio/visual content to Local Storage.

Conclusion

The Blu-ray Disc format for Movie Distribution offers two flexible profiles for the creation of titles. It was designed to allow for the streamlined development of Blu-ray Disc (HD) and DVD-Video (SD) titles at the same time, if needed. Basic menus and navigation can be identical. However, it also offers many new functions that will benefit both the author (by offering flexible ways of creating disc content) and end users (by offering exciting new functionality compared to DVD-Video).

Blu-ray Disc for Video

What is the quality of Blu-ray Disc video?

Blu-ray Disc offers HDTV video quality that far surpasses any other medium or broadcast format available today. With High Definition video at a resolution of up to 1920x1080 and up to a 54 Mbit/sec bandwidth (roughly double that of a normal HDTV broadcast), no other format can match Blu-ray Disc's video quality. Furthermore, due to the overwhelming capacity of a Blu-ray Disc, no tight compression algorithms that may alter the picture quality are required, as with other formats that offer less recording space. Depending on the application, Blu-ray Disc also supports other video formats, including standard definition TV.

How much video will fit on a Blu-ray Disc?

As with DVD, this depends on the decisions on the usage of video bandwidth, the number of audio tracks and other criteria made by the author of the disc. Furthermore, the choice of the codec used also influences playback time. On average, a single-layer disc can hold a High Definition feature of 135 minutes using MPEG-2, with additional room for 2 hours of bonus material in standard definition quality. A double-layer disc even extends these numbers up to 3 hours in HD quality and 9 hours of SD bonus material. Using any of the advanced codecs, these numbers can be increased significantly.
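The back-of-the-envelope math behind those figures is just capacity divided by total data rate. A minimal Python sketch (the example bitrates are my illustrative assumptions, not spec values):

def runtime_minutes(capacity_gb, video_mbps, audio_mbps=0.0):
    # Running time for a given disc capacity and combined stream rate.
    # Uses decimal GB (10^9 bytes), as disc capacities are quoted that way.
    capacity_bits = capacity_gb * 1e9 * 8
    rate_bits_per_s = (video_mbps + audio_mbps) * 1e6
    return capacity_bits / rate_bits_per_s / 60

# Single layer (25 GB) with HD MPEG-2 at ~24 Mbit/s video + ~1 Mbit/s audio
print(round(runtime_minutes(25, 24, 1)))   # ~133 minutes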

Do I need a new (HD) TV to use Blu-ray Disc?

No. Pre-recorded Blu-ray Disc titles will play on any standard definition TV set, even if the video was encoded in High Definition. Likewise, a Blu-ray Disc recorder can also record standard definition video, for example from regular TV broadcasts or camcorders. A Blu-ray Disc can store around 10 hours of broadcast-quality standard definition video on a single-layer disc, or around 20 hours on a dual-layer disc.

How does Blu-ray Disc region coding work?

Contrary to DVD, the Blu-ray Disc region coding system divides the world into only 3 regions, called regions A, B and C. The usage of region coding on a Blu-ray Disc movie title is a publisher's option. A Blu-ray Disc player will play any movie title that does not have region coding applied, plus all titles of its corresponding region.

Region A:

- North America

- Central America

- South America

- Korea

- Japan

- South East Asia

Region B:

- Europe

- Middle East

- Africa

- Australia

- New Zealand

Region C:

- Russia

- India

- China

- Rest of World

Blu-ray.com - Blu-ray Recorders

Blu-ray.com - Blu-ray Drives

Blu-ray.com - Blu-ray Media

Blu-ray Disc

geeze, now I feel even dumber... I dug thru all this over the weekend and I'm going to have to do more studying and research... I've got a Blu-ray project coming in a month....

cheers

geo

ms georgia hilton mpse cas


HD Technical Requirements for High Definition Programming

Posted on February 28, 2010 at 12:51 AM

HD Technical Requirements for High Definition Programming

2. AUDIO SPECIFICATIONS

Audio program material shall be produced using current industry standards and accepted norms. The audio portion of the master and source audio and videotapes must be produced so that no noise, static, dropouts or extraneous distortion is recorded in the audio.

Program audio must reflect reference tone level. Audio levels must be consistent throughout the program.

2.1 Stereo (LPCM) Programs

2.1.1 Phasing

Stereo audio must be fully mono compatible, i.e. the audio channels must be in the proper phase. NOTE: Full mono compatibility means that when the left and right stereo channels are actively combined to mono there is no discernible change in audio level or fidelity.

Full mix and M&E audio tracks should be phase coherent (synchronized) and level matched to prevent difficulty editing between these tracks, as necessary.

2.1.2 Sound to Video Synchronization (Lip-synchronization)

The relative timing of sound to video should not exhibit any perceptible error. Sound should not lead or lag the vision by more than 10 ms. This synchronization must be achieved at the last point at which the program supplier, or their facility provider, has control of the signal.

2.1.3 Headroom

Transmission limiters clip at +8 dB. For broadcast stereo tracks, transient audio peaks must not exceed +8 dB above reference tone when measured on an audio meter using the "true-peak" ballistic set (0 ms rise, 200 ms fall). For 5.1 surround mixes, audio peaks may rise as high as +17 dBm (-3 dBFS). When mastering to a digital format and/or using an absolute-scale or peak meter, where "0" is at the top of the scale and reference tone is at -20 dBFS, broadcast stereo tracks should peak at no more than -12 dBFS.
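These two scales line up because reference tone sits at -20 dBFS, so a peak quoted as "dB above reference" maps directly onto dBFS. A minimal sketch of that bookkeeping (function names are mine):

REFERENCE_DBFS = -20.0  # reference tone level on the digital scale

def above_ref_to_dbfs(db_above_ref):
    # +8 dB over reference -> -12 dBFS; +17 dB over -> -3 dBFS
    return REFERENCE_DBFS + db_above_ref

def stereo_peak_ok(peak_dbfs):
    return peak_dbfs <= above_ref_to_dbfs(8.0)    # -12 dBFS ceiling

def surround_peak_ok(peak_dbfs):
    return peak_dbfs <= above_ref_to_dbfs(17.0)   # -3 dBFS ceiling

print(above_ref_to_dbfs(8.0), stereo_peak_ok(-11.5), surround_peak_ok(-3.0))
# -12.0 False True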

2.1.4 Audio compression:

Program audio should have good dynamic range, within the parameters listed above, but not be overly dynamic. While some compression may be needed to control the dynamic range of the program audio, excessive audio compression of the final mix should be avoided, as this reduces the perception of audio quality by the listener.

2.2 Surround Programs

2.2.1 Formats

5.1, 5.0 or LCRS mixes are permitted. The surround English full mix (regardless of configuration: 5.1, 5.0, etc.) shall be expressed as Dolby E on channels 3/4 of the HDCAM master.

2.2.2 Documentation

An Audio Program Data Sheet shall be delivered with the master tape. (See accompanying example)

2.2.3 20bit Dolby E (6 channel)

Valid metadata in the Dolby E stream for all contribution/transmission parameters is mandatory.

Timecode shall be present in the bit stream, reflecting picture master.

The Dolby E stream shall be formatted such that the program is in sync following Dolby E decoding using a DP572 or equivalent.

Note: One frame of audio delay is incurred for both Dolby E encoding and decoding. Program audio that is advanced two frames relative to picture prior to Dolby E encoding will therefore be advanced one frame as it is recorded to the HDCAM master. Following normal playback, the Dolby E decode cycle will delay one additional frame, bringing the program back into sync.
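The frame bookkeeping in that note is easy to trip over, so here's a trivial sketch of it (sign convention and names are mine: positive means audio is ahead of picture):

ENCODE_DELAY = 1  # frames of audio delay added by Dolby E encoding
DECODE_DELAY = 1  # frames of audio delay added by Dolby E decoding

def sync_after_decode(advance_before_encode):
    # Return the audio/picture offset, in frames, after encode + decode.
    on_tape = advance_before_encode - ENCODE_DELAY   # offset on the HDCAM master
    return on_tape - DECODE_DELAY                    # offset at the decoder output

print(sync_after_decode(2))  # 0 -> back in sync, as the note describes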

Maximum permissible audio peaks in a 5.1 or 5.0 soundtrack shall be -3 dBFS (+17 dBm).

Although the maximum dynamic range (max. peaks) for 5.1-channel mixes is considerably higher than for stereo-only LPCM mixes, it is understood that many 5.1 mixes will have a dynamics structure which more closely resembles a -10 dBFS stereo mix in order to facilitate the simple creation of an Lt/Rt fold-down mix.

Regardless of the gain structure of a 5.x-channel surround mix, it is crucial that the supplied DIALNORM value accurately reflects the Leq(A) of program dialogue.

2.2.4 Stereo English Full-mix (LPCM, conventional stereo digital)

This shall be recorded on channels 1 and 2 of the HDCAM master tape and may be used for screening and/or Standard Definition transmission.

This mix shall be derived from the 5.x-channel surround mix, i.e. a "fold-down" of the 5.1 or 5.0 mix to LCRS or Stereo (L/R).

This stereo mix should be expressed as Dolby Surround (Lt/Rt) whenever possible, or Lo/Ro if Dolby Surround encoding is not available. Tape labeling and slate information shall reflect the nature of channels 1 and 2 (either Lt/Rt or Lo/Ro). In either case (Dolby Stereo or not), the LPCM stereo full mix shall obey the conventional specifications for audio delivery (e.g. max peaks to 8 dB over ref.).

2.3 Channel Allocations

All HDCAM masters should have the following audio channel allocations:

(A) SURROUND PROGRAM

Channel 1 - Program left (Lt or Lo)

Channel 2 - Program right (Rt or Ro)

Channel 3 - Dolby E

Channel 4 - Dolby E

Address Track - SMPTE drop frame time code

(B) STEREO PROGRAM

Channel 1 - Program left (Lt or Lo)

Channel 2 - Program right (Rt or Ro)

Channel 3 - M&E left

Channel 4 - M&E right

*If the stereo program is Dolby Surround encoded (Lt/Rt), then any stereo M&E mix (where applicable) shall also be expressed as Lt/Rt.

Address Track - SMPTE drop frame time code

Dolby E Mastering Information

Date: ____    Program Start Time: ____
Program Title: ____    Episode # or Subtitle: ____
Producer: ____    Director: ____
Post Sound Facility: ____    Mix Engineer: ____

Dolby E Formatting

Sampling Frequency: [ ] 48 kHz (mandatory)
Bit Resolution: [ ] 16-bit  [ ] 20-bit  [ ] 24-bit
Time Code Format: [ ] 23.976  [ ] 25/50  [ ] 29.97/59.94 DF
Tape Format: [ ] HDCAM
Program Configuration: [ ] 5.1 + 2  [ ] 5.1  [ ] 4
Sync (frame offset): [ ] -1  [ ] 0  [ ] +1

Audio Service Configuration / Bitstream Information

Audio Coding Mode: [ ] 3/2  [ ] 3/1
Audio Production Information: [ ] YES  [ ] NO
Bitstream Mode: [ ] Complete Main  [ ] Main M&E
Original Bitstream: [ ] YES  [ ] NO
LFE Filter: [ ] Enabled  [ ] Disabled
Copyright: [ ] YES  [ ] NO
Mix Room Type: [ ] Large  [ ] Small
Mix Level: ____

Processing

Dialog Normalization: ____
RF Overmod Protection: [ ] Enabled  [ ] Disabled
Digital De-emphasis: [ ] Enabled  [ ] Disabled
DC Filter: [ ] Enabled  [ ] Disabled
Bandwidth Lowpass: [ ] Enabled  [ ] Disabled
LFE Lowpass Filter: [ ] Enabled  [ ] Disabled

Extended Bitstream Information

Preferred Stereo Downmix Mode: [ ] Not Indicated  [ ] Lt/Rt Preferred  [ ] Lo/Ro Preferred
Lt/Rt Center Downmix Level: ____
Lt/Rt Surround Downmix Level: ____
Lo/Ro Center Downmix Level: ____
Lo/Ro Surround Downmix Level: ____

Dynamic Range Control

Line Mode: [ ] None  [ ] Speech  [ ] Film Std.  [ ] Film Light  [ ] Music Std.  [ ] Music Light
RF Mode: [ ] None  [ ] Speech  [ ] Film Std.  [ ] Film Light  [ ] Music Std.  [ ] Music Light

Downmix Processing

Dolby Surround Mode: [ ] Not Indicated  [ ] Dolby Surround  [ ] Not Dolby Surround
Center Downmix Level: [ ] -3 dB  [ ] -4.5 dB  [ ] -6 dB
Surround Downmix Level: [ ] -3 dB  [ ] -6 dB  [ ] -999 dB
Surround 3 dB Attenuation: [ ] Enabled  [ ] Disabled
90-Degree Phase-Shift: [ ] Enabled  [ ] Disabled

Track Format Notes

Channel 1 Front Left

Channel 2 Front Right

Channel 3 Center

Channel 4 LFE (where applicable)

Channel 5 Surround Left

Channel 6 Surround Right

Channel 7

Channel 8

MUSIC AND EFFECTS TRACKS

TECHNICAL FORMAT

2.4 Accompanying Audio Multi-track Format (if required)

Accepted format is DA-88.

2.4.1 TRACK ALLOCATION

8 Track Digital Audio (DA-98 or DA-88)

Track 1 - English Fullmix Left (Lt if available)

Track 2 - English Fullmix Right (Rt if available)

Track 3 - undipped BG/FX Left (Lt if available)

Track 4 - undipped BG/FX Right (Rt if available)

Track 5 - undipped Music Left (Lt if available)

Track 6 - undipped Music Right (Rt if available)

Track 7 - Narration/VO dialogue

Track 8 - On-camera/Actuality dialogue

29.97 SMPTE Time Code on the Time Code Track to be synchronous with picture master(s).

2.4.2 Mix reference

Reference on all masters shall be -20 dBFS (or equivalent) and peak program level shall be restricted to 8 dB above reference (or -12 dBFS).

2.4.3 Timecode

On the DA-88 master, timecode shall match the picture masters (i.e. 01:00:00:00 program start, drop-frame).

2.4.4 Sample Rates

On the DA-88 master, the sampling rate shall be 48 kHz (16-bit) and noise shaping (where applicable) shall not be used on mix stems (tracks 3 through 8). If noise shaping is employed on the stereo full mix, this shall be noted on tape labels.

2.4.5 Audio Compression and Limiting

Mix stems shall NOT be dynamically buss-limited (i.e. stems are not restricted to the 8 dB-over-reference, -12 dBFS, peak limit). Stems summed at unity gain shall result in an unlimited version of the stereo full mix.

2.4.6 Reference Signals

Test tones for all multi-track masters shall be 1 kHz tone @ -20 dBFS.

cheers

geo

ms georgia hilton mpse cas


From Matt at Digidesign. Concerning Lost Files.

Posted on February 28, 2010 at 12:51 AM

From Matt at Digidesign, concerning lost files. Very cool! I feel your pain. I had a very similar problem recently. I backed up a session, then put it in the Trash and emptied it, and then realized I had audio files in that session folder that were from a different session. Since I had used Save Session Copy In to do the backup, those files weren't copied. So they were gone. Or so I thought.

First and most important thing: Do NOT use the hard disk for anything!! Don't even launch Pro Tools with the thing connected. Pro Tools may very well update the .ddb file on that drive, which could overwrite important data. Just leave it unconnected until you are ready to attempt recovery, which I will describe next.

You will need three programs (well, two actually, but a text editor makes life simpler):

1. Terminal (found in your Applications:Utilities folder)

2. HexEdit (freeware)

3. TextEdit (in your Apps folder) or, my favorite, TextEdit (by Haxial)

You will also need an extra hard disk or two, and a Pro Tools system capable of playing the same audio files you are trying to recover (i.e.: if you're trying to get back 96 kHz files, you need a 002 or HD rig.) For simplicity's sake, we're going to refer to your precious hard disk as the Source Disk and the extra hard disks as your Destination Disk(s).

Here's what you're going to do

1. Read raw data off of the hard disk, 1GB at a time, to a second hard disk.

2. Attach audio file wrapper data to each raw data file. Do this THREE times if you are trying to recover 24-bit audio data. Do it TWICE if you're recovering 16-bit data. I'll explain.

3. Import the new audio files into Pro Tools.

4. Manually comb through the audio files in Pro Tools to find the data you want. This part of the process may take quite a while and test your patience, but if the data is really important, you'll do it.

Some caveats (in no particular order)

1. This is going to take time. Quite a bit of time. If you need this done quickly, and you're getting paid, it might be better to send it to a data recovery company and pay big $$. That's up to you.

2. This is going to take up a lot of disk space. The bigger the hard disk you are trying to recover, the more space you'll eventually need. As a rule of thumb, figure on 4x the amount of disk space from the original drive. If you don't have this much extra space, you can do it in chunks, but some of the time-saving techniques won't be as useful.

3. You need to know what type of audio files you're looking for: file format, sample rate and bit depth. If you are trying to recover a whole bunch of different file types, or you don't know the types, you're in for a REALLY LONG HAUL. You'll basically have to repeat this entire procedure for each file type. If you're willing to do that, you must have some really important audio files to recover. Good luck.

4. Again (worth repeating), do NOT write to your affected disk drive until you have recovered your audio or given up. This is really important. Since you emptied the trash, the computer doesn't know where those audio files are and could write over them without warning if you save something to that disk.

How to do it

1. Connect your Source and Destination Disks to your computer.

2. Boot up.

3. Launch Terminal.

4. In Terminal type su. Enter your password. You're now in superuser mode. Be careful.

Now you need to find out some info via unix:

5. In Terminal type df. This will show the mount point of your drives. On the far right is the name of the disk, e.g. /Volumes/MyDiskName. On the far left is the "unix device mount point", e.g. /dev/disk1s9. 1 is the disk number (e.g. 1st disk found since booting). 9 is the partition number.

6. Find your disk in this list by the name and note down the mount point. This is how you will tell unix where to read raw data from.

Here goes the main recovery effort. I suggest you read ahead before you actually type this stuff.

7. In Terminal type dd if=/dev/rdisk1s9 of=/Volumes/MyDestinationDisk/01 count=2m

Okay, what's all this about?

dd This is a unix command that allows raw data reading/writing.

if= Tells dd command that this is the Input File.

/dev/rdisk1s9 This is your mount point that you found in step 5. The numbers will probably be different on your system. The r is added in to tell dd that you want to do a raw disk read. The r is very important!

of= Tells dd command that this is the Output File.

/Volumes/MyDestinationDisk/01 Here's where you need to put in your Destination Disk name. Keep the /Volumes/ part, since this is the same on all OS X systems, and then type the name of the hard disk volume you want to write to. Add a slash after the name and then type a file name for the new raw data file you're going to create. I use numbers, like 01, since this command will be executed many times. Each time I increase this file number by 1 to keep things organized. (Unix scripters will see opportunity for automation here, but doing it manually gives the same results.)

count=2m This tells dd how much data you want to read. dd reads in 512-byte blocks by default, so 2m = 2,097,152 blocks x 512 bytes = 1 GB.

8. If this all looks fine and dandy to you, press Enter. You won't see much going on, but if you look at your drive access lights you'll see that reads and writes are occurring.

Now you may be wondering how unix knows which 1GB of data to read off of your drive? Simple. It just reads the first 1GB of data. So how do you get it to read the 2nd GB? Or the 3rd? Or the 49th? Easy. Just add an additional command at the end of the dd line that says skip=2m. This tells dd to start reading raw data 1GB from the beginning of the disk. You'd use this to create your second raw data file. Your third file would need skip=4m added to it. The fourth will need skip=6m. Etc. A handy equation for this is of=N skip=(N*2-2)m. I.e.: your Output File number is N and the number in the skip part is N*2 - 2.

So your second raw data recovery will look like:

dd if=/dev/rdisk1s9 of=/Volumes/MyDestinationDisk/02 count=2m skip=2m

Your third raw data recovery will look like:

dd if=/dev/rdisk1s9 of=/Volumes/MyDestinationDisk/03 count=2m skip=4m

Your fourth raw data recovery will look like:

dd if=/dev/rdisk1s9 of=/Volumes/MyDestinationDisk/04 count=2m skip=6m

And so on, until you've recovered all your data or run out of disk space on the Destination Disk. If you have to stop in the middle, make note of where you left off and DISCONNECT your Source Disk.

How to speed up the raw data recovery process.

You may have noticed that each 1GB of data takes a long time to recover. I don't suggest you use your computer for any other tasks during this process, so you probably want to do this stuff late at night or whenever the computer is not in use. But you don't want to be there babysitting the thing all night long. There's a solution. You can type commands into unix one after another if you separate them with a semicolon. That's what I use TextEdit for. It's much easier to copy and paste a whole bunch of those commands into TextEdit, then scroll through and change the Output File names (the numbers) and the skip commands. Then you can copy and paste out of TextEdit back into Terminal. You'll end up with something like this:

dd if=/dev/rdisk1s9 of=/Volumes/MyDestinationDisk/01 count=2m; dd if=/dev/rdisk1s9 of=/Volumes/MyDestinationDisk/02 count=2m skip=2m; dd if=/dev/rdisk1s9 of=/Volumes/MyDestinationDisk/03 count=2m skip=4m; dd if=/dev/rdisk1s9 of=/Volumes/MyDestinationDisk/04 count=2m skip=6m; etc.

I find that resizing the TextEdit window so that one dd command fitsperfectly on a line also helps in making the edits correctly.

Be careful! This is a powerful technique for getting things done in a big batch, but if you make a small typing error you can do disastrous things to your hard disks. Or you can accidentally write each new raw data file over the previous one, which means you won't really know what you recovered and you'll have to start over. If you know how to write shell scripts in unix, I'm sure you'll be automating this whole process. I found it more satisfying to be able to see each and every command written out before I let them loose on my system.
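If you'd rather not hand-edit in TextEdit, a few lines of Python can print the whole dd chain for you to review before pasting it into Terminal. This is a sketch only; the device path and volume name are the placeholder examples from above, and nothing here touches a disk, but you must still verify every line before running anything:

# Print the dd command list for review; this script writes nothing to disk.
device = "/dev/rdisk1s9"              # your raw device from step 5
dest = "/Volumes/MyDestinationDisk"   # your Destination Disk
num_chunks = 80                       # one per GB of the Source Disk

for n in range(1, num_chunks + 1):
    skip = "" if n == 1 else f" skip={n * 2 - 2}m"
    print(f"dd if={device} of={dest}/{n:02d} count=2m{skip}")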

Also, don't forget that you need to do the superuser mode change each time you launch Terminal. And you should recheck the df command to make sure the mount points didn't change (they can change every time the system boots.)

You're one-third of the way there.

The next step is to add the audio file wrappers so you can import these new raw data files into Pro Tools, but first you need to create the wrapper files. There are two files to make a wrapper: one that comes first, then your raw data, then a second wrapper. I'll call these wrappers header and footer. You'll use the same footer for each raw data file, but you'll need to create three headers for each raw data file. Why? Pro Tools records 24-bit data. 24-bit data is broken up into three 8-bit chunks. Since you recovered raw data, you don't know what the right ordering of the three chunks is. The only way to make sure you can get all the audio data back is to create three versions of each raw data file, each version being a different ordering of the 8-bit chunks. So you'll need to create three header files, the second one being 1 byte longer than the first, and the third being 2 bytes longer than the first. When you attach the raw data to each of these, the byte ordering will start in each of the three possible places, allowing you to find all the audio data that may be in the raw files. Sounds a little complicated, but it's really not.

Creating header and footer files

1. Disconnect your precious Source Disk!!!

2. Launch Pro Tools.

3. Create a session in the same file format, sample rate and bit depth as the data you're trying to recover. If you don't know, you'll have to create headers and footers for each file type and do everything from here on over and over until you find your data. If this is your situation, think long and hard about how important those files are. Unless you have original recordings of space aliens, I suggest you forget trying to recover and re-record stuff. If you DID have original recordings of space aliens and you didn't make a backup copy immediately, then I suggest you sell your computer and get a job in a less intellectually demanding field. That said, mistakes happen. I made such a mistake. Luckily, I remembered the file type, sample rate, bit depth and even the number of channels of what I was looking for. So I pressed on.

4. Make a selection on your timeline that will create an audio file 1GB in size.

5. Press Option-Shift-3 to create a blank audio file with your selection.

6. You'll need to do this several times to zero in on the exact time needed to create a 1GB file. I'm not sure how exact you have to be, but I got it to the exact file size through trial and error. Come to think of it, it's probably better to have a little bigger file than exactly 1GB, because the headers and footers will take a little space. Oh well. My recovery worked well. I got back over 99% of my lost data (about 4 hours' worth) and that was good enough for me.

7. Quit Pro Tools once you have your 1GB audio file. Name that file something like 1GBAudioFile.

8. Now launch HexEdit.

9. Open 1GBAudioFile in HexEdit. A sea of numbers will fill the window. Don't flinch.

10. HexEdit shows the raw hex data of the open file on the left side and the "human readable" version on the right. Sometimes you can make out words on the right side, but for audio data it just looks like garbage. Scroll down through the data until you see a large amount of zeroes that goes on forever. This is the audio data. Since you created a blank audio file, it's just zeroes in there, so it's easy to see where it begins and ends.

11. Find the beginning of the audio data.

12. Copy all the data from just before the audio data all the way to the beginning of the file.

13. Create a new file in HexEdit and paste this data into it.

14. Save this file as "HeaderA".

15. In the HeaderA file, paste a single 00 byte at the end of the file.

16. Save this file as "HeaderB" (HeaderA stays as it was).

17. In the HeaderB file, paste another single 00 byte at the end of the file.

18. Save this file as "HeaderC". You now have three header files, each one byte longer than the previous one. Now for the footer.

19. Find the end of the audio data in the original 1GBAudioFile. It's where the zeroes end and a few non-zero numbers show up. There will be more zeroes after this (where the waveform drawing data is stored), so make sure you scroll far enough in to get the end of the actual audio data. This could take a while since you're looking at A LOT of data.

20. Copy all the data from the end of the audio data to the end of the file.

21. Create a new file and paste this data into it.

22. Name this file "Footer".

23. Quit HexEdit.

24. Eat something, take a break.

25. Copy the three Header and one Footer files into the same folder with your raw audio data files.

26. Launch Terminal.

27. Type cd followed by a space, then drag the folder containing your raw audio files into the Terminal window. It will autofill the pathname for you. Hit Enter. Now you are in the same directory (folder) as your raw audio files, so commands will look for files in this directory.

28. Type cat HeaderA 01 Footer > 01A; cat HeaderB 01 Footer > 01B; cat HeaderC 01 Footer > 01C and hit Enter. This is three commands in a row (note the semicolons). Each one says concatenate file HeaderA, then file 01, then file Footer, in that order, into a new file called 01A. Then do the same for B, then C. So you now have three copies of your raw data file with the audio file wrappers on them. They are now ready to be imported into Pro Tools.

29. Repeat step 28 for every raw data file you recovered. You can queue up a lot of these commands with semicolons just like you did with the dd command earlier, as long as you have the disk space (there's a small loop sketch after this list that can generate these commands for you). As you can now see, you'll have three more 1GB files on your drive for each 1GB raw data file, making 4GB of data. This is where the 4x space requirement comes from. If you're doing an 80GB drive, you'll eventually need 320GB of space to check every bit of space on the disk for audio (no pun intended.) Breaking it up into smaller chunks, like doing 10GB at a time, might be necessary if you don't have lots of free disk space. And remember, don't use your original precious hard disk!! Don't even think about writing to it until you are completely done recovering data from it.

30. If you are trying to recover 16-bit data, you only need to create two copies of each raw data file (and only two headers). Each 16-bit sample is just two 8-bit bytes, so there are only two possible alignments.
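As promised, here's a shell loop that generates all the step 28 commands for you (a sketch; I'm assuming your raw data files are named 01, 02, 03 and so on, so adjust the list to match your actual file names). Run it from the same directory as the files:

    for f in 01 02 03 04; do
      cat HeaderA "$f" Footer > "${f}A"   # alignment A
      cat HeaderB "$f" Footer > "${f}B"   # alignment B
      cat HeaderC "$f" Footer > "${f}C"   # alignment C
    done

Same caveat as above: each pass needs free space for three more copies of every file, so break the job into batches if you're tight on disk.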

The moment of truth

You have three 1GB files for each 1GB of space on your original disk. Now you need to put them in Pro Tools and start listening to them and looking at them to identify audio data and save it.

1. Launch Pro Tools.

2. Create a new session at the same sample rate, bit depth and file format as your recovered data.

3. Import files 01A, 01B and 01C to Audio Tracks.

4. Let the waveforms redraw. If they don't automatically redraw, select them in the Region Bin and force the redraw.

5. You are now looking at the first 1GB of data from your drive, presented with three different byte alignments. You will probably see large regions of what looks like solid full-code noise (it looks like a brick when zoomed out). Interspersed with these bricks you'll hopefully see what looks like regular audio.

6. Turn down your speakers!!! You're going to be hearing some unpleasant noises coming from your system. Do not use headphones.

7. Solo one of the tracks and start playing the parts that look like actual audio data. If it sounds right, then go ahead and select and delete the regions on the other two tracks that are in the same place as the good audio data. What you're deleting is the same data as the good stuff, just shifted by one or two bytes, so what should be the bottom 8 bits of each sample is now the top 8 bits, or something like that. Go ahead and listen to some of it. Depending on the original material, there may be some interesting stuff in there, especially if you like noise and distortion.

8. Repeat step 7 for each of the three tracks until you've gotten rid of all the stuff that you know is junk. You'll be left with mostly real audio and probably some unknown sections that don't appear to have audio on them in any of the tracks. Listen to the unknown stuff and determine if it's of any value. Most likely it's junk.

9. Repeat it all with each and every file you recovered and created.

10. You may have noticed while going through the audio that some stuff will get cut off at the end of a file and then pick up at the beginning of the next one. This is helpful so you can edit stuff back together. You'll also notice that you've lost all meaningful timing relationships between tracks that were recorded at the same time, or edited together later. You'll have to reassemble your session manually. This may sound nightmarish, but if this data is really that important to recover and you've come this far, you can probably recreate the session even better than you did originally. Good luck.

The End

cheers

geo

ms georgia hilton mpse cas


