Intercarrier Sound

Synchrodyne
(@synchrodyne)

In reading through the deliberations of the first NTSC, as recorded in “Television Standards and Practice” (1), it is fairly evident that at the time the NTSC 525-line system parameters were established, the receiver intercarrier sound technique had not been developed. For example, NTSC (I) independently chose the sense of vision modulation (positive or negative) and the type of sound modulation (AM or FM). As it happened, it chose a combination, negative with FM, which was particularly suited for use with intercarrier sound. That was happenstance, but had it not done so, possibly intercarrier sound would not have been developed.

So it is evident that intercarrier sound must have been developed sometime after the NTSC 525-line system had come into use. Yet most TV texts, although speaking very positively about the benefits of intercarrier sound for economical receiver design, do not refer to its origins, and sometimes create the impression that intercarrier was used from the start. Thus some sleuthing was required in an endeavour to find at least an outline history.

Kerkhof and Werner (2) turned out to be the starting point. Its bibliography (not footnoted in the text) includes two references, namely:

R.B. Dome: Carrier difference reception of T.V. sound. Electronics, Jan. 1947.

S.W. Seeley: Design factors for intercarrier television sound. Electronics, July 1948.

So it would seem that 1947 was the year in which intercarrier sound was first proposed, with development continuing into 1948.

Next, Wikipedia has a brief entry on Robert B. Dome at: http://en.wikipedia.org/wiki/Robert_B._Dome . This confirms his involvement with intercarrier sound and that he received a Liebmann award for his work.

Searching on “Dome” and “intercarrier” found this: http://www.smcelectronics.com/.../... . It is a .pdf of a Sams Photofact Electronic Reference Data, First Edition, July 1957.

The first chapter is “Television Inter-Carrier Sound Reception”. It does provide some history, and notes that the initial exposition of the technique was made in 1947 in papers by L.W. Parker and R.B. Dome. It then goes on to refer to the intercarrier system as the “Parker system”. That seems a little odd given Dome’s involvement and award, but I guess that definitive comment on this would require reading both papers, neither of which seems to be easily available.

Subsequently I have had access to “Television Engineering”, by Fink (3). This includes references to the previously mentioned Dome and Seeley papers, and also to the Parker paper, namely:

Parker, L.W., TV Intercarrier Sound System, Tele-Tech., 6:26 (October, 1947).

Fink makes the point that for intercarrier sound, the picture envelope should not be allowed to fall below 10 per cent of the peak amplitude, but that in 1951 there was no FCC regulation requiring transmitters to limit the downward peaks of modulation, despite there being at least 8 million intercarrier receivers then in operation. Originally, NTSC (I) was more concerned with ensuring that transmitters had an adequate modulation range, and set a white level of ≤15% as something of a stretch goal.
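
To see why a floor on downward modulation matters to an intercarrier receiver, here is a minimal numerical sketch (my own construction with made-up figures, not from Fink). The envelope detector sees the vision carrier A(t) plus a small sound carrier B offset by 4.5 MHz; the 4.5 MHz beat keeps a steady amplitude only while A(t) remains well above B, so spurious AM on the intercarrier grows rapidly as the white level is allowed to drop:

```python
import numpy as np

# Toy model, not from Fink: spurious AM on the 4.5 MHz intercarrier
# beat as the vision envelope is allowed to dip toward the (already
# attenuated) sound carrier level.

fs = 50e6                         # sample rate, Hz
t = np.arange(0, 1e-3, 1/fs)      # 1 ms of signal
f_beat = 4.5e6                    # vision-sound carrier spacing
B = 0.05                          # sound carrier, well down on vision peak

def intercarrier_am(white_level):
    # Vision envelope: 15 kHz square-ish "video" swinging between sync
    # peak (1.0) and the white level.
    A = np.where(np.sin(2*np.pi*15e3*t) > 0, 1.0, white_level)
    # Envelope-detector output for two carriers spaced by f_beat:
    env = np.sqrt(A**2 + B**2 + 2*A*B*np.cos(2*np.pi*f_beat*t))
    # Extract the 4.5 MHz beat envelope: complex mix, then crude lowpass.
    bb = env*np.exp(-2j*np.pi*f_beat*t)
    beat = 2*np.abs(np.convolve(bb, np.ones(1000)/1000, mode='same'))
    beat = beat[5000:-5000]                  # trim filter edge effects
    return (beat.max() - beat.min())/(beat.max() + beat.min())

for wl in (0.15, 0.10, 0.05, 0.02):
    print(f"white level {wl:.0%}: spurious AM on intercarrier ≈ "
          f"{intercarrier_am(wl):.1%}")
```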

According to Wireless World, August 1952 (“International Television Standards”, pp.296,297), the ≤15% limit was then still in place, whereas the CCIR 625-line (Gerber) system had a 10% minimum. I imagine that the latter existed from the start. Presumably the Russian OIRT 625-line system, which like the NTSC system was developed essentially before the intercarrier sound technique arrived, later had a minimum white level added, but it was not mentioned in that WW article. In WW, March 1959 (“European Television Stations”, p.109ff) there is a tabulation of (nearly) all world TV broadcast transmission systems. NTSC was still shown with white level as ≤15%, although there is a footnote to the effect that for the Japanese 525-line system, white level was 10 to 15%. NTSC video bandwidth was still shown as 4 MHz; I thought that it went from 4 to 4.2 MHz with the introduction of colour. OIRT was shown as having the same 10% minimum as CCIR. Whilst omission of the OIRT system in the 1952 article might be reflective of the geopolitics of the period, omission of the Argentinean 625-line system (later System N) in both articles does seem a bit neglectful. It dates back to 1951, and I suspect was used for regular broadcasting ahead of the CCIR system.

Fink comments that intercarrier sound was more tolerant of local oscillator drift than was the split sound technique. With the latter, in early post-WWII US receiver practice, AFC from the sound channel was sometimes used. The problem was that, particularly where AFC was used as AFT in place of a manual fine tuning control, the oscillator drift range was wider than the sound channel bandwidth; if, on switching to a given channel, the local oscillator was well away from the channel centre, the AFT would fail and possibly work in reverse. It seems that the combination of manual fine tuning and AFC to maintain tuning after setting, rather than to automate the tuning operation itself, was a step too far in terms of elaboration and cost. Thus AFT/AFC in US practice died out for a while, returning again in the later 1950s, I think, although this time taken from the vision IF.

The above and other references to intercarrier sound generally deal with its pros and cons, except that in respect of the latter, mention of the Nyquist slope effects is absent. This seemed not to become an issue until the advent of stereo and multichannel sound systems, when it was found desirable to move to the quasi-split sound technique (whose history is not so simple either, but that can be a separate topic).

To delve further into this topic would, I think, require access at least to the Dome and Parker papers. But if nothing else, we have a timeline for the advent of intercarrier technique, and confirmation of the inference that it was not considered by NTSC (I).

Cheers,

Steve

(1): “Television Standards and Practice”, Donald G. Fink, editor, National Television System Committee, 1943.

(2): “Television”, F. Kerkhof and W. Werner, Philips Technical Library, 1952.

(3): “Television Engineering”, Donald G. Fink, McGraw-Hill, Second Edition, 1952, LCC 51-12605.

 
Posted : 06/01/2013 1:22 am
Synchrodyne
(@synchrodyne)

Since posting, I also thought to check in Fink’s Television Engineering Handbook (4), in respect of white level for the NTSC system.

On page 2-9, there is a tabulation of the principal monochrome systems. For NTSC, white level is stated as 15% +0%, -15%, so still essentially ≤15%. Video bandwidth is shown as 4 MHz.

But on page 2-14 is a table of the Luminance-modulation Standards for the American Compatible Color System.

Peak white level is shown as 12.5% ±2.5%, which means a 10% minimum. Video bandwidth is shown as 4.2 MHz.

So it looks as if the FCC solved the problem, as it were, by incorporating the 10% minimum white level when the color standards were issued, rather than updating the original standard.

Later in the book, in discussing intercarrier sound, Fink also mentions the FCC 10% minimum for peak white, which implies that it was viewed as the standard.

Cheers,

Steve P.

(4): “Television Engineering Handbook”, Donald G. Fink, Editor-in-Chief, McGraw-Hill, 1957, LCC 55-11564

 
Posted : 06/01/2013 2:04 am
Synchrodyne
(@synchrodyne)

As mentioned above, I have also delved a little into the origins of and reasons for quasi-split sound (QSS). Of the various references on hand, Benson & Whitaker (1) turned out to be the best starting point. In particular it pointed to a 1981 August IEEE paper, “Intercarrier Buzz Phenomena Analysis and Cures”, by Fockens and Eilers of Zenith (2). This was quite detailed. One conclusion was that to minimize buzz when audio subcarriers were used, it was desirable inter alia to eliminate the Nyquist-slope-caused incidental phase modulation (IPM) of the vision carrier, which thus ruled out the conventional intercarrier technique. Various alternatives that avoided the problem were suggested, including quasi-split sound, also referred to as quasi-parallel sound. It was noted that there had been reports of dual-output SAW devices that could provide the required output.
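
For anyone who would like to see the mechanism rather than take it on trust, here is a small numerical sketch (my own model, not from the paper). In complex-baseband terms, an amplitude slope through the vision carrier converts part of the video envelope into a quadrature component, i.e. incidental phase modulation of the carrier, whereas a symmetric response produces none:

```python
import numpy as np

# Sketch of Nyquist-slope-induced IPM (my own model, not from the
# paper). The vision carrier with envelope s(t) = 1 + video is passed
# through a filter whose amplitude response is a straight line through
# the carrier: 0.5 at the carrier, 0 and 1 at -/+0.75 MHz.

fs = 50e6
t = np.arange(0, 1e-3, 1/fs)
s = 1.0 + 0.4*np.sin(2*np.pi*500e3*t)   # 500 kHz video tone, 40% AM

def through_flank(sig, slope):
    f = np.fft.fftfreq(len(sig), 1/fs)  # frequencies relative to carrier
    H = np.clip(0.5 + slope*f, 0.0, 1.0)
    return np.fft.ifft(np.fft.fft(sig)*H)

for slope in (0.0, 0.5/0.75e6):         # symmetric case vs Nyquist flank
    ipm = np.angle(through_flank(s, slope))
    print(f"slope {slope:.2e}/Hz: peak incidental PM = "
          f"{np.degrees(np.abs(ipm).max()):.1f} degrees")
```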

This puts 1981, or perhaps a little earlier, as the likely time at which QSS was first developed and named as such. This timing is supported by the Plessey Consumer Integrated Circuit Handbook, March 1981 edition (3). This included advance information on a new SAW filter, the SW185, which had two output ports, one being suitable for QSS. The SW185 was shown paired with the developmental XL1441 IC, which appeared to be a dual-channel device handling both the vision IF and the QSS IF, and providing both video and intercarrier outputs. In that sense it followed the pattern of the SL1440, about which more later. The SW185 was also shown paired with the TDA2541 (for vision IF) and the TDA440 (for QSS IF). The TDA440 was itself a vision IF IC, but given that its normal deployment would include provision of the intercarrier as well as video baseband, its use for the QSS IF channel would seem to have been reasonable. A logical inference is that at the time, there were no industry-standard dedicated QSS IF ICs available. The TDA440 was on Plessey’s list of TV IF ICs, although the TDA2540/TDA2541 was its preferred type. The choice of the TDA440, not the TDA2541, for the QSS IF channel might have been because it was better in this role, or perhaps because, unlike the TDA2541, it did not include the AFC function, which would have been redundant. Plessey used the term “parallel sound”, not QSS, in respect of the SW185. Perhaps the QSS term had not been coined in early 1981; Plessey had previously used “parallel sound” in connection with its SL1440 IC. Curiously, the Plessey Television IC Handbook, April 1981 edition, made no mention of the SW185 or the XL1441, although it did include the SL1440.

Plessey seemed to be a reasonable “place” to look because it seemed to have “majored” in SAW filters, which are a key part of QSS. But clearly Philips/Mullard was likely to have been an early mover with QSS circuits as well. The Mullard Technical Handbook Book 4, Part 2, “Bipolar ICs for Video Equipment”, January 1983 edition confirms this. Both the TDA2545 (QSS IF) and TDA2546 (QSS IF with 5.5 MHz demodulation) were listed as development items, and in both cases the pages were dated May 1981. The term QSS was used, and one suspects that it may actually have been coined by Philips/Mullard. Possibly then the latter also introduced the idea of using the reference in phase quadrature for intercarrier generation, something that did require a dedicated IC.

Benson & Whitaker also referred to an August 1982 IEEE paper, “New Color TV Receiver with Composite SAW IF Separating the Sound and Picture Signals”, by Yamada and Uematsu of Hitachi (4) as an example of QSS. But this was around a year later than the European developments. The paper focussed on the two-output-port SAW filter; the sample circuit used an HA11440 vision IF IC and a µPC1366C QSS IF IC. I think that the latter might have been a vision IF IC pressed into QSS service, similar to one of Plessey’s approaches, which suggests that at the time, the Japanese industry was yet to produce a dedicated QSS IC.

Another Japanese example was the Sony Profeel VTX100ES component TV tuner, which covered Systems B/G/H and Zweiton stereo sound. The schematic from March 1982 shows the use of separate SAW filters for the vision IF and QSS IF channels. The vision IF used a TA7607AP IC, whereas the QSS used a TDA2840. The latter was a Siemens IC that appears to have been functionally similar to the Philips/Mullard TDA2545, although I have not been able to trace any detailed information on it. I should imagine, though, that Siemens would have been an early mover in the QSS IC field. Sony’s use of a European IC for QSS would seem to confirm the lack of a Japanese supply source at the time. As an aside, I have seen it stated that the (Toshiba) TA7607AP was essentially the same as the Philips TDA2544, a probably lesser known variant in the TDA254x series that was the same as the TDA2540/1 except that the RF AGC output was configured for MOSFET VHF tuners, which were common in Japanese and North American practice. Possibly so, as that was an era when Japanese ICs were often developments of western counterparts; one thinks of the Hitachi and Toko parts that were based upon the CA3089 (FM IF subsystem) and MC1310 (PLL MPX decoder), as recorded in the Ambit catalogues of yore.

So the foregoing establishes 1981 as the year in which the QSS technique arrived in commercially usable form, and that it was an essentially European development. That was also, I think, the year in which the IRT Zweiton TV stereo sound broadcasts started in Germany, so it does suggest that the two are linked. The reasons for using QSS (or at least not using the conventional intercarrier technique) are well covered in the Fockens and Eilers paper, although I imagine that there were European papers that covered the same ground. TV stereo sound arrived a little later in the USA, and it would appear that where QSS was used, European practice was followed (5).

TV stereo sound started in 1978 in Japan, using the FM-FM system, which raises the question as to what was done there, prior to the arrival of QSS, to overcome the conventional intercarrier sound problems, which per the above-mentioned Hitachi paper were known to exist. I don’t have definitive information, but from what I can glean, one approach, by Matsushita, was to adopt PLL fully-synchronous vision demodulation with a very narrow reference bandwidth, the latter effectively disposing of the Nyquist slope problem, and so allowing the production of a “clean” intercarrier from the vision demodulator. In the US, National Semiconductor, who developed a set of ICs for the US BTSC TV multichannel sound system, advocated the same approach. Another approach was to revert to the split sound technique. Sony did this for the VTX-1000R US version of its Profeel TV tuner, which I imagine was based upon a Japanese domestic model. The sound carrier was separated ahead of the vision IF SAW filter, and then down-converted to 10.7 MHz for amplification and demodulation. The down-conversion frequency changer included a VCO that was “steered” by AFC from the FM demodulator. But the VTX-1000R also included a secondary, conventional intercarrier sound circuit for use when receiving cable and other transmissions significantly contaminated with incidental phase modulation (IPM) on both vision and sound carriers, this being self-cancelling with conventional intercarrier (and QSS) but not with split sound, nor with intercarrier derived from narrow-band PLL vision demodulation. I wonder if Japanese TV transmissions of the era were generally very “clean” in an IPM sense, such that split sound was usable without the intercarrier backup.

Now returning to Plessey’s endeavours, it seems to have had something of a false start in 1978 or thereabouts with its SW180 SAW filter and SL1440 IF IC. As I recall, these were covered in a Television magazine article of the time (which I no longer have, or at least cannot find) that also announced the SL1431/2 IF preamplifier ICs that were intended to drive SAW filters. The SW180 separated the vision and sound carriers into separate output ports, that for sound containing only the sound carrier. The SL1440 IF IC had two channels, one each for vision and sound, the former producing video baseband and the latter intercarrier. Each channel was described as having a wideband, switching demodulator, which I take to mean that they were of the quasi-synchronous type without tank circuits. The sound demodulator was switched by limited carrier from the vision channel. Thus the SL1440 did not solve the Nyquist slope problem. Possibly Plessey’s thinking was that avoiding co-processing of the vision and sound carriers in the later IF stages was a key desideratum. Anyway, it would appear that the SL1440 was not taken up by the setmakers in a significant way, and faded from the scene. The XL1441 mentioned above appears to have been a true QSS development of the SL1440, also with the apparent addition of a tank circuit on the vision side. Interestingly though, the SW180 type of SAW filter reappeared with the later advent of single-reference QSS systems, out-of-scope for this posting.

An interesting reference included in the Fockens and Eilers paper was another IEEE paper, “A System Approach to Synchronous Detection”, by Rzeszewski of Quasar, 1976 (6). This studied the errors, primarily in respect of vision demodulation, that arose from the wideband, Nyquist slope-affected reference in conventional quasi-synchronous demodulation. Although the primary recommendation was to use a narrow band reference channel as could be achieved with PLL fully synchronous demodulation, it was noted that the quasi-synchronous system could be improved by re-establishing the double sideband integrity of the vision carrier in the reference channel before amplitude limiting. And that was exactly what was done with QSS, but then only in respect of the sound channel, established practice being retained as far as the vision IF channel was concerned.

It is interesting to note that in the 1969 Motorola paper (7) on the original vision synchronous demodulation IC, the MC1330, it was observed that, given the separation of the signal and the switching channels, it was possible to operate on the latter by the use of selective circuits, and a diagram was shown with such selective circuits both ahead of and behind the limiter. Although not elaborated as such in the paper, the pre-limiter selective circuit provided the opportunity to neutralize the Nyquist slope in the reference channel. In practice, though, the MC1330 made provision only for a post-limiter selective circuit (the customary tank circuit), as the connection from the vision IF amplifier to the limiter was fully internal. In hindsight it looks to have been an opportunity missed, but the same arrangement was repeated on subsequent vision quasi-synchronous demodulation ICs, including the TBA440 et seq, the TCA270 and the TDA254x et seq.

Motorola’s next effort was the MC1331 in 1974 (8). This was essentially an MC1330 with detailed improvements, and with the addition of a separate multiplier for intercarrier generation, the idea being that the sound carrier was trapped out ahead of the vision demodulator, so reducing the sound-colour subcarrier beat in the vision channel. But both the vision demodulator and the sound carrier multiplier were still fed internally by the limited vision carrier without prior correction of the Nyquist slope, so there was no step forward in that direction.

I am not sure that Nyquist-slope-corrected quasi-synchronous vision demodulation, as advocated by the Quasar paper, was ever widely adopted for consumer TV receivers. Rather the more general use of PLL fully synchronous vision demodulation in the later analogue years addressed the issue.

Nevertheless, a professional example is provided by the BBC RC1/511 receiver, as shown in block diagram form in Wireless World July 1984, p.39, copy attached. Here there was a separate vision carrier reference channel that was appropriately tailored and used both for vision demodulation and generation of what is called a “true intercarrier”. Evidently a similar approach was used for the RC5M-503 UHF Rebroadcast receiver (see: http://www.bbceng.info/EDI%20Sheets/10548.pdf ), for which it was stated: “Selectivity and Nyquist shaping are obtained by the use of a specially developed surface acoustic wave (SAW) filter, and no IF alignment is required. A second SAW filter extracts the vision IF carrier, and the amplitude modulation is removed by low-phase-shift limiters; the resulting carrier is used to demodulate the vision signal. This "exalted-carrier", or "pseudo-synchronous", demodulation means that the effect of any incidental phase modulation (IPM) present on the input is greatly reduced.” Interestingly the RCM-503 replaced earlier rebroadcast receivers, RCM-501 and RCM-502, that had used fully synchronous vision demodulation.

It may also be noted that occasionally the Nyquist slope issue in respect of intercarrier generation had been addressed in the pre-IC days. The Bang & Olufsen 3000 series was an example. It had a discrete bipolar vision IF strip with a sidechain that branched off after the 2nd main IF stage, and led to the separate diode demodulator that provided the chroma and intercarrier signals. The overall frequency response of the sidechain was peaked on the vision carrier with the colour subcarrier about 12 dB down, and the sound carrier further down (9). I think that the main objective was to keep the vision carrier sufficiently higher than the subcarrier to minimize single sideband distortion, but the sound also benefitted because the intercarrier was formed with a “reference” that was devoid of the Nyquist slope.

Cheers,

Steve

(1) K.B. Benson, revised by J.C. Whitaker; “Television Engineering Handbook”, Revised Edition; McGraw-Hill, 1992; ISBN 0-07-004788-X
(2) “Intercarrier Buzz Phenomena Analysis and Cures”; P. Fockens & C.G. Eilers, Zenith Radio Corporation; IEEE Transactions on Consumer Electronics, Vol CE-27, No. 3, August 1981.
(3) Plessey Consumer Integrated Circuit Handbook, March 1981 may be found on-line at: http://archive.org/details/ConsumerInte ... itHandbook.
(4) J. Yamada & M. Uematsu, Hitachi Ltd; “New Color TV Receiver with Composite SAW IF Separating the Sound and Picture Signals”; IEEE Transactions on Consumer Electronics, Vol CE-28, No. 3, August 1982.
(5) See: S. Prentiss; “AM Stereo & TV Stereo New Sound Dimensions”; TAB Books, 1985; ISBN 0-8306-1932-1.
(6) T. Rzeszewski, Quasar Electronics Corporation; “A System Approach to Synchronous Detection”; IEEE Transactions on Consumer Electronics, May 1976.
(7) G. Lunn, Motorola, Inc.; “A Monolithic Wideband Synchronous Video Detector for Color TV”; IEEE Transactions on Broadcast and Television Receivers, Vol. BTR-15, No. 2, July 1969.
(8) M.E. Wilcox, Motorola, Inc.; “A New TV Video/Sound Detector IC”; IEEE Transactions on Broadcast and Television Receivers, Vol. BTR-20, Issue 1, 1974.
(9) As described in: G.J. King; Colour TV Servicing Manual Volume One; Newnes-Butterworths, 1973; ISBN 0 408 00089 9.

 
Posted : 03/08/2013 1:22 am
Synchrodyne
(@synchrodyne)

I have recently found a good early commentary on intercarrier sound in this book:

Sid Deutsch; Theory and Design of Television Receivers; McGraw-Hill, 1951.

There is a chapter dedicated to intercarrier sound, additional to the main chapter on the sound section (covering the split sound technique). It is the most detailed treatment of the topic that I have seen so far.

Deutsch discussed both the spurious amplitude modulation and the spurious frequency modulation (although these days it would more likely be referred to as phase modulation) that are impressed upon the intercarrier by the vision carrier. The spurious frequency modulation was said to be unavoidable, but it could be reduced by reducing the slope of the vision IF curve in the vicinity of the vision carrier, i.e. the Nyquist slope.

Additionally, although the spurious amplitude modulation could be removed by limiting, it was observed that a ratio discriminator was not effective against this kind of amplitude modulation. Where limiter action and circuit economy were desired, the 6BN6 valve was suggested, as it provided a combination of limiting, discrimination and AF amplification. (In the main sound chapter, it is explained why the ratio discriminator will suppress noise spikes but not in-band AM.)

It would appear that many subsequent treatments of intercarrier sound, at least in the mono sound era, mentioned the spurious AM, and the fact that it could be removed by limiting, but neglected to mention the spurious FM.

Elsewhere in the book the (quadrature) distortion caused by vestigial sideband operation was also noted. Subsequent treatments in the pre-colour era often seem to have missed this, instead focussing on the easily-corrected amplitude distortion.

Cheers,

Steve

 
Posted : 22/10/2013 4:04 am
colly0410
(@colly0410)

As the French in their SECAM-L system used positive modulation & AM sound, I presume they couldn't use the intercarrier method; did they use a separate sound IF channel up to the detector? Also I can remember that 405 TVs nearly always had a bit of vision-on-sound buzz or sound-on-vision. When I saw SECAM-L pictures in France they were perfect, no buzz on sound & no picture disturbance; is it the SAW filters that made the difference?

 
Posted : 16/04/2014 7:12 pm
Anonymous
(@anonymous)

Later sets in the SAW era did use intercarrier with AM sound, on some ICs with synchronous video demodulators instead of merely a diode.

 
Posted : 16/04/2014 7:26 pm
Terrykc
(@terrykc)

... Also I can remember that 405 TVs nearly always had a bit of vision-on-sound buzz or sound-on-vision ...

Not with correctly adjusted receivers, you wouldn't ...

I wouldn't guarantee that all sets were perfect but most performed well - assuming the customer knew how to carry out the fine tuning correctly.

If you lived in a particularly high signal level area at the time, mal (or non) adjustment of the local/distant setting by the dealer could be a contributory factor leading to overloading ...

When all else fails, read the instructions

 
Posted : 17/04/2014 12:38 pm
colly0410
(@colly0410)

The telly we had (second-hand, clapped-out Thorn 850 I think): you'd turn the fine tuning one way & the sound would buzz; turn it the other way & the picture would jump in time with the sound. Never seemed to be able to get it spot on. Loved it when Mum bought a Sony 1800 colour telly, the sort that converted PAL to NTSC; brilliant picture once you'd got the hue control right. Sorry mods if I've drifted off topic, I'll repent. :)

 
Posted : 17/04/2014 8:43 pm
Terrykc
(@terrykc)

I noticed that most double-decker Thorns that passed through our workshop tended to have poor vision on sound rejection on Channel 1.

Somebody once told me it was because Thorn designed them with the rejectors directly over the mains dropper - which they were - and the constant heat caused some change to occur which reduced their efficiency.

As the rejector alignment was spot on on every set I checked, I didn't really buy into that idea but I have to admit that our Channel 1 Sound/Vision ratio was somewhat high (Vision 2mV, Sound 200µV) at 20 dB, so that could have explained it to a degree ...

I don't recall any other makes having a problem with it, though ...

When all else fails, read the instructions

 
Posted : 17/04/2014 9:19 pm
Synchrodyne
(@synchrodyne)

The Motorola MC44302A was an example of a late analogue-era IC that included intercarrier sound for AM as well as FM. It used PLL fully synchronous vision demodulation, which provided a very “clean” reference signal, essentially free of any sidebands, something that would not be achieved with conventional quasi-synchronous demodulation, even in the improved QSS form. A quadrature (90-degree-shifted) version of the reference was used to generate the intercarrier, to further reduce the possibility of getting any vision interference on to the sound carrier. One way of looking at it is that there was a second conversion for the sound carrier, using a very pure second oscillator signal that was phase-locked to the vision carrier.
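
That way of looking at it is easy to caricature numerically (my own toy model, not Motorola's circuit): form the intercarrier once against the raw, modulated vision carrier and once against a clean phase-locked reference, and compare the rubbish transferred onto the difference product:

```python
import numpy as np

# Caricature of the clean-reference idea (my own toy, not Motorola's
# circuit). Mix the sound carrier against (a) the raw AM/PM-laden
# vision carrier, (b) an ideal oscillator phase-locked to it.

fs = 20e6
t = np.arange(0, 1e-3, 1/fs)
video = 0.4*np.sin(2*np.pi*100e3*t)            # "video" AM on vision
ipm = 0.08*np.sin(2*np.pi*100e3*t)             # incidental PM, 0.08 rad
vision = (1 + video)*np.exp(1j*ipm)            # vision carrier (baseband)
sound = 0.1*np.exp(1j*2*np.pi*4.5e6*t)         # sound carrier, +4.5 MHz
clean_ref = np.ones_like(t) + 0j               # ideal locked reference

for name, ref in (("raw vision carrier ", vision),
                  ("PLL clean reference", clean_ref)):
    ic = sound*np.conj(ref)                    # difference (intercarrier)
    bb = ic*np.exp(-2j*np.pi*4.5e6*t)          # look at the beat itself
    env, ph = np.abs(bb), np.angle(bb)
    am = (env.max() - env.min())/(env.max() + env.min())
    print(f"{name}: spurious AM = {am:5.1%}, "
          f"spurious PM = {np.degrees(np.abs(ph).max()):.1f} deg")
```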

I must admit that I was a bit surprised when I first found information about this IC, as the conventional wisdom was that intercarrier sound was not doable with AM systems. Perhaps even more surprisingly, the vision and sound carriers were amplified together in the IF part of this IC, so that section must have been remarkably linear.

Otherwise System L receivers used split sound. Back in the days of discrete device IFs with distributed selectivity, I imagine that the sound was split off either after the tuner or after a 1st common IF stage. Once IC vision IFs arrived, initially preceded by block L-C filters, it would seem likely that the sound was split off either ahead of or in the block filter. Early SAWFs for System L were vision only, with a deep notch at the sound carrier, which meant that the sound split-off point was ahead of the SAWF. But fairly early on in the SAWF era, types with two output ports, one for vision and one for sound, were available. I think that these may have preceded the QSS-type SAWFs, with Plessey having been an early mover. Specific ICs, usually with quasi-synchronous demodulation, were developed for the AM sound channel, an example being the TDA2543 in the Philips TDA254x series. And there is no reason why the familiar MC1350 + MC1330 combination, from the dawn of the TV IC age, could not have been used. There were also ICs that handled both AM sound and intercarrier FM sound, for use in multistandard receivers.

ICs such as the MC44302A also provided a clean FM sound intercarrier without the need to resort to QSS techniques. In fact, when stereo and multichannel TV sound was new, the use of PLL fully synchronous vision demodulator ICs was advocated by some as the way to obtain clean sound. National Semiconductor was an example, with its LM1821/2/3 ICs. But QSS seemed to have become the dominant approach.

On cross-modulation between the vision and sound carriers, I imagine that some bipolar tuners could have been suspect, as they often had relatively low input signal thresholds before serious non-linearity set in. The VHF tuner in the Philips K9 (and in some later models) was an example, unable to deal with the signal levels that were easily handled by a valve turret tuner. The cross-modulation problem was a major reason why the American TV manufacturers migrated to MOSFET-based VHF tuners as soon as the technology was available circa 1968-69; these could match or better valve tuner capability in terms of signal handling.

Cheers,

Steve

 
Posted : 25/04/2014 6:01 am
colly0410
(@colly0410)

Interesting that you can use the intercarrier method for AM sound. If you built a modern 405-line set you could use the intercarrier sound method; mind you, you could probably use ICs & SAW filters & other improvements not available in the 50's & 60's as well. Nicam sound on 405, digital 405..

 
Posted : 30/04/2014 4:01 pm
Anonymous
(@anonymous)

You could have built a synchronous demodulator in the 1930s, and more cheaply in 1948, using valves; it was just expensive, and took a lot of space, to do it well. The "homodyne" (1932, later called the synchrodyne) is a simplified synchronous demodulator, except that they applied it at RF at the time rather than using it as the detector in a superheterodyne.

Jon writes about it http://www.thevalvepage.com/radtech/syn ... ction7.htm

Essentially you need an oscillator with voltage-controlled tuning (there are various ways to do that with a valve) and at a minimum two mixers (i.e. a pair of pentodes or hexodes). A fully balanced mixer can be implemented using two transformers and a ring of four diodes* (two sets required), or using a beam switching valve with two anodes. One mixer or multiplier locks the oscillator to the carrier, and the other mixer is fed with the 90-degree-shifted oscillator and gives the demodulated AM even if there is only one sideband. A variation used is to have the oscillator at 4x the IF carrier frequency and divide it twice to get the 90-degree phase shift. The insides of a stereo multiplex decoder do this, except that it uses a pilot at 1/2 the carrier frequency, which has to be compared with the oscillator divided by 2 or by 8. Of course the earliest stereo decoders filtered and doubled the pilot tone to get the local oscillator to multiply with the signal. The Grundig single-pentode valve decoder uses separate germanium diodes as the mixer/multiplier.

(* The telecomms people were using ring diode mixers for SSB trunk carriers in the 1930s, possibly using matched miniature copper oxide rectifiers.)
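
By way of illustration only, here is a numerical toy of that two-mixer arrangement (a sketch of the principle, not of any particular valve circuit; all values are made up). One multiplier locks a voltage-controlled oscillator to the incoming AM carrier; the second, fed 90 degrees apart, delivers the demodulated audio:

```python
import numpy as np

# Toy synchrodyne: one mixer locks the VCO to the carrier, the other,
# fed with the 90-degree-shifted oscillator, demodulates the AM.
# Illustrative values throughout.

fs = 1e6
t = np.arange(0, 50e-3, 1/fs)
fc = 100e3
audio_in = 0.5*np.sin(2*np.pi*1e3*t)            # 1 kHz programme, 50% AM
x = (1 + audio_in)*np.cos(2*np.pi*fc*t + 0.7)   # carrier, unknown phase

theta, f_est, err_lp = 0.0, fc + 200.0, 0.0     # VCO starts 200 Hz off
kp, ki, alpha = 800.0, 1e5, 1e-3                # loop gains, error LPF
audio = np.empty_like(x)

for n, xn in enumerate(x):
    # Locking mixer: product with sin(theta), lowpassed -> phase error.
    err_lp += alpha*(xn*np.sin(theta) - err_lp)
    # Demodulating mixer: the 90-degree-shifted oscillator output.
    audio[n] = xn*np.cos(theta)
    # Steer the VCO (proportional-plus-integral control).
    f_est -= ki*err_lp/fs
    theta += 2*np.pi*(f_est - kp*err_lp)/fs

# Remove the 2fc products with a crude moving-average lowpass.
audio = 2*np.convolve(audio, np.ones(50)/50, mode='same') - 1.0
print("recovered audio RMS:", round(np.std(audio[20000:]), 3),
      "(sent:", round(np.std(audio_in), 3), ")")
```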

 
Posted : 30/04/2014 5:25 pm
Synchrodyne
(@synchrodyne)

At this juncture it would be easy to digress into a discussion of synchronous demodulation in its various forms, but that would deserve and justify a separate topic.

As I understand it, the terms homodyne and synchrodyne, whatever their origins, came to be accepted as describing different forms of synchronous direct conversion receivers. In the homodyne case, the incoming carrier, after some conditioning (usually filtering and/or limiting) was used as the reference carrier. In the synchrodyne case, a local oscillator, locked to the incoming carrier, was used as the reference. The difference was well-illustrated in the Slifkin & Dori article “Synchrodyne/Homodyne Receiver”, Wireless World 1998 November.

Where these techniques were used for demodulation in superhet receivers, they were sometimes referred to respectively as homodyne demodulation and synchrodyne demodulation, although in the TV IC era, more commonly as quasi-synchronous (QS) and (fully) synchronous (FS) demodulation.

One might view intercarrier sound as an ersatz form of the homodyne, used not for conversion to baseband (from incoming carrier or IF), but for conversion to a second IF, using the vision carrier at IF as the reference. With the homodyne (QS) technique, correct conditioning of the reference carrier is required to avoid errors. In this light, it may be seen that the conventional intercarrier technique used the vision carrier as it arrived at the demodulator, with no conditioning, and so simply accepted the errors that came with this approach. A step change came with QSS (quasi-split sound), which in effect did appropriately condition the vision carrier reference.

Pertinent here though are the various forms of synchronous FM demodulation which in the valve era were used mostly for TV sound, and so were typically found at the end of an intercarrier IF subsystem. Synchronous FM demodulation is better known as quadrature demodulation.

The Philips nonode (originally EQ40, then EQ80) was a quadrature demodulator, as was the 6BN6 gated beam valve. Both appeared in the late 1940s, soon after intercarrier sound itself. Their attractiveness was that they were self-limiting and that they provided a relatively high level audio output, sufficient to drive an output valve directly. Good limiting was a desideratum for intercarrier sound, which was subject to AM transfer from the vision carrier. They were noticeably less linear than the Foster-Seeley discriminator, although this seemed not to be of too much concern for TV sound applications. That was probably a reason they were not much used in radio receivers; that, and the fact that they did not readily provide a source of zero-centred AFC control voltage.
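
The quadrature principle is easy to illustrate numerically (an idealized sketch, not the internals of the EQ80 or 6BN6; a delay line stands in here for the valve's quadrature-phase tank circuit). The limited FM signal is multiplied by a copy shifted 90 degrees at centre frequency, and the lowpassed product comes out close to linear in frequency deviation:

```python
import numpy as np

# Toy quadrature FM demodulator: multiply the hard-limited FM signal by
# a copy delayed 90 degrees at centre frequency, then lowpass. Centre
# frequency and deviation figures are arbitrary illustrative choices.

fs = 3.6e6
fc = 45e3                          # stand-in "intercarrier" centre
n90 = int(round(fs/(4*fc)))        # delay giving 90 deg at fc (20 samples)

t = np.arange(0, 20e-3, 1/fs)
audio_in = np.sin(2*np.pi*1e3*t)   # 1 kHz tone
phase = 2*np.pi*fc*t + 2*np.pi*5e3*np.cumsum(audio_in)/fs  # 5 kHz deviation
x = np.sign(np.cos(phase))         # hard-limited FM signal

prod = x[n90:]*x[:-n90]            # product with the delayed copy
# Moving-average lowpass (nulls the 90 kHz product); this detector
# characteristic happens to be inverting, hence the sign flip.
audio = -np.convolve(prod, np.ones(200)/200, mode='same')
audio -= audio.mean()
ref = audio_in[:len(prod)]
print("output/input correlation:",
      round(np.corrcoef(audio[2000:-2000], ref[2000:-2000])[0, 1], 4))
```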

The later 6DT6 was of the locked-oscillator quadrature type, and so I think would fall into the category of being fully synchronous. Also of this type was the Philips EH90. This was counterpart to the American 6CS6, which had been developed originally for use as a noise-gated sync separator. Curiously Philips/Mullard promoted the EH90 as a TV sound FM demodulator at about the same time that it was offering the ECH84 for use as a noise-gated sync separator.

The IC era allowed relatively easy implementation of QS quadrature FM demodulation using transistor-tree multipliers, and early examples were the Sprague ULN2111A (1967) and the TAA661 (SGS-Ates?). Initially these kinds of ICs seemed to be used more for TV (intercarrier) sound than for FM radio receivers, at least until the arrival of the fully-featured RCA CA3089 in 1971. Unlike their valve predecessors, which in one unit provided limiting, demodulation and audio gain, the ICs separated the functions, and in particular the demodulators were preceded by several stages of amplification with hard limiting, usually using long-tailed pairs. As an aside, the late 1960s availability of the integrated transistor-tree multiplier does seem to have been a major event in respect of radio and TV receiver circuitry, in fairly short order finding its way into FM demodulation, colour subcarrier demodulation, FM stereo demodulation, TV vision demodulation and TV IF amplifier AGC functions. Sprague and particularly Motorola were early movers in respect of consumer-oriented ICs.
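
The effectiveness of those cascaded limiters is easy to show with an idealized model (mine, not any specific IC): treat each long-tailed pair as a gain stage with a tanh transfer characteristic, and a few stages strip the spurious AM while leaving the zero crossings, and hence the FM, untouched:

```python
import numpy as np

# Idealized limiter chain: each long-tailed pair modelled as gain
# followed by a tanh characteristic. All figures illustrative.

fs = 1e6
t = np.arange(0, 5e-3, 1/fs)
fc = 50e3
x = (1 + 0.8*np.sin(2*np.pi*300*t))*np.cos(2*np.pi*fc*t)  # 80% spurious AM

def residual_am(sig):
    # Measure AM on the carrier fundamental: mix to DC, crude lowpass.
    bb = sig*np.exp(-2j*np.pi*fc*t)
    e = 2*np.abs(np.convolve(bb, np.ones(300)/300, mode='same'))
    e = e[2000:-2000]
    return (e.max() - e.min())/(e.max() + e.min())

print(f"input: AM = {residual_am(x):.1%}")
y = x
for stage in range(4):
    y = np.tanh(5*y)               # gain of 5 into each limiting pair
    print(f"after stage {stage+1}: AM = {residual_am(y):.1%}")
```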

An application of the synchrodyne principle was found in the U4468B TV sound IC, which combined the QSS function with an AM sound channel. The reference for intercarrier generation was provided by a PLL rather than by limiting the incoming vision carrier. So whereas a conventional QSS IC was analogous to the homodyne, the U4468B was analogous to the synchrodyne. The U4468B AM channel was of the homodyne type. The Philips TDA9815 was a combined TV vision and sound IC, with FS PLL vision demodulation. It had two FM intercarrier channels (for Zweiton stereo), each of which had a FS PLL quadrature demodulator. The AM sound channel was of the homodyne type; evidently Philips was not as bold as Motorola and did not take the step to intercarrier AM sound. Also, unlike the MC44302A, the TDA9815 had separate vision and sound IF amplifiers.

Overall, one may see that the development of intercarrier sound from its rather crude original form to the point where it was good enough to be used for TV systems with AM was very much linked to the development of synchronous demodulation techniques in forms that were economic and otherwise suitable for consumer applications. That happened in the IC era. PLL synchronous vision demodulation was possible in the valve era, but I should imagine not easy to implement. After all, it was done for NTSC color receiver subcarrier demodulation in the early 1950s, and for example the quadricorrelator-type PLL was certainly an interesting circuit. But that was at 3.58 MHz; doing it at 45.75 MHz in an adequately stable way would, I think, have been a “whole other” challenge.

As previously mentioned in post #4, Deutsch provided a detailed commentary on intercarrier sound that analyzed its benefits and faults without presenting it as being “something for nothing”. In particular, costs for properly executed intercarrier sound were said to be about the same as for split sound, so that cost reduction required some corner-cutting, for which intercarrier sound presented more opportunities.

Deutsch also made the interesting observation that the advent of TV receivers with intercarrier sound and their quick domination of the US market effectively killed the combination TV-FM receiver, which was necessarily of the split sound type (unless extra complexity and/or compromise could be tolerated.) This was apparently antithetical to the growth of FM broadcasting in its early US days. The combination receiver of course allowed the many consumers who previously had only an AM radio receiver to step into both TV and FM with one purchase whose incremental cost relative to a TV-only receiver was quite modest.

It should be said though that Deutsch was writing just before the move to higher TV IFs in the USA. At the time, IFs in the vicinity of 25.75 MHz vision and 21.25 MHz sound were commonplace, and 21.25 MHz or thereabouts was acceptable and workable for FM reception. On the other hand, the new standard “high” TV sound IF of 41.25 MHz would have been less suitable, so that in and of itself might also have suppressed TV-FM receivers if that event had not already occurred.

The UK situation had some parallels. TV-FM receivers became available at the dawn of FM broadcasting, which also corresponded with the advent of multichannel TV receivers and the adoption of “high” TV IFs. As well as many first-time TV buyers, there were probably quite a few potential customers with early TV receivers that were due for replacement because they were not easily converted for multichannel tuning and/or were otherwise obsolescent because of small screen size etc. Thus the combination TV-FM receivers allowed simultaneous capture of the new FM service as well as the expanded TV service. UK TV receivers of the time were necessarily of the split sound type, which was generally in their favour, but TV sound was AM, which was a complication, in that provision had to be made for both AM and FM sound demodulation. Whilst there was some early use of the TV sound 38.15 MHz IF for the FM side, this was typically not very successful, and the modal approach to the combination appeared to be a dual-frequency, 38.15 and 10.7 MHz IF strip. Murphy differed in using a second conversion from 38.15 MHz for both TV sound and FM, thus borrowing an idea that Philips had used previously in its early Belgian multistandard receivers.

In the UK, the intercarrier sound issue did not raise its head until dual-standard TV receivers became necessary in advance of the commencement of 625-line broadcasts. By that time TV-FM receivers were probably on the fade anyway. One may postulate that they would have been largely a single-generation phenomenon in any given market. During that generational time, many AM-only radio receivers would have been replaced or supplemented with FM-AM models, so that having an FM facility in the next TV receiver would have been something that few customers really needed. But the use of intercarrier FM sound for dual-standard TV receivers would have been the nail-in-the-coffin for TV-FM combinations. I understand that there were just one or two dual-standard TV-FM combinations available in the UK very early in the dual-standard era, and that these probably used the 6 MHz intercarrier also as the FM IF. That was surely something of a compromise, but perhaps the only way to do it without undue complication.

Cheers,

Steve

 
Posted : 01/05/2014 4:46 am
valvekits
(@valvekits)

Interestingly, if intercarrier sound was a major driver of SAW filter development, then one might assume that there was simultaneous development of vestigial sideband filters for broadcasting, which had even more demanding requirements. Possibly one of the seeds of the digital era?

Eddie

 
Posted : 23/05/2014 5:36 pm
Synchrodyne
(@synchrodyne)

Possibly VSB SAW filters were developed for professional applications, such as CATV, before they became economically feasible for TV receivers and other consumer applications.

Here is the abstract from a 1975 November Plessey paper, which confirms the general case that the professional applications came first.

“The surface acoustic wave (SAW) technology has existed in many laboratories for a number of years. It has already found a place in many military and professional systems. Perhaps its most successful application to date has been in providing pulse compression filters for chirp radar. Recently, however, SAW bandpass filter design has reached the state where a single device can perform all the filtering necessary in the I.F. stage of colour T.V. receivers. The added advantages of the SAW solution over traditional LC methods are immediately obvious. No manual alignment is necessary. The SAW filter is compact, rugged and reproducible.”

I do not have the full paper; the above abstract is from the IEEE site at: http://ieeexplore.ieee.org/xpl/articleD ... ustic+wave.

The early SAW filters for TV receivers, such as the SW150, appeared to simply mimic the block LC filters that they were intended to replace. Thus those for negative/FM TV systems included the customary sound carrier shelf associated with the conventional intercarrier sound implementation. For positive/AM System L, the SW450 included a sound carrier trap. One assumes that in that case the sound carrier was extracted by an LC circuit ahead of the SAW filter.

Plessey was early with two-output port TV SAWFs, such as the SW180. This separated the sound and vision carriers, the sound output being the sound carrier alone. I don’t think that this type had many applications at the time (except perhaps for a System L version if there was one). The next step was to have the second output port configured for QSS; that is with a symmetrical sideband vision carrier as well as the sound carrier.

In later years there seems to have been a greater diversity of TV SAW filter types, possibly to match later vision and sound IF ICs such as the TDA9815. The variety included sound-carrier-only filters, and dual-Nyquist-slope filters (whose specific frequencies I still need to work through for additional commentary in the TV Receiver IF thread). A large number, including VSB filters, are described here:

http://www.quartz1.com/price/techdata/SAWFilter(SIEMENS).pdf

That said, I think it would take some searching to ascertain when VSB SAWFs first became available for CATV and other professional applications.

Cheers,

Steve

 
Posted : 24/05/2014 5:13 am
Anonymous
(@anonymous)

The Samsung VHS machines from the later 1990s that I dismembered, after repairing as many as possible, of course didn't use intercarrier sound but separate SIL dipped SAW filters (instead of the earlier round cans) for Nicam as well as FM sound and video. But slightly earlier models with no Nicam used intercarrier sound, some with 5.5 and 6.0 MHz filters.

 
Posted : 24/05/2014 11:40 am
Synchrodyne
(@synchrodyne)
Posts: 552
Honorable Member Registered
Topic starter
 

I have now found and read the 1947 Parker article mentioned in my initial posting, namely: Parker, L.W., TV Intercarrier Sound System, Tele-Tech., 6:26 (October, 1947).

The article was billed as being a relatively simple explanation of the intercarrier technique, and its overall tone was on the optimistic side.

Apparently the technique had been proposed by Parker a few years previously. That would explain why Parker’s work seems to have taken precedence over that of Dome. Also, the “intercarrier” name was applied by the FCC; previously it was referred to as a “difference frequency” system.

Parker ran through the basic technique in the article. His basic layout showed the intercarrier as not being extracted until after the final video amplifier, one point of optimism. He went to some trouble to show that AM transfer from the larger carrier (vision) to the much smaller carrier (sound, after attenuation) was not of huge proportions. Nothing was said, though, of PM transfer because of the Nyquist slope, etc.

Another point of optimism is embodied in this comment: “The use of a limiter ahead of the discriminator is optional. If a great enough ratio of amplitudes is maintained between the picture and the sound carriers before they reach the second detector, substantially no amplitude modulation occurs on the 4.5 mc FM carrier, even when the picture carrier is amplitude modulated.”

But that is trumped by this one: “It is interesting to note that the sound need not be FM when using this system. It could be separated from the amplitude modulated picture carrier by the above described means even if it were itself amplitude modulated.”

In practice of course intercarrier sound did not work out quite as Parker seems to have envisaged. Although intercarrier extraction after the final video amplifier was found in some cases, it was viewed as being a “low end” approach that was prone to cross-modulation due to video amplifier non-linearities. More usually, the intercarrier was extracted after the vision demodulator, meaning that more intercarrier IF gain was required. And in color receivers, quite often the intercarrier was extracted by a separate rectifier ahead of the vision demodulator in order to avoid intermodulation beats and cross-modulation between the sound carrier and the color subcarrier. The arrival of quasi-synchronous vision demodulation did not entirely solve the color beat problems, as evidenced by Motorola’s development of the MC1331, with a separate multiplier for intercarrier generation, as an improvement over its original MC1330.

Heavy limiting of the intercarrier was in practice also desirable to minimize the spurious AM. This was not easily done until the IC age arrived with the RCA CA3014 in 1966Q1. It used integrated differential amplifiers, which made particularly good limiters that did not themselves introduce spurious PM. At the time RCA made the comment: “Conventional f-m limiter-detectors are limited by cost from using enough devices to do an ideal job, but integrated circuits do not have this limitation and can be expected to perform better, particularly under fringe-area receiving conditions”.

Use of the intercarrier technique for AM sound was not workable until late in the analogue age when IC-based PLL fully synchronous vision demodulators became more common. And it was only possible because the reference carrier was a very “clean” locally generated signal, phase-locked to the incoming vision carrier but essentially devoid of any AM or PM.

On the basis of Parker’s original claims, there should not have been any problems with intercarrier sound when stereo and multi-channel audio systems arrived. But these in fact showed up the fundamental flaws whose effects had been to some extent tolerable with mono and often lowish quality sound channels, but were then unacceptable.

Parker recognized that intercarrier sound made combined TV-FM receivers more difficult, and suggested that if they were required, the “intercarrier” in the FM case could be generated by including a suitable oscillator (running at vision IF) in the receiver. He also thought that intercarrier sound would be beneficial for European TRF TV receivers, by way of eliminating most of the sound receiver.

Subsequent treatments of intercarrier sound in the literature appear to generally follow the Parker precepts, although usually less optimistically so. On the other hand, the excellent and in-depth critique by Deutsch seems to have gone largely unnoticed, although the underlying problems of the simple intercarrier system that he noted emerged again when the multichannel sound age arrived.

Even in the early days, it seems that intercarrier shortcomings were appreciated by some of the setmakers. In FM-TV Journal for 1951 June, there was an article on the Craftsman TV chassis for custom high-fidelity installations. In this it was said: “An examination of the block diagram of Fig. 3 will reveal that a co-channel sound system has been deployed in both models. It was felt necessary to use this type rather than an intercarrier system in order to meet the precise requirements of high-fidelity equipment where extremely low noise level and a minimum of distortion are demanded.”

The Parker article can be found here: http://www.americanradiohistory.com/Arc ... 47-10.pdf; p.26ff.

The article on the Craftsman TV receiver is at: http://www.americanradiohistory.com/Arc ... -06.o.pdf; p.13ff.

Cheers,

Steve

 
Posted : 05/09/2014 6:20 am
colly0410
(@colly0410)

Thanks for the American radio links Steve, they're now in my bookmarks..

 
Posted : 05/09/2014 1:20 pm
Synchrodyne
(@synchrodyne)

I recently came across a magazine article that shows that even very early in the intercarrier sound age, at least one American setmaker, namely GE, made an effort to overcome its potential disadvantages. A pertinent excerpt from that article (dated 1952 January) is attached.

GE used an IF sidechain to generate the 4.5 MHz intercarrier. This sidechain was tapped off at the anode of the vision 2nd IF stage and fed into an IF amplifier stage (referred to as the 45.75 MHz sound take-off stage) whose anode was tuned to the vision IF of 45.75 MHz. This in turn fed a crystal diode mixer wherein the 4.5 MHz intercarrier was generated and then fed to a conventional 4.5 MHz intercarrier IF strip.

One supposes that the objective here was for the vision signal following the sound take-off stage to be more-or-less symmetrical about the vision carrier (45.75 MHz) and of relatively narrow bandwidth. Thus its bandpass characteristic would have looked somewhat like that around the vision carrier at the output of a latter-day quasi-split sound (QSS) SAWF, although I suspect with more gently sloped skirts. This symmetry would have eliminated or reduced two sources of vision carrier PM that plagued conventional intercarrier implementations. Firstly, the vision carrier would not be carrying any PM caused by the Nyquist slope, and secondly, any PM caused by the sideband asymmetry inherent in the vestigial sideband transmission would be reduced, given that the overall bandpass response would have been down quite a bit in the single-sideband region, below 45.0 MHz.

Actually, assuming that the Nyquist slope was formed early in the vision IF chain, then its effects on the signal would have been present at the input to the sound take-off stage. However, I think that it could have been cancelled by tuning the anode load of that stage to a point slightly above 45.75 MHz, thus producing the requisite amount of reverse slope at 45.75 MHz. I imagine that in practice, the anode tuned circuit of the stage would have been adjusted to produce a curve centred on and essentially symmetrical about 45.75 MHz when measured with a swept signal injected into the front end of the vision IF strip.
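
That cancellation argument can be checked numerically (a toy model with made-up response shapes, not GE's actual alignment data): sweep the detuning of a Gaussian stand-in for the take-off stage response and look for the point where the combined amplitude slope at 45.75 MHz is nulled:

```python
import numpy as np

# Toy check of the slope-cancellation idea (made-up response shapes,
# not GE data): a take-off stage tuned slightly above 45.75 MHz can
# null the net amplitude slope at the vision carrier left by the
# Nyquist flank earlier in the IF.

f = np.linspace(44.0e6, 47.5e6, 7001)
fc = 45.75e6

# Nyquist flank: 50% at the carrier, falling with increasing frequency
# (the US IF spectrum is inverted; video sidebands lie below 45.75 MHz).
h_nyq = np.clip(0.5 - (f - fc)/1.5e6, 0.0, 1.0)

def net_slope(detune):
    # Gaussian stand-in for the take-off stage anode response.
    h_tank = np.exp(-((f - (fc + detune))/0.7e6)**2)
    h = h_nyq*h_tank
    i = np.searchsorted(f, fc)
    return (h[i+1] - h[i-1])/(f[i+1] - f[i-1])  # dH/df at the carrier

for d in (0.0, 0.1e6, 0.2e6, 0.3e6, 0.4e6):
    print(f"take-off tuned {d/1e6:+.1f} MHz from carrier: "
          f"net slope at 45.75 MHz = {net_slope(d):+.2e} per Hz")
```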

The circuit of interest did not seem to include the sound carrier attenuation early in the IF strip that was characteristic of conventional intercarrier systems, but it might not have been necessary. The sound take-off stage bandpass would in any event have placed the 41.25 MHz sound carrier down quite a bit as compared with the vision carrier, and I suspect that the need not to over-attenuate the sound carrier would have determined the upper limit for the Q of the anode IFT coil of the stage.

Given that the 41.25 MHz sound carrier was on the shoulder of the take-off stage bandpass curve, it would have had an upwardly tilted bandpass, which in turn would have been transferred to the 4.5 MHz intercarrier, although reversed in sense, because the diode mixer was working in an oscillator-high mode. This tilt could have been corrected by tuning one or other of the 4.5 MHz IFTs slightly high, so as to achieve overall symmetry around 4.5 MHz at the ratio demodulator, as referred to a signal injected into the front end of the IF strip.

As may be seen, this GE circuit incorporated some of the elements of QSS, which it preceded by around 30 years. QSS of course, with its take-off effectively ahead of the vision IF bandpass shaping, avoided the need to cancel the effects of the Nyquist slope and also, with its “Bactrian” bandpass, provided for a symmetrical response around the sound carrier. So it was a brave effort by GE, and one that was probably swept away in due course by the putative need to reduce valve count, etc. I can imagine that the sound IF required above-average alignment time on the production line, which would also have counted against it. That GE sound IF strip was quite generous in all, with, as well as the take-off stage, two 4.5 MHz stages, one of which was a limiter, preceding the ratio demodulator. So its valve count was probably no less than a split sound approach would have required, although with the latter, AFT (AFC) was highly desirable. That would have required another valve or valve section for a reactance modulator, although a major difficulty may have been accommodating such on the latest crop of front end tuners, where the trend had been to more compact two-valve units.

As an aside, as best I can work out, AFT, which had been used to some extent in the late 1940s on split-sound receivers, more-or-less disappeared from US TV receivers when the intercarrier age arrived, then was reintroduced about 1958 by Westinghouse, by which time voltage-operated crystal diodes were used to adjust oscillator frequency. AFT was about the second TV receiver function to be integrated (RCA CA3034), following the integration of the intercarrier IF strip (RCA CA3014), and because its integration preceded that of the vision IF strip, early ICs for the latter (Motorola MC1350/1352/1330 and RCA CA3068) typically excluded the AFT function.

Returning to intercarrier sound, quite interesting is the attached article about the development of a tuning indicator for intercarrier receivers. It is related to the GE case in that it used a 45.75 MHz vision IF sidechain with amplifier and rectifier to develop the bias to drive a 6BR5 (EM80) magic eye. In this case the 45.75 MHz sidechain circuit was quite narrowband. The need to tune the sidechain amplifier anode circuit for reasonable symmetry is noted. Whilst the article itself had a Canadian origin, the actual circuit appears to have come from Philips, which would explain the use of the EM80. Had it been of North American origin, one might have expected it to include a discriminator and a 6AL7GT magic eye, the latter, developed for use in FM tuners, providing both signal strength and centre-channel indications. (The 6AL7GT seems to have been rare in European practice, although RCA UK did use it in its New Orthophonic FM tuner, see Wireless World 1956 July, p.338.) With a discriminator, the circuit could also have provided AFT. In fact, it seems not beyond the bounds of possibility that a broadly similar sidechain circuit could have been devised to provide a combination of improved intercarrier sound, AFT and tuning indicator drive. Still, Westinghouse did quite differently when it developed its AFT system for intercarrier receivers, see Radio-Electronics 1958 February, page 56ff; http://www.americanradiohistory.com/Arc ... 958-02.pdf.

Cheers,

Steve

 
Posted : 06/02/2015 6:29 am
Synchrodyne
(@synchrodyne)

Earlier in this thread I noted that by my own observation, many treatments of intercarrier sound, at least in the mono sound era, mentioned the spurious AM problem, and the fact that it could be removed by limiting, but neglected to mention the spurious FM problem that was not amenable to any post-treatment.

This observation is confirmed by a comment in Gosling (1), wherein it was stated: “The disadvantages of intercarrier reception are less well known.” This was followed by a succinct analysis of the problems, including that caused by the Nyquist slope in the vision IF channel. That paragraph opened with: “A fundamental cause of ‘buzz-on-sound’ results from the use of vestigial sideband (VSB) transmission and reception.” Thus it was made clear that this cause was not an implementation or an alignment issue, but was endemic to the conventional intercarrier system.

I have attached scans of the pertinent pages from Gosling, as they include a rather neat set of vector diagrams that explain how the Nyquist slope causes incidental phase modulation of the vision carrier.

Cheers,

Steve

(1) William Gosling, editor; “Radio Receivers”; Peter Peregrinus, 1986; ISBN 0-86341-056-1.

 
Posted : 18/06/2016 2:09 am