
Intercarrier Sound

Synchrodyne
(@synchrodyne)
Posts: 537
Honorable Member Registered
Topic starter
 

I have now read the R.B. Dome article mentioned in my opening post, namely:

R.B. Dome: Carrier difference reception of T.V. sound. Electronics, Jan. 1947.

It is available on-line at: http://www.americanradiohistory.com/Arc ... 947-01.pdf, p.102ff.

It is a straightforward treatment of the subject. It does refer to the lower signal-to-noise ratio produced by intercarrier systems as compared with split-sound systems, 5 dB lower on average, but varying with picture content. And it does cover the incidental phase modulation produced by the Nyquist slope, although in the author’s view it was negligible.
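
As a rough feel for the Nyquist-slope effect Dome describes, the following sketch (my own illustration, not from the article, using arbitrary scaled-down frequencies) builds a tone-modulated carrier, attenuates one sideband to mimic an asymmetric IF response, and measures the incidental phase modulation that results:

# Illustrative Python sketch (not from the Dome article): incidental phase
# modulation produced by an asymmetric sideband response, a crude stand-in for
# the receiver's Nyquist slope. All frequencies are invented, scaled-down values.
import numpy as np
from scipy.signal import hilbert

fs = 1.0e6      # sample rate, Hz
fc = 100.0e3    # "vision carrier", Hz
fm = 5.0e3      # modulating tone, Hz
m  = 0.5        # AM modulation depth
a  = 0.5        # lower-sideband gain (1.0 = symmetric; < 1 mimics the slope)

t = np.arange(0, 0.02, 1/fs)
# Build the AM signal sideband by sideband so one sideband can be attenuated.
sig = (np.cos(2*np.pi*fc*t)
       + (m/2)*np.cos(2*np.pi*(fc+fm)*t)          # upper sideband, full
       + a*(m/2)*np.cos(2*np.pi*(fc-fm)*t))       # lower sideband, attenuated

# Instantaneous phase relative to the unmodulated carrier.
phase = np.unwrap(np.angle(hilbert(sig))) - 2*np.pi*fc*t
core = phase[2000:-2000]                          # trim Hilbert edge effects
core = core - core.mean()
print("peak incidental phase deviation: %.1f deg" % np.degrees(np.abs(core).max()))
# With a = 1 (symmetric response) the printed deviation collapses towards zero;
# the residual PM amplitude is roughly (1 - a) * m / 2 radians.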

An interesting comment therein is:

“It is recommended that the peak deviation of the sound transmitter be increased to ±40 kilocycles from the present ±25 kilocycles. This will aid in masking any inadvertent frequency modulation present on the picture carrier.”

The recommendation for ±40 kHz deviation was repeated in the section addressing the Nyquist slope effects. That suggests that the intercarrier system was nevertheless seen as being borderline or close to borderline with the established ±25 kHz deviation, and that somewhat greater deviation was desirable.

The irony is that back in 1941, NTSC specified the sound carrier with ±75 kHz deviation and 100 µs pre-emphasis, in order to align with what was specified for FM broadcasting in its original 42 to 50 MHz band. (1)

In the TV case, the maximum deviation was reduced to ±25 kHz because that still provided an adequate signal-to-noise ratio in any situation where the received signal was strong enough for adequate picture quality, and to reduce the signal bandwidth required in the sound channel, tantamount to allowing more oscillator drift for a given bandwidth before distortion set in.
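
For a rough sense of the bandwidth argument, Carson's rule puts numbers on the two deviation figures; the sketch below assumes a 15 kHz top audio frequency and is my own arithmetic, not taken from the cited documents:

# Carson's-rule estimate of the FM sound channel bandwidth for the two
# deviation figures discussed above, assuming a 15 kHz top audio frequency.
def carson_bw_khz(peak_dev_khz, f_audio_khz=15.0):
    return 2.0 * (peak_dev_khz + f_audio_khz)

for dev in (75.0, 25.0):
    print(f"+/-{dev:.0f} kHz deviation -> approx. {carson_bw_khz(dev):.0f} kHz occupied bandwidth")
# Roughly 180 kHz for +/-75 kHz against 80 kHz for +/-25 kHz, which is the sense
# in which the lower deviation left more room for oscillator drift within a
# given sound-channel bandwidth.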

In both the FM and TV cases, the pre-emphasis was reduced from 100 to 75 µs.

These changes appear to have happened in 1945, following the move of the FM band to 88-108 MHz and the post-WWII TV channel assignments. The 75 µs pre-emphasis curve was included in the FCC “FM Standards of Good Engineering Practice” issued 1945 September 20. (2) Evidently the 100 µs standard remained in place for the old 42-50 MHz FM band. (3)

In the TV case, the FCC “Standards of Good Engineering Practice Concerning Television Broadcasting Stations” was issued 1945 December 19 and amended 1950 October 10. As best I can determine, the 1950 amendments referred to the CBS sequential colour system (4).

That FCC standard specified ±25 kHz deviation and 75 µs pre-emphasis. However, it included the additional comment: “It is recommended, however, that the transmitter be designed to operate satisfactorily with a frequency swing of at least ±40 kilocycles.” One wonders whether this is where Dome obtained his ±40 kHz recommendation. Otherwise one might have expected him to “jump” from ±25 to ±50 kHz, rather than picking an apparently arbitrary intermediate number.

RETMA standard TR-104-A, of 1949 October (5) paralleled the FCC standard and was in some ways more severe, for example in respect of vision bandwidth. On the sound side it was said: “The sound transmitter shall have a modulation capability of at least ±50 kc per sec.”

Be that as it may, the NTSC sound maximum deviation remained at ±25 kHz until the BTSC MTS was introduced in the early 1980s. So one might say that intercarrier sound, at least with the NTSC system, was from the start somewhat compromised in terms of the chosen parameters, in addition to its inherent flaws.

One wonders if the European choice of ±50 kHz deviation for TV sound was based upon this American background. The history there is obscure, but it does seem as though the ±50 kHz, 50 µs numbers came from the original Russian work with 625-line TV, and were also used for Russian FM at the upper end of Band I.

Cheers,

Steve

(1) See: Donald G. Fink; Television Standards and Practice; McGraw-Hill, 1943; p.363 & 367; on-line at: https://ia802303.us.archive.org/35/item ... tirich.pdf.

(2) Available in “FM and Television”, 1945 October, p.28ff; on-line at: http://www.americanradiohistory.com/Arc ... 5-10.o.pdf.

(3) See “Radio News”, 1945 June, p.41; on-line at: http://www.americanradiohistory.com/Arc ... 5-06-R.pdf.

(4) See: Donald G. Fink; Television Engineering; McGraw-Hill, 1952; LCC 51-12605; p.691ff.

(5) See: Donald G. Fink; Television Engineering Handbook; McGraw-Hill, 1957; LCC 55-11564; p.246ff.

 
Posted : 12/07/2016 4:58 am
Sundog
(@sundog)
Posts: 173
Member Deactivated Account
 

It's amusing that the US should increase the sound deviation (=turn up the volume) so the nasties were less noticeable.

This mimics their "cure" for visible flyback blanking - raise the black level by 12.5 IRE and so the viewer will turn down the brightness and so diminish the faint flyback lines.

 

 
Posted : 10/12/2021 5:57 pm
Synchrodyne
(@synchrodyne)
Posts: 537
Honorable Member Registered
Topic starter
 

I suppose that informally increasing the deviation avoided the need to make yet another change in the basic specification.

The original NTSC 525/60 standard, in 1941, specified ±75 kHz maximum deviation and 100 µs pre-emphasis. These parameters were carried over from those established for FM sound broadcasting. And the 100 µs pre-emphasis was the same as had been specified for the AM sound channel of the RMA 441/60 TV standard.

The change to ±25 kHz maximum deviation and 75 µs pre-emphasis was made in 1945, co-incident with the revised TV channel assignments. At the same time FM was changed to 75 µs pre-emphasis, although retaining ±75 kHz deviation, co-incident with its move to the 88-108 MHz band. In the TV case, the argument for lower maximum deviation was that transmission range was limited by vision channel reception deterioration, and that ±25 kHz was all that was needed to ensure that the sound transmission range reasonably matched the vision range. At the time, intercarrier sound was still in the future.

Regarding the pre-emphasis, work done by the BBC c.1945 showed that in fact 50 µs was preferable to 100 µs. Presumably the Russians came to the same conclusion. Later, following the American change to 75 µs, the BBC thought that 50 µs was still better. The American choice of 75 µs might have been made purely on technical grounds. But it might also have been a compromise, keeping the error reasonably small where 100 µs receiving equipment was used with 75 µs transmissions. There were a few dual-band FM receivers in the immediate post-WWII period, but as far as I know, these do not appear to have had different de-emphasis curves for the two bands. An assumption here is that established transmitters in the 42-50 MHz band stayed with 100 µs for their remaining life.
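
To put a number on how small that mismatch error is, the corner frequencies and the net response error of the first-order pre-/de-emphasis networks can be computed directly; a minimal sketch of my own, not from any BBC or FCC source:

# First-order pre-emphasis/de-emphasis: corner frequencies, and the treble
# error when 100 us receiver de-emphasis is used with a 75 us transmission.
import math

def corner_hz(tau_us):
    return 1.0 / (2*math.pi*tau_us*1e-6)

for tau in (50, 75, 100):
    print(f"{tau:3d} us  ->  corner at about {corner_hz(tau):5.0f} Hz")

def mismatch_db(f_hz, tau_tx_us=75, tau_rx_us=100):
    wt_tx = 2*math.pi*f_hz*tau_tx_us*1e-6
    wt_rx = 2*math.pi*f_hz*tau_rx_us*1e-6
    return 10*math.log10((1 + wt_tx**2) / (1 + wt_rx**2))

for f in (1000, 5000, 10000, 15000):
    print(f"{f:5d} Hz: net error {mismatch_db(f):+.1f} dB")
# The error tends towards 20*log10(75/100), i.e. about -2.5 dB of treble cut at
# the top of the audio band: audible but modest, consistent with a compromise.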

The European choice of ±50 kHz deviation for TV FM sound may well have been made with intercarrier receivers in mind. It could also have followed the Russian precedent, which was presumably embodied in the initial 625/50 standard of 1944, pre-dating intercarrier. In that case it may have been chosen simply to align with what was planned for the Russian FM broadcasts in Band I, which also had ±50 kHz deviation. The latter was not a forced choice for Band I, as the Americans had used ±75 kHz in the 42-50 MHz band. So presumably the Russians wanted to fit more FM channels into the available spectrum space. A later consequence was that a different stereo multiplex system was required to the Zenith-GE system used elsewhere.

With the American BTSC multi-channel TV sound system of the early 1980s, total sound carrier deviation went to ±75 kHz, of which the baseband remained at ±25 kHz, interleaved with the difference subcarrier at ±50 kHz, the remaining ±25 kHz being taken up by the pilot tone, the SAP channel and the professional channel. Even so, the difference subcarrier had DBX noise reduction in order to minimize the effects of interference from the vision channel in intercarrier receivers. Higher quality receiving equipment nonetheless tended to avoid the intercarrier technique, using either split sound or the European quasi-split approach. As best I can determine, split sound made a reappearance in Japanese practice following the introduction of the FM-FM TV stereo sound system there. The better screening that was possible with compact and cool-running solid-state circuitry enabled a sound IF second conversion to 10.7 MHz, a choice that would have been problematical in the valve era. But quasi-split was also developed in Japan, initially using a standard vision IF/demodulator IC in the sound channel, although later I think following the European example with a dedicated IC and pi/2 phase shift for the tank circuit.

Cheers,

Steve

 
Posted : 11/12/2021 2:10 am
Nuvistor
(@nuvistor)
Posts: 4652
Famed Member Registered
 

@synchrodyne 

Up to around 1980 I only saw intercarrier sound on UK 625 TV sets. Did split sound or quasi-split sound appear in later UK versions, whether UK-made or imports?

What is “quasi-split sound”? Split sound like the old 405 system I understand, but I have not heard of the quasi version.

 

 

 

Frank

 
Posted : 11/12/2021 8:38 am
Synchrodyne
(@synchrodyne)
Posts: 537
Honorable Member Registered
Topic starter
 

There is a brief description of the quasi-split sound (QSS) system upthread, in this post: https://www.radios-tv.co.uk/community/black-white-tvs/intercarrier-sound/#post-90611. Scroll down to the second excerpt from the Gosling book.

It is really a variation on the intercarrier technique. The sound carrier is split off at the SAW filter, along with a narrow-band symmetrically filtered vision carrier, which is thus devoid of the PM-causing Nyquist slope. The two carriers are amplified in a separate IC, which also does quasi-synchronous demodulation at vision carrier frequency, thus producing the intercarrier in the normal way. The fact that the vision carrier is relatively “clean”, devoid of Nyquist PM, devoid of sideband asymmetry (which itself causes PM) and devoid of its higher sidebands allows the production of a much cleaner intercarrier. Also, with dedicated ICs for the purpose, the quasi-synchronous demodulator tank circuit can be tuned to be 90 degrees away from the vision carrier nominal phase, meaning that the vision AM sidebands are demodulated at close to zero level.
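
The effect of that 90 degree tank offset can be shown numerically. In the sketch below (my own, with invented scaled-down frequencies standing in for the real IF values), a composite of an AM “vision” carrier and a small offset “sound” carrier is multiplied by a reference either in phase with, or in quadrature to, the vision carrier:

# Quasi-synchronous demodulation with the reference in quadrature to the vision
# carrier: the video AM is rejected while the sound/vision beat survives.
# Scaled-down, made-up frequencies; this only illustrates the trigonometry.
import numpy as np

fs, T = 2.0e6, 1.0e-3
t  = np.arange(0, T, 1/fs)
fv = 200.0e3          # "vision carrier"
fb = 50.0e3           # sound carrier offset (stands in for 4.5/5.5/6 MHz)
fm = 10.0e3           # video modulating tone

composite = ((1 + 0.8*np.cos(2*np.pi*fm*t)) * np.cos(2*np.pi*fv*t)
             + 0.1*np.cos(2*np.pi*(fv+fb)*t))   # small sound carrier (unmodulated here)

def amp_at(x, f):
    # amplitude of the component of x at frequency f (window = integer cycles)
    return 2*abs(np.sum(x*np.exp(-2j*np.pi*f*t)))/len(x)

for phi_deg in (0, 90):
    ref = np.cos(2*np.pi*fv*t + np.radians(phi_deg))   # demodulator reference
    prod = composite * ref
    print(f"reference {phi_deg:2d} deg off the vision carrier: "
          f"video tone = {amp_at(prod, fm):.3f}, intercarrier beat = {amp_at(prod, fb):.3f}")
# With the reference in phase the product contains both video and beat; with the
# reference 90 degrees away the video term is nulled while the beat remains.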

I should imagine that this technique was used for the majority of NICAM receivers in the UK, and perhaps before then for any monophonic sound receivers where the maker was aiming at decent sound quality. The technique dates from c.1980, but I have not yet found an “original” development reference. The Philips TDA2545 was an early QSS IC, but I have not been able to pinpoint its release date or find a technical bulletin about it.

Probably QSS was the simplest and lowest cost way to obtain clean sound demodulation in domestic receivers. Split sound was probably more complex, and was problematical if the incoming signals had been subject to incidental PM. The BBC “true intercarrier” system looks nice, but again was probably more complicated to execute.

Cheers,

Steve

 
Posted : 11/12/2021 8:26 pm
Nuvistor
(@nuvistor)
Posts: 4652
Famed Member Registered
 

@synchrodyne 

Thanks Steve, I shall have a good read of that over the next few days, quite a lot to take in. I understand the idea of quasi split sound now, I wouldn’t have come across it due to changing careers in 1980.

I did see that early USA TVs used standard split sound; I noticed that a while ago while looking at old circuits.

Frank

 
Posted : 11/12/2021 10:53 pm
Synchrodyne
(@synchrodyne)
Posts: 537
Honorable Member Registered
Topic starter
 

This commentary on intercarrier receiver performance comes from a 1965 Westinghouse paper on its proposed TV stereo sound system. (1)

“If required, the buzz performance of an intercarrier television receiver can be improved by reducing the phase modulation of the video carrier. This modulation occurs because the video carrier is located on the slope of the overall video i-f characteristic. A separate stage of video i-f amplification and a separate video detector preceding the sound i-f can provide a flat bandpass characteristic for the video carrier. The 4.5 mc sound i-f carrier will then be free of video phase modulation and the output buzz will be reduced for both monaural and stereophonic reception.”

In fact this approach had already been used by GE as early as 1951, as recorded in an earlier post: https://www.radios-tv.co.uk/community/black-white-tvs/intercarrier-sound/#post-76609.

It may be seen to be generally similar to quasi-split sound in principle, although in detail the latter was an improvement, in that the vision carrier bandpass was quite narrow as well as being symmetrical about the carrier. Also, the quasi-synchronous demodulation would have been cleaner, and when done with a pi/2 phase-shift, would have resulted in minimal demodulation of the vision sidebands.

 

(1) This belonged to what might be called the first era of TV stereo sound proposals in the USA, which dated from c.1959. The EIA convened the National Stereo Radio Committee (NRSC) to evaluate various proposals for FM stereo (11 of them), AM stereo (7 of them) and TV stereo (4 of them) systems, with priority given to FM stereo in recognition of an FCC remit to pursue this. In the event the NRSC work was suspended, and the FCC pursued only the FM stereo case. There was some resurgence of interest in TV stereo in the mid-1960s, in which period the Westinghouse proposal appeared. But again, there was no resultant action, and nothing further was done until the second era of proposals, in the early 1980s. Westinghouse had observed that the intercarrier buzz (inherent with the standard arrangement) was noticeably worse with stereo reception as compared with mono, and presumably felt it preferable to offer a remedy.

 

Cheers,

Steve

 
Posted : 30/12/2021 5:09 am
Pieter H
(@pieter-h)
Posts: 18
Eminent Member Registered
 

@synchrodyne 

Hi Steve, as to the introduction of QSS in Philips TV sets, see my overview here.

The short summary is:

1976  G11 chassis   TDA2705

1978  K12 chassis   TDA2750 (& TDA2760 amplifier)

1979 K30 & KT3 chassis  TDA2540

1983 K40 & KT4 chassis  TDA2541 video TDA2545 sound, introduction SAW

In general ICs were available roughly 3-4 quarters ahead of the chassis release.

Cheers, Pieter

 
Posted : 30/12/2021 9:16 pm
Synchrodyne
(@synchrodyne)
Posts: 537
Honorable Member Registered
Topic starter
 

Thanks for that, Pieter. So the use of QSS started earlier than I first thought. Did Philips issue a paper or write a technical journal article about its development?

Also, QSS did not depend upon the availability of special-purpose ICs, such as the TDA2545. Rather, a regular TV IF processing IC could be used for the QSS side, following the IF filter split. For example, an early Plessey circuit showed the TDA440 used for this purpose, alongside a TDA2541 for the main IF strip.

 

[Image: Plessey QSS with SW185]

 

Presumably Plessey could have used another TDA2541 rather than a TDA440. But both were in its lexicon, and as the TDA440 had fewer features, e.g. no AFC, it might have been slightly lower cost.

The TDA2545 appears to have been a later addition to the TDA254x series of TV IF processing ICs. The initial members, the TDA2540 and TDA2541, appear to have been released in 1974, being mentioned in Wireless World, 1974 July. An interesting feature was that they had noise-gated rather than time (line)-gated (presumably sync tip) AGC, so did not require a line flyback pulse for timing. (The preceding TCA270, which did demodulation, AFC and AGC only, not IF gain, had line-gated AGC, back porch level I think.) My guess is that in part, the choice of noise-gating for the TDA2540/41 AGC generator was to facilitate its use in VCR tuners, where line flyback pulses were less readily available.

The TDA2542 was the positive vision modulation version, for systems C, E, F and L. It had mean-level AGC, but the connection from the demodulator section to the AGC generator section was external, so a conscientious setmaker could have inserted a line gating circuit for black-level AGC. (Although the question comes to mind: why was a sync-cancelled circuit, giving black-level AGC but not requiring an external timing pulse, not included in the IC?) And the TDA2543 was the AM sound version for systems C, E and F. It had provision for switchable audio de-emphasis; systems C and F had 50 µs pre-emphasis, whereas E and L did not. There was also a switched external audio input. (My guess is that it was probably usable at 455 kHz, so it could also have been the basis for a high-quality AM receiver, with a dual- or triple-bandwidth IF filter ahead of it. The de-emphasis switching facility could have been used to switch a 9/10 kHz adjacent channel notch filter.)

The TDA2544 was like the TDA2540/41, but the RF AGC was configured to suit mosfet RF amplifiers.

Thus we get to the TDA2545, designed specifically to do the QSS job, and so one imagines, offering improved performance as compared with the use of a standard vision IF IC. A specific feature was the 90 degree phase-shift of the reference carrier for the main demodulator, which meant that there was minimal demodulation of the vision carrier sidebands. As the IC included its own AGC circuitry, it required a separate in-phase synchronous demodulator to do this, in this case without a tank circuit. I don’t have a precise release date, but it appears to be from c.1981.

 

[Image: TDA2545]

 

 

The TDA2546 was basically the TDA2545 with the addition of a 5.5 MHz intercarrier IF amplifier and quadrature demodulator. Although it certainly could be used for stereo, with a separate 5.742 MHz IF processing IC, it might have been aimed more at achieving greater integration for monophonic receiver applications. For stereo, a TDA2545 with say a TBA120S pair might have been more likely.

If the TDA2547 and 2548 existed, I haven’t found anything about them. The TDA2549 was a multistandard vision IF processor, covering both the positive and negative vision modulation cases, and so more-or-less a combination of the TDA2542 and the TDA2544. (It had RF AGC suited to mosfet RF amplifiers). It also had a switched direct video input.

The combination of QSS and AM sound in one IC evidently did not happen until the following IC generation, with the TDA3845. Here the AM sound synchronous demodulator was without a tank circuit, meaning that it was not tied to a specific frequency. AGC was switched between peak for QSS, and mean for AM sound. From that one could infer that the TDA2545 AGC system was of the peak level type.

 

[Image: TDA3845]

Cheers,

Steve

 
Posted : 31/12/2021 3:06 am
Synchrodyne
(@synchrodyne)
Posts: 537
Honorable Member Registered
Topic starter
 

Early in this thread, I said:

“Benson & Whitaker also referred to an August 1982 IEEE paper, “New Color TV Receiver with Composite SAW IF Separating the Sound and Picture Signals”, by Yamada and Uematsu of Hitachi as an example of QSS. But this was around a year later than the European developments. The paper focussed on the two-output-port SAW filter; the sample circuit used an HA11440 vision IF IC and a µPC1366C QSS IF IC. I think that the latter might have been a vision IF IC pressed into QSS service, similar to one of Plessey’s approaches, which suggests that at the time, the Japanese industry was yet to produce a dedicated QSS IC.”

Here is the block schematic from that Hitachi paper. The µPC1366C was in fact a standard vision IF processor pressed into QSS service, and not a dedicated IC. It was described as being for monochrome TV applications. I suspect that this was less for performance reasons than for the fact that it lacked AFC bias generator circuitry, AFC not being used for monochrome applications. That lack probably made it an economic choice for QSS, much as did Plessey’s choice of the TDA440.

 

[Image: Hitachi QSS]

 

Prior Japanese receiver practice with TV stereo sound was mentioned in a 1979 December IEE paper, “Present Status of Multichannel-Sound Television Broadcasting In Japan”, by Yasutaka Numaguchi of the NHK Technical Research Laboratories. One statement therein was:

“This system is designed to obtain good sound quality at inter-carrier sound reception. When split-carrier sound reception is employed, even higher sound quality can be attained, because the reception system is buzz-free, and less crosstalk and distortion.”

From the second sentence, you can infer that when intercarrier was used, the result was not buzz-free.

Another statement from the paper was:

“Theoretical calculations have shown that the signal to noise and buzz ratios in the sub-channel are inferior to those in the main channel by 17 dB and 8 dB respectively. In the actual receivers, however, the differences are smaller than the theoretical values as shown in the table because of residual noise and buzz in the audio channels.”

A not overgenerous interpretation is that in many cases, the intercarrier results were acceptable because some, or perhaps many TV receivers had rather poor sound channels anyway.

A subsequent (1981 August) IEEE paper from the same NHK author was “Multichannel Sound System for Television Broadcasting”. Therein it was said:

“There are two types of receivers for the multichannel sound reception on the market as shown in Fig. 3. Fig. 3a is a multiplex-sound TV receiver with a built-in decoder. Fig. 3b shows a sound tuner which receives the sound carrier by a TV sound tuner. Generally, inter-carrier sound reception is employed for the built-in type and split-carrier sound reception is employed for the tuner-converter type.”

 

[Image: Receiver Techniques (NHK)]

 

 

Comparative performance numbers were also shown:

 

[Image: Receiver Comparative Performance (NHK)]

 

 

Buzz at -48 dB in the intercarrier case would not, I think, be a very edifying experience.

More generally, that paper suggests that as of later 1981, QSS had not yet appeared in Japanese practice, so that Hitachi may well have been the first there to propose it.

As already mentioned, another pathway to good sound quality with stereo/multichannel TV audio was the use of PLL-based fully synchronous vision demodulation, which allowed retention of the intercarrier technique. This was adopted by some setmakers in both the USA and Japan, more to follow in a subsequent posting.

Cheers,

Steve

 
Posted : 01/01/2022 1:10 am
Synchrodyne
(@synchrodyne)
Posts: 537
Honorable Member Registered
Topic starter
 

Fully synchronous demodulation, to achieve better video performance, had long been a desideratum for domestic TV receivers, but was not economically realizable until well into the IC era. For example, there was a 1958 IRE paper by RCA, “Synchronous and Exalted-Carrier-Detection in Television Receivers”. The exalted-carrier (quasi-synchronous) form was realized in practice by Motorola with its MC1330 IC in 1969, and this set a pattern for the industry over the next few years, with steadily increasing levels of integration. (The progression MC1330 to TBA440 to TCA270 to TDA2540 is indicative.) Less is readily discernible about the history of the PLL-type vision demodulator IC, but RCA was an early participant with its CA3136, in the mid-1970s. This was a vision demodulator with an AFC generator, but not an AGC generator. Thus it was “less functionally integrated” than contemporary ICs with quasi-synchronous demodulators. That may have limited the application of this and any like ICs.

Perhaps the need for improved performance on the sound side, and in particular with stereo and multichannel sound systems prompted another look at PLL synchronous demodulation. When well-executed, this was “clean” enough to allow the use of conventional intercarrier techniques on the sound side, meaning that with the levels of integration reasonably possible by the later 1970s, the PLL approach might be no more complex or costly than the quasi-synchronous/QSS combination.

Zenith and National both evidently thought so, and worked together to develop a suitable IC. Their joint work and its outcome were described in a 1982 August IEEE paper, “An Integrated Video IF Amplifier and PLL Detection System”. This paper discussed the available demodulation techniques, namely envelope (modelled as a squaring operation), quasi-synchronous, and PLL synchronous. Amongst the several advantages claimed for the latter were:

“Reduced intercarrier buzz - Phase and amplitude modulation of the audio carrier by the luminance sidebands is reduced due to the linear nature of the synchronous detector.”

“Single video/audio detection – Since the linear detection process produces minimum intermodulation products, requirements for sound carrier trapping in the IF filter are reduced. The same detector can be used for both aural carrier and video information.”
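
A crude numerical illustration of those two claims: below, a square-law detector (standing in for envelope/diode detection) is compared with an ideal synchronous product detector on the same composite signal, and the amount of video modulation transferred onto the intercarrier beat is measured. The “recovered” reference is idealised (perfect lock, constant amplitude) and the frequencies are invented, scaled-down values; this is a sketch of the principle only, not of the actual LM1822 circuitry.

# Why a linear (PLL-)synchronous detector gives a cleaner intercarrier than a
# square-law/envelope detector: measure how much video modulation ends up
# riding on the sound/vision beat in each case.
import numpy as np

fs, T = 2.0e6, 1.0e-3
t  = np.arange(0, T, 1/fs)
fv, fb, fm = 200.0e3, 50.0e3, 10.0e3       # vision carrier, beat offset, video tone
A  = 1 + 0.8*np.cos(2*np.pi*fm*t)          # vision envelope (video modulation)
composite = A*np.cos(2*np.pi*fv*t) + 0.1*np.cos(2*np.pi*(fv+fb)*t)

def amp_at(x, f):
    return 2*abs(np.sum(x*np.exp(-2j*np.pi*f*t)))/len(x)

square_law  = composite**2                        # crude envelope/diode model
synchronous = composite*np.cos(2*np.pi*fv*t)      # ideal recovered reference

for name, out in (("square-law", square_law), ("synchronous", synchronous)):
    beat, sideband = amp_at(out, fb), amp_at(out, fb + fm)
    print(f"{name:12s}: video modulation on the intercarrier beat = {sideband/beat:.2f}")
# The square-law detector transfers the video AM onto the beat (ratio ~0.4 here,
# a source of buzz); the linear synchronous product leaves the beat essentially
# unmodulated (ratio ~0).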

Almost certainly Zenith would have been making sure that this new IC would be suitable for use with multichannel sound receivers, bearing in mind that its own multiplex system was a contender for the BTSC/FCC system selection process – and in fact was eventually chosen, along with DBX companding.

The actual IC was not identified in the paper, but from its description, it corresponded with the National LM1822, which was an IF sub-system, including IF amplifier, PLL demodulator, AGC and AFC generators and video preamplifier. It had an externally line-gated AGC system.

The main production variant appears to have been the LM1823, very similar, but with the addition of internal sync-derived line gating for the AGC. The LM1821 was a demodulator-only version, with an AFC but not an AGC generator. National issued a very comprehensive Application Note on the LM1823, AN391 of 1985 March.

Although the LM1822/3 deployed as a vision IF strip and demodulator could provide a clean intercarrier, there is some evidence, uncorroborated, that National also suggested that it could be used as a QSS IC, and that Zenith might have used it as such. Logically it would have performed well in that role.

In Japan, Matsushita also developed a TV IF processor with PLL synchronous demodulation. This was described in a 1983 August IEEE paper, “Television Receiver Using a Pure Synchronous Detector IC”, by Tetsuo Kutsuki, Kazuhiko Kubo, and Mitsuo Isobe.

The opening statement in the paper was: “The diode detector and the quasi synchronous detector have been used in color television receivers, but they are not good enough to detect IF signal, especially in sound multiplex system (FM-FM in Japan ) or teletext system. The PLL synchronous detector generates video output with high linearity and quality, and rejects quadrature distortions in detected video signal. Accordingly, the development of this system has been expected for a long time.”

Cheers,

Steve

 
Posted : 01/01/2022 7:48 am
Synchrodyne
(@synchrodyne)
Posts: 537
Honorable Member Registered
Topic starter
 

I suppose that one could propose a hierarchy of TV receiver sound systems in order of the likely sound quality (freedom from any vision interference). Starting from the bottom, a tentative list is:

1. Conventional intercarrier
2. “Improved” intercarrier, in practice encompassing both QSS and the use of PLL synchronous vision demodulation in an otherwise conventional system.
3. “True” intercarrier, in which the sound carrier is extracted ahead of the main IF gain block, then mixed with a “purified” vision carrier (narrow bandwidth and hard limited, or regenerated by PLL) to produce a “clean” intercarrier.
4. Split sound.
5. Completely separate TV sound receiver.

#4 and 5 have the disadvantage that if there is any severe incidental phase modulation (IPM) that similarly affects both the vision and sound carriers, it is not eliminated (as it would be during intercarrier formation), but remains to contaminate the demodulated sound. That this was once a problem not to be ignored is shown by the fact that the US version of some of the early “component TV” tuners of the 1980s (e.g. those from Luxman and Sony) provided for (conventional) intercarrier as well as split sound, the former said to be for use with some UHF relay transmitters and some cable sources that did have an IPM problem. Whether the Japanese domestic originals of those TV tuners also had the same facility I do not know, but perhaps not. That problem may have disappeared over time with improved control of IPM. And it may not have been a problem at all in some territories. In the UK, the availability of separate UHF TV sound tuners (Motion Electronics and Lowther) from 1971 and for many years thereafter suggests that there was not a significant IPM problem.
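
The cancellation of common phase disturbances during intercarrier formation, and its absence in a split-sound receiver, can be demonstrated with a small simulation (my own, with invented scaled-down frequencies; the “IPM” here is simply a 100 Hz phase wobble applied equally to both carriers):

# Common incidental phase modulation drops out when the intercarrier is formed,
# but survives direct (split-sound) demodulation of the sound carrier.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

fs, T = 1.0e6, 0.05
t = np.arange(0, T, 1/fs)
fv, fsnd = 200.0e3, 250.0e3            # "vision" and "sound" carriers (beat = 50 kHz)
fa, beta = 1.0e3, 5.0                  # audio tone and its peak phase deviation, rad
ipm = 2.0*np.sin(2*np.pi*100.0*t)      # common phase disturbance (IPM/microphony), rad

vision = np.cos(2*np.pi*fv*t + ipm)
sound  = np.cos(2*np.pi*fsnd*t + beta*np.sin(2*np.pi*fa*t) + ipm)

def phase_vs(x, f0):
    # instantaneous phase of x relative to a clean carrier at f0, mean removed
    p = np.unwrap(np.angle(hilbert(x))) - 2*np.pi*f0*t
    return p - p.mean()

def amp_at(x, f):
    return 2*abs(np.sum(x*np.exp(-2j*np.pi*f*t)))/len(x)

# Split-sound path: demodulate the sound carrier directly.
split_phase = phase_vs(sound, fsnd)

# Intercarrier path: mix the two carriers and keep the 50 kHz difference product.
b, a = butter(5, 100.0e3/(fs/2))
beat = filtfilt(b, a, 2*vision*sound)
inter_phase = phase_vs(beat, fsnd - fv)

for name, p in (("split sound", split_phase), ("intercarrier", inter_phase)):
    print(f"{name:12s}: audio component {amp_at(p, fa):.2f} rad, "
          f"IPM component {amp_at(p, 100.0):.2f} rad")
# Both paths recover the ~5 rad audio phase modulation, but the 2 rad common
# disturbance survives in the split-sound output and cancels in the intercarrier.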

That #5 might be better than #4 reflects the fact that some interaction between vision and sound might take place within the front end. Perhaps early varactor tuners with bipolar devices might have been more at risk, but the 1970s general move to mosfet devices and improved varactors and associated circuits would have minimized the chances of that.

I’d guess that the cost hierarchy would probably parallel the above performance hierarchy. That would explain why #2, particularly the QSS option, was popular where something better than #1 was desired.

Split sound was typically regarded as difficult to implement for TV systems with FM sound, the more so at UHF, although it was the only option for TV systems with AM sound until the 1990s, when sophisticated intercarrier approaches became available. Local oscillator drift, perceived to be worse at UHF, was a key reason for that viewpoint. With AM sound, the combination of a fairly wideband sound channel and a carrier-frequency-insensitive envelope demodulator accommodated what drift there was, with perhaps some small adverse effect on the functioning of the impulse noise suppressors. But FM demodulators were tuned, and required the carrier to be not too far from the centre point. Also, design of stable FM demodulators – at the consumer equipment level at least – became more difficult at higher frequencies. I suspect that the decreasing deviation-to-carrier-frequency ratio as the latter increased might have adversely affected the signal-to-noise ratio as well. In some cases a second sound conversion was used to get to a lower frequency more suitable for FM demodulators.
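
On the deviation-to-carrier-frequency point, simple arithmetic shows how small the fractional swing becomes at the higher frequencies mentioned in this thread (the figures below are just that arithmetic, nothing more):

# Fractional deviation of a +/-50 kHz FM sound signal at various carrier or
# intermediate frequencies discussed above.
for f_mhz in (5.5, 6.0, 10.7, 33.5):
    print(f"{f_mhz:5.1f} MHz carrier: +/-50 kHz is {100*0.05/f_mhz:.2f} % of the carrier")
# The demodulator has to resolve an ever smaller fractional swing as the carrier
# frequency rises, which is one way of seeing why a second conversion down to
# 10.7 MHz, or the intercarrier at 5.5/6 MHz, eased the design.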

In the early days with IFs in the 20 MHz range, split sound was doable with AFC from the sound channel, and sometimes was chosen for better sound quality. But the move to higher IFs, in the 30 and 40 MHz range, generally seemed to have killed this option for a while, along with AFC. The latter, by then usually derived from the vision IF, made a reappearance later in the 1950s, often with AFT as its primary function, although AFC was desirable for colour receivers. By the 1970s, split sound for FM would not have been so difficult, as AFC (whether from the vision IF channel or the sound IF channel itself) would have overcome the drift problem. Also, similar frequency stability would have been required for AM sound systems once quasi-synchronous demodulation, with tank circuit, was introduced for AM sound, as with the TDA2543 IC. (In this case AFC was derived from the corresponding vision IF IC, the TDA2542.)

As mentioned upthread, the reintroduction, as it were, of split-sound to domestic TV equipment appears to have happened in Japan during the 1970s, more-or-less corresponding to the introduction of stereo/multichannel sound there, although possibly predating it. The available information (by no means comprehensive) suggests that typical practice was to subject the sound IF to a second conversion, to the regular FM IF of 10.7 MHz. Back in the valve era, the 2nd IF would have had to be chosen with regard to harmonic feedback possibilities, so 10.7 MHz may have been a non-starter. In the solid-state/IC era, better screening was more easily obtainable, so that 10.7 MHz could be used. In some cases, conversion to 10.7 MHz and all processing at that frequency through to demodulation was done in a single IC. Prior to this, some of the Japanese equipment makers had offered standalone TV sound tuners, some of these using a second conversion to 10.7 MHz, so the technique was established. Even so, it was likely more attractive economically to find an intercarrier approach, hence the development in Japan of QSS (Hitachi) and PLL demodulation (Matsushita) techniques.

Cheers,

Steve

 
Posted : 03/01/2022 12:22 am
Nuvistor
(@nuvistor)
Posts: 4652
Famed Member Registered
 

@synchrodyne 
A bit off topic: in the 70s we sold an audio pickup box that had a screened lead and pickup coil that was placed near the last sound IF amp. This picked up the 6 MHz intercarrier, demodulated it and fed it off to a separate audio amp, usually a middle-of-the-road decent audio amp the customer had. Results were very good; no idea what the demod was, but probably an IC of some kind; didn't have one fail so no need to look.

These were sold to CTV set owners which would have had a transistor IF strip. 

 

Frank

 
Posted : 03/01/2022 10:57 am
Katie Bush
(@katie-bush)
Posts: 4859
Member Deactivated Account
 
Posted by: @nuvistor

in the 70’s we sold an audio pickup box that had a screened lead and pickup coil that was placed near the last sound IF amp.

Celestion "Tele-Fi" by any chance?

I only ever saw one, once, but it was being used to drive a headphone amp for an old chap who was hard of hearing. I do believe the sound quality was excellent.

 
Posted : 03/01/2022 2:53 pm
Nuvistor
(@nuvistor)
Posts: 4652
Famed Member Registered
 

 


@katie-bush 

I don’t remember the name but there were probably not many manufacturers due to it being a niche product.

For some with poor hearing I installed a product, again no name, so much forgotten, but it used a loop of some type around the room.

I must have a search around the web for the Celestion product.

Just found this thread, will try to find an advert.

https://www.radios-tv.co.uk/community/black-white-tvs/practical-television-magazine-july-1960-tv-sound-amplifier/paged/2/

 

 

Frank

 
Posted : 03/01/2022 5:34 pm
Cathovisor
(@cathovisor)
Posts: 6684
Famed Member Registered
 

@nuvistor the Celestion "Tele-fi" adaptor was the subject of a BVWS article some years ago. The other item you're describing is an "induction loop" used to connect to deaf-aids - a system still used today when you see a symbol of an ear with a "T" next to it. They used to be used in the last days of BBC Television Centre for audience shows, no idea if it's still done or they use other methods now.

Good description here: https://soundinduction.co.uk/pages/induction-loop-systems

 
Posted : 03/01/2022 7:07 pm
Synchrodyne
(@synchrodyne)
Posts: 537
Honorable Member Registered
Topic starter
 

As I recall, the Celestion Telefi was the dominant product in what was the TV intercarrier sound pickup unit category. If nothing else, it was a convenient way to obtain an audio feed (for an audio system or tape recorder) from a TV receiver without making a physical connection.

Clearly it was limited by the quality of the intercarrier generated by the particular TV receiver, but one might expect that the processing of that intercarrier was done at a level equal to about the best TV receivers. The actual source of the radiated intercarrier might vary by TV receiver and by placement of the probe within it. But I suspect that some, perhaps much of it came from the FM demodulator. In that case it was subject to the vagaries of the TV Rx IF strip in terms of bandwidth and symmetry thereof, phase-shift from limiting, etc.

Some TV receivers of the 1950s/60s/70s in the US, Japan and Europe were fitted (sometimes optionally) with audio outputs that were more-or-less direct from the FM demodulator. In those cases, one might expect that reasonable attention had been paid to the circuitry up to that point. In the European case, sometimes audio isolating transformers were used to allow it to be done with non-isolated chassis. As far as I know, fitting an audio output was quite rare in the UK, although Decca did it with the Professional 23. Of course, with isolated chassis – which were the norm in this part of the world – it was usually quite easy to tap off an audio feed oneself, usually across the volume control. As might be expected, results were variable, but could be quite good, although as I recall, usually not completely buzz free.

I don’t seem to have kept any literature for the Celestion Telefi, or if I did, it got caught up in a downsizing/decluttering exercise somewhere along the line. I do have some literature on the Motion Electronics TV sound tuners, and a brief entry on the Lowther TV sound tuner in one of its general brochures. Scans of these were uploaded to the UKHFHS site some time back ( http://ukhhsoc.torrens.org/AudioDocs.html).

Nonetheless, the Celestion Telefi and similar units represent another variation of the intercarrier technique. Such outboard units became somewhat redundant once it became the norm to fit TV receivers with baseband video and audio input and output connections. That seemed to be more of a 1980s exercise, although there were some like that in the later 1970s, e.g. from Barco. Even before that, the availability of baseband ins and outs on many domestic VCRs provided access to an audio feed from the VCR tuners. (I recall that there were some very early domestic VCRs that lacked baseband connections, though – something that at the time I thought was a bit, shall we say, dimwitted.)

Cheers,

Steve

 
Posted : 03/01/2022 9:52 pm
Cathovisor
(@cathovisor)
Posts: 6684
Famed Member Registered
 

In the dim and distant past I was able to pick up TV sound on a set by placing a VHF/FM radio in close proximity to it and picking up what I assume was the 3rd harmonic of the sound IF at 100.5 MHz - this came in handy when a relative's set lost sound output. We were able to continue watching as normal.
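
A quick check of the arithmetic (33.5 MHz being the System I sound IF; the snippet below is just that sum, nothing more):

# Which low harmonics of a 33.5 MHz sound IF land in the 88-108 MHz FM band?
for n in (2, 3, 4):
    f = n * 33.5
    note = "  <- inside the 88-108 MHz FM band" if 88 <= f <= 108 else ""
    print(f"{n} x 33.5 MHz = {f:.1f} MHz{note}")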

 
Posted : 03/01/2022 11:24 pm
Sundog
(@sundog)
Posts: 173
Member Deactivated Account
 

My own take on capturing reasonable sound quality from TV happened in the late 60s.

With the advent of low cost transistorised UHF tuners a path was opened. Mostly AF139 versions I seem to remember.

The sound IF frequency output by these was 33.5 MHz. By modifying the output tuned circuit and tweaking the oscillator it was easy to output anywhere in the VHF FM broadcast band, 88-108 MHz.

I built the 4-push-button tuners with a 12 volt power supply into small wooden enclosures for use by friends and local hi-fi enthusiasts.

They were a little microphonic but served their purpose well.

 
Posted : 04/01/2022 8:10 pm
Synchrodyne
(@synchrodyne)
Posts: 537
Honorable Member Registered
Topic starter
 

The microphony would have been less apparent in a typical TV receiver. Its effects would have been more-or-less cancelled in the intercarrier formation process, given that the vision carrier was equally affected.

That was in fact an old problem. Early American TV receivers often used the 6J6 double triode valve as a VHF frequency changer. But the 6J6 was somewhat microphonic, which was disadvantageous not only for the split sound TV receivers of the time, but also in FM applications. Thus GE developed the 12AT7 as a non-microphonic TV and FM frequency changer. The advent of triode pentode frequency changers from c.1952 – to better suit the new high IF - largely displaced the double triode in the TV role, but the 6J6 remained for “economy” applications, having been “saved” as it were by the advent of intercarrier sound, for which its microphonic properties were not much of an issue. (Also, during the 1950s, GE developed a version of the 6J6 with much reduced microphony.)

By way of illustration, a Magnavox VHF TV tuner described in Tele-Tech 1951 May and June was fitted with a 6J6 frequency changer as standard. The article closed with the comment: “Another version of the tuner, using a 12AT7 double triode instead of the 6J6 is also in production for use especially in split sound receivers where less oscillator microphonism [sic] can be tolerated.” I suspect that the 6J6, as a fully-amortized WWII valve pressed into civilian service, was being sold at a sufficiently lower price (to the setmakers at least) than the relatively new 12AT7 that it could not be ignored by the TV tuner makers.

Cheers,

Steve

 
Posted : 04/01/2022 10:57 pm