Intercarrier Sound

Forum 136
(@sundog)
Posts: 173
Reputable Member Registered
 
Posted by: @synchrodyne

The microphony would have been less apparent in a typical TV receiver. Its effects would have been more-or-less cancelled in the intercarrier formation process, given that the vision carrier was equally affected.

 

Indeed, though I didn't appreciate that at the time.

Do you think that the advent of frame-grid valves reduced microphony? Or did intercarrier sound solve the problem anyway?

 
Posted : 07/01/2022 7:52 pm
Synchrodyne
(@synchrodyne)
Posts: 531
Honorable Member Registered
Topic starter
 

I think that we may reasonably deduce that the frame-grid TV valves, when they arrived, had a minimal microphony problem, as they were also used in FM receivers. But the same was true of prior valves developed for TV applications. That the 6J6, developed during WWII primarily for radar applications, had a microphony problem became apparent only when it was deployed in TV and FM applications. That evidently spurred GE to develop the 12AT7. And it is reasonable to assume that the valves that RCA developed for “high IF” TV receivers, the 6CB6 pentode, 6BQ7 cascode, and 6X8 triode-pentode, were similarly free from any deleterious level of microphony. All were used in FM as well as TV receivers.

Although the 6X8 was primarily a TV frequency changer, the pentode mixer limiting regeneration caused by the proximity of the IF to the lower Band I channels, it was also specified for use as an FM frequency changer (with the option of triode-strapping the mixer) and an AM frequency changer (triode-pentode mode). That was in fact RCA’s answer to the 12AT7, which, as well as its TV and FM roles, was also presented for use as an AM frequency changer in FM-AM receivers. In respect of FM performance at least, that was seen as a better choice for a combined frequency changer than a pentagrid. RCA had backed the latter approach, positioning the 6BE6 as an FM as well as an AM frequency changer, then introducing the 6SB7-Y as an improved pentagrid, followed by a noval version of it, the 6BA7. Nonetheless, as an interim measure to meet GE head-on, it did issue an application note showing how the 6J6 could be used as a combined FM-AM frequency changer.

When developing the 6X8, RCA probably assumed that “high IF” TV receivers would almost all be of the intercarrier type, so for the TV application alone it probably did not have to worry too much about the microphony characteristics of the triode oscillator section. But for the FM case, that was a concern that did need to be addressed. Given that TV front-end valves were likely to be used in FM receivers, even if that was not a “headline” application, microphony probably should not have been ignored. And given that the 6X8 was pitched at FM-AM applications, a bigger market than FM-only, the issue definitely could not be ignored.

Even so, as you say, the widespread adoption of intercarrier sound did solve the problem of oscillator microphony for TV receivers, and thus allowed continued use of the 6J6 as a frequency changer in economy applications.

One frame-grid valve whose primary application was FM receivers was the 6JK8, comprising a frame grid RF amplifier and a conventional triode autodyne mixer, developed for early FM stereo receivers. The US industry was late in adopting single-valve front ends (although that was probably a situation where “better never than late” might have applied), but when it did, the concept was developed along three or four different vectors. Microphony in the RF amplifier was probably less of an issue than in the oscillator, but it does show that there was no fear of frame grids when FM was primary.

Cheers,

Steve

 
Posted : 07/01/2022 9:51 pm
Forum 136
(@sundog)
Posts: 173
Reputable Member Registered
 

@synchrodyne 

I have hardly ever observed objectionable microphony with VHF-FM receivers, except under fault conditions. Perhaps the equipment manufacturers took care to mechanically isolate the oscillator valve?

Or maybe it's more likely that the higher oscillator frequency at UHF causes a bigger frequency deviation for a given vibration.

 

 
Posted : 08/01/2022 7:07 pm
Synchrodyne
(@synchrodyne)
Posts: 531
Honorable Member Registered
Topic starter
 

Returning to the QSS and its origins, this 1982 November IEEE paper provides some background:

“The German 2-Carrier System for Terrestrial TV-Sound Transmission Systems and Integrated Circuits for ‘High-Quality’ TV Receivers”, by Ulf Buhse, Valvo Applications Laboratory, Philips Hamburg.

It started with a discussion of the German (IRT) Zweiton stereo/dual sound system, with a second section on high-quality TV receivers.

Following a brief summary of the shortcomings of the conventional intercarrier sound system, the split sound system was considered then dismissed with the following commentary:

“At first glance, an obvious solution appears to be to split the vision and sound channels immediately after the tuner as shown in Fig. 6.

“The vision and sound channels are then entirely separated and each can be optimised for performing its own function. Theoretically this system would be ideal for hi-fi sound reproduction but, in practice, there are several drawbacks. For example, the system is prone to local-oscillator instability. It needs a special tuner, which is up till now not available.”

That ignored the fact that the Japanese manufacturers were already producing split-sound TV receivers, so had presumably overcome whatever perceived difficulties they presented. Also, in Europe, TV receivers for the French systems E and L were of the split-sound type. One could argue that the oscillator stability requirements for an AM sound channel might be less stringent than for an FM sound channel in that normal diode demodulators were essentially untuned and were insensitive to spurious PM. But that was not the case where the AM sound IF channel included a quasi-synchronous demodulator with a tuned tank circuit, as was the case with the TDA2543 IC.

Be that as it may, the paper moved on to describe the quasi-split sound (QSS) system, including the various requirements to ensure that it worked properly.

One point made was that in the QSS channel, the phase steepness needed to be the same at the vision and sound carrier frequencies, so that any oscillator jitter produced equal phase jitter on each. Otherwise, the phase jitter on the sound and vision carriers would differ, the difference showing up as spurious phase modulation of the sound carriers.

The benefits of quadrature demodulation in the QSS channel were also noted. That is, the limited vision carrier used for the creation of the intercarriers is phase-shifted by ninety degrees, which means that it would not demodulate its own double-sideband AM.
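
That point is easy to verify numerically. Below is a minimal sketch of my own (Python, with arbitrary stand-in frequencies rather than real IF values) showing that the product of the composite signal with a ninety-degree-shifted copy of the limited vision carrier contains no baseband AM, while the offset sound carrier still mixes down to the intercarrier:

```python
import numpy as np

fs = 10e6                                     # sample rate, Hz
t = np.arange(0, 0.005, 1/fs)
fv, fm, foff = 1.0e6, 10e3, 0.5e6             # stand-in vision carrier, AM tone, sound offset

vision = (1 + 0.5*np.cos(2*np.pi*fm*t)) * np.cos(2*np.pi*fv*t)  # DSB-AM "vision"
sound = 0.1*np.cos(2*np.pi*(fv + foff)*t)                       # "sound" carrier

ref_i = np.cos(2*np.pi*fv*t)                  # ideally limited carrier, in phase
ref_q = np.sin(2*np.pi*fv*t)                  # same, shifted by ninety degrees

def tone_level(x, f0):
    """Amplitude of the component of x at frequency f0."""
    spec = np.abs(np.fft.rfft(x)) / (len(x)/2)
    f = np.fft.rfftfreq(len(x), 1/fs)
    return spec[np.argmin(np.abs(f - f0))]

for name, ref in (("in-phase", ref_i), ("quadrature", ref_q)):
    prod = (vision + sound) * ref
    print(f"{name:>10}: baseband AM tone {tone_level(prod, fm):.3f}, "
          f"intercarrier {tone_level(prod, foff):.3f}")
# in-phase:   AM tone ~0.25 (demodulates its own AM), intercarrier ~0.05
# quadrature: AM tone ~0    (own AM rejected),        intercarrier ~0.05
```

In both cases the 0.5 MHz intercarrier survives at the same level, but only the quadrature product is free of the demodulated vision baseband.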

In summary, it was said that the quasi-split sound system with quadrature demodulation offered the possibility of hi-fi sound reproduction, being a variant of the intercarrier method that retained its advantages while avoiding its disadvantages.

Following that there was a discussion of actual TV receiver circuitry, using the TDA2545A and TDA2546A QSS ICs, which included the previously discussed features, including quadrature demodulation. The TDA2545A was the basic QSS IC, whilst the TDA2546A added one FM intercarrier channel, nominally at 5.5 MHz.

For a more detailed treatment of QSS theory, the reader was referred to this paper:

H. Achterberg, U. Buhse, H. Schwarz:
Aufbereitung des Fernsehtonsignals mit den integrierten Schaltungen TDA 2545 und TDA 2546 nach dem Quasiparallelton-Verfahren
(Quasi-Split Sound Processing with the Integrated Circuits TDA 2545 and TDA 2546)
Valvo Entwicklungsmitteilung Nr. 79, Nov. 1980

I have not found that paper. But I think that it is reasonably inferred that QSS was “invented” by Valvo, and that the TDA2545 and TDA2546 were the first QSS ICs.

(As an aside, I have some Valvo papers (in English) from about the same period, and they are very comprehensive, multipage documents. Without trying to find them, my recollection is that they covered the TDA1072 (AM), TDA1074 (AF) and TDA1028/TDA1029 (AF) ICs. Thus I imagine that the TDA2545/TDA2546 paper would be similarly comprehensive.)

I have had a subsequent thought about the apparent German preference for QSS over split sound. And that is that QSS might be simpler where two sound carriers were involved, as with the Zweiton system. With appropriate care over group delay, etc., both can pass through the same QSS IC without adverse interaction, and both may be demodulated by the same limited and phase-shifted vision carrier to produce two easily separated intercarriers (at 5.5 and 5.742 MHz for system B.)

The Japanese approach to split sound was to convert the (single) sound IF down to 10.7 MHz and then process it in standard FM IF ICs, usually with AFC feedback to the 10.7 MHz conversion oscillator. Perhaps it could have been done with two sound carriers (33.4 and 33.158 MHz), converting to 10.7 and 10.942 or 10.458 MHz, but those numbers look to be a little close for separation with standard FM filters in the 10.7 MHz range. A noticeably lower second sound IF would have helped there, but then it would have been non-standard. And if, to make separation of the two sound carriers sufficiently easy, one moved to a second IF around half of 10.7 MHz, that would be in intercarrier territory anyway.
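
As a quick arithmetic check on those numbers (a sketch of my own; the oscillator placements are assumptions chosen to reproduce the figures above):

```python
VISION_IF = 38.9                    # MHz, systems B/G
SOUND_1, SOUND_2 = 33.4, 33.158     # MHz, Zweiton sound IF carriers

# The intercarriers are the differences against the vision carrier:
print(round(VISION_IF - SOUND_1, 3))    # 5.5   MHz
print(round(VISION_IF - SOUND_2, 3))    # 5.742 MHz

# Hypothetical second conversion of both sound carriers with one oscillator,
# aiming to land the first carrier on the standard 10.7 MHz:
lo_high, lo_low = 44.1, 22.7            # MHz, high-side and low-side cases
print(round(lo_high - SOUND_1, 3), round(lo_high - SOUND_2, 3))  # 10.7, 10.942
print(round(SOUND_1 - lo_low, 3), round(SOUND_2 - lo_low, 3))    # 10.7, 10.458
```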

Incidentally, in the valve TV receiver era, second sound IFs, where used, had to be chosen carefully in order to avoid interference effects, and evidently 10.7 MHz was a non-starter. With solid state equipment, probably more so in the IC era, economic inter-module isolation was less of a problem, and given adequate screening, 10.7 MHz could be used without problem.

Cheers,

Steve

 
Posted : 02/08/2022 7:05 am
Synchrodyne
(@synchrodyne)
Posts: 531
Honorable Member Registered
Topic starter
 

Some further insights into QSS performance were provided in another 1985 IEEE paper:

System And Realization Aspects of Multistandard TV Receivers
Wolfgang Weltersbach
Philips Group of Companies, Hamburg, West Germany

The paper addressed high-performance multistandard receivers, including multichannel sound facilities, with a particular focus on IF strip requirements.

In respect of the sound IF, it was said that split sound would be best from a quality viewpoint, but that it made high demands on the tuner and tuning system, which could not be addressed in an economic way for consumer TV sets of the time. (That assertion was debatable, but it did seem to be the European position at the time.)

That being so, QSS was seen as the optimum system, and the basic multistandard approach was shown in block schematic form:

Basic QSS

Early implementation of QSS was done using quasi-synchronous (passive carrier regeneration) quadrature demodulation, e.g. as with the TDA2545 IC. But improved results were obtained by using active carrier regeneration, using a PLL, as shown in this block schematic:

QSS with PLL

The performance comparison was shown in this chart:

PLL QSS Performance

The active carrier regeneration case performance was shown with and without the tuner, indicating that the tuner was responsible for some degradation. The upper curve, without the tuner, was said to be close to what was achievable with fully split sound. That does illustrate that the use of split sound required a high performance tuner if the latter was not to be the limiting factor. It may also be seen that in the QSS active carrier regeneration case, the tuner was limiting, whereas in the QSS passive carrier regeneration case, it was not. Regular intercarrier performance was not included, probably on the basis that it had no relevance in a discussion of high quality circuitry, but perhaps also because it would not have looked very good.

The multistandard QSS IC with passive carrier regeneration was realized as the TDA3845, although it might have been subsequent to the above-mentioned paper. The earliest mention of that IC that I have found is 1989, but that was from a casual rather than an exhaustive search.

TDA3845

Whether a multistandard QSS with active carrier regeneration was ever developed I do not know. For the paper, the laboratory model was realized using available mixers, VCOs, and opamps.

The question of passive vs. active carrier regeneration for the vision IF channel demodulation function was also considered, with passive chosen on the basis of the following analysis.

“Initially the PLL looks very attractive, it improves the performance of the video signal. However, depending on the application field, some drawbacks have to be taken into account:

- lock-in problems for special picture contents (side band locking)
- threshold for weak IF signals, pull-in problem
- interferences or beat frequencies due to the oscillator.

“With a sophisticated circuit design and special tuning systems these problems may be solved in TV sets. Specially a European multistandard receiver for positive modulation system (L) adds one severe problem more. The PLL has to handle a zero carrier signal during synch pulses.

“For the time being the European setmakers are sticking to the conventional quasi synchronous detector as far as is known to the author.

“In the US market (standard M) the PLL aspects look more attractive, because the linearity of the demodulator offers advantages for NTSC.”

The final comment connects with an upthread commentary about the Zenith/National Semiconductor LM1822/3 vision IF with fully synchronous demodulation.

Synchronous Vision Demodulation

Interestingly, the quasi-synchronous case is shown with a ±0.75 MHz narrowband filter in the reference channel ahead of the limiter, so that the latter captured only the DSB part of the vision signal. In practice, this was usually (mostly?) omitted, with some reference bandwidth trimming done by the demodulator tank circuit. But this post facto operation would not undo the phase modulation damage done by limiting a sideband asymmetric signal. It was noted that the delay caused by the narrowband filter would need to be matched by a suitable delay in the wider bandwidth signal path, and I imagine that this might have been a stumbling block at the consumer product level. Not mentioned was that if the reference signal had been through the main vision IF filter and so subjected to the Nyquist slope, then there would also need to have been an “anti-Nyquist” filter, perhaps obtained by suitably sloping the top of the ±0.75 MHz filter.

Back to the main theme, we can say that the QSS technique was fairly well defined in terms of key points. Ahead of the main vision IF bandpass filter, the feed from the tuner was split, with a second branch carrying both vision and sound carriers passed through an appropriately shaped double-peaked filter, amplified (with gain control) together, the sound carrier then being synchronously demodulated by the limited (and preferably quadrature phase-shifted) vision carrier, to produce the intercarrier(s).

Against that, any intercarrier-producing technique that used the vision and sound carriers after they had passed through the main vision IF bandpass filter (usually including some relative suppression of the sound carrier) would fall into the regular intercarrier class. That group covered a multitude of sins. Way down in the basement, as it were, would have been the case where the intercarrier was not separated from the video baseband signal until after the video amplifier, as was found in some valve-era receivers. At the top end, and perhaps performance-wise matching QSS with PLL demodulation, was the technique found in some late vision IF processing ICs, such as the Motorola MC44302A multistandard unit, mentioned upthread. This used PLL synchronous demodulation following a common IF filter and gain strip, but separate demodulators, in-phase for vision and quadrature for intercarrier, in the latter case for AM as well as FM sound.

MC44302A

The use of separate vision and intercarrier demodulators at the end of a common IF strip was not new; it was found in the valve era, where separate diodes were sometimes used. Early in the IC era, the Motorola MC1331, derived from the MC1330 quasi-synchronous vision demodulator, had a separate intercarrier demodulator, albeit of the in-phase type.

QSS may be seen as an inspired sidestep at a time when the conventional thinking was that the only way to avoid the limitations of conventional intercarrier sound was to go to split sound. Something it also brought along was a “proper” implementation of quasi-synchronous demodulation, with a carefully conditioned reference carrier. This had certainly been mooted in respect of quasi-synchronous vision demodulation, but as far as I know seldom done in consumer ICs. Rather, improvement was achieved by going to PLL fully synchronous demodulation, which technique was also segued into QSS to take the latter close to the split-sound performance level.

Cheers,

Steve

 
Posted : 03/08/2022 11:39 pm
Forum 136
(@sundog)
Posts: 173
Reputable Member Registered
 

As always, very interesting. I suppose we in the UK should count ourselves lucky that the Beeb developed NICAM.

 

 
Posted : 04/08/2022 8:37 am
Synchrodyne
(@synchrodyne)
Posts: 531
Honorable Member Registered
Topic starter
 

Thanks, Sundog.

My impression is that NICAM was somewhat better than either Zweiton or BTSC.

In the past I had “daily” exposure over several years to each of the three, sequentially BTSC in the USA, Zweiton in Australia and then NICAM here in NZ.

That said, the comparisons, being well-spaced sequentially, were hardly rigorous. Although the audio electronics and the speakers were the same throughout, the listening rooms were much different, as also were the “front end” electronics. In the USA I used a Luxman T407 outboard TV tuner, which had split sound. In Australia and NZ I used available reasonably high quality VCRs or DVD recorders as the source. Presumably these had QSS sound channels, but probably not aimed at as high a target as dedicated outboard tuners would be. Even so, I’d rank NICAM as the best, followed by Zweiton and BTSC.

There was an oddity that cropped up in one Sydney location where we lived for a few years. We were in (or rather on the edge of) a ravine that was just down the hill from the Gore Hill ABC channel 2 transmitter, so nominally in a high signal strength location, although because of the topography, it was not line-of-sight. Stereo TV sound reception could be flakey at times, jumping in and out, but on the other hand, neither mono sound nor vision were at all noisy. Anyway, I cranked up the ICOM R7000 and connected it to the TV aerial, to have a look at signal strengths. Voila! Vision and the 1st sound carrier were good, but 2nd sound was quite low and varying. I guess that the combination of direct (or not quite direct) and reflected waves down in the ravine conspired to produce a relatively high Q dip right over the 2nd sound carrier. As I recall, making a slight change in the aerial pointing direction brought the 2nd sound carrier to a level that was satisfactory most of the time.

In the USA, I was also able to make a comparison between BTSC and FM Multiplex. The local (DFW) PBS TV and FM stations continued the practice of occasional simulcasting for a while after the TV station had started with BTSC. FM Multiplex was definitely better (I was using a Carver TX-11a tuner, fairly well thought of at the time). For example, some caption buzz was apparent on the TV sound, even via the T407 in split-sound mode, and the noise floor was slightly higher. I suspect that there was more compression on the TV sound, as well, whereas the PBS radio station prided itself on using minimal compression.

All anecdotal, of course. The only encounters I had with the Korean version of Zweiton and the Japanese FM/FM TV stereo sound systems were with hotel TV receivers, from which little could be gleaned.

Regarding NICAM reception techniques, as far as I know, some form of intercarrier technique was almost always used, probably quite often with QSS. I imagine that NICAM may have been more robust against the receiver RF/IF path degradations, but perhaps not completely so. That might need a little more light research.

Cheers,

Steve

 
Posted : 04/08/2022 11:01 pm
Synchrodyne
(@synchrodyne)
Posts: 531
Honorable Member Registered
Topic starter
 
Posted by: @synchrodyne

Whether a multistandard QSS with active carrier regeneration was ever developed I do not know. For the paper, the laboratory model was realized using available mixers, VCOs, and opamps.

In fact Thomson Consumer Electronics R&D Laboratories described a QSS IC with PLL vision carrier regeneration in a 1991 IEEE paper “Advanced Multistandard TV-Sound IF Integrated Circuit”, by M.A. Reiger and R.K. Koblitz.

Here is the block schematic for the IC (whose commercial designation was not provided):

Thomson QSS IC with PLL

This IC also provided for AM sound (for systems L and L’), which was routed through the QSS IF amplifier stage, but demodulated separately. The nature of the AM demodulator was not disclosed, other than that it appeared to be untuned.

The FM intercarriers were available for external further processing, but internal processing was also provided. Another PLL provided a second conversion from 4.5, 5.5, 6.0 or 6.5 MHz to 500 kHz, at which frequency the intercarrier was subjected to integrated bandpass filtering and demodulation, the nature of the latter not being given. The Zweiton 2nd FM carrier at 5.74 MHz was converted to 260 kHz, and similarly processed.
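
The conversion arithmetic implied there works out as follows (a sketch of my own; the high-side VCO placement is an assumption, since the paper's actual mixing scheme was not given):

```python
FINAL_IF = 0.5                           # MHz, the IC's second sound IF
for f_in in (4.5, 5.5, 6.0, 6.5):        # MHz, incoming FM intercarriers
    print(f"{f_in} MHz intercarrier -> VCO at {f_in + FINAL_IF} MHz")

# The Zweiton second carrier at 5.742 MHz, with the VCO set for the 5.5 MHz case:
vco = 5.5 + FINAL_IF
print(round(vco - 5.742, 3), "MHz")      # 0.258, i.e. the quoted ~260 kHz
```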

It looks as if the IC was intended for use in multistandard receivers that used a 38.9 MHz vision IF (VIF) for all systems, or at least all systems except L’. Different sound IFs for systems L and L’ would not appear to have been a problem; both could be run through the QSS IF amplifier to the AM demodulator, which as said, appears to have been untuned.

It was also said to be suitable for NICAM sound, with the intercarrier output used to provide the NICAM intercarrier for external processing. A NICAM decoder IC suitable for both the system B,G,H and system I variants was described in another 1991 IEEE paper from SGS-Thomson Microelectronics, “NICAM Decoder for Digital Multichannel TV Sound Broadcast”, by Gary Shipton and Godfrey Onyiagha. However, nothing was said in relation to the “quality” of the NICAM intercarriers delivered to the IC.

In the case of the QSS IC, it is reasonable to assume that the choice of PLL carrier regeneration was made in the interests of obtaining demodulated FM sound quality that was close to that of a split sound system.

Something not mentioned, but perhaps a valid question, is whether there would have been any problems with using a PLL-type QSS IC and a PLL-type synchronous demodulation VIF IC in the same receiver. That would imply the presence of two same-frequency (say 38.9 MHz) PLLs in close proximity. I suppose that it might have been simply a question of providing adequate screening and perhaps physical separation of the respective ICs. With quasi-synchronous ICs, there would be similar-frequency external tank circuits, quadrature for the QSS IC and one in-phase and one quadrature (AFC) for the VIF IC, and it appears that avoiding any unwanted mutual coupling was quite doable.

Still, one wonders if Motorola’s choice of a combined VIF/SIF in its MC44302A, and yet another approach by Siemens (to be described in a following posting), were to some extent an effort to avoid same-frequency PLL duality.

Cheers,

Steve

 
Posted : 06/08/2022 3:32 am
Synchrodyne
(@synchrodyne)
Posts: 531
Honorable Member Registered
Topic starter
 

The previously mentioned Siemens approach to maximizing sound quality with the QSS system was described in another 1991 IEEE paper, “A World Standard Video and Sound IF IC”, by R. Heymann and H. Kriedt.

In this, the VIF and SIF had separate amplifiers within the IC, so that before demodulation, it was akin to a split sound receiver. The VIF amplifier was followed by a PLL demodulator. The SIF amplifier output was split into two pathways. One went to an AM demodulator of the untuned quasi-synchronous type, for systems L and L’. The other went to a mixer whose other input was taken from the PLL, essentially a quadrature reconstructed unmodulated vision carrier. The mixer output was thus the intercarrier.

Here is a block schematic:

img1

The sound section is shown in more detail here:

img2

As in the Thomson case, the IC functions include processing of two FM channels through to demodulation, again using another frequency conversion. In this case though the second conversion VCO had a range of 10 to 14 MHz. The final FM IFs were not given, but it was said that the second conversion was not necessarily used on one of the four incoming frequencies (4.5, 5.5, 6.0 and 6.5 MHz). From that one may deduce that any one of the four could be chosen as the final IF, which would thus be in the range 4.5 to 6.5 MHz. If 4.5 MHz were chosen, then the VCO frequencies for the three others would be respectively 10.0, 10.5 and 11.0 MHz. In the 6.5 MHz case, they would be 11.0, 12.0 and 12.5 MHz. The Zweiton second FM carrier would be 242 kHz below the final IF. The two final FM IF filters were external to the IC, as were the tank circuits for the quadrature demodulators. One may suppose that the modal choice was a 5.5 MHz final IF, second carrier at 5.74 MHz, so that standard filters could be used. That would have required VCO frequencies of 10.0, 11.5 and 12.0 MHz for the 4.5, 6.0 and 6.5 MHz cases.
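
Those VCO numbers can be reproduced in a few lines (assuming, as the deduction above does, that the VCO sits at the sum of the incoming intercarrier and the chosen final IF):

```python
incoming = (4.5, 5.5, 6.0, 6.5)          # MHz, the four possible intercarriers

for final_if in (4.5, 5.5, 6.5):         # the cases worked through above
    vcos = [round(f + final_if, 1) for f in incoming if f != final_if]
    print(f"final IF {final_if} MHz -> VCOs {vcos}")
# final IF 4.5 MHz -> VCOs [10.0, 10.5, 11.0]
# final IF 5.5 MHz -> VCOs [10.0, 11.5, 12.0]
# final IF 6.5 MHz -> VCOs [11.0, 12.0, 12.5]

# A converted channel's Zweiton second carrier (f_in + 0.242 MHz) always lands
# 242 kHz below the chosen final IF:
f_in, final_if = 4.5, 5.5
print(round((f_in + final_if) - (f_in + 0.242), 3))   # 5.258 MHz
```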

There does not appear to have been a pinout for the raw intercarrier, which suggests that this IC was not intended for use with NICAM. However, that is something that may well have been remedied in the production version.

The vision PLL was actually described as being an FPLL, or frequency phase-locked loop. It had an AFC circuit operating in parallel with the PLL itself, this to allow a wide (5 MHz) capture range but a suitably narrow loop bandwidth. The AFC circuit was switched out when the loop locked. The FPLL was in fact an earlier Siemens development, in part aimed at obtaining some improvement with conventional intercarrier sound. I’ll elaborate in a future posting.

Best intercarrier buzz performance was said to have been achieved with a PLL bandwidth of 15 kHz. But because of the use of inexpensive standard tuners it was necessary to set the bandwidth to 100 kHz to avoid the effects of vision carrier phase modulations within the tuner such as oscillator pulling or phase noise of the varactor diodes. That resulted in a reduced FM sound signal-to-noise ratio. If nothing else, this confirms that the tuner could be a limiting factor.

img3

The quadrature FM demodulators were a departure from normal, in that they interposed an amplification stage between the tank circuit and the multiplier. This was said to allow the use of a lower quality (meaning, I think, lower Q) tank circuit, with the net result of lower distortion.

img4

That was an interesting development, perhaps attending to what might have been a previously neglected area. I suspect that the intercarrier quadrature demodulation ICs used in many TV receivers were similar to the early types, such as the TBA120 series, and were not necessarily optimized for low distortion. In the FM radio field, the progression had been somewhat different after the initial flush of ICs. The RCA CA3089 of 1971 was a step change, and it was intended for use with a double-tuned quadrature coil where very low distortion was required. It set the pattern for subsequent ICs from multiple makers that offered improvements along various vectors. Another important marker was the National LM1965 of c.1984, which inter alia included a feedback circuit that allowed the achievement of two-coil linearity with a single quadrature coil. The Siemens circuit appears to have been a different way of achieving a similar result. Some of the Japanese split-sound TV tuners of the 1980s, in which there was a second conversion to 10.7 MHz in the sound channel, used standard FM IF subsystem ICs.

Cheers,

Steve

 
Posted : 07/08/2022 1:58 am
Synchrodyne
(@synchrodyne)
Posts: 531
Honorable Member Registered
Topic starter
 
In the preceding post, I noted “The FPLL was in fact an earlier Siemens development, in part aimed at obtaining some improvement with conventional intercarrier sound. I’ll elaborate in a future posting.”
 
The FPLL was described in a 1983 August IEEE paper, “Modular Video IF Concept”, by Max Huber, Hans Kriedt and Richard Stepp of Siemens Germany.
 
This covered the development of a vision IF gain, demodulation, AGC and AFC IC that had optional features, including the choice between quasi-synchronous (passive carrier regeneration) and PLL synchronous (active carrier regeneration) demodulation, and was suitable for both negative and positive vision modulation systems.  It was outlined in this diagram:
 
Siemens Modular Vision IF
 
In respect of vision demodulation, Siemens said:
 
“The video detector is a major problem in a video IF chip and can be the source of a variety of disturbances in the video and sound channels.  Here, optimal detector performance is achieved by the use of a double balanced mixer as multiplier.  The modulated IF-carrier input to the mixer is linear.  The key problem is the generation of an exact copy of the unmodulated IF-carrier.  The carrier regeneration can be achieved passively (band pass limiter) or actively (phase locked loop).  Both possibilities have been developed in our modular concept.  The PLL demodulator offers great advantages but also needs a much larger chip area.”
 
In this case, Siemens was clearly looking for improvement over the general run of IC-based quasi-synchronous demodulation systems, which for the most part avoided any bandpass restriction ahead of the limiter in the reference channel.  In this regard it was said:
 
“To regenerate the IF carrier it is necessary to suppress the vestigial-side-band (VSB) portions of the spectrum.  Because of the single-side-band (SSB) contents in the VSB spectrum a selection in front of the limiter is required.  Taking the Q-factor and the necessary detuning range into consideration the double sided bandwidth was set to 1.5 MHz.  Thus the suppression of the SSB contents is only partially effective and results in a 920 kHz colour-sound-beat [presumably referring to the NTSC case] and intercarrier buzz in the audio.  The AM/PM conversion in the limiter creates differential phase error and additional intercarrier buzz. To minimize those disturbances, the detector limiter was constructed as a dual gain amplifier and in addition the mixer was separated from the limiter by an emitter follower.”
 
The Siemens quasi-synchronous demodulator is shown in this diagram, with the limiter channel bandpass filter highlighted, also the concomitantly required delay in the main vision channel.
 
Siemens Quasi Synchronous Demodulator
 
Effectively Siemens was saying that, in general, the quasi-synchronous demodulator could not be made as good as desired, because the bandpass ahead of the limiter had to be wide enough to allow reasonably easy tuning of the TV receiver, and that was rather wider than the optimum for best performance.
 
Where better performance was required, the answer was PLL synchronous demodulation.  Here the loop bandwidth could be made sufficiently narrow to ensure carrier recovery without significant sidebands.  Adequate capture range (5 MHz) to enable easy receiver tuning was obtained by using an FPLL rather than a PLL, the difference being shown in this diagram:
 
Siemens PLL and FPLL Tracking Filters
 
As previously mentioned, a loop bandwidth of 15 kHz gave the best sound performance, but 100 kHz was required to allow for the incidental phase modulation produced by “cheap” tuners.  The AFC part of the PLL was said to be similar to a Costas Loop, with three multipliers.
 
The benefit in terms of intercarrier sound performance was interpreted in terms of a reduced need to attenuate the sound carrier in the filter for the combined IF strip, for any given level of colour subcarrier-sound carrier interaction that was deemed to be acceptable.  For the NTSC case, with a 920 kHz subcarrier/sound beat at -45 dB, only 10 dB of sound carrier attenuation was required, as compared with the 26 dB number commonly applied.  That is shown in this chart:
 
Siemens FPLL & Sound Shelf
 
This was said to result in a major improvement of the sound signal-to-noise ratio for weak signals, although that was not quantified.  The implication was that the customary 26 dB attenuation did not help the cause of good sound performance.
 
 
Cheers,
 
Steve P.
 
Posted : 12/08/2022 2:35 am
Synchrodyne
(@synchrodyne)
Posts: 531
Honorable Member Registered
Topic starter
 
Earlier than the Siemens work mentioned in the preceding post, AEG had also commented about the effect of quasi-synchronous demodulation execution on the “quality” of the intercarrier so produced.
 
This was in a 1974 IEEE paper, “TV IF Amplifier With Improved Synchronous Detection”, by Franz Buergerhausen.
 
Referring to the conventional implementation of quasi-synchronous demodulation, with wideband regeneration of the switching carrier, it was said:
 
“As a result of the simple wideband regeneration of the switching carrier the basic requirement for a distortionless synchronous demodulation is not fullfilled.  The system dependent sideband-distortions are effecting the phase-modulation of the switching carrier and produces considerable modulation dependent harmonics in the demodulated video-signal."
 
The harmonics of some of the higher video frequencies correspond with the sound carrier.  For example, for system B, the second harmonic of 2.75 MHz.  And the levels of such harmonics can exceed the level of the sound carrier by 3 or 4 dB.  Such harmonics are a source of video buzz on sound.
 
AEG Conventional QSD
 
Another observation made was: “The distortions of the demodulated video-signal have nearly the same character as distortions produced by the envelope demodulator used up to now.”
 
Thus it could be said that although quasi-synchronous demodulation offered some improvement in intercarrier quality, it did not, in its basic form anyway, completely dispose of the problems associated with diode-type envelope demodulators.  The latter produced harmonics of all incoming components, including the demodulated sidebands.  But with sideband symmetry, the upper and lower sideband harmonics cancelled each other out, allowing nominally distortionless demodulation.  Any sideband asymmetry meant that the cancellation process was incomplete, and that there would be some in-band distortion, at its maximum for the single sideband case.
 
The vision sideband spectrum presented to the demodulator is asymmetric throughout: obviously so for the single sideband part, but also for the nominally double sideband part, because of the Nyquist slope shaping.  The sound carrier (and its own sidebands) looks like single sideband modulation when viewed from the vision carrier.  Single sideband distortion in envelope demodulation can be reduced by exalting the carrier, and the usual attenuation of the sound carrier early in the IF strip effectively exalts the vision carrier relative to the sound carrier.  But with diode demodulation, the vision carrier is not exalted for the purposes of vision demodulation, so the single sideband distortion thereof is not reduced.
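 
The sideband-symmetry argument is easy to check numerically.  Here is a small sketch of my own (ideal envelope detector, single modulating tone, arbitrary frequencies) comparing symmetric sidebands, a single sideband, and a single sideband with the carrier exalted:

```python
import numpy as np

fs, fc, fm, m = 1e6, 100e3, 5e3, 0.5     # sample rate, carrier, tone, mod depth
t = np.arange(0, 0.02, 1/fs)

carrier = np.exp(2j*np.pi*fc*t)          # analytic signals: abs() is the envelope
usb = np.exp(2j*np.pi*(fc + fm)*t)
lsb = np.exp(2j*np.pi*(fc - fm)*t)

def h2_ratio(sig):
    """2nd-harmonic-to-fundamental ratio of the detected envelope, in dB."""
    env = np.abs(sig)                    # ideal envelope detector
    spec = np.abs(np.fft.rfft(env))
    f = np.fft.rfftfreq(len(env), 1/fs)
    fund = spec[np.argmin(np.abs(f - fm))]
    h2 = spec[np.argmin(np.abs(f - 2*fm))]
    return 20*np.log10(h2/fund)

print("DSB (symmetric):", round(h2_ratio(carrier + (m/2)*(usb + lsb)), 1), "dB")
print("SSB:            ", round(h2_ratio(carrier + m*usb), 1), "dB")
print("SSB, carrier x3:", round(h2_ratio(3*carrier + m*usb), 1), "dB")
# The DSB case is essentially distortion-free; the SSB case shows strong
# second-harmonic distortion, reduced (not eliminated) by exalting the carrier.
```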
 
Quasi-synchronous demodulation effectively exalts the vision carrier for both vision demodulation and intercarrier regeneration, so is nominally “cleaner” in respect of harmonic generation.  But its customary implementation with limiting of the wideband, asymmetrical vision signal means that the reference carrier includes some undesirable sideband components that result in distortion components in the demodulated baseband.  The tank circuit reduces, but does not eliminate these undesirable sidebands.  One could say that the outcome typically sits on a line somewhere between diode demodulation at one end, and quasi-synchronous demodulation with a relatively pure reference at the other end.
 
AEG observed that what was needed was narrow-band regeneration of the switching carrier, in particular ensuring that the single sideband part of the vision spectrum be filtered out before limiting.  As Siemens did later on, it saw that there was a conflict between the desired narrowness of the reference channel bandwidth and the ease of receiver tuning.  To obviate this, it proposed a “variable bandwidth” approach.
 
AEG Improved QSD Concept
 
The bandwidth of the filter in the reference circuit was controlled by the AFC system.  When the receiver was on-tune, the bandwidth was narrow, but it was widened out when the AFC bias showed that it was off-tune.  Apparently, this was considered not so easy to implement, so a simpler, compromise approach was developed.
 
That was to include a notch filter in the tank circuit.  The notch was centred somewhere near half the intercarrier frequency away from the vision carrier.  This resulted in a lower level of harmonic production around the sound carrier, as shown in the graph at the bottom of this chart:
 
AEG QSD with Notch Filter
 
And the improvement in sound signal-to-noise ratio is shown here:
 
AEG Sound SNR
 
This approach also conferred benefits for vision demodulation, including in respect of the colour subcarrier.
 
Thus it may be seen that reasonably early on in the quasi-synchronous vision demodulation era, some effort was made to obtain some of the potential improvement that it offered in respect of intercarrier sound quality.  To what extent such measures were used in production receivers is unknown; I suspect that it varied by market.  But AEG’s involvement suggested that the German market may have been more demanding where sound quality was concerned.
 
 
Cheers,
 
Steve
 
Posted : 26/08/2022 12:44 am
Synchrodyne
(@synchrodyne)
Posts: 531
Honorable Member Registered
Topic starter
 
As noted in the previous post, the introduction of vision quasi-synchronous demodulation (QSD) in a form suitable for use in mass-produced domestic TV receivers provided the opportunity for improved intercarrier sound quality, although that was not always realized to the extent possible.  But to be fair, that was not the primary objective.  Rather, it derived from a desire to integrate TV receiver circuitry.
 
Motorola is understood to have been the first to offer a QSD IC, namely the MC1330, described in the 1969 July IEEE paper “A Monolithic Wideband Synchronous Demodulator Video Detector for Color TV”, by Gerald Lunn.
 
This may be seen as having been part of a quest by Motorola to integrate the TV vision IF strip.  Its initial thinking was a three-stage IF strip using three MC1550 ICs with distributed selectivity and a diode demodulator.  The advantages claimed for the MC1550 over discrete bipolar devices were that it had a very low reverse admittance, allowing each interstage filter to be set up without affecting or being affected by the others, and that its input admittance hardly changed with the application of AGC bias.  Apparently this was not well received by the industry.
 
The next step was an IC that combined the first two IF stages.  This was the MC1350 in plain form, and the MC1352 when also incorporating a gated AGC generator.  This could be used with a discrete transistor 3rd IF stage to drive a diode demodulator, the latter requiring a fairly high drive level to get down to “acceptable” levels of distortion.  But Motorola was looking for a “low level demodulator” that could work with the modest output of the MC1350/MC1352.  The QSD met this requirement, and combined with a post-demodulation video amplifier, effectively replaced the customary third IF and diode demodulator combination.
 
Motorola presented the basic QSD circuit as including, in the reference channel, selective circuits both before and after the limiter.
 
Motorola QSD with Selective Circuits
 
The selective circuit before the limiter (shown in yellow) could have been used to restrict the bandwidth to the DSB region of the vision carrier, and to cancel the effect of the preceding Nyquist slope filter, thus allowing the generation of a “clean” reference signal.
 
But the MC1330 was not arranged with provision for such a selective circuit.  Only the post-limiting selective circuit was provided for, as the demodulator tank circuit.  This omission was not explained, but presumably that was the result of a benefit/complexity tradeoff.  That circuit was not required to meet the basic target of a functional low-level demodulator, and even with that omission, it was still in some ways better than a diode demodulator in respect of the “quality” of the demodulated video signal.
 
Motorola did though suggest that a notch be included in the tank circuit to minimize the sound-colour subcarrier beat.  That was caused by the presence of a subcarrier sideband in the reference channel, which would beat with the sound carrier in the main channel.  It was a tacit admission that a wideband reference circuit, although having the advantage of simplicity, also had drawbacks.  The notch filter presaged AEG’s use of the same for a different purpose, namely improving the sound carrier signal-to-noise/buzz ratio.
 
MC1330 Tank Circuits
 
One of Motorola’s inspirations for the MC1330 was the work done by Sprague in developing its ULN2111 FM quadrature demodulation IC.  This was described in a 1967 November IEEE paper, “A Monolithic Limiter And Balanced Discriminator for FM and TV Receivers”, by A. Bilotti and R. S. Pepper.  This was the first FM quadrature demodulation IC to use a six-transistor tree double balanced multiplier, otherwise known as a full-wave coincidence gate.  (A slightly earlier IC for the same purpose, the Fairchild µA717, had used a very simple three-transistor FM quadrature demodulator.)
 
The closing statement of that paper was:  “The use of full-wave coincidence gates has proven to be a convenient solution for providing the basic functions of an FM detector and is well-matched to the capabilities of monolithic circuitry.  Furthermore, the same gating arrangement can be used as a high-performance synchronous demodulator or as a double-balanced mixer.”
 
Bilotti elaborated on that in a 1968 September IEEE paper “Applications of a Monolithic Analog Multiplier”.  Synchronous AM demodulation was included amongst the several possibilities examined.  Of that it was said:  “The high-gain limiter included in one channel of the monolithic multiplier can be used to remove the amplitude modulation and provide an unmodulated carrier for the synchronous detection.  Fig. 11 shows the waveforms for a 90-percent modulated carrier of 45-MHz and 100-mV signal level, proving the capability of the multiplier to provide distortionless AM detection at low signal levels.  With this kind of carrier recovery technique, faithful detection of the envelope will only occur for the case of a double-sideband AM modulated signal and with the maximum amplitude variations of the available carrier kept within the range defined by the two channel limiting thresholds.”
 
AM Synchronous Demodulation
 
I doubt that it was fortuitous that Bilotti used 45 MHz for his worked example, given the 45.75 MHz American standard TV receiver vision intermediate frequency.  He had flagged the need for having a double-sideband AM signal as input to the limiter for best results; that was disregarded from the start when the technique was applied to TV vision demodulation.
 
Motorola did release an improved version of the MC1330 in 1973, namely the MC1331.  This had detail improvements, such as better linearity and smaller phase errors, but its obvious feature was the use of a separate multiplier for intercarrier generation.  That held the promise of better intercarrier quality.  However, the main reason for doing that, and listed as the first objective for the MC1331 development, was to minimize the colour subcarrier-sound beat (920 kHz for the NTSC case) in the vision channel.  That had been a concern with the MC1330, addressed by using a notch filter in the tank circuit, so this was a step further along that vector.
 
Thus a sound carrier (41.25 MHz for system M) filter could be included ahead of the MC1331, acting as a notch for the vision input (thus disposing of the unwanted beat) and a bandpass for the sound input.  The reference feed to the intercarrier multiplier was exactly the same as that to the vision multiplier, so no “cleaner” than was customary.  But the absence of vision sidebands at the carrier input probably reduced the incidence of harmonics at or around the intercarrier.  That was not quantified, but what was claimed was that the separate sound demodulator avoided the need for the usual tradeoff between the amount of sound carrier suppression and the level of the colour subcarrier-sound beat.  With less suppression of the sound carrier in the IF strip, a better signal-to-noise ratio would have been achieved, as demonstrated by Siemens and noted upthread.
 
Motorola MC1331
 
Be that as it may, evidently the MC1331 did not catch on, and appears to have been discontinued before the MC1330.  In part that might have been because it appeared after ICs such as the TBA440 (Siemens, 1972), which combined the complete IF strip with a quasi-synchronous demodulator and an AFC generator, and so addressed the need for increasing levels of integration.  But the MC1331 could also be said to have been “before its time”.  It was described in a 1974 February IEEE paper “A New Video/Sound/Detector IC”, by Milton E. Wilcox of Motorola.
 
 
Cheers,
 
Steve
 
 
Posted : 26/08/2022 12:53 am
Synchrodyne
(@synchrodyne)
Posts: 531
Honorable Member Registered
Topic starter
 
The Motorola MC44302 IC has been mentioned previously.  This multifunctional TV IF/demodulator/AGC/AFT subsystem IC probably represented the zenith of the conventional intercarrier technique.  To recap, the vision and sound carriers stayed together through the IF amplifier.  It used PLL synchronous vision demodulation, with a separate quadrature demodulator for intercarrier generation, the latter applying to AM as well as FM sound carriers.  The use of a separate demodulator (multiplier) for intercarrier generation did not disqualify it from the intercarrier group, bearing in mind that back in the valve era, separate diode demodulators were sometimes used for intercarrier generation in intercarrier receivers.
 
Nonetheless, at the end of the IEEE paper (1) about this IC was an interesting commentary, as follows:
 
>>>>>>>>>>
 
Differential Phase and Sound Buzz
 
Even with all the care taken in this design, some residual differential phase still remains.  Although small, it results in an output on the phase detector that modulates the VCO and the sound intercarrier.  This in turn has the potential of degrading the stereo sound performance.  In addition, there is a quadrature differential phase shift that is produced by the shape of the IF bandpass filter.  Both produce currents in the output of the phase detector which in turn phase modulates the VCO.  This phase modulation is imposed on the sound intercarrier resulting in a video related sound buzz.  These currents can be canceled by injecting the correct amplitude and phase of demodulated video into the PLL filter.  This can be accomplished with the addition of the differential phase correction circuit shown in Figure 8.  The phase detector current that is due to the in-phase differential gain is canceled by the resistor current, and the quadrature component that is induced by the IF filter is canceled by the capacitor current.  With proper adjustment, the differential phase distortion can be reduced to less than 0.5 degrees as well as eliminating any perceptible sound buzz.  The source for the demodulated video to be injected into the PLL filter can be obtained from pins 5 or 6.  This must be determined experimentally for a given printed circuit board layout in order to obtain the best results.  With the use of the correction circuit, this system achieves a similar level of performance to that of a parallel sound IF system.
 
MC44302 Differential Phase Correction Circuit
 
>>>>>>>>>>
 
Thus Motorola had found a way to enable intercarrier sound to achieve performance comparable with that provided by the split sound system.  One could say though that heroic means were required to banish intercarrier buzz altogether.  
 
Something not so clear is whether in doing this, the intercarrier advantage (for FM sound) in conditions where there was a high level of incidental phase modulation (IPM) on the sound carrier was lost.
 
To recap, excessive IPM sometimes happened at the transmission end, and it could also be caused in receiver front ends.  Where it equally affected the vision and sound carriers, then the normal intercarrier generation process would result in its elimination, or at least substantial elimination from the sound carrier.  This would apply with diode demodulators, and also with quasi-synchronous demodulators, including those used in QSS ICs.  But with split sound the IPM was not so eliminated, so appeared as noise/buzz on the demodulated FM sound.
 
With PLL synchronous demodulation and intercarrier sound though, it would depend on the PLL bandwidth.  If very narrow, effectively it would reduce or eliminate the IPM on the regenerated vision carrier, which in turn would mean that the same IPM on the sound carrier would not be eliminated during intercarrier generation.
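 
A toy model (entirely my own construction; a first-order loop response is assumed) puts numbers on that: the IPM surviving on the intercarrier is just the part of the common carrier jitter that the vision PLL fails to track.

```python
import numpy as np

fs = 1e6
t = np.arange(0, 0.1, 1/fs)
# Common IPM on both carriers: a hum component plus a video-rate component, rad
ipm = 0.2*np.sin(2*np.pi*50*t) + 0.2*np.sin(2*np.pi*20e3*t)

def tracked_phase(phi, loop_bw):
    """Phase a first-order PLL reproduces: the IPM low-pass filtered at loop_bw."""
    spec = np.fft.rfft(phi)
    f = np.fft.rfftfreq(len(phi), 1/fs)
    return np.fft.irfft(spec / (1 + 1j*f/loop_bw), len(phi))

for bw in (100, 100e3):                      # narrow vs wide loop bandwidth, Hz
    # Intercarrier phase = (sound IPM) - (regenerated-carrier IPM):
    residual = ipm - tracked_phase(ipm, bw)
    print(f"loop bw {bw:>8.0f} Hz: residual IPM on intercarrier = "
          f"{residual.std():.3f} rad rms")
# The narrow loop leaves most of the common IPM on the intercarrier; the wide
# loop tracks it and so cancels it, preserving the intercarrier advantage.
```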
 
In the Motorola MC44302 case, the differential phase correction circuit was intended to remove the last vestiges of PM that were created in the carrier regeneration process.  But it might also have ensured that incoming IPM on the vision carrier was removed from the regenerated carrier.  If that were the case – and I am not sure that it was – then the intercarrier advantage in adverse conditions would indeed have been lost.
 
As mentioned upthread, split sound was adopted in Japan for high quality receivers for the EIAJ FM-FM stereo system – although there is some evidence of prior use.
 
This was carried over to the high quality TV tuner units that were part of the component video systems introduced in 1981.  The American market versions of those component tuners typically had provision for both split sound and intercarrier sound, with user switching between the two.  The intercarrier sound pathway was usually fairly basic, as per conventional TV receiver practice.  Whether the Japanese domestic versions of those units also had this dual sound facility I do not know.
 
The Sony VTX-1000R was an example of such a TV tuner.  Its operating instructions included the following relative to selection between split and intercarrier sound:
 
>>>>>>>>>>
 
CARRIER selector
 
Normally set to SPLIT.  The split carrier SIF circuit produces high fidelity TV sounds free from buzz noise caused by interference from video signals.  When receiving UHF signals which have been transmitted through several relay stations, setting to INTER may reduce noise.
 
When the ANT/AUX button is set to AUX to receive pay cable TV signals or TV game signals from the AUX terminal, the inter carrier will be selected irrespective of the CARRIER selector setting to reduce the hum which might be caused by a channel converter or other equipment.
 
>>>>>>>>>>
 
Clearly, Sony (and others) expected IPM problems in the USA with UHF relay transmitters, cable TV sources when used via a descrambler, and other sources with inbuilt modulators.
 
The VTX-1000R was tested by “High Fidelity” magazine (2).  In respect of the audio side, the best case weighted signal-to-noise ratios (SNRs) were split sound 64¾ dB and intercarrier 44½ dB, with the worst case being 28 dB split and 31 dB intercarrier.  The worst case was measured with a multiburst video signal, which was acknowledged as being extreme.
 
Either the multiburst pattern generator modulator was putting a lot of IPM on the carriers or the IPM was happening in the tuner, or perhaps it was a combination of both.  And given that intercarrier did not do a lot better than split sound in the worst case, it could have been that the IPM was not the same on the vision and sound carriers.  In general, it looks as if split sound would need to be more than 20 dB down before it was worth switching to intercarrier.
 
The Sony split-sound circuitry is shown in this block schematic:
 
Sony VTX 1000R Split Sound
 
The down conversion from the 45.25 MHz 1st SIF to the 10.7 MHz 2nd SIF was done with a VCO that had AFC control from the 10.7 MHz FM demodulator, thus rendering the split sound channel relatively immune to normal main channel tuning and AFC action shifts.
 
Returning to the Motorola MC44302, if we assume that in its best performance mode, it did lose the intercarrier advantage in adverse conditions, then probably it could be restored by switching out the differential phase correction circuit and possibly also altering the PLL loop bandwidth.
 
With the AM intercarrier, there was probably not a significant IPM issue.  The MC44302 used an untuned quasi-synchronous AM demodulator.  Thus any IPM on the intercarrier would appear equally in the reference and signal inputs to the multiplier, and thus not on the demodulated audio.  A tuned quasi-synchronous demodulator might have been different, although perhaps only with a relatively high tank circuit Q.
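 
That cancellation is almost trivial to demonstrate (my own sketch; an ideal limiter and multiplier are assumed):

```python
import numpy as np

fs = 10e6
t = np.arange(0, 0.01, 1/fs)
env = 1 + 0.5*np.cos(2*np.pi*1e3*t)            # AM programme envelope
jitter = 0.3*np.sin(2*np.pi*7e3*t)             # incidental phase modulation, rad
sig = env*np.cos(2*np.pi*1e6*t + jitter)       # jittery AM intercarrier

ref = np.sign(sig)                             # ideal limiter = untuned reference
audio = sig*ref                                # multiplier output

# sig*sign(sig) == |sig| == env*|cos(...)|: the common phase term drops out of
# the product, so low-pass filtering recovers env regardless of the IPM.
print(np.allclose(audio, np.abs(sig)))         # True
```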
 
 
 
(1) An Advanced Multi-Standard TV Video/Sound IF, Mike McGinn and Jade Alberkrack, Motorola Inc., Analog I.C. Division, Tempe, Arizona, IEEE 1994 August.
 
(2) Video Equipment Report, Sony VTX-1000R TV Tuner/Switcher, High Fidelity, 1984 May, p.48ff 
 
 
 
Cheers,
 
 
Steve
 
 
Posted : 07/09/2022 5:32 am
Synchrodyne
(@synchrodyne)
Posts: 531
Honorable Member Registered
Topic starter
 
Another approach to improving intercarrier sound quality was developed by Sanyo in the late 1980s.  This was named as the “Super Split PLL” (SSP) system, and viewed as being a derivative of the PLL quasi-split approach.
 
It was described in a 1989 August IEEE paper, “New PIF + SIF IC for Improved Picture and Sound Quality”.
 
Sanyo’s analysis of the then-existing situation was:
 
“Thus, the quasi-split system using the PLL detector may be the best of existing systems.  However, the quasi-split system using the existing PLL detector is not sufficiently effective against buzz and buzzbeat interference to satisfy the demanding requirements for high picture and sound quality in CTVs and VCRs.”
 
It identified three major causes of the buzz and buzzbeat problems, as:
 
a. Differential Phase (DP) characteristic of amplifier and detector circuit
b. Detection system differences
c. Television signal transmitted by vestigial sideband system that compresses the band.
 
Item (a) was essentially addressed by appropriate attention to device and circuitry design, and item (b) by the use of PLL demodulation.
 
That left item (c), which PLL demodulation alone could not eliminate.  The problem was illustrated thus:
 
1. Sanyo VSB Phase Distortion
 
Sanyo addressed this problem by introducing a Nyquist Slope Cancelling (NSC) circuit element between the vision IF amplifier and the input to the PLL:
 
2. Sanyo PLL with NSC
 
The details of the NSC circuit were as follows:
 
3. Sanyo NSC Circuit
 
And the resultant phase diagram was:
 
4. Sanyo NSC PLL Phase Diagram
 
Sanyo provided the following comparative performance chart, indicating that its SSP system was superior to both a single PLL demodulator for vision and intercarrier generation, and to QSS with a PLL for intercarrier generation.
 
5. Sanyo SSP Comparative Performance
Some qualitative observations are warranted:
 
A major objective of the original QSS system was to eliminate the Nyquist slope in the sound channel, so what Sanyo was doing was essentially a different pathway to the same end.  But the Sanyo method also improved vision demodulation, and so picture quality, and that was a co-objective.
 
The original QSS system used quasi-synchronous demodulation, which introduced larger errors, so the use of PLL fully synchronous demodulation, with its better reference carrier-to-noise ratio, was desirable.
 
The simultaneous use of PLL demodulation in both the QSS and VIF pathways, with separate PLLs operating at the same frequency, might have been seen as potentially problematical.  Probably it would have required separate QSS and VIF ICs, and careful component layout to avoid mutual interference.  I am not sure if in fact it had been done before Sanyo developed SSP.
 
Alternatively, using the same PLL as the reference source for both the QSS and vision demodulators, with the PLL fed from the VIF, brought back the Nyquist slope problem.  At first glance, one might wonder if the PLL could not be fed from the QSS channel, where it was without the Nyquist slope.  But adjusting the reference phase so that it was correct for vision demodulation could have been tricky.  On the other hand, although quadrature phase was optimum for QSS demodulation, it was not so critical for what was basically a frequency changing operation.  Thus a single PLL was necessarily driven from the vision channel.
 
On PLL loop bandwidth, National had made some observations in respect of its LM1823 VIF IC with PLL demodulation, of the earlier 1980s.  It was possible to make the loop bandwidth low enough, say around 70 kHz, that the adverse effects of the Nyquist slope were negligible.  But a wider bandwidth, say 500 kHz, was needed to deal with incidental carrier phase modulation (ICPM).  If the reference followed the ICPM, then the ICPM did not appear in the demodulated vision or the intercarrier; if it did not, the ICPM was detrimental to both.  By inference, the wider bandwidth, although minimizing ICPM effects, let in the Nyquist slope PM effects.
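
The trade-off is easy to put rough numbers on with a first-order loop model; the 1 MHz figure used below for video-rate Nyquist slope PM is my own illustrative assumption:

```python
import math

def tracking_fraction(f_mod_hz, f_loop_hz):
    # First-order PLL approximation: the reference follows a phase disturbance
    # at f_mod with magnitude |H| = 1 / sqrt(1 + (f_mod / f_loop)^2).
    return 1 / math.sqrt(1 + (f_mod_hz / f_loop_hz) ** 2)

for f_loop in (70e3, 500e3):                      # the two LM1823-era choices
    icpm = tracking_fraction(15.734e3, f_loop)    # line-rate ICPM component
    nyq = tracking_fraction(1e6, f_loop)          # video-rate Nyquist slope PM
    print(f"loop {f_loop / 1e3:.0f} kHz: tracks {icpm:.1%} of line-rate ICPM, "
          f"{nyq:.1%} of 1 MHz Nyquist slope PM")
```

The 500 kHz loop tracks, and so cancels, nearly all of the line-rate ICPM, but it also lets the reference follow almost half of the video-rate PM created by the Nyquist slope, which is what the NSC circuit then removed at source.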
 
Sanyo did not provide any information about the PLL bandwidth in its system.  But the NSC circuitry would have allowed setting it wide enough to deal with ICPM without having to worry about Nyquist slope PM.
 
Sanyo’s priority appears to have been obtaining improved performance with multichannel sound systems that used subcarriers, namely the Japanese FM-FM system and the American Zenith-DBX system.  With their much wider modulation bandwidths, these were probably more susceptible to intercarrier buzz than the German Zweiton system, so perhaps benefitted more from demodulation systems with very high reference carrier-to-noise ratios.
 
The Sanyo IC embodying SSP was the LA7570, shown here with its basic application circuit.
 
6. Sanyo LA7570 Basic Application
The SIF pathway is shown in red.  As may be seen, it was separate from the VIF pathway from a point ahead of the SAW filters, with the only cross-connection being the reference feed from the PLL VCO.  That was probably Sanyo’s rationale for referring to its system as being of the split type, although it was in fact an ersatz intercarrier system.  The IC included a single intercarrier sound channel with quadrature demodulator, but the mixer output was also available for separate processing of the intercarrier(s) where desired.
 
Interestingly, Philips used the LA7570 in some of its TV front ends, such as the FS916E.  (In the Philips lexicon, a TV front end was a combination tuner and IF strip delivering baseband video and audio, perhaps so named to differentiate it from a tuner proper.)  Philips also used separate VIF and SIF SAW filters, in line with the original Sanyo design.
 
7. Philips DC03 1992 TV Tuners p.42
Philips described front ends using the LA7570 IC as having split sound, evidently to distinguish them from other models that had intercarrier sound or quasi-split sound.  In fact, that was my entry point to this particular rabbit hole.  I was double checking on the IFs that Philips used for system L’ in multistandard receivers, when I happened to notice the “split sound” entry for some of the front ends.  That demanded early attention!
 
8. Philips DC03 1992 TV Tuners p.08
Cheers,
Steve
 
Posted : 27/09/2022 1:07 am
Synchrodyne
(@synchrodyne)
Posts: 531
Honorable Member Registered
Topic starter
 
Posted by: @synchrodyne

The simultaneous use of PLL demodulation in both the QSS and VIF pathways, with separate PLLs operating at the same frequency, might have been seen as potentially problematical.  Probably it would have required separate QSS and VIF ICs, and careful component layout to avoid mutual interference.  I am not sure if in fact it had been done before Sanyo developed SSP.
An indication that separate PLLs were in fact used is provided by the existence of the Siemens Matsushita M3271K SAW filter.
 
This SAWF was intended for system M receivers with QSS, and its QSS output contained both the vision (narrow band) and sound carriers, thus conforming to the original QSS SAWF form.  A comment included in the datasheet was:
 
“Phase shift between picture and sound channel optimized for twin PLL ICs.”
 
My take on that is that the SAWF was for use in receivers that had a VIF with PLL demodulation and a QSS-type SIF with its own PLL demodulator, possibly with both VIF and SIF within the same IC.
 
Siemens M3271K
 
The phase shift comment is interesting.  One thought there is that the SAWF outputs were arranged so that the VIF vision carrier and the SIF vision carrier were more-or-less in quadrature when they arrived at their respective PLLs.  That way, mutual interference might be minimized.  But of course, it could well have been more complicated than that.
 
As said, the original QSS SAWFs had a double-humped output on the sound side, sending a narrow bandwidth vision carrier as well as the sound carrier to the QSS IC.  But when ICs were introduced that had a single PLL on the VIF side that also provided the feed for QSS demodulation, then QSS SAWFs that provided only the sound carrier output on the sound side became available (*).  There were also separate sound carrier SAWFs for this purpose.
 
(*)  In fact Plessey had done just that in 1980, before the QSS era, with its SW180 SAWF, intended for use with the SL1440 parallel processing IC.  But that idea did not go anywhere at the time, possibly because of the SL1440 itself.
 
Cheers,
Steve
 
Posted : 28/09/2022 3:01 am
Synchrodyne
(@synchrodyne)
Posts: 531
Honorable Member Registered
Topic starter
 
Another approach to intercarrier buzz reduction in conventional QSS was shown in the Philips TDA3845 IC of 1993 or thereabouts.
 
Philips TDA3845
 
 
 
Here a notch filter was included in the reference tank circuit in order to notch out any residual sound carrier remaining after the SAWF.  This was done by adding a single series capacitor in the tank circuit:
 
TDA3845 Application
 
 
 
The benefits were described thus:
 
“The series capacitor provides a notch at the sound carrier frequency in order to produce more attenuation for the sound carrier in the PC reference channel. The ratio of parallel to series capacitance depends on the ratio of picture to sound carrier frequency which has to be adapted to other TV transmission standards, if required.
 
“The result is an improved ‘intercarrier buzz’ in the stereo system B/G, particularly with 250 kHz video modulation (up to 10 dB improvement in sound Channel 2), or to suppress 350 kHz video modulated beat in the digitally modulated NICAM subcarrier.”
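
The arithmetic behind that capacitance-ratio remark can be sketched, assuming the simplest plausible topology: a parallel L–Cp tank with the added capacitor Cs in series with it, the whole driven from a current source.  The impedance then peaks at the parallel resonance (picture carrier) and has a zero, i.e. a notch, below it:

```python
import math

# Peak:  fp = 1 / (2*pi*sqrt(L*Cp))          -- parallel resonance
# Notch: fn = 1 / (2*pi*sqrt(L*(Cp + Cs)))   -- series resonance, below fp
# System B/G IFs assumed: picture 38.9 MHz, sound 33.4 MHz.
fp, fn = 38.9e6, 33.4e6
cs_over_cp = (fp / fn) ** 2 - 1
print(f"Cs/Cp = {cs_over_cp:.2f}, Cp/Cs = {1 / cs_over_cp:.2f}")
# -> Cs/Cp ~ 0.36: the ratio is set purely by the picture-to-sound carrier
#    frequency ratio, matching the datasheet remark quoted above.
```

Whether the TDA3845 tank was configured exactly this way I cannot say, but the dependence of the capacitance ratio on the carrier frequency ratio follows from any such topology.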
 
One may infer that this quest for improvement was driven in part by the fact that the Zweiton second carrier was apparently more susceptible to buzz than the first carrier.  It would also appear that the NICAM subcarrier was not immune to vision carrier interference in the intercarrier generation process.
 
 
Cheers,
 
Steve
 
Posted : 17/12/2022 12:14 am
Nuvistor reacted
Synchrodyne
(@synchrodyne)
Posts: 531
Honorable Member Registered
Topic starter
 
 
Upthread (1) I said the following in respect of high-quality component TV tuners that were fitted with both split and intercarrier sound systems:
 
<<<<<<<<<<<<
 
As mentioned upthread, split sound was adopted in Japan for high quality receivers for the EIAJ FM-FM stereo system – although there is some evidence of prior use.
 
This was carried over to the high quality TV tuner units that were part of the component video systems introduced in 1981.  The American market versions of those component tuners typically had provision for both split sound and intercarrier sound, with user switching between the two.  The intercarrier sound pathway was usually fairly basic, as per conventional TV receiver practice.  Whether the Japanese domestic versions of those units also had this dual sound facility I do not know.
 
The Sony VTX-1000R was an example of such a TV tuner.  Its operating instructions included the following relative to selection between split and intercarrier sound:
 
>>>>>>>>>>
 
CARRIER selector
 
Normally set to SPLIT.  The split carrier SIF circuit produces high fidelity TV sounds free from buzz noise caused by interference from video signals.  When receiving UHF signals which have been transmitted through several relay stations, setting to INTER may reduce noise.
 
When the ANT/AUX button is set to AUX to receive pay cable TV signals or TV game signals from the AUX terminal, the inter carrier will be selected irrespective of the CARRIER selector setting to reduce the hum which might be caused by a channel converter or other equipment.
 
>>>>>>>>>>
 
Clearly, Sony (and others) expected IPM problems in the USA with UHF relay transmitters, cable TV sources when used via a descrambler, and other sources with inbuilt modulators.
 
<<<<<<<<<<<<
 
Recently, in developing information on cable TV set-top adaptors and their intermediate frequencies for the “Television Receiver Intermediate Frequencies” thread ( https://www.radios-tv.co.uk/community/black-white-tvs/television-receiver-intermediate-frequencies/), I found an interesting commentary in respect of intercarrier sound in a 1987 IEEE paper about the use of stereo TV sound (American BTSC [Zenith-DBX] system) (2).
 
It is simplest to quote from that paper, with my emphasis:
 
“RF set-top converters should not damage the BTSC signal, so long as an intercarrier detector is employed in the receiver. The oscillators in the set-top terminal tend to introduce phase noise to the picture and sound carriers. This is especially true if the oscillator is phaselocked. In wide bandwidth set-tops covering 50-550 MHz, the first LO tunes from 668 to 1166 MHz, 75% of an octave. With this wide a tuning range and consumer size dollars to spend, the PLLs usually introduce enough phase noise that direct detection of the sound signal would introduce unacceptable noise. The same can be said for many game and VCR modulators. Since the picture and sound carriers are simultaneously passed through the mixer, they will receive identical phase noise. In the TV set the picture and sound carriers are mixed to obtain the 4.5 MHz sound signal in an intercarrier detector. This process removes the phase noise from the 4.5 MHz signal, so quality detection is possible. Intercarrier detection is universal in TV receivers and in VCRs, as well as in many TV tuner/decoder units. However, we are aware of at least one case in which a manufacturer marketed a TV band audio tuner which directly detected the sound carrier. Noise from this device was totally unacceptable when a set-top terminal was used ahead of it.”
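
The cancellation argument in that quote is easy to verify numerically.  A minimal sketch, assuming the standard US IFs and a shared random-walk phase-noise term from the set-top LO:

```python
import numpy as np

# Both carriers pass through the same set-top mixer, so they acquire an
# identical phase-noise term; forming the 4.5 MHz intercarrier removes it.
rng = np.random.default_rng(0)
fs = 200e6                                     # assumed sample rate
t = np.arange(0, 1e-3, 1 / fs)
phase_noise = np.cumsum(rng.normal(0, 0.01, t.size))   # several radians rms

pic = np.exp(1j * (2 * np.pi * 45.75e6 * t + phase_noise))   # picture IF
snd = np.exp(1j * (2 * np.pi * 41.25e6 * t + phase_noise))   # sound IF

inter = pic * np.conj(snd)    # intercarrier formation by mixing
residual = np.unwrap(np.angle(inter)) - 2 * np.pi * 4.5e6 * t
print(np.max(np.abs(residual)))   # down at floating-point level: noise gone
```

Direct detection of the 41.25 MHz sound carrier, by contrast, would demodulate the full phase-noise term as spurious FM, which is exactly the failure mode described for the TV band audio tuner.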
 
By the time that this was written, the shortcomings of conventional intercarrier sound, particularly in respect of stereo, had been well articulated, notably so in the seminal Fockens & Eilers (Zenith) IEEE paper of 1981 (3).
 
During the 1980s, the TV receiver industry had been busy developing improved techniques, such as quasi-split sound, as recorded earlier in this thread.  In part this was an attempt to approach split sound quality whilst still retaining the phase noise cancellation properties of intercarrier sound.  Against that background, the quoted piece looks like a defensive position on the part of the CATV industry.  The claim that “quality detection” is possible with intercarrier was certainly something of a stretch.
 
Regarding the final comment in the quoted piece, about the “poor” performance of a TV band audio tuner when connected to a CATV set-top converter: this might well have referred to the Pioneer TVX-9500 TV sound tuner that was marketed in the USA in the later 1970s.  Such items were more common in Japan, but few seem to have been exported.  This was a double conversion TV sound channel receiver, with 41.25 and 10.7 MHz IFs, and clearly not of the intercarrier type.  The turn of phrase “noise from this device” was quite erroneous, as the demodulated noise came from unwanted phase modulation that was already on the sound carrier as it left the set-top converter.  The device merely exposed prior poor performance.
 
Anyway, the key point is that, by its own admission, the CATV industry validated the concerns that Sony and others had earlier expressed about the sound quality available from CATV set-top converters.  Very likely there was an economics/performance trade-off that precluded doing much about the problem at the time, at least in respect of mass-market set-top units.  But it might have been better to acknowledge that the conventional intercarrier system was no longer universally used by 1987, and to have avoided the “quality” descriptor in respect of its performance.  As an aside, that paper presented an argument for not using double-conversion tuners for off-air reception, at least for higher performance receivers, and I’ll take that over to the TV Intermediate Frequency thread (4).
 
Synthesizer performance (including reduction of phase noise) vs. cost was on a steady improvement curve in the 1980s, so it might not have been too long before the above-mentioned problem effectively disappeared.
 
 
 
(2) IEEE paper 1987 February; “Cable and BTSC Stereo”; James O. Farmer (Scientific-Atlanta, Inc.) & Alex B. Best (Cox Communications, Inc.)
(3) IEEE paper 1981 August; “Intercarrier Buzz Phenomena Analysis and Cures”; Pieter Fockens & Carl G. Eilers (Zenith Radio Corporation)
 
 
 
Cheers,
 
Steve
 
 
Posted : 09/12/2023 9:45 pm
Synchrodyne
(@synchrodyne)
Posts: 531
Honorable Member Registered
Topic starter
 

Posted by: @synchrodyne

I have now found and read the 1947 Parker article mentioned in my initial posting, namely: Parker, L.W., TV Intercarrier Sound System, Tele-Tech., 6:26 (October, 1947).

The article was billed as being a relatively simple explanation of the intercarrier technique, and its overall tone was on the optimistic side.

Apparently the technique had been proposed by Parker a few years previously.  That would explain why Parker’s work seems to have taken precedence over that of Dome.  Also, the “intercarrier” name was applied by the FCC; previously it had been referred to as a “difference frequency” system.
 
In fact, Parker had patented the intercarrier system, US2448908 of 1948 September 07, filed 1944 July 13.  Here is the diagram page from that patent:
 
US2448908 19440713 Parker Intercarrier diagram
 
 
The description in the patent is much the same as in the previously mentioned Tele-Tech article.
 
It is interesting that the filing date, mid-1944, preceded the FCC’s 1945 rearrangement of the US TV channel frequencies, and the concomitant changes to the FM sound channel parameters.
 
The original NTSC standard specified a sound channel deviation of ±75 kHz, with 100 microsecond pre-emphasis, the same as then applied to FM broadcasting in the 40 MHz band.  In 1945, those numbers were changed to ±25 kHz and 75 microseconds.  FM broadcasting in the new 88 to 108 MHz band continued to use ±75 kHz deviation, but with 75 µs pre-emphasis.
 
The argument for reduced deviation in the TV sound case was that it would allow narrower bandwidth receiver sound channels, whilst ensuring that the received sound signal-to-noise ratio was adequate when the signal strength was low enough to be borderline in terms of vision signal-to-noise ratio.  I imagine that the determination was made on the assumption that receivers would use the split sound technique.  Certainly, the reduced deviation would have made worse the problems inherent with intercarrier sound.  The reduced pre-emphasis, with correspondingly reduced de-emphasis in the receiver, would also have reduced the extent to which higher frequency intercarrier artefacts could be minimized.  But overall, there were good reasons for a lower level of pre-emphasis in FM broadcast systems.
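
Some back-of-envelope numbers, using the standard FM relations, show the size of those changes:

```python
import math

# Pre-emphasis time constant -> 3 dB corner of the treble boost:
for tau_us in (100, 75, 50):
    corner = 1 / (2 * math.pi * tau_us * 1e-6)
    print(f"{tau_us} us -> corner ~ {corner / 1e3:.2f} kHz")

# With the receiver sound bandwidth held constant, FM output SNR scales with
# the square of the deviation, so cutting 75 kHz to 25 kHz costs:
print(f"{20 * math.log10(75 / 25):.1f} dB")   # ~9.5 dB, partly recovered by
                                              # the narrower sound channel
```

So the 1945 change traded roughly 9.5 dB of ultimate FM advantage against the noise-bandwidth saving of the narrower sound channel, and moved the pre-emphasis corner from about 1.6 kHz up to about 2.1 kHz.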
 
One could say that the Europeans did better in selecting ±50 kHz deviation and 50 microseconds pre-emphasis for TV sound.  These numbers appear to come from the early Russian work on the 625-line system, circa 1944.  Russia (and the Eastern bloc) also used those parameters for FM sound broadcasting in Band I.  Also, around 1944, the BBC determined that 50 microseconds was the optimum pre-emphasis for FM, rather than the American 100 microseconds.  It reaffirmed that choice after the American change to 75 µs.  For sound broadcasting it found the ±75 kHz deviation close enough to optimum not to warrant any reconsideration.  
 
Notwithstanding the glowing terms in which the intercarrier system was presented, it was not all that long before it was seen as having problems to address.  That is illustrated by an RCA patent, US2901536 of 1959 August 25, filed 1955 May 31, “Intercarrier Sound Buzz Reducing Circuit”.
 
This reduced the buzz caused by field sync pulses, by both reducing the magnitude and modifying the waveform shape of their intrusive effects.  It was done around an intercarrier channel limiting stage, partly by modifying the frequency and time constant characteristics of the input circuit and partly by feedback from the output using a suitable time constant and frequency dependent network, as shown here:
 
US2901536 19550531 RCA Intercarrier Buzz Reduction diagram
 
 
The implication is that post facto correction of buzz was easier than avoidance, suggesting that the problem was inherent, or very close thereto, with intercarrier sound.  To what extent such circuits were actually used is unknown.
 
 
 
Cheers,
 
Steve
 
Posted : 13/09/2024 4:34 am
Synchrodyne
(@synchrodyne)
Posts: 531
Honorable Member Registered
Topic starter
 
I should have noted that the quote in the previous post about the Parker article was taken from this post:
 
Cheers,
 
Steve
 
Posted : 13/09/2024 11:45 pm
Synchrodyne
(@synchrodyne)
Posts: 531
Honorable Member Registered
Topic starter
 
 
Some information on the noise degradation inherent in intercarrier sound as compared with split sound was provided in a 1963 IEEE paper, “Sound signal-to-noise ratio in Intercarrier Sound Television Receivers”, by Jack Avins of RCA.
 
The paper was actually a critique of the FCC’s then recent decision to allow UHF TV transmitters to have a sound carrier power level of 10% to 70% of that of the picture carrier, in place of the previous 50% to 70%.  Split sound performance was used to define the ultimate signal-to-noise ratio (SNR) attainable in any situation.
 
This chart from the paper plots the theoretical sound SNR against picture SNR for two cases of sound carrier to picture carrier ratio, namely 70% and 10%, for a range of signal input levels and for two typical receiver noise factors.
 
Intercarrier SNR
 
 
In the intercarrier case, the SNR at any point is inferior to that obtained with split sound, and varies according to the picture level, being best at sync level, and worst at white level, 16.6 dB down for the better case of sound carrier at 70% of picture carrier level.
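
A rough cross-check on that figure, assuming system M carrier levels and treating the vision carrier purely as the conversion “local oscillator”:

```python
import math

# Negative modulation: the vision carrier sits at 100 % at sync tips and at
# about 12.5 % at peak white (system M), so the intercarrier conversion gain,
# and with it the sound SNR, swings by roughly:
print(f"{20 * math.log10(100 / 12.5):.1f} dB")   # ~18 dB sync-to-white, of
                                                 # the same order as Avins'
                                                 # 16.6 dB figure
```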
 
For a typical receiver, the actual curves were:
 
Intercarrier SNR Typical
 
 
 
RCA also noted the main causes and their pathways for “excess noise” in intercarrier systems:
 
Intercarrier Noise Sources
 
Intercarrier Noise Sources 2
 
 
 
It was noted that synchronous demodulation could solve the noise problem, but at the time (1963), it was seen as too involved and costly for use in domestic receivers.  (The first quasi-synchronous vision demodulator IC, the Motorola MC1330, was announced in 1969.)
 
RCA’s main concern was that a radiated sound carrier power as low as 10% of the vision carrier radiated power would result, in some cases of low received signal strength, in an unsatisfactory sound SNR where the picture was still watchable.  It said:  “In general, field experience with our present system indicates that the margin of safety has been largely used up by the universal adoption of intercarrier sound.”
 
RCA had also noted:
 
“The data which has been presented up to this point is based on a uniform raster, either white, gray, or black, with no picture or synchronizing information. This presents the sound signal-to-noise ratio more favorably than under typical listening and viewing conditions where the presence of sync and video components causes sound interference. For example, the data does not include the effect of intermodulation and distortion of the video components to produce spurious 4.5 Mc/s components; a particular example is the conversion of 2.25 and 1.5 Mc/s video and noise components to 4.5 Mc/s sound interference. The video and sync buzz encountered in pictures having high peak white intensities is another example. These are all the more objectionable because they come and go as a function of picture content.”
 
The foregoing tends to reinforce the notion that intercarrier sound, at least in its initial form with diode type vision demodulation, involved a major tradeoff in sound quality in favour of convenience and cost.
 
The fundamental issues with diode (and diode type) demodulation can be gleaned from a simple qualitative analysis.
 
The diode (and other forms of rectifying AM demodulator) is essentially a squaring device – effectively it multiplies the incoming signal by itself.  Every sinusoid in the incoming signal is multiplied by every other sinusoid.  Thus produced are a multiplicity of intermodulation products.  It happens that if the incoming signal is of the double sideband (DSB) AM type, then all of the unwanted intermodulation products that fall within the desired baseband output range cancel each other out, leaving a “clean” demodulated signal.  However, if there is any asymmetry in the incoming sidebands, then some intermodulation products remain in the output, thus being the source of distortion.  Such distortion is evident, for example, when an AM receiver is off-tuned somewhat, resulting in sideband asymmetry in the signal presented to the diode.
 
A vestigial sideband TV vision signal is definitely not of the symmetrical DSB type.  In the system I case, it is DSB up to 1.25 MHz modulating frequency, and single sideband (SSB) from 1.25 to 5.5 MHz.  Furthermore, the Nyquist slope in the IF strip changes the 0 to 1.25 MHz portion into an asymmetrical DSB form.  The sound carrier at 6.0 MHz may be viewed as an extension of the upper sideband, and so just another desired information frequency within it.  Anyway, it is apparent that, absent any sideband symmetry in the signal arriving at the diode demodulator, the output will contain abundant unwanted intermodulation products in the desired “information” band, extending from near 0 up to 6 MHz and inclusive of the intercarrier, so adding “noise” to the latter.
 
One may also look at intercarrier generation as a mixing process between the vision and sound carriers, with the former acting as a “local oscillator” source.  It is not, though, a very “clean” local oscillator, carrying both amplitude modulation and phase modulation (PM).  The latter arises because SSB AM is in fact a mix of AM and PM.  (Any sideband asymmetry in a DSB system also shows up as PM.)  This mix of AM and PM is transferred to the intercarrier.  Although the AM can be removed (or at least significantly diminished) by limiting, the PM cannot, and is demodulated as if it were FM.  So, one cannot escape the problem by using a different analysis pathway.
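
The AM-plus-PM nature of asymmetric sidebands is easily shown with complex envelopes.  Here the lower sideband is knocked down to 25% as a crude stand-in for a Nyquist slope; the numbers are purely illustrative:

```python
import numpy as np

# Symmetric DSB has a purely real complex envelope (AM only).  Attenuating
# one sideband makes the envelope complex, i.e. adds phase modulation that
# limiting cannot remove and an FM demodulator will duly detect.
fs, fm, m = 1e6, 10e3, 0.8
t = np.arange(0, 1e-3, 1 / fs)
usb = (m / 2) * np.exp(+1j * 2 * np.pi * fm * t)
lsb = (m / 2) * np.exp(-1j * 2 * np.pi * fm * t)

dsb = 1 + usb + lsb             # symmetric sidebands
asym = 1 + usb + 0.25 * lsb     # lower sideband attenuated to 25 %

for name, env in (("DSB", dsb), ("asymmetric", asym)):
    pm = np.max(np.abs(np.angle(env)))      # peak phase deviation, radians
    print(f"{name}: peak PM = {pm:.3f} rad")
# DSB: 0.000 rad (pure AM); asymmetric: ~0.33 rad of modulation-rate PM.
```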
 
With diode demodulators, the distortion caused by sideband asymmetry can be reduced somewhat if the carrier is exalted.  In fact, this was in effect done in typical intercarrier receivers, not by elevating the vision carrier, but by attenuating the sound carrier by around 20 dB early in the IF strip.  As noted by RCA, that in and of itself could have an adverse effect on the SNR.  But even with that, intercarrier sound quality was still markedly inferior to that obtainable with split sound.
 
Synchronous vision demodulation, though, promised a pathway to improved intercarrier sound.  Here, the incoming signal is multiplied not by itself, but by a reconstructed or reconditioned carrier as a reference.  If nothing else, the effective carrier exaltation reduced output distortions.  PM on the reference, resulting from the initial sideband asymmetry and the effects of the Nyquist slope, limited the improvements that could be obtained with the early forms of synchronous vision demodulation.  The arrival of stereo TV sound forced the issue, though.  Initially, in the late 1970s, Japanese practice for high quality stereo TV sound reception was to revert to split sound.  As already noted in this series, the quest to do better with some form of intercarrier sound was first addressed with QSS in the early 1980s, followed by a raft of other ideas.  Eventually, with the use of special, albeit relatively simple, “adjust on test” circuitry, the point was reached where intercarrier sound quality could match split sound quality.  Not only that, but at this level, intercarrier could be used for AM sound as well.
 
 
 
Cheers,
 
Steve
 
Posted : 14/09/2024 4:42 am