World Analogue Television Standards and Waveforms


Specifications of all the analogue television transmission standards defined by CCIR, and waveforms of the 405-, 525-, 625- and 819-line standards

This page: | Contents | Timeline | Scanning | Interlace | Aspect Ratio | Resolution | Gamma | Colour | Levels | Transmission | Bookmarks |


THE INFORMATION presented in this section has been compiled from several modern and historical sources and, errors and omissions excepted, the intention is to give a summary of the various standards at the time that they were current. It is hoped that present-day standards are also accurately accounted for, and to this end any corrections would be gratefully received (please E-mail me with any comments). Thanks are especially due to Mark Carver, Steve Palmano and Peter Vince for help and advice. Written sources consulted include [Electronics and] Wireless World and [Practical] Television magazines, textbooks by Benson KB and Whitaker JC, Carnt PS and Townsend GB, Holm WA, Hutson GH, Kerkhov F and Werner W, and technical publications from the BBC, EBU, IBA and ITU.

I am particularly indebted to Peter Vince for recently spotting certain anomalies in the ITU document BT.470-6 from which many of the details in these pages were taken. It has been superseded by BT.1700 and BT.1701, and the values quoted in these pages are now verified by those, and by SMPTE 170M-1999 in relation to the NTSC standard. Many of the NTSC parameters feature recurring decimal fractions, and I have indicated these throughout with square brackets, for example fSC = 3 579 545.[45]Hz

Timeline

SOME SIGNIFICANT dates in the comings and goings of television line standards, colour standards and transmission bands. Starts and ends of official services are in roman type, while other landmarks and experimental services are in italics.


  • 1930 United Kingdom: Start of Baird 30/12.5 experimental television service from the BBC Brookmans Park medium wave radio station in London: vision 356.3m, sound 261.3m (1 Apr 1930)
  • 1933 United States of America: Start of RCA 240/24 experimental television service on 45MHz from station W2XBS on the Empire State Building
  • 1935 Germany: Start of 180/25 public television service by the German Post Office in Berlin (22 Mar 1935)
  • 1935 France: Start of 441/50 television service in vhf Band I from Eiffel Tower in Paris (26 Apr 1935). Closed after a year and reopened as 445/50 (1 Jul 1937). Changed back to 441/50 in 1942 for German Wehrmacht service. Closed in Aug 1944 and re-opened by the French in Oct 1946
  • 1936 United Kingdom: Start of Baird 240/25 and EMI System A 405/50 public television service in vhf Band I by the BBC in London (2 Nov 1936)
  • 1936 United States of America: Start of RCA 343/60 experimental television service on 45MHz from station W2XBS on the Empire State Building
  • 1937 United Kingdom: Closure of Baird 240/25 public television service by the BBC (7 Feb 1937)
  • 1939 United States of America: Start of RMA 441/60 experimental television service


  • 1941 United States of America: Start of System M 525/60 public television service (1 Jul 1941)
  • 1943 Germany: Closure of 441/25 public television service by the German Post Office in Berlin due to allied bombing (Nov 1943)
  • 1949 France: Start of System E 819/50 television service in vhf Band III from Eiffel Tower in Paris (Dec 1949)


  • 1950 West Germany: First test transmissions of System B 625/50 television service from Hamburg on vhf Bands I and III (Jul 1950)
  • 1951 United States of America: Start (and closure after five months) of RCA 441/120 experimental field-sequential colour television service
  • 1952 United States of America: Start of System M 525/60 transmissions in uhf Bands IV and V
  • 1954 United States of America: Start of System M/NTSC 525/60 first-ever public compatible colour television service (1 Jan 1954)
  • 1954 United Kingdom: Demonstration of System A 405/50 NTSC colour by the Marconi company to press in London (May 1954)
  • 1954 United Kingdom: Start of System A 405/50 NTSC colour out-of-hours test transmissions in vhf Band I by the BBC in London (7 Oct 1954, and then regularly from 10 Oct 1955)
  • 1955 United Kingdom: Start of System A 405/50 service in vhf Band III by the ITA (22 Sep 1955)
  • 1956 France: Closure of 441/50 television service in vhf Band III from Eiffel Tower in Paris due to transmitter burning out (Jan 1956)
  • 1957 United Kingdom: Start of System A 405/50 NTSC colour test transmissions in uhf Band IV by the BBC in London (11 Nov 1957 until 1960)


  • 1961 Republic of Ireland: Start of public television service by RTÉ using System A 405/50 and System I 625/50 in vhf Band I (System A: 31 Dec 1961, System I: May 1962)
  • 1962 United Kingdom: Start of System I 625/50 monochrome test transmissions in uhf Band IV by the BBC in London
  • 1963 United Kingdom: Start of System I 625/50 NTSC colour test transmissions in uhf Band IV by the BBC in London (Feb 1963 until 1964)
  • 1963 United Kingdom: Start of System I 625/50 SECAM colour test transmissions in uhf Band IV by the BBC in London (Mar 1963 until 1964)
  • 1963 France: Start of System L 625/50 television service in uhf Band IV from Eiffel Tower in Paris (late 1963)
  • 1964 United Kingdom: Start of System I 625/50 second programme (BBC2) by the BBC in London in uhf Band IV (20 April 1964)
  • 1965 West Germany: First demonstration for press of 625/50 PAL colour in Berlin (Feb 1965)
  • 1965 United Kingdom: Start of regular System I 625/50 PAL colour out-of-hours test transmissions on BBC2 by the BBC in London in uhf Band IV (24 May 1965)
  • 1966 United Kingdom: PAL adopted officially as UK colour system (3 Mar 1966)
  • 1967 United Kingdom: Start of System I/PAL colour test transmissions of scheduled programmes on BBC2 (1 Jul 1967) followed by full service (2 Dec 1967)
  • 1967 West Germany: Start of first official European full colour service on System B/G/PAL (autumn 1967)
  • 1967 France/USSR: Start of full 625/50 SECAM III colour service simultaneously in France and USSR (1 Oct 1967)
  • 1968 France: TF1 first programme duplicated on System L 819/50 on uhf (Low power transmitters - only example of non-625 or -525 services on uhf)
  • 1969 Belgium: Closure of RTB (French language) System E 819/50 service on vhf - replaced by System C 625/50 +ve mod as already used by BRT (Flemish language) service (Mid-Feb 1969)


  • 1971 Luxembourg: Closure of System E 819/50 service on vhf - replaced by System L/SECAM 625/50 +ve mod colour service as used on uhf (1 Sep 1971)
  • 1973 United Kingdom: Start of regular System I 625/50 teletext test transmissions by BBC Ceefax (23 Sep 1973) followed by IBA Oracle
  • 1977 United Kingdom: Experimental System I 625/50 BBC Ceefax and IBA Oracle teletext transmissions declared officially in service by Home Office
  • 1977 Belgium: Closure of System C 625/50 +ve mod service on vhf - replaced by System B/PAL 625/50 -ve mod colour service as used on uhf (25 Apr 1977)


  • 1982 Republic of Ireland: Closure of System A 405/50 vhf services by RTÉ, transmitters being changed one-by-one to System I/PAL during 1978-82 (last one, Letterkenny, Donegal, closed 23 Nov 1982)
  • 1984 France: Closure of TF1 first programme System E/L 819/50 transmissions on vhf/uhf - vhf bands re-engineered for System L/SECAM 625/50 transmissions of new fourth programme Canal Plus
  • 1985 United Kingdom: Closure of last-ever System A 405/50 transmitters by BBC and IBA on vhf, duplicated since November 1969 in System I/PAL on uhf (England, Wales, Northern Ireland: 2/3 Jan 1985, Scotland: 3 Jan 1985)
  • 1985 Monaco: Closure of last-ever System E 819/50 transmitter by Télé Monte Carlo in Monaco on F10 - replaced by System L/SECAM 625/50 (mid-1985)


  • 1990 Eastern Europe/Africa/Asia: Start of changeover from SECAM to PAL in many OIRT member countries and elsewhere
  • 1998 United Kingdom: Start of DVB-T digital terrestrial service (15 Nov 1998)


  • 2003 Germany: Start of closedown of analogue terrestrial services to be replaced overnight, region-by-region (beginning with Brandenburg/Berlin), with DVB-T
  • 2008 France: Start of first European High Definition (1920x1080/50i) free-to-air terrestrial digital television service via TNT (Télévision Numérique Terrestre) (30 Nov 2008)


  • 2011 France: Closedown of final transmitters in the French vhf/uhf System L/SECAM 625/50 analogue terrestrial network and also the analogue SECAM satellite transmissions from Atlantic Bird 3 at 5°W (Nov 2011).
  • 2012 Germany: Closedown of final transmitters in the German vhf/uhf System B/G PAL 625/50 analogue terrestrial network and also the analogue PAL satellite transmissions from Astra at 19.2°E (Apr 2012).
  • 2012 Republic of Ireland: Closedown of final transmitters of the Irish uhf System I 625/50 analogue network that opened in May 1962 (23 Oct 2012)
  • 2012 United Kingdom: Closedown of final transmitters (Divis and dependants in Northern Ireland) of the UK uhf System I 625/50 analogue network that opened with Crystal Palace on 20 April 1964 (23 Oct 2012)

Scanning

Waveform of a line of video
Oscilloscope trace of a line of video containing a staircase pattern (screen shot inset)

ALL ANALOGUE television systems work in the same way - only the precise characteristics are different. The image is dissected in much the same way that you are reading this text. A sampling device scans across from left to right, and from top to bottom, in a series of near-horizontal lines that are arranged in a rectangle to form a raster. The output from the sampling device comprises a constantly changing voltage which at any moment represents the brightness of a given point in the image.

This video stream is punctuated at the start of each scanning line by a line synchronising pulse, and at the start of each frame, or field, by a series of field synchronising pulses. The sync pulses are separated from the picture information in both time and amplitude. In a standard 1Vp-p (into 75 ohms) 'composite' (mixed syncs plus picture) video signal, the peak white amplitude is +700mV, whilst black and blanking levels sit at 0V and the sync tips reach -300mV. In the 525-line standard, however, peak white is +714mV and sync level is -286mV. In addition, in the US (but not the Japanese) version, black level is 54mV above blanking level.
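As a worked example of the figures above, here is a minimal sketch (the helper name is my own) converting the 525-line levels into IRE units, the scale in which American practice divides the blanking-to-peak-white range into 100 parts:

```python
# A sketch of the composite levels quoted above.  In the 525-line
# standard, +714mV (peak white above blanking) corresponds to 100 IRE.

LEVELS_625 = {"peak_white": 700, "black": 0, "blanking": 0, "sync_tip": -300}   # mV
LEVELS_525 = {"peak_white": 714, "black": 54, "blanking": 0, "sync_tip": -286}  # mV (US)

def mv_to_ire(mv, peak_white_mv=714):
    """Convert a 525-line composite level in millivolts to IRE units."""
    return 100.0 * mv / peak_white_mv

print(mv_to_ire(LEVELS_525["sync_tip"]))  # approximately -40 IRE
print(mv_to_ire(LEVELS_525["black"]))     # approximately +7.5 IRE (the US 'setup')
```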

The most basic difference between television standards is the number of lines per field and the number of fields per second, as determined by the line and field scanning frequencies. Details may be found in the line standards section of this page.

Interlace

A HUNDRED years ago or more, the moving picture industry satisfied itself that the persistence of vision effect in the human eye and brain was such that by projecting a series of still images at the rate of sixteen or eighteen per second, an illusion of smooth movement was created. A few years later, with the introduction of synchronised sound, the projection rate was increased, and standardised at twenty-four pictures per second, partly to give smoother lip movement, and partly to increase the writing speed available for the optical sound-track. However, sixteen, or even twenty-four, complete blackouts per second between pictures create too much flicker for the human brain to tolerate. For that reason a shutter is placed in the light beam of the projector that interrupts it forty-eight or seventy-two times a second, giving a flicker-free impression of smooth movement.

It would be a simple matter to incorporate such a scheme into a modern television system. All it would require is that each frame, as it is received, is written into a digital field store (an integral part of every digital tv receiver) and read out again two, three, or even four times at a much faster rate.

Unfortunately, digital field stores were not available in the nineteen-thirties when analogue television was being developed. Instead, having determined that at least 24 complete frames per second, each with at least 240 scanning lines, were required to provide a watchable picture, the designers came up with an ingenious method of reducing the flicker rate. Using an odd number of lines per picture they simply doubled the field frequency, whilst keeping the line frequency the same. So, in the 405-line standard, the scanning beam reads 202.5 lines over the complete height of the image, then half-way through line 203 it jumps back to the top of the image to read the 202.5 lines that lie in between the first set.

In this way there is still a full 405 lines per picture (though in practice, because the video signal is time-shared between picture information and synchronisation pulses, there are actually only 377 lines of picture per frame, not 405), but there are now fifty fields per second projected onto the tv screen to reduce flicker. As a bonus, because of the way the television camera works, each picture line contains information from the last 1/25 second period since it was last scanned, and 1/50 second earlier than that for the lines immediately above and below, and so this 2:1 interlaced system of alternate staggered fields has the effect of smoothing out movement even further.
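The interlace arithmetic described above can be sketched as follows (a simple illustration; the function name is my own). An odd line count means each field carries a whole number of lines plus a half, so alternate fields interleave automatically:

```python
# Interlace arithmetic for 2:1 interlaced standards.

def scan_parameters(lines_per_frame, frames_per_sec):
    line_freq = lines_per_frame * frames_per_sec   # Hz, unchanged by interlace
    field_freq = 2 * frames_per_sec                # doubled to reduce flicker
    lines_per_field = lines_per_frame / 2          # e.g. 202.5 for 405 lines
    return line_freq, field_freq, lines_per_field

print(scan_parameters(405, 25))  # (10125, 50, 202.5)
print(scan_parameters(625, 25))  # (15625, 50, 312.5)
print(scan_parameters(525, 30))  # (15750, 60, 262.5) - monochrome 525-line values
```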

Interlaced scanning - he called it 'intermeshed' - was first proposed by Randall C Ballard in an RCA patent of 19 July 1932. In it he described a system using a Nipkow disc having 81 holes.

Ironically, now that receivers with digital field stores are with us and it is possible to increase the displayed picture repetition rate beyond fifty per second, the presence of 2:1 interlace causes huge problems for tv set designers and creates nasty motion artefacts on the screen, which are difficult to eliminate.

All 'standard definition' (ie between around 400 and 900 lines per picture) analogue television systems incorporate 2:1 interlace.


Aspect ratio


THE ASPECT ratio of a picture is its width divided by its height and is often expressed as the ratio of two integers. The original aspect ratio of the 405-line standard was 5:4, but it was later changed to 4:3 to be the same as the so-called 'Academy Ratio' of 35mm cinema films. More recently, with the advent of digital transmissions, a second change has been made, to 16:9, and this 'widescreen' format is running side-by-side with 4:3 in many countries. Both 4:3 and 16:9 pictures use exactly the same portion of the transmitted signal. The latter format is sometimes called 'anamorphic' by analogy with the cinema format in which a cylindrical lens is used to squash a wide picture 'anamorphically' into a standard Academy Ratio frame at the expense of reduced horizontal resolution.

Other aspect ratios may be created within 4:3 or 16:9 frames by effectively widening the horizontal or vertical blanking periods in order to matte down the visible picture to the required shape. British broadcasters frequently use this technique to present a 14:9 version of a 16:9 picture on analogue transmissions. Since these extra pseudo-blanking periods are really part of the active picture it is possible to include graphics in them. In particular sports and light entertainment producers like to add swirling coloured 'curtains' to 4:3 or 14:9 segments of their widescreen shows.
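The matting arithmetic can be sketched as follows (the function name is my own): the result is the fraction of the active frame that must be blanked to matte a picture of one aspect ratio into a frame of another.

```python
# Fraction of the active frame given over to pseudo-blanking bars.

def matte_fraction(frame_ar, picture_ar):
    if picture_ar >= frame_ar:              # letterbox: bars top and bottom
        return 1 - frame_ar / picture_ar
    return 1 - picture_ar / frame_ar        # pillarbox: bars at the sides

print(matte_fraction(4 / 3, 14 / 9))   # 14:9 in a 4:3 frame: about 14% in bars
print(matte_fraction(16 / 9, 14 / 9))  # 14:9 in a 16:9 frame: 12.5% in bars
```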

There is more about aspect ratios in 'Not just a pretty face...'

Resolution

RESOLUTION, OR 'definition', is a measure of the fineness of detail that can be seen in a picture. In photography, the resolution is generally the same horizontally and vertically, but in television the two are separate, though interdependent. Vertical resolution is determined by the number of scanning lines in the picture, and horizontal resolution by the video bandwidth available. In most line standards the two are made to appear equal to the eye, though there is some disagreement about what constitutes equality. One solution has been to apply a 'Kell factor' in calculations to determine resolution.
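The calculation can be sketched as follows. The Kell factor of 0.7, the 576 active lines and the 52µs active line time used in the example are commonly quoted figures, not taken from this page:

```python
# Sketch of vertical and horizontal resolution calculations.

def vertical_resolution(active_lines, kell=0.7):
    """Perceived vertical resolution in TV lines."""
    return active_lines * kell

def horizontal_resolution(bandwidth_hz, active_line_s, aspect=4 / 3):
    """Horizontal resolution in TV lines per picture height: one cycle
    of video resolves two 'lines', one black and one white."""
    return 2 * bandwidth_hz * active_line_s / aspect

print(round(vertical_resolution(576)))             # ~403 lines
print(round(horizontal_resolution(5.5e6, 52e-6)))  # ~429 lines (5.5MHz, System I)
```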

There is more about resolution on the page 'Not just a pretty face...' in the Test Cards section of this web site.

In colour television systems the chrominance (hue and saturation) resolution is generally much lower than that of the luminance (black and white information). This is discussed in the chapter on Colour below.

Gamma

DISPLAY TUBES are not linear devices (though camera pick-up tubes tend to be). Typical cathode ray tubes produce a light output that is proportional to the driving voltage raised to the power of 2.8 ±0.3, known as the 'transfer characteristic' or 'gamma factor'. This is due mainly to the triode transfer characteristics of the display tube.

To correct for this non-linearity, the video signal would ideally be pre-corrected with an exponent of 1/2.8, or 0.357, but it has been found that an overall system gamma of unity renders monochrome pictures which appear flat and lacking in contrast. A value for gamma correction of 0.45, giving an overall system gamma of 1.26, has therefore been chosen in non-NTSC countries for monochrome transmissions.

However, the various equations used to matrix the colour signals require an overall system gamma of unity in order to yield correct colorimetry, so when using these equations a precise transfer characteristic of 1/0.4545 = 2.2 is assumed, and a gamma correction value of 0.4545 is applied to all colour standards, despite the fact that a display transfer characteristic of 2.8 is still assumed for 625-line colour standards.

Values of 2.2 for transfer characteristic and 0.4545 for gamma correction are used in the NTSC countries, leading to a system gamma of unity. For computer displays, Windows assumes a transfer characteristic of 2.2 while Macintosh uses 1.8, leading to a low overall gamma of 0.82, in which low luminance levels are rendered brighter.
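The relationships above can be sketched as simple power laws (practical standards also specify a linear segment near black, which is ignored here; the function names are my own):

```python
# Gamma correction and display transfer as simple power laws.

def gamma_correct(v, exponent=0.4545):
    """Pre-correction applied at the camera; v normalised 0..1."""
    return v ** exponent

def crt_transfer(v, gamma=2.8):
    """Display tube: light out versus drive voltage, both 0..1."""
    return v ** gamma

# Overall system gamma is the product of the two exponents:
print(round(0.45 * 2.8, 2))    # 1.26 - monochrome practice outside NTSC
print(round(0.4545 * 2.2, 2))  # 1.0  - assumed by the colour equations
```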

It has always been the practice to perform this gamma correction in the camera (or the colour encoder) in order to reduce the complexity of receiver video circuitry, and to reduce the effects on dark parts of the picture of noise accumulated in the transmission system.

Signals that have been gamma corrected should properly be written with a prime mark ('), for example Y', R'G'B' (or, when referring to the voltages: E'Y, E'RE'GE'B). However, since most signals are gamma-corrected I have left out the prime marks in general to avoid cluttering up the text. They are included in some of the equations in order to clarify which values are gamma-corrected and which are not.

Colour

HAVING ESTABLISHED workable monochrome television systems the designers turned their minds to colour.

Original colour scene

Colour picture

Gamma correction

Gamma correction in colour television theory is a thorny subject. It leads to errors in the decoded signal depending on where it is applied, and muddies the waters where monochrome compatibility is concerned. Generally the R, G and B colour separation signals have gamma correction applied at an early stage and the gamma corrected luminance signal Y' is derived from them. But other ways of doing it are possible. In this discussion I have mainly ignored gamma and left the prime marks (the ticks - ' - that indicate that a signal has been gamma corrected) out of the equations.

To reproduce a colour scene requires the image to be sampled separately in the three additive primary colours red, green and blue (R, G and B). In colour photography, printing and computing, it is usually these three colour separations, or their subtractive counterparts (C - cyan, M - magenta, Y - yellow and K - black, or key), which are stored, manipulated and displayed. However, the legacy of the monochrome transmitters and receivers all over the world, together with the huge amount of frequency spectrum that would have been required, meant that a different approach was needed for colour television.

At this stage of the process the levels are normalised such that for peak white, R = G = B = 1 [1], and gamma correction is applied to the three colour separation signals as it is these that will be used to drive the cathodes of the three colour display tube guns.

[1] See the section on Component video levels below for the actual voltages used.

Colour separation signals

Colour separation Red Colour separation Green Colour separation Blue

It was recognised that any colour system should be compatible in both directions - ie no change should have to be made to monochrome receivers, and colour sets should display monochrome transmissions correctly and automatically. The black-and-white picture was therefore redefined as 'luminance' (Y - not to be confused with the yellow component of CMYK colour space) and is synthesised by adding together the three separate colour separation signals in the proportions Y = 0.299R + 0.587G + 0.114B, these values having been determined to produce a compatible 'panchromatic' display on a monochrome receiver. Again, for peak white, where R = G = B = 1, Y = 0.299 + 0.587 + 0.114 = 1.
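The luminance weighting quoted above can be sketched as follows (gamma correction ignored; levels normalised so that R = G = B = 1 at peak white):

```python
# The luminance matrixing equation Y = 0.299R + 0.587G + 0.114B.

def luminance(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

print(round(luminance(1, 1, 1), 3))  # peak white: 1.0
print(round(luminance(1, 0, 0), 3))  # saturated red: 0.299
```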

Note that a different set of coefficients for the matrixing equations is used for high definition signals. (See the summary of equations section on the Colour Standards page.)

Luminance signal


This luminance signal is transmitted in exactly the same way as the old black-and-white signal.

Now that some of the colour information is effectively coded in the luminance signal, it is only necessary to transmit two further signals in order to be able to obtain the separate R, G and B signals in the receiver. The method that has been universally adopted is to matrix the R and B signals with the Y signal and transmit (R-Y) and (B-Y), where (R-Y) = 0.701R - 0.587G - 0.114B, and (B-Y) = - 0.299R - 0.587G + 0.886B. This has the huge advantage that in the case of a monochrome picture, or areas of grey in a colour picture, the colour values are such that Y = R = G = B and therefore (R-Y) = (B-Y) = (G-Y) = 0. In other words these 'colour difference' signals vanish when there is no colour information, improving compatibility and reverse compatibility (since a colour receiver, seeing a monochrome transmission with no colour-difference signal present, will automatically display a black-and-white picture).
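The colour-difference relationships above can be sketched numerically; note that for any grey (R = G = B) both differences vanish, exactly as described:

```python
# Colour-difference signals derived from the luminance matrix.

def colour_differences(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return r - y, b - y   # (R-Y), (B-Y)

r_y, b_y = colour_differences(0.5, 0.5, 0.5)              # mid-grey
print(round(r_y, 6), round(b_y, 6))                       # both zero
print(tuple(round(v, 3) for v in colour_differences(1, 0, 0)))  # (0.701, -0.299)
```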

Colour difference signals

Colour difference Red Colour difference Green Colour difference Blue
(R-Y) (G-Y) (B-Y)

Being 'difference' signals, the (R-Y), (B-Y) and (G-Y) voltages, unlike other video signals, can be negative as well as positive. For the purpose of these illustrations I have added a mid-grey pedestal so that the 'negative' excursions are visible. The grey areas represent zero colour difference voltage (ie colourless areas of the picture) and because the eye is sensitive to very small changes in the 'colour temperature' of neutral shades, the three signals have to be very accurately clamped to 0V in the coder and decoder circuitry.

The (R-Y) and (B-Y) signals were chosen for transmission because they have larger maximum voltage excursions than the (G-Y) signal, which is therefore recoverable in the receiver by attenuating, rather than amplifying, the other two. This has advantages in terms of signal-to-noise ratio as well as decoder complexity. The green colour difference signal is given by (G-Y) = - 0.509(R-Y) - 0.194(B-Y), and in early colour receivers the addition of the Y signal ('matrixing') to recreate the R, G and B signals, was done within the crt display tube, thus economising further on valves.
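A quick numerical check (not broadcast code) confirms that the (G-Y) recovery equation quoted above agrees with the matrix definitions to within the rounding of its coefficients:

```python
# Verifying (G-Y) = -0.509(R-Y) - 0.194(B-Y) against the definitions.

def gy_direct(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return g - y

def gy_recovered(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return -0.509 * (r - y) - 0.194 * (b - y)

for rgb in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (0.2, 0.7, 0.4)]:
    assert abs(gy_direct(*rgb) - gy_recovered(*rgb)) < 2e-3
print("(G-Y) recovered to within the rounding of the coefficients")
```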

Colour difference displays

Colour difference Red Colour difference Green Colour difference Blue Colour difference
(R-Y) (G-Y) (B-Y) (R-Y), (G-Y), (B-Y)

The four pictures above show the displays obtained when the colour difference signals are applied individually and in combination, in the absence of the luminance signal, to the matrix circuitry from which the final R, G and B signals are extracted. When the luminance signal is also applied to the matrix the original colour separation signals are obtained as shown below.

Colour separation displays

Colour separation Red Colour separation Green Colour separation Blue Colour picture

These final four pictures show the displays obtained when the colour separation signals are applied individually and in combination to the display device.

The above process is essentially the same for all analogue and digital colour television systems.

Encoding the colour signal

The next problem was to accommodate this three-fold increase in information without increasing the bandwidth of the transmitted signal. Two phenomena - one physical and one physiological - allowed this to happen.

Colour vision acuity, it was discovered, is different from that of brightness-only vision. In fact, if sufficient detail is available in the brightness of a scene, the detail in the colours can be reduced considerably with no apparent reduction in the sharpness of the scene. This allows the bandwidth of the colour difference signals to be reduced to half, or less, of that required for the luminance channel. This fact is further exploited in the NTSC system, where the colours to which the eye is least sensitive in terms of detail are assigned a narrower bandwidth, by transmitting I (orange-cyan) and Q (green-magenta) signals instead of (R-Y) and (B-Y) and allowing the Q signal only half the bandwidth of the I signal.
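The I/Q axes can be sketched as follows. The rotation coefficients below are the commonly quoted NTSC values (roughly a 33-degree rotation of the colour-difference axes); they are not taken from this page:

```python
# The NTSC I and Q axes as a rotation of the (R-Y), (B-Y) axes.

def iq_from_differences(r_y, b_y):
    i = 0.74 * r_y - 0.27 * b_y   # orange-cyan axis: wider bandwidth
    q = 0.48 * r_y + 0.41 * b_y   # green-magenta axis: about half the I bandwidth
    return i, q

i, q = iq_from_differences(0.701, -0.299)   # saturated red
print(round(i, 2), round(q, 2))             # roughly 0.6 and 0.21
```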

Secondly, the video waveform does not have a continuous frequency spectrum, like that of sound. Because of the way the picture is scanned, the energy in the spectrum is bunched around multiples of the line and field scanning frequencies, with little energy in the gaps. By shifting the colour-difference signals in frequency (by modulating them onto a subcarrier) it is possible to make the peaks in colour energy fall in the gaps in the luminance energy, thus allowing the signals to be separated at the receiver by means of a 'comb filter'. The subcarrier frequency must be high enough that the dot-matrix pattern created on a black-and-white receiver is not too coarse, yet must be low enough that the upper chrominance sideband fits within the vision bandwidth of the transmitted signal without attenuation or distortion. The precise value of the subcarrier frequency is then determined by adding fractions of the line and field scanning frequencies in order to create a dot pattern that is not distracting. A description of how the subcarrier frequencies are determined is on the Colour Standards page.
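The interleaving arithmetic can be sketched using the well-known subcarrier relationships: an odd multiple of half the line frequency for NTSC, and a quarter-line offset plus 25Hz for PAL:

```python
# Subcarrier frequencies from line-frequency offsets.

line_ntsc = 4.5e6 / 286          # Hz: the colour-era 525-line frequency
line_pal = 15625.0               # Hz: 625-line standard

fsc_ntsc = (455 / 2) * line_ntsc       # odd multiple of half line frequency
fsc_pal = (1135 / 4) * line_pal + 25   # quarter-line offset plus 25Hz

print(round(fsc_ntsc, 2))  # 3579545.45 Hz
print(round(fsc_pal, 2))   # 4433618.75 Hz
```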

Thus the three channels of colour information may be fitted into the same bandwidth as existing black-and-white video signals, whilst maintaining both forward and reverse compatibility. The differences between the three main colour standards occur in the way the colour difference signals are modulated on the subcarrier, and the precise frequency of the subcarrier (which depends to a large extent on the line and field frequencies and the bandwidth of the transmitted signal).

Full details of the main colour standards that have been used around the world are on the Colour Standards page.

Causes of loss of resolution in the decoded colour signal

Although for most picture content the reduction in chrominance bandwidth is perfectly acceptable, in certain circumstances it can lead to unwanted visible effects, especially at the boundaries between saturated colours or where brightness detail occurs in areas of saturated colour. Captions, for example, in red or blue appear smeared and fuzzy. Also scenes illuminated by light of a primary colour appear noisy and blurred.

Multiburst waveform

Here is a waveform of a multiburst test signal. It ranges from black (0%) to white (100%) with a mean level of 50%. In a digital transmission all the gratings up to 6MHz will be visible on the screen. In an analogue 625-line signal the 6MHz grating will appear plain grey, as will the 5MHz grating in a 525-line signal, because they are beyond the frequency response of the luminance channel. However, the response of the chrominance channels is much less - around 1MHz for PAL, SECAM and NTSC, and around 3MHz for digital.

Red Multiburst RGB waveform

Let us see what happens if we transmit a multiburst entirely in one primary colour. The diagrams that follow show one grating that is within the chrominance passband followed by one that is outside it. The signal is applied to the red channel only, so the green and blue channels are at black level (0%).

Red Multiburst component waveform

The luminance and the three colour-difference channels are shown here. Adding together the luminance and each of the colour difference signals in turn will give the same R, G and B signals as above.

Red Multiburst filtered component waveform

However, the chrominance channels are low-pass filtered. The lower frequency grating remains unchanged, but the higher one becomes a straight line having the same mean level as the grating.

Red Multiburst decoded RGB waveform

When the luminance signal is added to each colour-difference signal, the amplitude of the high-frequency grating is reduced in the active channel, and phantom gratings appear in the other two channels, reducing the saturation (and fortuitously increasing the brightness slightly) of the details. The red and green channels contain negative-going information which will either be clipped by the display circuitry or cut off in the cathode ray tube.
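The effect just described can be put into numbers with a small sketch: a red-only square-wave grating (R switching between 0 and 1, G = B = 0) whose colour-difference signals are replaced by their mean level after low-pass filtering:

```python
# Decoding a red-only grating after the chroma channels are filtered flat.

def decode(y, r_y, b_y):
    g_y = -0.509 * r_y - 0.194 * b_y
    return y + r_y, y + g_y, y + b_y        # recovered R, G, B

r_y_filtered = 0.701 * 0.5    # mean of the (R-Y) grating
b_y_filtered = -0.299 * 0.5   # mean of the (B-Y) grating

for r in (0.0, 1.0):          # the two levels of the original grating
    y = 0.299 * r             # luminance passes through unfiltered
    rgb = decode(y, r_y_filtered, b_y_filtered)
    print([round(v, 2) for v in rgb])
# R now swings only 0.35..0.65 instead of 0..1, and phantom gratings of
# about +/-0.15, negative-going on alternate half-cycles, appear in G and B.
```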

The loss of luminance resolution is less pronounced in areas of saturated green and more pronounced in saturated blue because of the relative contributions made by the colour separation signals to the luminance signal.

 Grey Multiburst RGB

This sequence of screenshots shows the two sections of multiburst as a black-and-white signal...

Red Multiburst RGB a signal in the red channel only as displayed on an RGB monitor before encoding...

Red Multiburst PAL

..and as displayed after decoding.

This effect is increased by the way that gamma correction is applied in generating the luminance signal. As discussed above, the luminance signal is derived from the three colour separation signals using the following relationship:
Y = 0.299R + 0.587G + 0.114B
in which the uncorrected luminance signal is obtained by summing the uncorrected colour separation signals. The gamma corrected luminance signal Y' is then obtained by applying gamma correction to Y.

However, in terms of practical circuitry, it is less complicated to derive Y' by summing R', G' and B' as follows:
Y' = 0.299R' + 0.587G' + 0.114B'
which unfortunately has the result that more of the high luminance frequencies are transferred from the Y' signal to the (R'-Y') and (B'-Y') signals after matrixing. These are then filtered out by the coder and lost, resulting in yet more reduction of the resolution.
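The two routes just described can be compared numerically; here is a sketch for saturated red (R = 1, G = B = 0) with gamma correction 0.4545:

```python
# Constant-luminance versus practical derivation of the luminance signal.

gamma = 0.4545

# 'Correct' route: sum the linear signals, then gamma-correct the result
y_then_gamma = (0.299 * 1 + 0.587 * 0 + 0.114 * 0) ** gamma

# Practical route: gamma-correct each channel first, then sum
gamma_then_y = 0.299 * (1 ** gamma) + 0.587 * (0 ** gamma) + 0.114 * (0 ** gamma)

print(round(y_then_gamma, 3))  # ~0.578
print(round(gamma_then_y, 3))  # 0.299 - the 'missing' luminance has migrated
                               # into the colour-difference signals
```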

An additional effect is produced because many television cameras and telecine units employ a technique called 'contours out of green' in which the waveforms used for horizontal and vertical aperture correction are derived not from the luminance signal obtained by matrixing the three colour channels, but from the green channel alone. Aperture correction is a technique used to compensate for the fact that when scanning is employed, the sampling device - the electron beam in a camera tube, the spot of light in flying-spot telecine or the 'pixel' in a charge coupled device - is of finite size, which limits the resolution of the device in a predictable and correctable manner. The addition of a 'crispening' signal to the three colour separation signals can overcome this deficiency, and deriving it from the green channel alone improves the sharpness of the picture, since fine detail in all three channels might not be perfectly coincident due to poor registration.

Since most scenes contain high levels of green, there are usually no unwanted effects, but when the pictures contain large amounts of saturated red or blue detail, as in the case of red captions on a film, or 'disco' scenes, the pictures appear blurred because the green channel is contributing little to the aperture correction circuitry - other than a high level of undesired noise - and so the colour separation signals contain only the 'unsharpened' video information. Unlike the effect described above, this effect is also seen on monochrome receivers, where pictures with low green content lack high-frequency definition. Not all cameras use 'contours out of green'. Indeed Sony developed one model in the late 1970s that used 'contours out of red' specifically for photographing surgical operations.

A further problem is that with scenes illuminated by a primary colour, limiting can occur in the channel of that colour, resulting in no detail at all being visible in the clipped areas. Again, this also affects monochrome displays and leads to odd-looking pictures because the clipping occurs at low luminance values rather than at peak white.

Developments in technology, especially in the area of home video recording, have led to improved, though non-compatible (with old B&W receivers), ways of delivering colour signals at baseband rather than at radio frequency. Where the standard coded colour signal ('CVBS' - Colour, Video, Blanking, Syncs) carries luminance and chrominance within a single circuit, 'S-Video' ('Separate video' - not to be confused with the tape format 'SVHS' - Super VHS, whose recorders often incorporate S-Video inputs and outputs) carries the luminance in one circuit and the coded chrominance in a second, improving the bandwidth available to each whilst eliminating crosstalk. With digital systems the luminance and two colour difference signals are encoded separately, and so digital decoders can be made to generate RGB, YPbPr (a version of Y(B-Y)(R-Y) - see Standard Video Levels below), S-Video or CVBS depending on the capabilities of the display device.

The colorimetry - that is the precise colours used for the primaries red, green and blue, and also 'white' - differs from standard to standard, and has also changed over the years. These are described and discussed in the colour standards section of this page.

Monochrome compatibility and reverse compatibility

It has been stated that analogue colour television systems should exhibit both compatibility and reverse compatibility - that is a monochrome transmission received on a colour set should be displayed in black and white, and a colour transmission received on a monochrome set should display a picture that is indistinguishable from a monochrome one. The reason for this is that both have to share the same transmitted signals. The same is not true with digital transmissions, because no monochrome analogue receiver will be able to display them directly, but a standard or high definition digital decoder should still be able to send a compatible analogue signal to a monochrome receiver or display.

But how far does compatibility go?

An example of vanishing colour (left), and the same image in luma only (right)

The word 'COLOUR' may appear real enough in the caption on the left, but when viewed on a greyscale computer monitor, or a colour telly with the colour turned all the way down, as on the right, it will disappear, since it has been arranged that all the information pertaining to it appears in the colour difference channels and none in the luminance channel.

It is a nice little party trick, but when the same caption is displayed as a coded PAL or NTSC video signal on a monochrome crt display, as in the off-screen shot below, the word 'COLOUR' is clearly visible. Why is this, and is it a bad thing?

Vanishing colour displayed in monochrome

The effect is purely to do with the colour subcarrier, which is present in coloured, but not neutral, areas, on the coded signal. Areas of saturated colour are not displayed on a monochrome screen as lines of constant grey, as areas of neutral colour are. Instead the peaks and troughs of the subcarrier appear as tiny light and dark dots - rather like the dots that make up a newspaper photograph - and the brightness of these dots is related to the luminance of the coloured area. The eye sees the mean value of the dots and interprets it as solid grey.

However, for several reasons the grey the eye sees is brighter than the electrical mean level of the subcarrier. Here, gamma correction raises its head. The crt has a non-linear transfer characteristic that means that as the video signal increases, the light emitted by the screen increases to an enhanced degree (to the power of gamma - around 2.2 - in fact). In the camera, the video signal is corrected in the opposite direction to compensate for this, but the subcarrier is added after this so-called gamma correction. The light and dark dots therefore appear brighter than they should, and the eye sees a brighter mean level.

Moreover, the darker saturated colours (magenta, red and blue in colour bars) have subcarrier that descends below black level. On a crt the light dots are displayed as normal, but the associated dark dots appear as a uniform black. The optical mean level of the dots is therefore even higher for these darker colours.

So, is this a bad thing? Is it a breakdown of the compatibility requirement? Well, no and yes. In fact that pesky gamma correction has already contributed to an unwanted darkening of the luminance in areas of saturated colour that would be displayed if the subcarrier were to be filtered out. In a colour decoder this effect is corrected when the luminance and colour difference signals are matrixed to recover the red, green and blue signals, but in a monochrome receiver no such action can take place, so the optical averaging effect restores some brightness to saturated colours (but it does not quite bring them up to the correct values).

Conversely the subcarrier must be filtered out in a colour display, otherwise saturated colours would appear too bright and desaturated.


Standard component video levels



The 525-line standard composite waveform is subtly different from all others. P-P voltage is still 1V, and blanking level is at 0V, but the blanking level to peak white amplitude (+714mV) is divided into 100 so-called IRE units and the sync tip amplitude is -40 IRE units (-286mV). Black level may either be at 0 IRE (0V) or on an optional pedestal of +7.5 IRE (+53.55mV). The above applies only to the composite NTSC signal and not to the component signals (including composite luminance).
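As a sanity check on those figures, here is a hedged helper of my own (the names are mine) converting between IRE units and millivolts, taking 140 IRE as the full 1V excursion:

```python
# My own sketch: IRE/millivolt conversion for the 525-line composite signal.
# The full 1V excursion spans 140 IRE (100 above blanking, 40 below), so
# one IRE is 1000/140 mV.

MV_PER_IRE = 1000 / 140

def ire_to_mv(ire):
    return ire * MV_PER_IRE

def mv_to_ire(mv):
    return mv / MV_PER_IRE

assert round(ire_to_mv(100)) == 714     # blanking to peak white
assert round(ire_to_mv(-40)) == -286    # sync tip
assert round(ire_to_mv(7.5), 1) == 53.6 # optional black-level pedestal
```

(The pedestal comes out at 53.57mV here; the 53.55mV figure quoted above is obtained from the rounded value of 7.14mV per IRE.)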

The NTSC subcarriers are not modulated with the U and V components directly. Instead they are projected onto the I and Q axes which lead U and V by 33°. This complication is in order to reduce the chroma bandwidth separately for the two signals. The acuity of the human eye is much worse along the magenta-green Q-axis than the orange-cyan I-axis.
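The projection is a simple 33° rotation. This sketch of my own (function name mine) applies it to chroma scaled by the 0.493 and 0.877 weighting factors given later on this page, and recovers the familiar NTSC I and Q matrix coefficients:

```python
# My own sketch: rotating the (U, V) chroma vector by 33 degrees onto the
# (I, Q) axes. The 0.493 and 0.877 weighting factors are those quoted
# elsewhere on this page for U and V.
import math

ANGLE = math.radians(33.0)

def uv_to_iq(u, v):
    """Project a (U, V) chroma vector onto the I and Q axes."""
    i = -u * math.sin(ANGLE) + v * math.cos(ANGLE)
    q = u * math.cos(ANGLE) + v * math.sin(ANGLE)
    return i, q

# Expanding with U = 0.493(B'-Y') and V = 0.877(R'-Y') gives the textbook
# coefficients I = -0.27(B'-Y') + 0.74(R'-Y'), Q = 0.41(B'-Y') + 0.48(R'-Y').
i_b = -0.493 * math.sin(ANGLE)   # coefficient of (B'-Y') in I
i_r = 0.877 * math.cos(ANGLE)    # coefficient of (R'-Y') in I
q_b = 0.493 * math.cos(ANGLE)    # coefficient of (B'-Y') in Q
q_r = 0.877 * math.sin(ANGLE)    # coefficient of (R'-Y') in Q
assert round(i_b, 2) == -0.27 and round(i_r, 2) == 0.74
assert round(q_b, 2) == 0.41 and round(q_r, 2) == 0.48
```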

It is physically impossible to modulate 100% saturated bars onto the NTSC System M vision carrier without severe distortion, so test signals having 75% of the amplitude of 100% bars are used. It is necessary to ensure that high saturation values of certain colours are not included in programme material. The same is true of most 625-line systems, though System I is theoretically capable of carrying 100% bars, with the minimum carrier excursion not quite reaching zero modulation.

SECAM ignores the U and V components, as different scaling factors are used to produce signals called D'R and D'B, which are processed, modulated and transmitted on alternate scanning lines. See the SECAM section of the Colour Standards page.

WE HAVE already seen that the standard monochrome composite video signal has a peak-to-peak amplitude of 1V into 75 ohms. With blanking level at 0V, sync tips are -300mV and peak white is +700mV. This is the same for the luminance component of a colour signal as well, though the levels of the colour components depend upon the use to which the signal is to be put.

100% colour bars (for further details of colour bars and other test patterns see the Colour Bars section of the Test Cards page) provide a source of the extremes of voltage excursions allowed by the colour tv system (its 'gamut'). In the diagrams below, the colour of the trace indicates the signal being considered and the background colour indicates the bar being displayed. Note that the waveforms below represent actual voltages, while the values indicated in the Colour Bars section are normalised - that is, black = 0 and white = 1.

Red component waveform Green composite waveform Blue component waveform


G composite


The gamma corrected non-composite red, green and blue colour separation signals each have a maximum amplitude of 700mV. Synchronisation is either by separate wire(s), carrying mixed syncs or separate H and V sync pulses, or by incorporating mixed syncs onto one (usually green) or all of the separation signals. The domestic SCART system uses non-composite 700mV RGB signals with the display being synchronised by the accompanying encoded composite signal, which has to be carried for compatibility with non-RGB input capable displays.

Luminance composite waveform Blue difference component waveform Red difference component waveform

Y composite



These values of RGB yield luminance (Y) and colour difference (B-Y and R-Y) signals of amplitudes 0-700mV, ±620mV and ±491mV respectively. These are the (relative) values that must be matrixed to recover the correct RGB signals in the decoder, but the colour difference signals are unsuitable for distribution at these levels.
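Those extremes can be verified by sweeping the eight 100% colour bars; this is my own check, with R, G and B normalised to the range 0 to 1 and then scaled to millivolts:

```python
# My own sketch: the component amplitudes quoted above, obtained by
# sweeping all eight 100% colour bars (each of R, G, B fully on or off).
bars = [(r, g, b) for r in (1, 0) for g in (1, 0) for b in (1, 0)]

def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

y_mv  = [700 * luma(*bar) for bar in bars]
by_mv = [700 * (bar[2] - luma(*bar)) for bar in bars]   # B - Y
ry_mv = [700 * (bar[0] - luma(*bar)) for bar in bars]   # R - Y

assert round(min(y_mv), 6) == 0.0 and round(max(y_mv), 6) == 700.0
assert round(max(abs(v) for v in by_mv)) == 620   # blue and yellow bars
assert round(max(abs(v) for v in ry_mv)) == 491   # red and cyan bars
```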

PAL CVBS waveform U component waveform V component waveform

PAL composite



To produce an encoded PAL signal the colour difference signals, after scaling, are amplitude modulated onto two suppressed quadrature subcarriers. In order to achieve levels of subcarrier that can be further amplitude modulated onto a vision carrier, whilst maintaining reasonable signal-to-noise ratios for the two signals, the weighting factors are as follows:

E'U = 0.493(E'B - E'Y)
E'V = 0.877(E'R - E'Y)

These particular weighting factors ensure that the maximum subcarrier excursions are around 33% above white level for saturated yellow and cyan colour bars and 33% below black level for red and blue bars.
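A short check of my own (levels normalised so that black = 0 and white = 1) confirms those excursions: the peak composite level for each bar is its luminance plus the vector sum of U and V, and the trough is the luminance minus it:

```python
# My own sketch: peak composite excursions on 100% bars with the PAL
# weighting factors. Levels are normalised (black = 0, white = 1).
def luma(r, g, b):
    return 0.299 * r + 0.587 * g + 0.114 * b

def chroma_amplitude(r, g, b):
    """Zero-to-peak subcarrier amplitude: the vector sum of U and V."""
    y = luma(r, g, b)
    u = 0.493 * (b - y)
    v = 0.877 * (r - y)
    return (u * u + v * v) ** 0.5

bars = [(r, g, b) for r in (1, 0) for g in (1, 0) for b in (1, 0)]
peaks   = [luma(*bar) + chroma_amplitude(*bar) for bar in bars]
troughs = [luma(*bar) - chroma_amplitude(*bar) for bar in bars]

assert round(max(peaks), 2) == 1.33    # yellow bar, 33% above white
assert round(min(troughs), 2) == -0.33 # blue bar, 33% below black
```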

The p-p subcarrier amplitude (indicated on the composite waveform by blocks of saturated colour) is twice the vector sum of the amplitudes of the U and V signals for each bar. However, because the frequency response of the system is not always perfect, the recovered subcarrier amplitude, and hence the amplitudes of the demodulated colour difference signals, may be higher or lower than this. In the PAL and NTSC standards that would affect the saturation of displayed colours, so a colour burst signal (shown here in grey) is inserted into the back porch of the line blanking period in order to synchronise the reinserted local subcarrier. Its amplitude of 300mV p-p is used as the reference level for the automatic chroma gain control, ensuring that the subcarrier is demodulated with the correct amplitude.

The three purple Plimsoll lines at the top left of the composite waveform represent zero carrier level in Systems I, B/G/D/K and M, for which the modulation depths for peak white are respectively 20%, 15% and 10% (100% modulation is sync tip level in all three cases). It is clear that none of the systems can safely carry the full gamut of 100% saturated colours without severely distorting the transmitted signal.

Luminance composite waveform Pb component waveform Pr component waveform

Y composite



Although four-wire (with additional control wires) RGB signals carried by the French SCART (Peritel) interconnection system have been in use in Europe since the early 1980s, the preferred method in NTSC countries is three-wire component, called YPbPr. The colour difference signals are individually scaled in order to obtain a maximum amplitude of ±350mV for them both, the same swing as the black-to-white portion of the luminance signal. The scaling factors are as follows:

E'Pb = 0.564(E'B - E'Y)
E'Pr = 0.713(E'R - E'Y)

HD Luminance composite waveform HD Pb component waveform HD Pr component waveform

HD Y composite



A different set of coefficients for the matrixing equations is used for high definition signals. (See the summary of equations section on the Colour Standards page.) The R, G and B signals at the start and end of the chain remain the same, but the component signals along the way are different from their standard definition counterparts. In particular, the luma signal is rather different, as can be seen by the 100% colour bar waveform. The scaling factors for high definition are as follows:

E'Pb = 0.5(E'B - E'Y)/(1 - 0.0722)
E'Pr = 0.5(E'R - E'Y)/(1 - 0.2126)

Luminance composite waveform Cb component waveform Cr component waveform

Y composite



When component signals are digitised, a 350mV pedestal is added to the scaled colour difference signals in order to bring them into the same 0-700mV range as the luminance signal. The trio is then termed YCbCr. The luminance signal is sampled at 13.5MHz and the chrominance signals at 6.75MHz for both 525- and 625-line standards, as specified in the document ITU-R BT.601. The eight-bit quantisation levels (range 0-255 decimal) corresponding to the 0 and 700mV luminance levels are 16 and 235 decimal. For the 0, 350 and 700mV colour difference levels they are 16, 128 and 240 decimal. The negative-going synchronisation pulses are not digitised since they are outside the sampling period (720 samples @ 13.5MHz over 576 scanning lines in the 625-line system and 480 lines in the 525-line system).
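The mV-to-code mappings implied by those levels can be written out directly. The function names here are my own, and only the nominal BT.601 ranges are modelled:

```python
# My own sketch of the BT.601 8-bit mappings: 0-700mV luma onto codes
# 16-235, and the pedestalled colour difference signals onto 16-240 with
# 128 representing zero colour difference.
def quantise_luma(mv):
    """Map a 0..700mV luminance level to an 8-bit code 16..235."""
    return round(16 + 219 * mv / 700)

def quantise_chroma(mv):
    """Map a -350..+350mV colour difference (about the pedestal) to 16..240."""
    return round(128 + 224 * mv / 700)

assert quantise_luma(0) == 16 and quantise_luma(700) == 235
assert quantise_chroma(-350) == 16
assert quantise_chroma(0) == 128
assert quantise_chroma(350) == 240
```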

Luminance composite waveform Cb component waveform Cr component waveform

HD Y composite



SD and HD TCJ (luma only)

Again, the HD waveforms look slightly different from the SD ones.

It is unlikely that anyone would want to display a colour HD picture in monochrome (though it could be done easily enough by unplugging the red and blue phono plugs if analogue component connections are being used), but it might look a little odd when compared with its SD equivalent. Greens, turquoises and yellows would look brighter, and blues, reds and purples rather darker than we are used to. For that reason any HD receiver (or indeed, broadcaster) must recode the SD signals it sends out with the correct coefficients so that a traditional B&W or colour telly can display a proper picture.

These monochrome renditions of Test Card J (SD at the top and HD at the bottom), which have identical RGB components, demonstrate how the luma components differ. Of course, just like the grey parts of these test cards, a black-and-white picture - an old film, say - would have the same appearance on both SD and HD sets. It is only the rendering as greys of the coloured areas of the transmitted picture that is different.



Transmission
THE FINAL leg of the journey is to get the video, and associated audio signals, into the home. The art of radio transmission was quite mature at the time of the introduction of television, but to transmit the vast bandwidth of a video signal (one thousand times wider than an audio signal) required carrier frequencies of an order of magnitude higher than those that had been used previously for broadcast purposes.

Also, the theory of modulating a varying signal onto a carrier wave had to be rethought. Because audio signals are symmetrical about a mean value, it is only necessary to ensure that this mean value equates to 50% of the carrier level at the output of the transmitter. As the modulation level increases, the positive and negative peaks move towards 100% and 0% carrier respectively.

Effect of black-level clamp

However, a vision signal is not like that. The negative part of the signal between 0 and -300mV is constant, and carries the synchronisation pulses. The positive portion between 0 and +700mV carries the vision proper, and depending on the amount of white in the picture the mean level of the signal can vary between around -30mV and +950mV. Such a signal would be useless for a modulator that expected a constant 50% mean level, because the positive and negative peaks could float above 100% and/or below 0% carrier levels according to picture content. Alan Blumlein introduced the concept of DC restoration, whereby the video signal presented to the modulator is 'clamped' so that blanking level represents a fixed modulation level.

Waveform of a carrier modulated with a line of positive video
Oscilloscope trace of a System L carrier modulated with two lines of monochrome video using positive modulation
Waveform of a carrier modulated with a line of negative video
Oscilloscope trace of a System I carrier modulated with two lines of monochrome video using negative modulation

The first frequencies chosen for television transmission were around 50MHz in the hitherto unexplored vhf band I (Alexandra Palace had vision at 45.0MHz and sound at 41.5MHz). Amplitude modulation was used for the vision signal and although the original Alexandra Palace transmitter radiated the full double-sideband signal, all later transmissions have been vestigial sideband, with one of the sidebands filtered out beyond the first few hundred kilohertz. The sound carrier was placed just beyond the radiated vision sideband, where the video modulation energy was relatively low.

The possible options of vision modulation sense, sound modulation mode and vision sideband suppression, in addition to the various line standards, have led to a multitude of different transmission standards around the world.

Vision modulation may be either positive or negative. With positive modulation, the sync pulse tips are held at the zero-modulation level, whilst peak white is 100% and black level around 30%. With negative modulation, the sync tips are at 100%, black and blanking levels around 75% and peak white 10-20%, depending on the precise transmission system used. This method has the advantage that there is a portion of the waveform that is always at 100% modulation, so that the receiver can measure the carrier strength and adjust its automatic gain control accordingly.
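As an illustration only (the function and the straight-line interpolation are mine, taking System I's 20% peak-white figure quoted elsewhere on this page), the mapping from video level to carrier amplitude under negative modulation looks like this:

```python
# Illustrative only: a linear mapping from video level to carrier amplitude
# for System I negative modulation, taking sync tip (-300mV) as 100%
# modulation and peak white (+700mV) as 20%.
SYNC_MV, WHITE_MV = -300.0, 700.0
SYNC_PC, WHITE_PC = 100.0, 20.0

def carrier_percent(video_mv):
    """Carrier amplitude (percent) for a given video level in millivolts."""
    slope = (WHITE_PC - SYNC_PC) / (WHITE_MV - SYNC_MV)   # -0.08% per mV
    return SYNC_PC + slope * (video_mv - SYNC_MV)

assert carrier_percent(-300) == 100.0               # sync tips
assert round(carrier_percent(0), 2) == 76.0         # blanking, 'around 75%'
assert round(carrier_percent(700), 2) == 20.0       # peak white
```

The always-present 100% sync tips are what the receiver's automatic gain control measures.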

The sound carrier can be amplitude or frequency modulated. The convention is that frequency modulated sound is paired with negative vision modulation: in the intercarrier method of FM sound detection, the sound and vision carriers are mixed together to give an accurate intermediate frequency set by the transmitter, allowing them to share a common intermediate frequency amplifier chain in the receiver. With positive vision modulation the carrier level falls to zero during sync tips, making intercarrier detection unworkable, and so AM sound - which is not so sensitive to local oscillator drift in the receiver - is used instead, with an intermediate frequency amplifier separate from that used for the vision signal. However, AM sound requires a larger carrier amplitude than FM for the sound and vision service areas to match.

The choice of which vision sideband to suppress is immaterial for most purposes, except that it affects the position of the sound carrier relative to the vision carrier. Indeed, the French System E 819-line network had a mixture of upper and lower sideband transmissions shoehorned into bands I and III in order to provide more usable channels.

Analogue direct-to-home satellite broadcasts by comparison use frequency modulated video with a vision bandwidth of just over 5MHz and several frequency modulated sound carriers (used in pairs for stereo) between just below 6MHz and 8MHz. The channel width is about 27MHz compared with the 8MHz of terrestrial Systems B, G, I, D and K.

Stereo and multilingual soundtracks have been added to analogue terrestrial transmissions in many countries. These have been incorporated either by multiplexing the existing sound carrier or by adding further analogue or digital carriers. These are detailed in the table of CCIR transmission systems.


Vestigial Sideband


WHEN A carrier wave is amplitude modulated, its amplitude varies in sympathy with the modulating waveform. This is shown for a video transmission in the diagrams above, where the envelope of the waveform is shown as a full line. In reality the envelope is simply defined by the peak tips of the carrier wave and the modulating waveform itself is not sent. These diagrams show what is happening in the time domain.

Spectrum of AM audio signal

Frequency spectrum of an amplitude modulated telephony signal

The frequencies marked are relative to the carrier. The levels and slopes of the curves are stylised for clarity.

It is not obvious what happens in the frequency domain. In fact when two sinusoidal frequencies (pure tones) are mixed together in a non-linear way (as happens in amplitude modulation) the result comprises the two original frequencies as well as their sum and their difference. In a radio signal, one of these frequencies is called the carrier - this is the frequency to which you tune your receiver. The original baseband modulating frequencies are filtered out of the transmission as they are not required. In an audio am transmission for example, the rf portion that is transmitted comprises the carrier wave itself, plus the sum of it and every audio frequency in the modulating signal gathered together in what is called a sideband. A mirror image of this sideband comprises the difference between the carrier frequency and all the contributing audio frequencies. These are called the upper and lower sidebands.

Note that the amplitude of each sideband is half that of the carrier, and that the bandwidth of the transmission is twice that of the highest modulating audio signal, which in this case is around 3kHz (it is a telephony communications, rather than entertainment broadcast, signal). In the simplest form of demodulator, called an envelope detector, these three signals are used to recover the original audio signal, which was filtered out of the transmission.
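The sum-and-difference behaviour is easy to demonstrate numerically. This sketch of my own (all figures illustrative) modulates a tone onto a carrier at 100% depth and measures the spectrum by correlating against single frequencies:

```python
# My own sketch: amplitude modulation produces sum and difference
# frequencies (the sidebands), each half the carrier amplitude at full
# modulation depth. Frequencies are in cycles per analysis window.
import cmath, math

N, FC, FM = 1000, 100, 10    # samples; carrier and modulating tone
signal = [(1 + math.cos(2 * math.pi * FM * n / N))
          * math.cos(2 * math.pi * FC * n / N) for n in range(N)]

def amplitude_at(k):
    """Amplitude of the k-cycle component, by correlation (a one-bin DFT)."""
    s = sum(signal[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
    return abs(s) * 2 / N

assert round(amplitude_at(FC), 3) == 1.0        # the carrier itself
assert round(amplitude_at(FC - FM), 3) == 0.5   # lower sideband
assert round(amplitude_at(FC + FM), 3) == 0.5   # upper sideband
assert round(amplitude_at(FM), 3) == 0.0        # no energy at baseband
```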

Now, the information contained in each sideband is exactly the same, and to send it twice is wasteful of bandwidth and power. The carrier wave, once it has been used to generate the two sidebands, carries no information whatsoever - either sideband would fly just as far without it. In telephony communications therefore, both the carrier wave and one sideband are often filtered out, and the receiver regenerates the original audio - or a close enough approximation - by inserting a locally generated carrier and using that to demodulate the surviving sideband. Because the amplitude, frequency and phase of the original carrier cannot be known exactly, some skill on the part of the operator is required to resolve intelligible speech as opposed to garbled Donald Duck noises.

This method of transmission, called single sideband, suppressed carrier (the suppressed carrier part of the description is usually, ahem, suppressed) is clearly unsuitable for music or entertainment, but it is possible to transmit a smidgeon of the original carrier wave in what is called single sideband, reduced carrier, to which the locally generated carrier signal may be synchronised in phase.

Double sideband amplitude modulation, however, survives on long, medium and short wave radio broadcasts (and in some television systems) for the sake of simplicity in the receiver. But to use it for video would be wasteful of both bandwidth and power, and would have made the design of suitable high-bandwidth, high-gain receivers difficult in the early years. Equally, it would be difficult to filter out the carrier and the whole of one sideband at the transmitter without introducing distortion into the other sideband (unlike telephony audio, which contains no energy below about 300Hz, a video signal contains a dc component - 0Hz - resulting in sidebands that converge upon the carrier frequency), and it would not have been easy to design a simple ssb video demodulator for the receiver. For these reasons the first 405-line station at Alexandra Palace in London radiated dsb vision, and the early receivers had dsb detectors.

Frequency response of an ideal VSB receiver

Frequency spectrum of a System I vestigial sideband transmission showing ideal receiver frequency response

The frequencies marked are relative to the vision carrier. The levels and slopes of the curves are stylised for clarity.

But in time, in true British style the boffins came up with a compromise. The whole of the carrier and one sideband would be transmitted, together with a bit (a vestige) of the other. This has several advantages. The remaining sideband suffers no distortion in the transmitter and an unmodified dsb envelope detector can be used in the receiver. All that is required is an rf (or if - intermediate frequency - in the case of the new-fangled superheterodyne receivers) response that is tailored to suit the incoming vestigial sideband transmission. In this, the response near the carrier frequency is reduced such that the carrier itself is received at half strength and the response tails off as it penetrates the vestigial sideband. In this way the lower video frequencies are received in both sidebands and the upper frequencies come from the full sideband alone, albeit a little distorted by the action of the envelope detector in the presence of only one sideband.

More modern receivers with synchronous detectors do not suffer from this distortion, and the whole video spectrum is recovered from the full sideband alone. Vestigial sideband receivers of either vintage may be used to receive double sideband transmissions. This is just as well, since the cheap uhf modulators incorporated into video games, vcrs, satellite receivers and the like all operate on dsb.

Vestigial sideband is also used in NTSC and PAL colour signals, though it is seldom mentioned as such. In these two standards the two colour difference signals are amplitude modulated onto two subcarriers of the same frequency with a 90° phase difference. Because of the 90° phase shift, one subcarrier is at a maximum amplitude excursion when the other is crossing zero, and this enables the two colour signals to be completely separated in the demodulator. The actual subcarriers are suppressed before multiplexing with the video (luminance) signal, and about ten cycles of subcarrier, called the colour burst, are added to the horizontal blanking interval as a reference for the demodulator in the receiver.
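The quadrature trick can be sketched in a few lines (values and names of my own choosing): two signals are carried on the sine and cosine phases of one subcarrier, and multiplying by a locally regenerated subcarrier followed by averaging - the low-pass filter of a synchronous demodulator - separates them cleanly:

```python
# My own sketch of quadrature modulation with suppressed carriers: two
# colour difference values share one subcarrier frequency on sine and
# cosine phases, and synchronous demodulation recovers each independently.
import math

N, FSC = 1000, 50          # samples per window; subcarrier cycles per window
U, V = 0.3, -0.7           # two illustrative colour difference values

def subcarrier(n, phase_cos):
    angle = 2 * math.pi * FSC * n / N
    return math.cos(angle) if phase_cos else math.sin(angle)

# Modulate: transmitted chroma is U on the sine axis plus V on the cosine axis.
chroma = [U * subcarrier(n, False) + V * subcarrier(n, True) for n in range(N)]

def demodulate(phase_cos):
    """Multiply by a regenerated subcarrier and average (the low-pass step)."""
    return sum(chroma[n] * 2 * subcarrier(n, phase_cos) for n in range(N)) / N

assert round(demodulate(False), 6) == U   # sine-phase demodulator sees only U
assert round(demodulate(True), 6) == V    # cosine-phase demodulator sees only V
```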

If the full dsb signal comprising subcarrier and sidebands were to be added to the video signal it would produce a strong interference pattern all over the picture. We have seen that the amplitude of the carrier must be twice that of the highest modulating signal, and so the whole signal would have to be attenuated severely to enable it to be transmitted at all, and then more to reduce the interference level, resulting in a very poor signal-to-noise ratio for the colour information. The dsb suppressed carrier signal can be inserted at a higher level, since only very saturated colours have significant sideband amplitudes - in fact the amplitude for greys is zero.

This mode is often called QAM (quadrature amplitude modulation), particularly in digital communications.

The bandwidth of the colour signals is restricted to 1.3MHz and with a subcarrier frequency of 4.43361875MHz in the PAL 625-line standard that would give an upper sideband excursion of about 5.7MHz. In Systems B/G, I and D/K the video cut-off frequencies are respectively 5.0, 5.5 and 6.0 MHz. These are nominal, built into the specifications for each standard, but often the video is cut off around 5MHz for all standards. So the colour information is effectively vsb, but since there is little energy in the removed portion of the sideband no account of this is usually taken in the PAL receiver.

In the case of NTSC however, things are much tighter. With a 3.57954545MHz subcarrier and a 4.2MHz video bandwidth in System M, only 600kHz is available for the usb of the colour signal. Some fiddling is done to ensure that one colour signal corresponds to the colours to which the human eye has least acuity - it sees them blurred in other words. This (called the Q signal) is given a 500kHz bandwidth while the other (I) is afforded the 'full' 1.3MHz. However, the reduction of the usb of the I signal to 500kHz in transmission means that either the receiver must filter the whole chrominance signal to 500kHz or perform ssb demodulation on the 1.3MHz lower sideband of the I signal in order to avoid severe distortion caused by demodulating the vsb colour signal as a dsb one.


Page 2: Line Standards


Television Website Bookmarks


Mike Brown/MB21/
Andrew Emmerson/Paul Stenning/405 Alive/British Vintage Wireless Society
Keith Hamer
Darren Meldrum
Richard Russell
Justin Smith/Aerials and TV
Andrew Wiseman/625 Room
Bill Wright

Back To Top

Pembers' Ponderings

Compiled by Alan Pemberton
Sheffield, South Yorkshire, England
Email me