Before the commercial introduction of transistors in the 1950s, electronic amplifiers used vacuum tubes (known in the United Kingdom as "valves").
By the 1960s, solid state (transistorized) amplification had become more common because of its smaller size, lighter weight, lower heat production, and improved reliability.
[6] Some musicians[7] prefer the distortion characteristics of tubes over transistors for electric guitar, bass, and other instrument amplifiers.
Possible explanations include non-linear clipping, or the higher levels of second-order harmonic distortion in single-ended designs that result from the tube interacting with the inductance of the output transformer.
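As a rough illustration of the second point, the sketch below (with arbitrary, assumed coefficients, not a model of any real tube or transformer) passes a sine wave through an asymmetric transfer curve and a symmetric one and compares their harmonic content; only the asymmetric curve produces a significant second harmonic.

```python
import numpy as np

# Illustrative only: y = x + 0.2*x**2 stands in for an asymmetric
# (single-ended-like) transfer curve, tanh(2*x) for a symmetric one.
fs, f0, n = 48_000, 1_000, 48_000
t = np.arange(n) / fs
x = 0.5 * np.sin(2 * np.pi * f0 * t)

def harmonic_levels_db(y, k_max=5):
    # Levels of harmonics 1..k_max relative to the fundamental, in dB.
    spec = np.abs(np.fft.rfft(y * np.hanning(n)))
    fund = spec[f0 * n // fs]
    return [round(20 * np.log10(spec[k * f0 * n // fs] / fund + 1e-12), 1)
            for k in range(1, k_max + 1)]

print("asymmetric:", harmonic_levels_db(x + 0.2 * x**2))   # strong 2nd harmonic
print("symmetric :", harmonic_levels_db(np.tanh(2 * x)))   # odd harmonics only
```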
[9] Since these explanations concentrate on the origins of the distortion, they are mostly useful to the engineers who develop and design audio amplifiers, but they may be difficult to apply for reviewers who only measure the output.
[10] A fundamental problem is that objective measurements (for example, of scientifically quantifiable variables such as current, voltage, power, THD, or level in dB) fail to address subjective preferences.
Musical instrument amplifier design deliberately introduces distortion and a strongly shaped, non-flat frequency response.
Notable exceptions are various "OTL" (output-transformerless) tube amplifiers, pioneered by Julius Futterman in the 1950s, and the somewhat rarer tube amplifiers that replace the impedance-matching transformer with additional (often, though not necessarily, transistorized) circuitry in order to eliminate parasitics and musically unrelated magnetic distortions.
An amplifier with little or no negative feedback (NFB) has a relatively high output impedance, so it will perform poorly with a loudspeaker whose designer paid little attention to the impedance curve.
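A minimal numeric sketch of that interaction, using assumed impedance values rather than measurements: with an output impedance of a couple of ohms, the level delivered to a loudspeaker whose impedance swings between 4 Ω and 30 Ω across the band varies by roughly 3 dB, while a high-feedback amplifier with an output impedance of a few hundredths of an ohm holds it within about 0.1 dB.

```python
from math import log10

# Assumed illustrative values: a loudspeaker whose impedance varies from
# 4 ohms to 30 ohms over the audio band, driven by amplifiers with 2 ohm
# (low-feedback) and 0.05 ohm (high-feedback) output impedance.
def level_variation_db(z_out, z_min=4.0, z_max=30.0):
    gain_hi = z_max / (z_out + z_max)   # voltage-divider gain at the impedance peak
    gain_lo = z_min / (z_out + z_min)   # gain at the impedance minimum
    return 20 * log10(gain_hi / gain_lo)

for z_out in (2.0, 0.05):
    print(f"{z_out} ohm output impedance -> {level_variation_db(z_out):.2f} dB variation")
```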
Large amounts of feedback, made possible by transformerless circuits with many active devices, lead to numerically lower distortion, but with a greater proportion of high-order harmonics and a harder transition into clipping.
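The harder transition into clipping can be seen in a simple static model (an idealisation, not a measurement of any real amplifier): if a stage with open-loop transfer y = tanh(A·v) is enclosed in a feedback loop with factor β, its incremental gain, normalised to the small-signal value, is (1 − y²)(1 + Aβ) / (1 + Aβ(1 − y²)).

```python
# Idealised static model, not a measurement: a stage y = tanh(A*v) enclosed in a
# negative-feedback loop with factor beta.  Its incremental gain, normalised to
# the small-signal closed-loop gain, is
#   (1 - y**2) * (1 + A*beta) / (1 + A*beta*(1 - y**2)).
A = 100.0
for beta in (0.0, 0.1):                      # no feedback vs. heavy feedback
    gains = []
    for y in (0.1, 0.5, 0.9, 0.99):          # output level as a fraction of full swing
        gains.append((1 - y**2) * (1 + A * beta) / (1 + A * beta * (1 - y**2)))
    print(f"beta={beta}:", [round(g, 3) for g in gains])
```

Without feedback the relative gain falls off gradually as the output level rises (soft compression); with heavy feedback it stays close to unity over most of the swing and then collapses much more steeply near full output.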
[15][28] Mastering engineer R. Steven Mintz wrote a rebuttal to Hamm's paper, saying that circuit design was of paramount importance, more so than the choice between tubes and solid-state components.
[29] Hamm's paper was also countered by Dwight O. Monteith Jr and Richard R. Flowers in their article "Transistors Sound Better Than Tubes", which presented a transistor microphone preamplifier design that reacted to transient overloading in much the same way as the limited selection of tube preamplifiers Hamm had tested.
In fact, generic triode gain stages can be observed to clip rather "hard" when their output is examined on an oscilloscope.
Early tube amplifiers often had limited response bandwidth, in part due to the characteristics of the inexpensive passive components then available.
Another limitation lies in the combination of a stage's high output impedance, the coupling capacitor, and the following grid resistor, which together act as a high-pass filter.
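With assumed component values (not taken from any particular design), the corner frequency of that high-pass filter works out as follows:

```python
from math import pi

# Assumed, illustrative values: the driving stage's output impedance R_out,
# the interstage coupling capacitor C and the next stage's grid resistor R_grid
# form a high-pass filter with corner f_c = 1 / (2*pi*(R_out + R_grid)*C).
R_out = 38e3      # ohms (assumed)
R_grid = 470e3    # ohms (assumed)
C = 22e-9         # farads, i.e. 22 nF (assumed)

f_c = 1 / (2 * pi * (R_out + R_grid) * C)
print(f"low-frequency corner: {f_c:.1f} Hz")   # about 14 Hz with these values
```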
Modern premium components make it easy to produce amplifiers that are essentially flat over the audio band, with less than 3 dB attenuation at 6 Hz and 70 kHz, well outside the audible range.
While the absence of NFB greatly increases harmonic distortion, it avoids instability, as well as slew rate and bandwidth limitations imposed by dominant-pole compensation in transistor amplifiers.
However, the effects of using low feedback principally apply only to circuits where significant phase shifts are an issue (e.g. power amplifiers).
On the other hand, the dominant-pole compensation in transistor amplifiers is precisely controlled: exactly as much of it can be applied as needed to strike a good compromise for the given application.
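A back-of-the-envelope sketch of that trade-off, using assumed values for the input-stage transconductance and tail current rather than figures from any specific amplifier: the compensation capacitor sets both the unity-gain bandwidth and the slew rate, so the designer can trade stability margin against speed by choosing its value.

```python
from math import pi

# Assumed example values for a dominant-pole compensated transistor amplifier.
gm = 1e-3        # input-stage transconductance, siemens (assumed)
I_tail = 20e-6   # input-stage tail current, amps (assumed)
C_c = 30e-12     # compensation capacitor, farads (assumed)

f_unity = gm / (2 * pi * C_c)   # unity-gain bandwidth set by the dominant pole
slew_rate = I_tail / C_c        # maximum rate of change of the output voltage
print(f"unity-gain bandwidth: {f_unity / 1e6:.1f} MHz")
print(f"slew rate: {slew_rate / 1e6:.2f} V/us")
```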
When a tube amplifier was operated at high volume, the high impedance of its rectifier tubes caused the power-supply voltage to dip as the amplifier drew more current (assuming class-AB operation), reducing power output and modulating the signal.
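A rough illustration of this "sag", with assumed rather than measured values: the B+ supply voltage falls roughly in proportion to the current demanded through the rectifier's effective series resistance.

```python
# Assumed illustrative values, not measurements of a specific amplifier.
V_unloaded = 420.0     # B+ voltage at very low current demand, volts (assumed)
R_rectifier = 120.0    # effective series resistance of the tube rectifier, ohms (assumed)

for i_draw in (0.07, 0.15, 0.30):   # average supply current, amps (idle to full output)
    v_supply = V_unloaded - i_draw * R_rectifier
    print(f"{i_draw * 1000:.0f} mA draw -> B+ about {v_supply:.0f} V")
```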
Crossover distortion was found especially annoying after the first silicon-transistor class-B and class-AB amplifiers arrived on the consumer market.
Earlier germanium-based designs, with that technology's much lower turn-on voltage and the devices' non-linear response curves, had not shown large amounts of crossover distortion.
As such, it most certainly refers to the "ear fatigue" distortion commonly found in the tube-type designs then in existence; the world's first prototype transistorized hi-fi amplifier did not appear until 1955.
The resulting sound pressure level depends on the sensitivity of the loudspeaker and the size and acoustics of the room as well as amplifier power output.
For example, a 10 W stereo SET (single-ended triode) amplifier draws a minimum of 80 W, and typically 100 W. A feature particular to tetrodes and pentodes is the possibility of ultra-linear or distributed-load operation with an appropriate output transformer.
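The 80 W figure in the SET example above follows from simple arithmetic, reading "10 W stereo" as 10 W per channel and assuming an overall efficiency of roughly 25% for a practical single-ended class-A output stage (which dissipates its full idle power continuously, regardless of signal level); heater supplies, driver stages and other losses plausibly account for the rest of the "typically 100 W" figure.

```python
# Assumed figure: roughly 25% overall efficiency for a single-ended class-A stage.
output_per_channel = 10.0   # watts
channels = 2
efficiency = 0.25

input_power = output_per_channel * channels / efficiency
print(f"minimum power drawn: about {input_power:.0f} W")   # 80 W for 2 x 10 W
```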
Until recently, the majority of modern commercial hi-fi amplifier designs used a class-AB topology (with a more or less pure low-level class-A region, depending on the standing bias current), in order to deliver greater power and efficiency, typically 12–25 watts and higher.
Class-AB push–pull topology is used almost universally in tube amplifiers for electric guitar that produce more than about 10 watts.
Some individual characteristics of the tube sound, such as the waveshaping on overdrive, are straightforward to produce in a transistor circuit or digital filter.
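A minimal sketch of such a digital waveshaper (a generic illustration, not any particular product's algorithm): a tanh curve offset by a small bias clips the two half-cycles asymmetrically, which adds the even-order harmonics associated with single-ended tube stages.

```python
import numpy as np

def tube_like_overdrive(x, drive=2.0, bias=0.2):
    # Biased tanh: positive and negative half-cycles are clipped unequally;
    # subtracting tanh(drive * bias) keeps the output zero for zero input.
    return np.tanh(drive * (x + bias)) - np.tanh(drive * bias)

fs = 48_000
t = np.arange(fs) / fs
clean = 0.8 * np.sin(2 * np.pi * 220 * t)    # 220 Hz test tone
driven = tube_like_overdrive(clean)          # asymmetrically clipped output
```

In practice such a curve would normally be followed by a DC-blocking high-pass filter and run at an oversampled rate to limit aliasing.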
More recently, a researcher has introduced the asymmetric cycle harmonic injection (ACHI) method to emulate tube sound with transistors.
Some enthusiasts, such as Nelson Pass, have built amplifiers using transistors and MOSFETs that operate in class A, including single-ended designs, and these often have the "tube sound".