| Sound | |
|---|---|
| Name | Sound |
| Field | Acoustics |
| Related | Isaac Newton, Lord Rayleigh, Albert Einstein, Hermann von Helmholtz |
Sound
Sound is the phenomenon of mechanical waves in an elastic medium, perceived by the auditory systems of organisms and measured by instruments across Acoustics, Physics, Engineering, and Neuroscience. Its study encompasses generation, transmission, and reception, linking practical technologies such as loudspeakers, microphones, sonar, and ultrasound with scientific disciplines including Psychoacoustics, Signal processing, Fluid dynamics, and Materials science. Key contributors and institutions include Isaac Newton, Hermann von Helmholtz, Lord Rayleigh, and Bell Labs.
Sound consists of longitudinal mechanical disturbances that propagate through media such as air, water, and solids via pressure fluctuations and particle displacement, and is characterized by frequency, wavelength, amplitude, phase, timbre, and propagation speed. Frequency relates to pitch and is measured in hertz; amplitude relates to loudness and is commonly quantified as sound pressure level; timbre depends on harmonic content and is analyzed with the Fourier transform and spectral analysis. Typical human hearing spans roughly 20 to 20,000 hertz, a range probed by early anatomical studies of the ear and later by the telephone and acoustics research of Alexander Graham Bell and by Georg Ohm's acoustic law.
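The spectral analysis mentioned above can be illustrated with a short sketch: it synthesizes a pure 440 Hz tone (concert A, an arbitrary choice for the example) and recovers its frequency from the peak of the discrete Fourier spectrum.

```python
import numpy as np

# Synthesize one second of a 440 Hz sine tone; the sample rate and
# frequency here are illustrative choices, not taken from the text.
fs = 8000                           # samples per second
t = np.arange(fs) / fs              # one second of time points
tone = 0.5 * np.sin(2 * np.pi * 440.0 * t)

# Spectral analysis: the magnitude spectrum peaks at the tone frequency.
spectrum = np.abs(np.fft.rfft(tone))
freqs = np.fft.rfftfreq(len(tone), d=1 / fs)
dominant_hz = freqs[np.argmax(spectrum)]
print(dominant_hz)  # 440.0 (frequency resolution is 1 Hz for a 1 s window)
```

For a complex tone such as a violin note, the same spectrum would also show harmonics at integer multiples of 440 Hz, which is what the text means by timbre depending on harmonic content.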
Sound production arises from vibrating sources such as vocal folds in the human voice, strings in violins, reeds in clarinets, and diaphragm motion in loudspeakers and microphones. Mechanical generators include impacts (e.g., drums), aerodynamic sources (e.g., jet engines), and electroacoustic transducers developed at institutions such as Bell Labs and the Fraunhofer Society. Propagation follows wave equations derived by Isaac Newton and Jean le Rond d'Alembert, with solutions shaped by medium properties studied by Lord Rayleigh; phenomena include reflection, refraction, diffraction, absorption, and scattering, all relevant to environments such as concert halls designed by architects working with consultants affiliated with the Acoustical Society of America and the Institute of Acoustics.
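The dependence of propagation on medium properties can be made concrete with the ideal-gas speed of sound, c = sqrt(γRT/M). The sketch below evaluates it for dry air at 20 °C; the gas constants are standard textbook values, and the scenario is purely illustrative.

```python
import math

# Speed of sound in an ideal gas: c = sqrt(gamma * R * T / M).
gamma = 1.4      # adiabatic index of dry air (diatomic gas)
R = 8.314        # universal gas constant, J/(mol K)
M = 0.02896      # molar mass of dry air, kg/mol
T = 293.15       # 20 degrees Celsius expressed in kelvin

c = math.sqrt(gamma * R * T / M)
print(round(c, 1))  # roughly 343 m/s, the familiar value for air at 20 C
```

The same formula explains why sound speeds up in warmer air (larger T) and travels at very different speeds in water or steel, where the relevant stiffness and density replace the ideal-gas terms.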
Perception of sound is mediated by auditory organs such as the cochlea and by neural pathways to the auditory cortex, studied by researchers at Harvard University, the Massachusetts Institute of Technology, and University College London. Psychoacoustics examines masking, critical bands (introduced by Harvey Fletcher and refined in Brian Moore's work), loudness scaling building on the equal-loudness contours of Fletcher and Munson, localization via interaural time and level differences first formalized by Lord Rayleigh, and pitch perception modeled by Hermann von Helmholtz and by contemporary computational neuroscience groups such as those at the Max Planck Institute for Human Cognitive and Brain Sciences. Perceptual phenomena are exploited in technologies such as MP3 compression, Dolby Laboratories surround sound, and hearing devices developed by companies such as Cochlear Limited and in research at Johns Hopkins University.
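The interaural time difference used for localization is often approximated by Woodworth's spherical-head formula, ITD = (a/c)(θ + sin θ). The head radius and sound speed below are typical textbook values chosen only for illustration.

```python
import math

def itd_seconds(azimuth_rad, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of the interaural time
    difference for a distant source at the given azimuth
    (0 = straight ahead, pi/2 = directly to one side)."""
    return (head_radius_m / c) * (azimuth_rad + math.sin(azimuth_rad))

# A source directly to the side gives the maximum ITD, around 0.66 ms,
# consistent with the largest ITDs human listeners experience.
print(round(itd_seconds(math.pi / 2) * 1e6))  # 656 microseconds
```

A source straight ahead gives an ITD of zero, which is why front-back confusions occur and why level differences and spectral cues complement timing cues.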
Sound is quantified with instruments including microphones, hydrophones, sound level meters, and impedance tubes; units include the pascal for pressure and the decibel for logarithmic ratios, referenced to 20 micropascals in air. Spectral weightings such as A-weighting and C-weighting adjust measured levels to approximate human hearing and are standardized by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), with calibration traceable to national metrology institutes such as NIST and PTB. Time-frequency analysis employs the short-time Fourier transform and the wavelet transform, while propagation losses are treated with models derived from the Navier–Stokes equations and with empirical standards from ISO and ANSI committees.
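The decibel scale referenced to 20 micropascals works out as SPL = 20·log10(p/p0); a minimal sketch of the conversion:

```python
import math

P_REF = 20e-6  # reference pressure in air: 20 micropascals

def spl_db(pressure_pa):
    """Sound pressure level in dB re 20 uPa."""
    return 20.0 * math.log10(pressure_pa / P_REF)

print(round(spl_db(20e-6)))  # 0 dB: the threshold-of-hearing reference
print(round(spl_db(1.0)))    # 94 dB: 1 Pa, a common calibrator level
```

The logarithmic scale compresses the enormous dynamic range of hearing, from the 20 µPa reference up to pressures millions of times larger, into a convenient span of roughly 0 to 130 dB.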
Applications span communications (telephones, radio), sensing (sonar, seismic surveying, ultrasound imaging), entertainment (music recording and film soundtracks), industrial processes (non-destructive testing, acoustic levitation), and consumer electronics (smartphone speakers, headphones). Technologies include digital signal processing algorithms developed at Bell Labs and standardized through MPEG, spatial audio formats from Dolby Laboratories and DTS, and biomedical devices such as cochlear implants and diagnostic ultrasound systems developed at institutions like the Mayo Clinic and ETH Zurich. Emerging work applies machine learning from Google and DeepMind to source separation and to environmental monitoring with sensor networks similar to deployments by NOAA and the USGS.
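Echo ranging in sonar and ultrasound rests on the same time-of-flight rule, range = c·t/2 (half the round trip). The sketch below uses a nominal 1500 m/s sound speed in seawater, a round illustrative figure rather than a measured value.

```python
def echo_range_m(round_trip_s, c=1500.0):
    """Distance to a reflector from the round-trip echo delay,
    assuming a constant sound speed c (m/s) in the medium."""
    return c * round_trip_s / 2.0

# An echo returning 0.4 s after the ping places the target 300 m away.
print(echo_range_m(0.4))  # 300.0
```

Medical ultrasound applies the identical geometry at millimeter scales, with a tissue sound speed near 1540 m/s and delays measured in microseconds.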
Exposure to high sound levels causes auditory damage such as noise-induced hearing loss, documented in guidelines from the World Health Organization and NIOSH; regulations and standards from OSHA and the EPA set permissible exposure limits and environmental noise criteria. Noise pollution affects wildlife behavior and ecosystems, studied by ecologists at the Smithsonian Institution and WWF, and influences conservation policy on shipping noise, offshore wind farms, and urban planning by municipal agencies. Acoustic remediation employs barriers, active noise control technologies from firms such as Bose Corporation, and urban design informed by research at MIT and TU Delft to mitigate adverse effects on public health and biodiversity.
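OSHA's permissible exposure limits follow a 5 dB exchange rate around a 90 dBA, 8-hour criterion, giving T = 8 / 2^((L−90)/5) hours (29 CFR 1910.95); a minimal sketch:

```python
def permissible_hours(level_dba):
    """OSHA permissible daily exposure duration in hours for a given
    A-weighted sound level, using the 5 dB exchange rate and the
    90 dBA / 8 h criterion of 29 CFR 1910.95."""
    return 8.0 / 2 ** ((level_dba - 90.0) / 5.0)

print(permissible_hours(90))   # 8.0 hours
print(permissible_hours(95))   # 4.0 hours
print(permissible_hours(100))  # 2.0 hours
```

Each 5 dB increase halves the allowed duration, which is why even modest level reductions from barriers or active noise control translate into large gains in safe exposure time.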
Category:Acoustics