Purdue News

April 23, 2003

Engineers aim to make average singers sound like virtuosos

WEST LAFAYETTE, Ind. – Karaoke may never be the same, thanks to research being presented in Nashville detailing the latest findings in efforts to create a computerized system that makes average singers sound like professionals.

AUDIO
  • Voice 1 before (7 seconds)
  • Voice 1 after (7 seconds)
  • Voice 2 before (7 seconds)
  • Voice 2 after (7 seconds)
  • "Our ultimate goal is to have a computer system that will transform a poor singing voice into a great singing voice," said Mark J.T. Smith, a professor and head of Purdue University's School of Electrical and Computer Engineering.

    To that end, Smith, a former faculty member at the Georgia Institute of Technology, is working with Georgia Tech graduate student Matthew Lee to create computer models for voice analysis and synthesis. These models, computer programs known as algorithms, break the human singing voice into components that can then be modified to produce a more professional-sounding rendition of the original voice.

    Far more work is needed before the system is finished, Smith said. Even so, the specialized programs are already able to alter certain important characteristics of a person's voice, such as pitch, duration and "vibrato," the modulation in frequency produced by professional singers.
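
    To give a concrete sense of one of those characteristics, the short sketch below (a minimal numpy example, not code from the Purdue/Georgia Tech system) generates a plain tone and a version with vibrato by slowly modulating the tone's frequency; the rate and depth values are illustrative, not figures from the research.

        import numpy as np

        fs = 44100                        # samples per second
        t = np.arange(0, 2.0, 1.0 / fs)   # two seconds of time points

        f0 = 220.0        # base pitch of the tone in Hz (illustrative)
        vib_rate = 5.5    # vibrato rate in Hz (roughly typical of singers)
        vib_depth = 0.03  # vibrato depth as a fraction of the pitch

        # A plain tone: constant frequency, no vibrato.
        plain = np.sin(2 * np.pi * f0 * t)

        # Vibrato is a slow, periodic wobble of the pitch. Integrating the
        # time-varying frequency gives the phase of the modulated tone.
        inst_freq = f0 * (1.0 + vib_depth * np.sin(2 * np.pi * vib_rate * t))
        phase = 2 * np.pi * np.cumsum(inst_freq) / fs
        with_vibrato = np.sin(phase)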

    Lee will present the latest research findings on April 30 during the 145th Meeting of the Acoustical Society of America in Nashville, Tenn., the nation's country music capital. Lee will demonstrate the system by playing before-and-after country music audio clips to researchers attending the conference.

    The system uses a sinusoidal modeling technique to break down the original voice. The voice is then reconstructed using a mathematical method called the fast Fourier transform, which enables the system to resynthesize the voice quickly.
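
    As a simplified stand-in for that breakdown step, the sketch below takes one short frame of a synthetic signal, computes its spectrum with the fast Fourier transform, and picks out the spectral peaks as sinusoidal components (frequency, amplitude and phase). It is only an illustration of the idea; the published model uses a more elaborate analysis-by-synthesis procedure.

        import numpy as np

        fs = 44100
        n = 2048                 # frame length in samples
        t = np.arange(n) / fs

        # A synthetic "voiced" frame: a 220 Hz fundamental plus two harmonics.
        frame = (1.00 * np.sin(2 * np.pi * 220 * t)
                 + 0.50 * np.sin(2 * np.pi * 440 * t)
                 + 0.25 * np.sin(2 * np.pi * 660 * t))

        # Window the frame and take its FFT to get a magnitude spectrum.
        window = np.hanning(n)
        spectrum = np.fft.rfft(frame * window)
        freqs = np.fft.rfftfreq(n, 1.0 / fs)
        mags = np.abs(spectrum)

        # Naive peak picking: keep bins that are local maxima well above the noise.
        peaks = [k for k in range(1, len(mags) - 1)
                 if mags[k] > mags[k - 1] and mags[k] > mags[k + 1]
                 and mags[k] > 0.1 * mags.max()]

        # Each peak approximates one sine-wave component of the frame.
        components = [(freqs[k], 2.0 * mags[k] / window.sum(), np.angle(spectrum[k]))
                      for k in peaks]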

    Smith, who specializes in an area of electrical engineering known as signal processing, began working on the underlying "sinusoidal model" in the mid-1980s with former doctoral student E. Bryan George, who pioneered the method. The model enables the human singing voice to be broken into components, or sine wave segments. More recently, Smith and Lee developed a method for modifying sine wave parameters in the segments to improve the quality of singing.
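
    Assuming a list of per-frame components like the one computed in the sketch above, the toy function below shows the flavor of such a modification: scale every component's frequency by the same factor to correct the pitch, then rebuild the frame as a sum of sine waves. The actual modifications in the research are considerably more sophisticated, but the mechanics of "change the parameters, then resynthesize" are the same.

        import numpy as np

        def resynthesize(components, n, fs, pitch_factor=1.0):
            # components: list of (frequency_hz, amplitude, phase) triples,
            # for example from a spectral-peak analysis of the original frame.
            # pitch_factor: 1.0 leaves the pitch alone; 2 ** (1 / 12) raises it
            # one semitone, 2 ** (-1 / 12) lowers it one semitone.
            t = np.arange(n) / fs
            frame = np.zeros(n)
            for freq, amp, phase in components:
                frame += amp * np.cos(2 * np.pi * freq * pitch_factor * t + phase)
            return frame

        # Example: nudge a slightly flat frame up by a quarter tone.
        # corrected = resynthesize(components, 2048, 44100, pitch_factor=2 ** (0.5 / 12))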

    "While we have had success in improving the quality of the singing voice samples in our database, we have a way to go before we are able to handle all types of voices reliably," Smith said. "There are many challenges in developing a system of this type.

    "Being able to characterize the properties of a good voice in terms of the sine wave components that we compute is not a trivial task. The problem is further complicated by the wide variety of singing styles and voice types that are present in our population."

    For example, the sine wave components for male voices and female voices are significantly different.

    "It turns out that we are having greater difficulty with the male singers than with the female singers," Smith said. "The higher pitched voices are easier for us to work with, in general."

    Other challenges include finding ways to improve a person's singing without dramatically altering the original voice, identifying the parameters that need to be modified for specific types of quality improvements, and then operating the system in real time on available hardware.

    An important feature of the sinusoidal model technique is an "overlap-add" construction, in which a singing voice is partitioned into segments and processed in blocks. The model is designed around blocks that overlap, which results in voice synthesis that sounds natural and not choppy, Smith said.
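
    A bare-bones version of that overlap-add idea, with the per-frame enhancement left as a placeholder, might look like the sketch below: the signal is cut into half-overlapping, windowed frames, each frame is processed on its own, and the frames are added back at their original positions so neighboring blocks blend smoothly into one another.

        import numpy as np

        def overlap_add(signal, frame_len=1024, process=lambda frame: frame):
            # process: a per-frame function (identity by default); in a singing
            # enhancer, the sinusoidal analysis and modification would go here.
            hop = frame_len // 2              # frames overlap by half their length
            window = np.hanning(frame_len)
            out = np.zeros(len(signal))
            norm = np.zeros(len(signal))

            for start in range(0, len(signal) - frame_len + 1, hop):
                frame = signal[start:start + frame_len] * window
                out[start:start + frame_len] += process(frame)
                norm[start:start + frame_len] += window

            # Dividing by the summed windows restores the original amplitude
            # wherever frames overlap (the extreme edges simply taper to zero).
            return out / np.maximum(norm, 1e-8)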

    Singing is first converted into a sequence of numbers, which is modified into a new set of numbers that represents a more professional singing voice. The new numbers are then fed to a digital-to-analog converter and to a speaker, Smith said.
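
    Assuming a mono, 16-bit WAV recording and an enhancement routine standing in for the real processing, that end-to-end path can be sketched as follows; playing the output file sends the new numbers through the sound card's digital-to-analog converter and out the speaker. The file names and the enhance placeholder here are hypothetical.

        import wave
        import numpy as np

        def enhance(samples):
            # Placeholder for the actual enhancement (sinusoidal analysis,
            # parameter modification and overlap-add resynthesis).
            return samples

        # The singing arrives as a sequence of numbers: 16-bit samples in a WAV file.
        with wave.open("singer_in.wav", "rb") as f:
            rate = f.getframerate()
            raw = f.readframes(f.getnframes())
        samples = np.frombuffer(raw, dtype=np.int16).astype(np.float64) / 32768.0

        # Modify the numbers, then write them back out as a new recording.
        out = np.clip(enhance(samples), -1.0, 1.0)
        with wave.open("singer_out.wav", "wb") as f:
            f.setnchannels(1)
            f.setsampwidth(2)
            f.setframerate(rate)
            f.writeframes((out * 32767.0).astype(np.int16).tobytes())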

    The sinusoidal model Smith and Lee use could have broader applications, such as synthesizing musical instruments and improving the quality of text-to-speech programs in which words typed on a computer are automatically converted into spoken language. Former Georgia Tech doctoral student Michael Macon and his adviser Mark Clements used the sinusoidal model Smith and George developed to create a system that changes text into speech and typed lyrics into singing.

    Other possible applications include programs for the hearing-impaired that make it easier to hear speech and systems that change the playback speed of digital recordings.

    "The idea of digitally enhanced human singing has been brewing in my mind for a long time," Smith said. "What I would really like is for us to cut an album one of these days."

    Early portions of the research were funded by the National Science Foundation.

    Writer: Emil Venere, (765) 494-4709, venere@purdue.edu

    Sources: Mark J.T. Smith, (765) 494-3539, mjts@purdue.edu

    Matthew Lee, (404) 664-8323, mattlee@ece.gatech.edu

    Purdue News Service: (765) 494-2096; purduenews@purdue.edu

     

    NOTE TO JOURNALISTS: Broadcast-quality audio clips are available at https://www.purdue.edu/uns/uns/html4ever/030423.Smith.singing.html. The audio clips also are available from Emil Venere, (765) 494-4709, venere@purdue.edu.


    ABSTRACT

    Analysis and Enhancement of Country Singing

    Matthew Lee, Center for Signal & Image Processing

    Georgia Institute of Technology

    Mark J.T. Smith, School of Electrical and Computer Engineering

    Purdue University

    The study of human singing has focused extensively on the analysis of voice characteristics. At the same time, a substantial body of work has been devoted to modeling and synthesizing the human voice. The work on which we report brings together some key analysis and synthesis principles to create a new model for digitally improving the perceived quality of an average singing voice. The model presented employs an analysis-by-synthesis overlap-add (ABS-OLA) sinusoidal model, which in the past has been used for analysis and synthesis of speech, in combination with a spectral model of the vocal tract. The ABS-OLA sinusoidal model for speech has been shown to be a flexible, accurate, and computationally efficient representation capable of producing a natural-sounding singing voice (E.B. George and M.J.T. Smith, Trans. Speech Audio Processing, 389-406, 1997). A spectral model infused in the ABS-OLA uses generalized Gaussian functions to provide a simple framework that enables precise modification of spectral characteristics while maintaining the quality and naturalness of the original voice. Furthermore, it is shown that the parameters of the new ABS-OLA can accommodate pitch corrections and vocal quality enhancements while preserving naturalness and singer identity. Examples of enhanced country singing will be presented.

