Speech recognition technology has made spoken interaction with machines feasible. However, no universal interface has yet been proposed that allows humans to communicate effectively, efficiently and effortlessly with machines via speech. Since existing speech interfaces are language specific and language dependent, this study proposes to standardize the interface so that it is language independent and adaptive to local dialects. The two primary models of speech recognition (Schultz and Waibel, 2001), the acoustic model, which analyses the sound of the voice and converts it into phonemes, and the language model, which compares combinations of phonemes to the words in its digital dictionary, are both language and speaker dependent. An extension of these models, Large Vocabulary Continuous Speech Recognition, is proposed in this study. Monolingual recognizers for multiple languages are designed first, and the combined collection of their phoneme sets is called the GlobalPhone database. Based on this global unit set, the aim is to make the resulting multilingual acoustic model language independent. Phonemes with a common sound are refined from the GlobalPhone database to form a set based on the International Phonetic Alphabet (IPA), from which the phonemes of any target language can be derived. To make the interface adaptive to different dialects, phoneme models of arbitrary context width, called polyphones, are maintained for the resulting target language. By evaluating the Polyphone Decision Tree Specialization, which captures the phonetic contexts of that target language, the recognizer can be adapted to the accents and dialects of that particular language.
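
To make the phoneme-sharing step more concrete, the sketch below illustrates how language-specific phone inventories might be pooled into a single IPA-labelled global unit set, and how a target-language inventory could then be derived from it. The languages, phone symbols, and mapping table are illustrative assumptions only, not data from the GlobalPhone corpus.

```python
# Illustrative sketch of the IPA-based global phoneme set described above.
# All inventories and mappings here are invented examples, not GlobalPhone data.

# Language-specific phone sets from monolingual recognizers (assumed examples).
monolingual_phones = {
    "german":  {"a", "e", "i", "o", "u", "sh", "ch"},
    "spanish": {"a", "e", "i", "o", "u", "rr", "ny"},
    "tamil":   {"a", "aa", "i", "ii", "u", "zh"},
}

# Assumed mapping from language-specific symbols to shared IPA symbols.
to_ipa = {
    ("german", "sh"): "ʃ", ("german", "ch"): "ç",
    ("spanish", "rr"): "r", ("spanish", "ny"): "ɲ",
    ("tamil", "zh"): "ɻ", ("tamil", "aa"): "aː", ("tamil", "ii"): "iː",
}

def global_phone_set(inventories, mapping):
    """Pool all language-specific phones into one global, IPA-labelled set."""
    global_set = set()
    for lang, phones in inventories.items():
        for p in phones:
            # Phones sharing the same sound collapse to a single IPA symbol.
            global_set.add(mapping.get((lang, p), p))
    return global_set

def derive_target_inventory(target_phones, lang, mapping, global_set):
    """Select the IPA units a target language needs from the global set."""
    wanted = {mapping.get((lang, p), p) for p in target_phones}
    # Units already covered by the multilingual model vs. units still missing.
    return wanted & global_set, wanted - global_set

global_set = global_phone_set(monolingual_phones, to_ipa)
covered, missing = derive_target_inventory({"a", "i", "u", "ai"}, "tamil", to_ipa, global_set)
print(sorted(covered), sorted(missing))
```

In this toy setup the shared vowels are covered by the global set, while the unmapped diphthong "ai" shows up as a unit that would still have to be modelled for the target language; the polyphone decision tree step described above would then refine the covered units for the target language's own phonetic contexts.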