Design of audio system based on TSC2101 under Windows CE

Windows CE is an open, scalable, 32-bit real-time embedded operating system. Thanks to its high reliability, strong real-time performance and small kernel size, it is widely used in embedded intelligent devices, with applications in industrial control, information appliances, mobile communications, automotive electronics, personal consumer electronics and other fields, and it is among the most widely used and fastest growing embedded operating systems. In most of these embedded products the audio module is an essential component. This paper builds an audio system for the Windows CE operating system based on the Intel XScale PXA272 processor and the TI TSC2101 audio chip, and briefly introduces its implementation.

Hardware implementation of the audio system


The audio driver in this design follows the Unified Audio model. The audio system is built around the Intel XScale PXA272 processor and TI's TSC2101 audio chip and uses an I2S (Inter-IC Sound) bus architecture; the system schematic is shown in Figure 1. The Intel XScale PXA272 integrates an I2S controller that transfers audio data over the I2S bus. Other signals, such as control signals, must be transmitted separately; in this design, the SSP serial port of the PXA272 is configured as an SPI port to carry the control signals.

Figure 1 System schematic
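
As an illustration of the SSP-as-SPI configuration mentioned above, the following is a minimal sketch in C. The register names (SSCR0, SSCR1, SSDR) and bit fields follow the PXA27x developer's manual, but the virtual base address, the clock divider and the SPI mode chosen here are placeholder assumptions for illustration, not the project's actual settings.

/* Hypothetical sketch: configure a PXA27x SSP port as a Motorola SPI master
 * to carry TSC2101 control traffic. The virtual base address and divider
 * values below are assumptions made for this example. */
#include <windows.h>

#define SSP_BASE_VA     0xB0000000UL   /* assumed virtual mapping of the SSP port */
#define SSCR0           (*(volatile DWORD *)(SSP_BASE_VA + 0x00))
#define SSCR1           (*(volatile DWORD *)(SSP_BASE_VA + 0x04))
#define SSDR            (*(volatile DWORD *)(SSP_BASE_VA + 0x10))  /* transmit/receive data register */

#define SSCR0_DSS_16BIT 0x0F           /* 16-bit frames (DSS = frame size - 1) */
#define SSCR0_FRF_SPI   (0x0 << 4)     /* Motorola SPI frame format */
#define SSCR0_SSE       (1 << 7)       /* SSP port enable */
#define SSCR0_SCR(x)    ((x) << 8)     /* serial clock rate divider */

static void SspInitAsSpi(void)
{
    SSCR0 = 0;                                          /* disable the port before reconfiguring */
    SSCR1 = 0;                                          /* clock polarity/phase 0 (assumed SPI mode 0) */
    SSCR0 = SSCR0_DSS_16BIT | SSCR0_FRF_SPI | SSCR0_SCR(2);
    SSCR0 |= SSCR0_SSE;                                 /* enable the port */
}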


I2S is a serial digital audio bus protocol proposed by Philips. The I2S controller of the PXA272 manages the I2S link and consists of data buffers, status and control registers, and counters. It connects system memory to the external audio codec chip (the TSC2101) to produce synchronized audio. When an audio file is played, the I2S controller sends the digitized sound samples in system memory to the TSC2101 over the I2S link, and the TSC2101's digital-to-analog converter then converts the digital audio signal into an analog signal. For recording, the I2S controller receives digital samples from the external TSC2101 audio chip and stores them in system memory. The controller supports both the standard I2S format and the MSB-justified format. The TSC2101 and the PXA272's I2S controller are connected by five pins that form the audio data channel. The signals handled by the I2S controller are mainly: a bit-rate clock that can be derived from an external or internal clock source; a control signal that carries left/right channel information; two serial audio data pins, one output and one input; and an optional system clock that the I2S controller can supply to the external codec.


The I2S controller is accessed via DMA. In DMA mode, the DMA controller can only reach the FIFOs through the Serial Audio Data Register (SADR). The DMA controller typically accesses the FIFO data in bursts of 8, 16 or 32 bytes.
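
The following sketch shows what a playback DMA descriptor targeting SADR could look like. The descriptor layout and the DCMD bit fields follow the PXA27x developer's manual; the SADR physical address used here and the helper function itself are assumptions for illustration only.

/* Hypothetical sketch of a PXA27x DMA descriptor that feeds playback samples
 * from a memory buffer to the I2S transmit FIFO through SADR. */
#include <windows.h>

typedef struct {
    DWORD ddadr;    /* physical address of the next descriptor */
    DWORD dsadr;    /* source address (audio buffer in system memory) */
    DWORD dtadr;    /* target address (the I2S SADR register) */
    DWORD dcmd;     /* length, width and burst-size command bits */
} PXA_DMA_DESCRIPTOR;       /* descriptors must be 16-byte aligned */

#define DCMD_INCSRCADDR (1U << 31)     /* increment the source address */
#define DCMD_FLOWTRG    (1U << 28)     /* the target (FIFO) paces the transfer */
#define DCMD_BURST32    (0x3 << 16)    /* 32-byte bursts into the FIFO */
#define DCMD_WIDTH4     (0x3 << 14)    /* 4-byte accesses to SADR */
#define I2S_SADR_PHYS   0x40400080UL   /* assumed physical address of SADR */

/* Fill one descriptor for a playback buffer of cbBuffer bytes. */
static void BuildTxDescriptor(PXA_DMA_DESCRIPTOR *pDesc,
                              DWORD dwBufferPhys, DWORD cbBuffer,
                              DWORD dwNextDescPhys)
{
    pDesc->ddadr = dwNextDescPhys;
    pDesc->dsadr = dwBufferPhys;
    pDesc->dtadr = I2S_SADR_PHYS;
    pDesc->dcmd  = DCMD_INCSRCADDR | DCMD_FLOWTRG |
                   DCMD_BURST32 | DCMD_WIDTH4 |
                   (cbBuffer & 0x1FFF);    /* LENGTH field, at most 8 KB - 1 */
}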


The TSC2101 audio chip used in this design integrates a stereo audio codec and a touch screen controller. Its stereo DAC supports sampling rates of up to 48 kHz and is intended for PDAs, portable media players, smartphones and MP3 players. Alongside the audio codec, the TSC2101 integrates a speaker amplifier, a headphone amplifier and a four-wire touch screen controller; it provides a stereo headphone interface, a handset interface, a mono 8 Ω speaker amplifier and a 32 Ω receiver driver, and also includes a battery monitor and an on-chip temperature sensor.


The circuit design of the TSC2101 chip is shown in Figure 2.

Figure 2 TSC2101 chip circuit design


This design applies the TSC2101 in a smartphone. CP-IN is the voice input from the communication module, and CP-OUT is the output of the audio system to the communication module. In practice, MIC1 can be routed to CP-OUT through the TSC2101's internal PGA (programmable gain amplifier) and AGC (automatic gain control) circuits to implement the smartphone's microphone path; at the same time, the MIC1 input can be sampled by the internal ADC and transferred over the I2S bus into the processor's memory space to implement recording. Call recording can therefore also be supported while the smartphone is in a call. In the circuit diagram, pins 38 to 41 form the SPI interface, pins 42 to 46 are the I2S pins, pins 9 to 12 are the touch screen inputs, pins 27 and 28 are the headphone audio outputs, pin 26 drives the handset receiver, and pins 33 and 35 drive the external speaker.
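
As an illustration of the control path over those SPI pins, the sketch below writes one TSC2101 control register. The 16-bit command word is assumed here to carry a read/write flag, a page number and a register address followed by one 16-bit data word; the exact bit positions, and the SspWriteWord() helper, are assumptions for this example rather than the project's actual code.

/* Illustrative sketch of writing a TSC2101 control register over the SPI
 * link formed by the PXA272 SSP port. */
#include <windows.h>

extern void SspWriteWord(WORD wData);   /* hypothetical helper: push one 16-bit SPI frame */

static void Tsc2101WriteReg(WORD wPage, WORD wAddr, WORD wValue)
{
    /* Assumed command-word layout: bit 15 = R/W (0 = write),
     * bits 14..11 = page, bits 10..5 = register address. */
    WORD wCommand = (WORD)(((wPage & 0x0F) << 11) | ((wAddr & 0x3F) << 5));

    SspWriteWord(wCommand);   /* send the command word */
    SspWriteWord(wValue);     /* send the register data */
}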


Audio driver with Unified Audio model


Audio drivers can be implemented either in the layered MDD-PDD mode or with the non-layered Unified Audio model. The MDD-PDD approach implements the stream interface through the Model Device Driver (MDD) library provided by Microsoft, Wavemdd.dll, which builds the stream interface functions on top of the audio Device Driver Service-provider Interface (DDSI) functions. If Wavemdd.dll is used, a matching Platform Dependent Driver (PDD) library that implements the audio DDSI functions must be produced; this PDD library is usually called Wavepdd.lib. The two libraries are then linked together to form Wavedev.dll.


The other approach is the Unified Audio model, a non-layered audio driver model that supports the standard waveform driver interface. This design uses this approach (Platform Builder's driver directory includes sample code based on this model). In a layered audio driver, the driver consists of an MDD layer, which performs functions independent of the hardware platform, and a PDD layer, which is tied directly to the hardware platform. In the Unified Audio model this MDD/PDD split is unnecessary. Figure 3 shows the audio driver structure of the Unified Audio model.

Figure 3 Audio drive structure of the Unified Audio model


In this model, the audio driver is still implemented as a stream interface driver, exporting the standard stream interface functions WAV_Init(), WAV_Deinit(), WAV_Open(), WAV_Close(), WAV_Read(), WAV_Write(), WAV_Seek(), WAV_IOControl(), WAV_PowerUp() and WAV_PowerDown().
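
For reference, typical prototypes of these exports are sketched below; the parameter names are illustrative, and in practice most of the audio work (the waveOut/waveIn messages) arrives through WAV_IOControl().

/* Typical stream interface exports of a Wavedev.dll built on the
 * Unified Audio model (parameter names are illustrative). */
#include <windows.h>

DWORD WAV_Init(DWORD dwContext);
BOOL  WAV_Deinit(DWORD dwDeviceContext);
DWORD WAV_Open(DWORD dwDeviceContext, DWORD dwAccess, DWORD dwShareMode);
BOOL  WAV_Close(DWORD dwOpenContext);
DWORD WAV_Read(DWORD dwOpenContext, LPVOID pBuffer, DWORD dwCount);
DWORD WAV_Write(DWORD dwOpenContext, LPCVOID pBuffer, DWORD dwCount);
DWORD WAV_Seek(DWORD dwOpenContext, long lDistance, DWORD dwMoveMethod);
BOOL  WAV_IOControl(DWORD dwOpenContext, DWORD dwCode,
                    PBYTE pBufIn, DWORD dwLenIn,
                    PBYTE pBufOut, DWORD dwLenOut,
                    PDWORD pdwActualOut);
void  WAV_PowerUp(void);
void  WAV_PowerDown(void);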

DMA buffer design and implementation


Because the audio device driver places high real-time demands on the device, designing the DMA buffers and using them sensibly to speed up audio data processing and reduce latency is very important.


The purpose of the DMA controller is to let the CPU handle processing unrelated to the data bus while the DMA controller takes charge of the data transfers. This mechanism frees the CPU from heavy data-moving work so that it can perform other computations, improving overall system speed. The PXA272's DMA controller provides 32 DMA channels, numbered 0 to 31, which support both flow-through and fly-by data transfers.


This design uses a double-buffered DMA channel arrangement, shown in Figure 4: while the CPU is processing the data of one buffer, the DMA controller can complete the transfer of the other buffer. Alternating in this way improves the parallelism of the system and the real-time performance of audio processing.


In the double-buffered driver design, taking playback as an example, new audio data is first written into buffer 1 under CPU control while the DMA controller is transferring the data of buffer 2. When the data of buffer 2 has been completely transferred, a DMA interrupt is generated, notifying the CPU to start writing new audio data into buffer 2 while the DMA engine goes on to transfer the data of buffer 1. Because the CPU and the DMA engine never work on the same DMA buffer at the same time, contention for the buffers is reduced, audio data is kept from being lost as far as possible, the real-time performance of audio processing is improved, and the parallelism of the system is increased.
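
A minimal sketch of this ping-pong handoff is shown below. The helper names FillBufferFromApp() and StartDmaTransfer() are hypothetical, standing in for the driver's real refill and DMA-start routines.

/* Minimal sketch of the double-buffer handoff for playback. The interrupt
 * thread refills the buffer the DMA engine just finished while the other
 * buffer is being transmitted. */
#include <windows.h>

#define AUDIO_DMA_BUFFER_SIZE 0x1000   /* 4 KB per buffer, as in the test setup */

static PBYTE g_pDmaBuffer[2];          /* virtual addresses of the two buffers */
static DWORD g_dwDmaBufferPhys[2];     /* their physical addresses */
static int   g_nPlayingBuffer;         /* buffer currently owned by the DMA engine */

/* Hypothetical helpers provided elsewhere in the driver. */
extern DWORD FillBufferFromApp(PBYTE pBuffer, DWORD cbMax);
extern void  StartDmaTransfer(DWORD dwBufferPhys, DWORD cbBytes);

/* Called by the driver's interrupt service thread on the DMA end-of-transfer interrupt. */
static void OnPlaybackDmaInterrupt(void)
{
    int nDone = g_nPlayingBuffer;      /* buffer the DMA engine just drained */
    int nNext = nDone ^ 1;             /* the other buffer */

    /* Hand the already-filled buffer to the DMA engine first, so playback
     * continues without a gap ... */
    g_nPlayingBuffer = nNext;
    StartDmaTransfer(g_dwDmaBufferPhys[nNext], AUDIO_DMA_BUFFER_SIZE);

    /* ... then refill the drained buffer with new audio data from the stream. */
    FillBufferFromApp(g_pDmaBuffer[nDone], AUDIO_DMA_BUFFER_SIZE);
}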


In this design, the MapDMABuffers() function allocates the DMA audio data buffers; its main task is to allocate the DMA buffers used for receiving and transmitting audio data.
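
The sketch below shows one way such a routine can obtain physically contiguous, uncached memory for the two buffers using the CEDDK function HalAllocateCommonBuffer(). It is a simplified illustration under these assumptions, not the sample driver's actual implementation.

/* A minimal sketch of a MapDMABuffers()-style routine that allocates two
 * uncached, physically contiguous DMA buffers through the CEDDK. */
#include <windows.h>
#include <ceddk.h>

#define AUDIO_DMA_BUFFER_SIZE 0x1000

static PBYTE g_pDmaBuffer[2];
static DWORD g_dwDmaBufferPhys[2];

BOOL MapDMABuffers(void)
{
    DMA_ADAPTER_OBJECT adapter = {0};
    PHYSICAL_ADDRESS   physAddr;
    int i;

    adapter.ObjectSize    = sizeof(adapter);
    adapter.InterfaceType = Internal;        /* on-chip DMA controller */
    adapter.BusNumber     = 0;

    for (i = 0; i < 2; i++) {
        /* Uncached, so the CPU and the DMA engine see consistent data. */
        g_pDmaBuffer[i] = (PBYTE)HalAllocateCommonBuffer(&adapter,
                                                         AUDIO_DMA_BUFFER_SIZE,
                                                         &physAddr,
                                                         FALSE);
        if (g_pDmaBuffer[i] == NULL) {
            return FALSE;
        }
        g_dwDmaBufferPhys[i] = physAddr.LowPart;
    }
    return TRUE;
}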

Conclusion


This paper has analyzed the basic principles and the driver model of an embedded audio system based on the TSC2101 audio chip under the Windows CE operating system, and has described the implementation of the DMA double buffering with concrete routines. The design meets the real-time requirements of the audio system in practical use. In testing, with the buffer size set to 0x1000 bytes, a bit clock frequency of 2.836 MHz, and DMA burst sizes of 32, 16 and 8 bytes, playback was clear and free of noise, achieving the expected result.
