
Implementing FFT of Audio Signals on the Android Platform

2015-05-20 15:55

Please credit this source when reposting: /article/10904535.html

What it does: loads audio files from the moveDsp folder on the local SD card (both recorded files and MP3 files), runs a real-time FFT during playback, and draws the time-domain and frequency-domain waveforms of the signal.

Design steps:
Step 1: lay out the UI, then write the recording utility class URecorder (which configures the recording parameters) and the interface IVoiceManager.
import java.io.IOException;

import android.content.Context;
import android.media.MediaRecorder;
import android.util.Log;

public class URecorder implements IVoiceManager {
    private static final String TAG = "URecorder";
    private Context context = null;
    private String path = null;
    private MediaRecorder mRecorder = null;

    public URecorder(Context context, String path) {
        this.context = context;
        this.path = path;
        mRecorder = new MediaRecorder();
    }

    @Override
    public boolean start() {
        // Use the microphone as the audio source
        mRecorder.setAudioSource(MediaRecorder.AudioSource.MIC);
        // Container format
        mRecorder.setOutputFormat(MediaRecorder.OutputFormat.THREE_GPP);
        mRecorder.setOutputFile(path);
        // Encoding format
        mRecorder.setAudioEncoder(MediaRecorder.AudioEncoder.AMR_NB);
        try {
            mRecorder.prepare();
        } catch (IOException e) {
            Log.e(TAG, "prepare() failed");
            return false;
        }
        // Start recording
        mRecorder.start();
        return true;
    }

    @Override
    public boolean stop() {
        mRecorder.stop();
        mRecorder.release();
        mRecorder = null;
        return true;
    }
}

public interface IVoiceManager {
    public boolean start();
    public boolean stop();
}
Step 2: subclass View to write two custom drawing classes, VisualizerView (time-domain waveform) and VisualizerFFTView (frequency-domain waveform, i.e. the spectrum), each overriding onDraw().
Time-domain waveform view:
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Rect;
import android.view.View;

public class VisualizerView extends View {
    private byte[] mBytes;
    private float[] mPoints;
    private Rect mRect = new Rect();
    private Paint mForePaint = new Paint();

    public VisualizerView(Context context) {
        super(context);
        init();
    }

    /**
     * Initialization
     */
    private void init() {
        mBytes = null;
        mForePaint.setStrokeWidth(1f);
        mForePaint.setAntiAlias(true);
        mForePaint.setColor(Color.GREEN);
    }

    public void updateVisualizer(byte[] waveform) {
        mBytes = waveform;
        invalidate();
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        if (mBytes == null) {
            return;
        }
        if (mPoints == null || mPoints.length < mBytes.length * 4) {
            mPoints = new float[mBytes.length * 4];
        }
        mRect.set(0, 0, getWidth(), getHeight());
        // Draw the waveform as line segments between consecutive samples
        for (int i = 0; i < mBytes.length - 1; i++) {
            mPoints[i * 4] = mRect.width() * i / (mBytes.length - 1);
            mPoints[i * 4 + 1] = mRect.height() / 2
                    + ((byte) (mBytes[i] + 128)) * (mRect.height() / 2) / 128;
            mPoints[i * 4 + 2] = mRect.width() * (i + 1) / (mBytes.length - 1);
            mPoints[i * 4 + 3] = mRect.height() / 2
                    + ((byte) (mBytes[i + 1] + 128)) * (mRect.height() / 2) / 128;
        }
        canvas.drawLines(mPoints, mForePaint);
    }
}
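The y-coordinate arithmetic in onDraw() is worth unpacking: Visualizer delivers the waveform as unsigned 8-bit samples stored in signed bytes, so adding 128 and casting back to byte recenters the data around zero before scaling. A standalone sketch of that mapping (the class and method names here are illustrative, not from the original project):

```java
public final class WaveformMath {
    /** Mirrors the per-sample y-coordinate computed in VisualizerView.onDraw():
     *  an unsigned 8-bit sample stored in a signed byte is recentred around 0,
     *  then scaled so that silence (0x80) lands on the view's centre line. */
    static float toScreenY(byte sample, int viewHeight) {
        int centered = (byte) (sample + 128);  // 0x80 -> 0, 0x00 -> -128, 0xFF -> 127
        return viewHeight / 2 + centered * (viewHeight / 2) / 128f;
    }
}
```

For example, for a 200-pixel-high view the silence value 0x80 maps to y = 100 (the centre line), while 0x00 maps to the top edge.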
Spectrum view:
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Rect;
import android.view.View;

public class VisualizerFFTView extends View {
    private byte[] mBytes;
    private float[] mPoints;
    private Rect mRect = new Rect();
    private Paint mForePaint = new Paint();
    private int mSpectrumNum = 48;

    public VisualizerFFTView(Context context) {
        super(context);
        init();
    }

    /**
     * Initialization
     */
    private void init() {
        mBytes = null;
        mForePaint.setStrokeWidth(8f);
        mForePaint.setAntiAlias(true);
        mForePaint.setColor(Color.rgb(0, 128, 255));
    }

    public void updateVisualizer(byte[] fft) {
        // fft holds Re[0], Re[n/2], then interleaved Re/Im pairs;
        // convert each pair to a magnitude
        byte[] model = new byte[fft.length / 2 + 1];
        model[0] = (byte) Math.abs(fft[0]);
        for (int i = 2, j = 1; j < mSpectrumNum; i += 2, j++) {
            model[j] = (byte) Math.hypot(fft[i], fft[i + 1]);
        }
        mBytes = model;
        invalidate();
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        if (mBytes == null) {
            return;
        }
        if (mPoints == null || mPoints.length < mBytes.length * 4) {
            mPoints = new float[mBytes.length * 4];
        }
        mRect.set(0, 0, getWidth(), getHeight());
        // Draw the spectrum: one vertical bar per frequency bin
        final int baseX = mRect.width() / mSpectrumNum;
        final int height = mRect.height();
        for (int i = 0; i < mSpectrumNum; i++) {
            if (mBytes[i] < 0) {
                mBytes[i] = 127;  // clamp values that overflowed the byte range
            }
            final int xi = baseX * i + baseX / 2;
            mPoints[i * 4] = xi;
            mPoints[i * 4 + 1] = height;
            mPoints[i * 4 + 2] = xi;
            mPoints[i * 4 + 3] = height - mBytes[i];
        }
        canvas.drawLines(mPoints, mForePaint);
    }
}
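updateVisualizer() above depends on the byte layout that Visualizer.getFft() produces: fft[0] is the real part of the DC bin, fft[1] the real part of the Nyquist bin, and the remaining bytes are interleaved real/imaginary pairs for bins 1 through n/2 - 1. The magnitude conversion can be isolated as a plain-Java function (class and method names are illustrative):

```java
public final class FftMagnitude {
    /** Converts Visualizer's interleaved FFT bytes
     *  (Re[0], Re[n/2], Re[1], Im[1], Re[2], Im[2], ...)
     *  into per-bin magnitudes, as VisualizerFFTView.updateVisualizer() does. */
    static byte[] toMagnitudes(byte[] fft, int bins) {
        byte[] model = new byte[bins];
        model[0] = (byte) Math.abs(fft[0]);  // DC bin: magnitude is |Re[0]|
        for (int i = 2, j = 1; j < bins; i += 2, j++) {
            model[j] = (byte) Math.hypot(fft[i], fft[i + 1]);
        }
        return model;
    }
}
```

For instance, toMagnitudes(new byte[]{-4, 0, 3, 4, 0, 5}, 3) yields {4, 5, 5}: the DC magnitude is |-4| = 4, and bin 1 is hypot(3, 4) = 5.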
Step 3: create the MediaPlayer object from a URI; if the player is created any other way and is not fully initialized, getAudioSessionId() will not return a usable session id.
//uri = Uri.parse(AudioPath);  // parse the recording file path into a Uri
uri = Uri.parse(Mp3Path);  // parse the MP3 file path into a Uri
mMedia = MediaPlayer.create(this, uri);  // create mMedia and load the resource referenced by the Uri

Android 2.3 (API level 9) and later provide the android.media.audiofx.Visualizer class. All the program needs to do is instantiate a Visualizer, attach it to the session id of the playing media object, and set the capture size and capture rate. Finally, register a listener via setDataCaptureListener(OnDataCaptureListener listener, int rate, boolean waveform, boolean fft). Each time a capture of the configured size is ready, the two callbacks fire: one delivers the time-domain signal and the other delivers the frequency-domain data produced by a fast Fourier transform (FFT), both as byte arrays (byte[]).

mWaveView = new VisualizerView(this);   // create the VisualizerView
mFFtView = new VisualizerFFTView(this);   // create the VisualizerFFTView
final int maxCR = Visualizer.getMaxCaptureRate();   // maximum capture rate, in milliHertz
mVisualizer = new Visualizer(mMedia.getAudioSessionId());   // attach to the player's audio session
mVisualizer.setCaptureSize(Visualizer.getCaptureSizeRange()[1]);   // use the maximum capture size (1024 on most devices)
mVisualizer.setDataCaptureListener(
        new Visualizer.OnDataCaptureListener() {
            public void onWaveFormDataCapture(Visualizer visualizer,
                    byte[] waveform, int samplingRate) {
                mWaveView.updateVisualizer(waveform);  // update the time-domain waveform
            }

            public void onFftDataCapture(Visualizer visualizer,
                    byte[] fft, int samplingRate) {
                mFFtView.updateVisualizer(fft);  // update the frequency-domain data
            }
        }, maxCR / 2, true, true);   // capture at half the maximum rate; request both waveform and FFT data
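The rate argument of setDataCaptureListener() is expressed in milliHertz, which is easy to misread. A small pure-Java helper (hypothetical, not part of the original project) makes the conversion from capture rate to callback interval explicit:

```java
public final class CaptureRate {
    /** Converts a Visualizer capture rate in milliHertz to the
     *  interval between capture callbacks in milliseconds. */
    static int captureIntervalMs(int rateMilliHz) {
        return (int) (1_000_000L / rateMilliHz);
    }
}
```

With a maximum capture rate of 20000 mHz (i.e. 20 Hz, typical on many devices), captureIntervalMs(20000) is 50 ms, so at maxCR / 2 the views are redrawn roughly every 100 ms.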
Note: when playback stops, release the Visualizer object in addition to releasing the player object.

Screenshots of the audio-signal FFT: (images not reproduced here)
Source code download links:

https://github.com/vroy007/MoveDSP

http://download.csdn.net/detail/sctu_vroy/8720193