Android SurfaceView: can be used for media or direct access, but not for both.

That is, given an instance of SurfaceView, you can use it for video/camera output, or you can use it for direct access to its buffer, but you cannot reuse it for media after you have already accessed it through the ANativeWindow API or any other internal path.

The reason is in the AOSP source, Surface.cpp. A Surface becomes connected to the CPU once it is locked, which is a necessary step to access its internal buffer, and it only gets disconnected in the destructor. On the other side, when you bind the Surface to media, the code checks the connection status and returns an error when it finds the Surface is already connected to the CPU.

So, if you need a SurfaceView for both purposes, you have to destroy the old one, assuming it was used for direct access, and create a new one for media.

So many traps there…

Simply put: a .so compiled with Android NDK r9's GCC 4.6 crashed on loading, inside the function __check_for_sync8_kernelhelper.

After searching for the issue on Google, I found it has already been reported: https://code.google.com/p/android/issues/detail?id=58476. Unfortunately there is no solution at present.

Basically it is a libgcc issue that depends on the Linux kernel version. Older kernels lack the kernel helper symbol that libgcc from GCC 4.6 or higher needs, for example on the Galaxy Nexus I am testing my app on.

So,

1) Avoid using GCC's built-in 64-bit atomic operations, if you control all of your code.

2) Use GCC 4.4.3. For NDK r9 there is a legacy toolchain package available as a separate download.

Why is there so much latency in BB10 audio playback? Why do we always get underruns when feeding audio data?

They are two faces of the same issue.

If you start coding audio playback from the PlayWav sample in BlackBerry's GitHub repository, you will find long latency in playback: about 5 seconds after you feed data, you hear the sound. But why? BB10 uses ALSA's libasound as its audio API, but the documentation is sparse. After many changes here and there, many trials and inspections, I found what controls the latency:

snd_pcm_channel_params_t.buf.block.frags_max;

In the PlayWav sample, this field is set to -1, and the call to snd_pcm_plugin_params() then fills in a large number. So we can set the field to a small number instead to reduce the latency. It is said that RIM recommends 5, but I don't know whether that is true and I cannot remember where I read it.
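A hedged sketch of the parameter setup, following the PlayWav pattern. This is QNX/BB10-specific and will not compile elsewhere; it needs <sys/asoundlib.h>, and the opened pcm_handle plus the format fields are assumed to be set up as in the sample (note the field is spelled frags_max in the header):

```c
/* QNX/BB10 only: requires <sys/asoundlib.h> and an opened snd_pcm_t. */
snd_pcm_channel_params_t params;
memset(&params, 0, sizeof(params));
params.channel   = SND_PCM_CHANNEL_PLAYBACK;
params.mode      = SND_PCM_MODE_BLOCK;
params.stop_mode = SND_PCM_STOP_ROLLOVER; /* or SND_PCM_STOP_STOP */
/* ... rate, voices, format etc. as in the PlayWav sample ... */

/* The key change: cap how many fragments may be buffered ahead.
 * PlayWav passes -1 and the driver picks a large default (seconds
 * of latency); a small value like 5 cuts the latency sharply. */
params.buf.block.frags_max = 5;

if (snd_pcm_plugin_params(pcm_handle, &params) < 0) {
    /* handle the error */
}
```

Buffered fragments are exactly the data played before fresh data is heard, which is why this one field controls the latency.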

Anyway, a number like 3 or 5 does reduce the latency to a value so small that our ears cannot notice it. But then another strange behavior occurs: we get UNDERRUN frequently. At that point, if we set snd_pcm_channel_params_t.stop_mode to SND_PCM_STOP_STOP, playback stops after a short interval; if we set it to SND_PCM_STOP_ROLLOVER, playback repeats the data in the last few buffers.

The latter issue is due to thread priority. QNX's io-audio driver runs playback at a very high priority, so it is very easy to underrun if your data-feed thread runs at a normal priority. In one discussion somebody recommended setting the data-feed thread priority to 50, and in other code I have seen the value set to 18 or so.

Conclusion: to make audio playback smooth and easy: 1) Set frags_max to 5 or another small value, but note that too small a value may cause underruns. This field controls how much data the playback implementation buffers ahead, which, seen from the other side, is exactly the audio latency. 2) Raise your data-feed thread to a higher priority. Normal threads run at priority 10 while the audio playback thread runs higher; set yours higher as well to avoid underruns.