Audio Codec DSP or MCU to implement digital filters with no latency.

Status
Not open for further replies.

Dr.VPot

Member
Hi Guys,
Firstly, I thank all the members of the forum for being supportive on my other threads, especially John (AG).
Now I'm looking to introduce the digital domain into my stethoscope project. I see a lot of articles recommending a low-power audio codec hooked up to an MCU to apply digital filters in stethoscope applications. For example, TI used to manufacture a medical development kit for stethoscope applications on the TMS320C5515; see the link below and check the FIR digital filter section.
https://www.ti.com/lit/an/sprab38a/sprab38a.pdf
They implemented an FIR Hamming-window band-pass filter of order 161.
I would also like to implement this filter, or a biquad digital filter, so that I also have control over volume (instead of analog pots). So I have the following questions:
1) Are those filter implementations real-time, without latency issues?
2) Do I really require a separate MCU, or is an audio codec with a miniDSP block and user-programmable filters enough to run those filters in real time without latency issues? For example, the TLV320AIC3111.

Thanks in advance.
 
The anti-aliasing filters in that app note have a center frequency of 2.5 kHz, and the sampling frequency is 12 kHz.
The DSP used can go much faster, and that's how they are able to do a 161-point FIR between samples. There are no latency issues here.
An IIR seems attractive for its simplicity, but you WILL eventually run into instabilities (usually, in my case at least, when you get greedy).

As to the need for a separate MCU: I think yes. You will be doing constant register-loading to the DSP, and the returned signal is I2S format, and differential to boot.
 
The heartbeat waveforms shown in the Texas Instruments article show pulses of a fluttering low frequency that do not resemble the pulses of heartbeats. Maybe that is why they say, "Warning, do not use this evaluation kit for medical diagnosis".
Look at this explanation that I found:
 

Attachments

  • phonocardiogram.png

Real-time in computing might not mean what it seems to mean. Real-time just means that the thing is implemented in a way (in the hardware and in the structure of the code) such that the next result will always be available by the deadline when it is needed, no later. Deadlines are strict and cannot be missed. It's not about the result being available instantly, and it's not inherent to the filter: anything can be made not real-time given a sufficiently poor implementation or a sufficiently short deadline.

So to know whether something is real-time, you first have to figure out what your deadline is. It could be 10 µs or 1 ms; it depends on what you need. For audio, it probably depends on your bandwidth, since you want the next filtered sample to be ready when you are ready to update the DAC (assuming you are outputting the filtered audio), or perhaps you want the processor to be finished with the result before the next input sample arrives.

You don't just get a chunk of "real-time code" for a filter, drop it into your program, and have it be real-time. It's more about how the code and everything around it works together so that you never miss a deadline.
 
So I spoke to TI experts; they suggested I use the C5545 BoosterPack DSP. And I'm planning at least a 40 kHz sampling rate.
 
AG,
Lol, I didn't see the warning. It's funny though.
Maybe they didn't record a real heart sound; maybe they generated a signal with some noise to test the filters in that frequency range.
 
Hi dknguyen,
Maybe 1 ms is good enough. All I want is that when a doctor listens to heart sounds with the developed stethoscope, there shouldn't be a noticeable lag in the heart sounds.
 
If we're talking about noticeable delay, then I think you could get away with up to 10 ms. But the time between processing samples is probably going to be your bottleneck, since you are probably sampling, running the FIR filter, and then outputting a value to the DAC to play the filtered audio. So if you're going at 44.1 kHz, like standard CD audio, then your latency would have to be about 23 µs (your deadline) if you want to finish processing and outputting a sample before processing the next input. For something like a heartbeat, you're probably only interested in lower frequencies, so you could probably increase this number a bit: your lower output bandwidth would ease your processing requirements, though your input sampling rate would probably stay near 44.1 kHz since you don't want aliasing. Instead of processing one input sample for every output sample, you would process multiple new input samples per output sample.
 
Stepping outside of my comfort zone here, but this is what I remember...
If you are looking for frequencies of 0–4 kHz and you sample at 40 kHz, an IIR filter becomes an arithmetic monster: the coefficients have a large range and it's easy to get oscillation.
An FIR filter, which cannot oscillate, merely becomes huge.

The schematic in the TI app note shows an anti-alias filter at 4 kHz.

There are filter-calculation programs online.
 
Well, would that still be a challenge for a high-performance DSP?
 
Many moons ago, I did an experiment:
I took a pulse from a signal generator and fed it to an amplifier, which in turn fed it to the right side of a pair of headphones.
The same generator pulse I would first pass through a bucket-brigade delay (BBD) device, feed it to another amplifier, and then to the left side of the headphones.

By varying the BBD's clock frequency, I could vary the delay between the pair of pulses one heard on the headphones.

If my memory serves me well, you need a delay in excess of 11 milliseconds before the brain can actually notice two distinct pulses. I don't recall the exact number of milliseconds, but it was definitely in that range.
 