Making a Bluetooth adapter for a Car Phone from the 90's

There is quite a comprehensive data sheet on RS Components, if that's any use?

I'm very familiar with that data sheet, and also the "evaluation board" documentation that provides some very sparse instructions on how to configure the module, update firmware, update EEPROM configuration data, etc. (it doesn't explain what anything means, what any options mean, why to do certain things; only gives bare minimum "happy path" instructions to perform a specific task).

There have been a couple of problems along the way. The datasheet contradicts itself on whether the EAN pin is internally pulled up or down, which was critical for updating the firmware. There's also some confusion about which pin is the P2_4 pin. I found this blog post which helped me update the firmware properly; the procedure involved grounding a "P2_4" pin that is actually not the P2_4 pin in the data sheet, but an "NC" pin. There are two pins on the breadboard adapter labelled "P2_4" (I got an explanation that the board was originally developed for the BM20 module, which has two P2_4 pins). The firmware flashing procedure also involves installing firmware labelled as being for the BM64 (not the BM62), and using an update tool that a README file explicitly describes as NOT for the BM62.
 
I solved my Bluetooth module infinite rebooting problem! Someone suggested I test for continuity between every adjacent pair of pins. I discovered that the "RST_N" (reset) and "AIL" (aux audio input: left) pins were shorted together by a small solder bridge.

Now I can successfully power on the Bluetooth module with my customized configuration, I can pair it with my phone, and I can use a desktop PC tool (with USB UART adapter) to simulate the UART communications between an MCU and the Bluetooth module!

I have successfully placed an outgoing call to my wife via UART commands to the Bluetooth module.
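For anyone curious, the UART commands are just small framed packets. Here's a rough C sketch of how I'm framing them, based on my reading of the BM6x documentation (0xAA sync, 2-byte length, opcode, parameters, checksum) - the actual opcodes and parameter layouts need to be checked against Microchip's UART command document, and uart_write() is only a stand-in for whatever serial routine is in use:

```c
#include <stddef.h>
#include <stdint.h>

/* Stand-in for whatever serial write routine the MCU (or PC tool) provides. */
void uart_write(const uint8_t *buf, size_t len);

/*
 * Frame and send one command using the BM6x packet layout as I understand it:
 * 0xAA sync byte, 2-byte big-endian length (opcode + parameters), opcode,
 * parameters, then a checksum byte chosen so that the length bytes, opcode,
 * parameters and checksum sum to 0 mod 256. Verify against the UART command
 * document for your firmware version before relying on it.
 */
void bm62_send_cmd(uint8_t opcode, const uint8_t *params, uint16_t plen)
{
    uint16_t len = (uint16_t)(plen + 1);                 /* opcode + params  */
    uint8_t  hdr[4] = { 0xAA, (uint8_t)(len >> 8), (uint8_t)len, opcode };
    uint8_t  sum = (uint8_t)(hdr[1] + hdr[2] + hdr[3]);

    for (uint16_t i = 0; i < plen; i++)
        sum = (uint8_t)(sum + params[i]);

    uint8_t checksum = (uint8_t)(0x100 - sum);           /* two's complement */

    uart_write(hdr, sizeof hdr);
    uart_write(params, plen);
    uart_write(&checksum, 1);
}
```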




I hope to have some basic proof of concept functioning within a week where I can dial a number and send a call from the car phone, and also answer incoming calls on the car phone... but without any audio yet. I still have quite a bit of work to do with both the hardware/electronics and programming to route audio between the car phone and Bluetooth module.
 
IT'S WORKING!!!
(well, in an early proof-of-concept kind of way)



I still need to do a ton of programming to "properly" handle various combinations of possibilities, but the basics are there.

I already have microphone audio connected up so I can talk into the car phone and the person on the other end of the call can hear me. I'll probably work on temporarily wiring up incoming audio to the handset (disconnect my current sound input from the microcontroller) just to work out details of audio configuration on the Bluetooth module, and whatever else I may need to do to get the audio to the right level.

My next major problem to solve is how to programmatically switch between audio sources. I will need to switch between the microcontroller and the Bluetooth module as the source of the sound going to the handset, and I'll also eventually need to switch between the handset microphone and an external "hands free" microphone for outgoing audio to the Bluetooth module.

I've started reading about both solid state relays (SSRs) and analog switch ICs. I'll basically need two DPST switches that I can control with the microcontroller. Analog switch ICs seem readily available in the configurations I need. The only potential problem is that I need to organize my circuitry so that the switched audio signals are in the 0-5V range (rail-to-rail) at the switches, so I can't use a coupling capacitor until after the switch. SSRs don't seem to have this limitation. Is there anything else I should consider?
 
so that the switched audio signals are in the 0-5v range (rail-to-rail) at the switches, so I can't use a decoupling capacitor until after the switch.
The best way I've found for "click free" audio switching is to have every connection to the switches capacitor coupled, with every switch connection also having a high value bias resistor back to a common (and well decoupled) mid-supply bias - or 0V bias, if using dual-supply switches.

The same idea works equally well with mechanical switches / relays to avoid clicks.


It's looking good!
 

(note: I corrected my "decoupling" to "coupling" in my previous post)

Those words are a bit beyond my level of understanding, but I have seen some audio circuit diagrams before with arrangements of capacitors that I didn't understand, and mentions of bias voltages. Do you happen to know of a good resource for an overview of these concepts I could read so I can comprehend your post?
 
Biasing in electronics typically means using a resistor circuit to set the average DC levels appropriately, so a circuit has its best operating range, minimal distortion, etc.

The mechanical equivalent would be something like properly centering a moving or oscillating part in its frame, so it is not hitting a limit in normal operation.
e.g. adjusting an old pendulum wall clock for a symmetrical "tick" - the side-to-side tilt sets the pendulum's centre bias.

This is a rough example of the switch bias concept I mentioned above:

All the capacitor-to-switch junctions are kept charged to around half the analog supply by the 100K resistors.
That means there is no sudden voltage "jump" when any switch closes or opens, so no (or at least minimal) clicks or thumps added to the audio.

The values given are just possible examples, for the diagram.
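To put a number on it (taking the 5V analog supply as a given, and the mid-supply bias as roughly half of that): every capacitor-to-switch junction idles at about

Vbias ≈ 5V / 2 = 2.5V

so when any switch opens or closes, both sides of it are already sitting at (nearly) the same DC level - there is no step for the coupling caps to pass on, hence no thump.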

 
rjenkinsgb I think I'm beginning to wrap my head around this.

If I understand this correctly, then in addition to this biasing approach reducing clicks/pops, it also has the benefit of "normalizing" all the switched audio inputs (each could have been initially biased around different voltage levels), and also conveniently brings them all within the "rail-to-rail" voltage (in my case, 0-5V range) so I can use an analog switch IC.

This whole topic seems somewhat related to something I did in my tone generating code to minimize "popping" when stopping/starting tone output and changing tones. I always guarantee that my DAC output comes to rest at the mid-point, and never allow any sudden jumps in output:
  • Mid-point DAC output is the initial "idle" state.
  • When starting to play a tone, I always start at the beginning of the sine wave table (which corresponds to mid-point DAC output).
  • When changing from one tone directly to another tone, I continue from the current position within the sine wave table and only change the rate at which I'm advancing through the table from that point forward (no sudden jump in voltage; only a sudden change in rate-of-change of voltage).
  • When I stop playing a tone, I allow the current sine wave to run to completion, and come to rest at the mid-point.
  • There's still a bit of "harshness", probably because of how the tones suddenly start/stop at the point of the sine wave with the highest rate of change. Starting/stopping at the peak/trough of a wave would be "smoother", but would initially create some abnormally high/low voltages at the audio output when first starting a tone, because the coupling capacitor would have "centered" 0V output to that peak/trough input voltage level, and it takes some time to find its new "center" after the tone starts playing.
  • I think the ultimate in smooth start/stop of tone output would require a very brief "ramp up" and "decay" of the tone amplitude, just like how sound happens in the real world, but that's more than I want to deal with.
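Roughly, the stepping logic works like this - a simplified sketch rather than my exact code, and the table length, DAC resolution and function names are just illustrative:

```c
#include <stdint.h>

#define SINE_LEN 256               /* one full cycle in the lookup table      */
#define DAC_MID  2048              /* mid-scale code for a 12-bit DAC         */

/* One cycle of a sine wave centred on DAC_MID; index 0 holds the mid-point. */
extern const uint16_t sine_table[SINE_LEN];

static uint16_t phase;             /* 8.8 fixed-point index into the table    */
static uint16_t phase_step;        /* added every sample tick (sets pitch)    */
static uint8_t  playing;
static uint8_t  stop_requested;

/* Start a tone, or change pitch. On a change we keep the current phase and
 * only alter the step, so the output level never jumps. */
void tone_set(uint16_t step)
{
    if (!playing) {
        phase = 0;                 /* begin at the table's mid-point sample   */
        playing = 1;
    }
    stop_requested = 0;
    phase_step = step;
}

/* Ask for the tone to stop at the end of the current cycle. */
void tone_stop(void)
{
    stop_requested = 1;
}

/* Called at the sample rate (e.g. from a timer ISR); returns the DAC code. */
uint16_t tone_next_sample(void)
{
    if (!playing)
        return DAC_MID;            /* idle output rests at mid-scale          */

    uint16_t prev = phase;
    phase += phase_step;           /* 16-bit wrap-around = cycle completed    */

    if (stop_requested && phase < prev) {
        playing = 0;               /* wave has run to completion: park at mid */
        return DAC_MID;
    }
    return sine_table[phase >> 8];
}
```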
 
Exactly - same concepts, keeping everything mid-range & avoiding jumps in level.

You can use rather smaller coupling caps for just voice frequencies than in the example - those would pass down to a few Hz and take a couple of seconds to get near settled.

Somewhere from 0.1 to 0.47uF should be OK for voice range, as you don't need a particularly low cutoff and that will settle to the steady state voltages much faster.
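To put numbers on that (taking the 100K bias resistors from the earlier example as the dominant impedance, which is only a first approximation): with a 0.1uF coupling cap,

f_c = 1 / (2·π·R·C) = 1 / (2·π · 100k · 0.1uF) ≈ 16 Hz
τ = R·C = 100k × 0.1uF = 10 ms

so everything above roughly 16 Hz still passes, and each junction settles to its bias voltage within a few tens of milliseconds rather than seconds.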
 
A more 'fool proof' and effective method is to mute the output briefly as you switch inputs, but it depends entirely on how the switching is done, and whether you can trigger a brief mute as you switch.

A good example of this is the old Armstrong 600 series amplifiers. Long predating CMOS analogue switches, they used diode switching, and the four capacitors C449 - C452 gave the 'muting' effect as the previous selection faded out and the new selection faded in. Something else to consider (and modernise)? Similar techniques have been used with more modern CMOS switches.


While the biasing solution cures the DC 'click' as you change, it still leaves the AC 'click', as the channels could be at wildly different points of their AC cycles. But it could certainly make it far more acceptable.
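If the micro is doing the switching anyway, the firmware side of a brief mute is trivial - mute, change over, unmute. A sketch, with placeholder pin names, timings and GPIO helpers:

```c
#include <stdint.h>

/* Placeholder GPIO/delay helpers - substitute whatever your MCU toolchain uses. */
void gpio_write(int pin, int level);
void delay_ms(uint32_t ms);

#define PIN_MUTE    1   /* drives the mute switch / clamp on the selector output */
#define PIN_SELECT  2   /* drives the analog switch select input                 */

/* Change audio source with a short mute around the transition, so neither the
 * DC step nor the AC discontinuity reaches the output. */
void select_audio_source(int source)
{
    gpio_write(PIN_MUTE, 1);         /* mute (or start a ramped fade-out)        */
    delay_ms(5);                     /* let the mute/fade take effect            */
    gpio_write(PIN_SELECT, source);  /* change over while the output is silenced */
    delay_ms(5);                     /* let the new source settle                */
    gpio_write(PIN_MUTE, 0);         /* unmute / fade back in                    */
}
```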
 
The mute is a nice idea; that could simply be done with another analog switch section, between the source selector output and the bias source.

Or a brief fade out and in using a FET to clamp the output to the bias point, with an R-C network to ramp the mute control rather than an instant switch?

The circuits in diagrams 2 or 4A look as if they could work for a fade:
The TLP222 is still readily available and appears to be the simplest option for a clamp-type soft mute.
 
Thanks again for all the info/ideas!

To start with, I think the simple DC bias approach will be sufficient for the primary purpose of normalizing the audio inputs into a voltage range that I know for sure I can use with an analog switch IC. AC "clicks" are likely a relative non-issue from a sound quality perspective due to when switching will occur.

On the microphone side of things, the switching will occur when picking up the handset out of its cradle, or snapping it back into its cradle. There's already going to be some unavoidable "mechanical" sound of the clatter and loud "snapping" sound (spring-loaded plastic clip) which will overpower any "click" in the audio caused by switching mic sources.

On the "incoming audio" side, a switch will happen when a button is pressed/released during conversation, or some audible notification is generated by my MCU during conversation. It's already going to be an abrupt/harsh switch between "organic" conversation audio and "artificial" pure tone, then back when the tone stops, so an AC "click" at this point may not bee too noticeable/obnoxious (assuming it's not excessive).

However, I have noticed a few times that a harsh "pop" in the audio corresponded with corruption on the handset display, so it seems that sudden jumps in analog audio voltage can create enough interference on the digital UART wires to corrupt the data. So I'll definitely have to keep an eye on things and determine whether I need to do more to solve this, either by avoiding the AC "clicks" and/or by finding other general ways to better isolate/shield the analog and digital circuits from each other.
 
It's looking like incoming audio from the Bluetooth module to the car phone handset is going to be a challenge. I got the Bluetooth module configured to output audio correctly for my needs:
  • mono audio
  • gain set to the right level for max volume for the handset (will eventually pass through my digital potentiometer for volume control)
When I scope the audio output of the Bluetooth module directly, it looks pretty decent. Some noise, but not a lot. Peak-to-peak voltage in the desired range (about 1V peak-to-peak max; what I'm aiming for as max volume input to the handset).

However, when I connect the Bluetooth audio output to the handset audio input, there's suddenly a lot of noise:
  • Constant background messy "buzzing" noise (way too loud to be acceptable for even a personal prototype project on a breadboard).
  • Additional "cleaner" sounding buzzing noise corresponding to when LEDs are on/blinking on the Bluetooth module breakout board (assuming this is noise from PWM-based LED brightness control?).
  • Other "digital" sounding noise, that I assume is related to the Bluetooth module.

The good news, though, is that voice comes through the call to the car phone handset at an appropriate volume and clarity (ignoring the substantial noise).

I'm guessing this is all related to noise on the power/ground rails and wires on my sloppy breadboard circuit. If I connect the "ground" for the handset audio (technically the secondary/negative diff audio input) to a grounding point nearer to the Bluetooth module, then the noise is reduced quite a bit (especially the constant background noise), but still not to an acceptable level.

Are there any techniques for reducing noise like this on a breadboard? I'm aware that one thing that helps with noise is a "ground plane" and/or much thicker ground and power traces on a printed circuit board, but how can a similar effect be achieved with breadboards?

I'm also wondering whether I should prioritize battling this noise in general (which I'm not really sure how to do), or whether I should first focus on generating differential audio output to the handset to see how much noise that takes care of.
 
If the level from the Bluetooth module is adjustable, increase it then attenuate at the handset connection; that should reduce the noise in proportion.

What happens if you use the other handset audio wire as signal? eg. exchange them.

You could try soldering pins (cut-off component leads) to a short piece of really heavy bare wire and plugging that into the breadboard ground bus, to beef it up a bit?

Using a star point earth setup may also help.
 
Using a star point earth setup may also help.

Thanks. This "star point" terminology helped me find this article that explains some grounding concepts that I can learn from (the article is about PCB layout of ground, but there's still some concepts I can apply to a breadboard circuit). I think it's time for me to reorganize my circuit with special attention given to avoiding daisy chained ground/power rails and try to avoid sharing ground rails between digital and analog components. My current layout is fairly haphazard based mostly on the order in which I added components.

One particular problem might be how the Bluetooth module is connected to the rest of my circuit. It's currently a separate breadboard with its ground rail connected to the ground rail of the "main" board through a transistor that my MCU uses to control when the Bluetooth module powers on (so the MCU can ensure that it does not receive any Bluetooth events until it is ready). That was my quick/sloppy solution. I should probably connect power and ground directly to the Bluetooth module board and make use of the reset pin for controlling when it powers on.

I'll also try the other simple suggestions: increase volume level of Bluetooth module output, and try swapping signal/ground audio inputs to the handset. Beefing up ground rails with thicker copper wire will be an option if the noise is still unacceptable.
 
After making some fairly quick and easy changes to my circuit (like running audio ground from the Bluetooth module directly back to near the ground connection with the handset), noise levels are much lower. It is still horrendous by consumer electronics standards, but it's good enough for me to continue developing and testing without going insane and feeling like all hope is lost.

Interesting new observation: The occasional corruption to digital handset communications can be reliably reproduced while both differential audio inputs are grounded, so it was not my audio signal from the MCU directly causing any sudden voltage jumps, audio pops, and interference. The interference/corruption happens when turning the "master" audio of the handset on (presumably turning on audio amplification circuitry?).

I have confirmed that I am turning on the master audio and individual speakers in the same order and with the same timing as the original phone. The loud audio pop and corruption also only happens if the master audio has been turned off for a while (10-15 seconds). With shorter periods of "off" time, there is very little "pop" and no corruption. So it likely has something to do with capacitors in the handset's audio circuit taking time to charge/discharge?

I looked back at my photos of my scope when probing the original phone's audio signals, and it is definitely a "coupled" signal (centered around 0V), so why would a constant 0V signal at the time of powering on the main audio circuit cause a loud pop? I wonder if I need to have audio input to the handset completely disconnected at the time of powering on the main audio?

What I've done for now as a temporary workaround is just send a sacrificial "null" command immediately after enabling master audio, which is luckily harmless when it gets corrupted (quicker/lazier solution than forcing a delay after the master audio on command).
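In code terms the workaround is just a throwaway command straight after the audio-on command - the command contents below are placeholders for whatever the handset protocol actually uses:

```c
#include <stddef.h>
#include <stdint.h>

/* Placeholders: the real frames depend on the handset's own protocol. */
extern const uint8_t CMD_MASTER_AUDIO_ON[];
extern const size_t  CMD_MASTER_AUDIO_ON_LEN;
extern const uint8_t CMD_NULL[];          /* a command that is harmless even if
                                             it arrives corrupted              */
extern const size_t  CMD_NULL_LEN;

void handset_send(const uint8_t *cmd, size_t len);

void enable_master_audio(void)
{
    handset_send(CMD_MASTER_AUDIO_ON, CMD_MASTER_AUDIO_ON_LEN);

    /* The power-on "pop" can couple into the UART lines and corrupt the next
     * frame, so sacrifice a throwaway command to absorb it. (The tidier fix
     * would be a short delay here instead.) */
    handset_send(CMD_NULL, CMD_NULL_LEN);
}
```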
 
Quick update: I now have voice audio working in both directions, with volume control!

Here's a demo of an outgoing call to a number that plays a recording:


There's still substantial electronic interference noise while in a call, but I also haven't really put much effort into reorganizing my circuit layout yet.

I'm using the MAX4619CPE Analog Switch IC (3x SPDT) with a circuit similar to what rjenkinsgb suggested for normalizing/biasing the signals. I'm only using 1 of the 3 switches for now to choose between incoming call audio or MCU-generated tones (button beeps, etc.) to be passed through the digital pot for volume control, then sent to the handset. I will eventually have a need for the other 2 switches as well. I used 0.1 uF caps for the inputs to the switching circuit, and 0.01 uF for the output to the digital pot. When I used 0.1 uF for the output as well, I got some strange unwanted echoing/fading "ping" sound if I pressed a button (causing a beep) while I had the volume turned "off" (audio signal input to the digital pot completely disconnected from the internal resistor network).


I also added support for choosing from multiple ringtones. Here's a demo of changing/previewing the ringtones, then a real incoming call causing the selected ringtone to be used. This also shows off the new "call disconnected" sound I added when the call is ended from the other side of the call.

 

I think this was also related to how my code was always reverting to a "default" volume mode whenever a sound stopped playing. It would start with the digital pot fully connected at the current volume level for conversation by default. When pressing a button with "button press volume" set to "off", it would send the command to the digital pot to disconnect the input pin and begin producing the button sound. I think the very slightest beginning of the sound was output before the digital pot finished disconnecting its input, and then the 0.1 uF cap was somehow contributing to some kind of resonance/oscillation that caused that blip of sound to repeat/echo several times while fading out (this is getting into stuff I don't understand)?

So the 0.01 uF cap solved the echoing "ping" sound somehow (random luck that I tried it). But I also improved my code to never "default" to a potentiometer/volume setting when not "in use". It stays at whatever setting it was at last until there is a need for it to be changed. This avoids unnecessary changes/disconnects/reconnects, and seems to have reduced audio "clicking" a bit when a sound suddenly stops.
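The "don't touch it unless it actually needs to change" part boils down to something like this - digipot_write_wiper() is a stand-in for whatever SPI/I2C command my particular pot uses:

```c
#include <stdint.h>

/* Stand-in for the real digital pot driver (SPI/I2C write of the wiper code). */
void digipot_write_wiper(uint8_t value);

static uint8_t current_wiper = 0xFF;   /* unknown/uninitialised at power-up */

/* Only touch the pot when the requested level differs from what is already
 * programmed; redundant writes just add avoidable switching transients. */
void set_volume(uint8_t wiper)
{
    if (wiper != current_wiper) {
        digipot_write_wiper(wiper);
        current_wiper = wiper;
    }
}
```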
 
The audio "ping"/"echo" and 0.1 uF vs 0.01 uF caps in my previous two posts all turned out to be caused by me failing to carry through the concept of "click reduction" via caps and biased signals to my digital pot, so there were still sudden large voltage jumps happening. I basically have to treat my digital pot as another "switch" in the circuit where I need to bias+couple the signal before/after the digital pot (because it can disconnect when volume is "off", and even when changing its wiper position, I think it's technically very quickly switching between two points in the resister network?).

I still don't 100% grasp it, but after I applied the pattern to my digital pot in addition to my analog switch IC, I'm able to consistently use 0.1 uF caps throughout, I get no "clicking" when interrupting voice audio with button press beeps, and no strange echoing "ping" sounds if I press a button while volume level is "off"!

While scoping the signal to investigate the "clicks", I also discovered that my generated tone waveforms were fairly distorted with my previous circuit (troughs of waves were a bit "flattened" and didn't go as far negative as the peaks went positive). My tone waveforms are now very nicely shaped. The peak-to-peak voltage of my generated tones also got about 50% higher with my new circuit, so I had to run that audio signal through a voltage divider to get it back down to my ~1V peak-to-peak target for max volume level.
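For the record, the divider math is simple: with roughly 1.5V peak-to-peak coming out of the new circuit and a ~1V peak-to-peak target, the ratio needs to be about 2/3, e.g. (example values only)

Vout = Vin × R2 / (R1 + R2) = 1.5V × 20k / (10k + 20k) ≈ 1.0V peak-to-peak

The exact resistor values aren't critical as long as they suit the surrounding source and load impedances.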
 