OK, I think I might try to rewrite this thing. I'll have to dig out some of my old notes on Linux programming, though. So how would you guys want this to work? I'd want to be able to send and receive the audio streams from my frontend app (it's custom, and a work in progress) instead of having the handsfree program talk directly to ALSA. Of course, you could always run an instance of aplay and one of arecord to route the audio for you.
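(For the aplay/arecord route, assuming SCO audio is the usual 8 kHz, 16-bit, mono PCM, something like

    arecord -t raw -f S16_LE -r 8000 -c 1 > audioIn
    aplay   -t raw -f S16_LE -r 8000 -c 1 < audioOut

ought to do it, where audioIn and audioOut are the pipes described below. Untested, just a sketch.)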
I was thinking of something like so:

    audioOut  <-- |-----------|
    audioIn   --> | handsfree | - - - rfcomm
    cmdIn     --> |           | - - - sco
    statusOut <-- |-----------|

The four things on the left side are pipes. audioIn and audioOut are self-explanatory; cmdIn is where you send commands to handsfree (answer, hang up, etc.), and statusOut is where you get status (ringing, signal level, etc.). Also, I was thinking that instead of having to connect to your phone, you could connect FROM your phone, by having handsfree listen for the rfcomm connection instead of making it. Maybe handsfree could send something through statusOut when somebody tries to connect, and you could accept the connection through cmdIn?
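To make the pipe idea concrete, here's a minimal sketch of what the frontend side might look like. I'm assuming handsfree creates the FIFOs with mkfifo() under /tmp/handsfree/, and the paths and command strings ("RING", "ANSWER") are placeholders I made up, not anything decided:

    /* Hypothetical frontend side of the pipe interface.
     * Assumes handsfree has already created the FIFOs with mkfifo()
     * under /tmp/handsfree/ -- names and commands are made up. */
    #include <stdio.h>
    #include <string.h>
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        /* Open our ends of the pipes.  Note: opening a FIFO for
         * reading blocks until the other side opens it for writing
         * (and vice versa), so the open order has to match what
         * handsfree does. */
        int cmd    = open("/tmp/handsfree/cmdIn",     O_WRONLY);
        int status = open("/tmp/handsfree/statusOut", O_RDONLY);
        if (cmd < 0 || status < 0) {
            perror("open");
            return 1;
        }

        /* Wait for a status line, e.g. "RING", then answer. */
        char buf[128];
        ssize_t n = read(status, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            if (strncmp(buf, "RING", 4) == 0)
                write(cmd, "ANSWER\n", 7);
        }

        close(cmd);
        close(status);
        return 0;
    }

The audio pipes would work the same way, just shuttling raw PCM frames instead of text lines.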
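And for the listen-instead-of-connect idea, the BlueZ socket API should make that end pretty simple. A rough sketch of the handsfree side (the channel number is arbitrary, and a real implementation would also need an SDP record advertising the service, plus error checking; link with -lbluetooth):

    /* Rough sketch: handsfree waits for an incoming RFCOMM
     * connection from the phone instead of initiating one. */
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <bluetooth/bluetooth.h>
    #include <bluetooth/rfcomm.h>

    int main(void)
    {
        struct sockaddr_rc local = { 0 }, remote = { 0 };
        socklen_t len = sizeof(remote);
        char addr[19];

        int sock = socket(AF_BLUETOOTH, SOCK_STREAM, BTPROTO_RFCOMM);

        /* Bind to any local adapter on channel 1 (made-up channel). */
        local.rc_family  = AF_BLUETOOTH;
        local.rc_bdaddr  = *BDADDR_ANY;
        local.rc_channel = 1;
        bind(sock, (struct sockaddr *)&local, sizeof(local));
        listen(sock, 1);

        /* Block until the phone connects, then report who it was. */
        int client = accept(sock, (struct sockaddr *)&remote, &len);
        ba2str(&remote.rc_bdaddr, addr);
        printf("incoming connection from %s\n", addr);

        close(client);
        close(sock);
        return 0;
    }

The accept() call is exactly where handsfree could push a notification out statusOut and wait for an accept command on cmdIn before going any further.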
Does this make sense to anybody else, or am I making this too complex?