Looking back at things we built before mobile, this concept came up as a good project to push my knowledge. The main initial challenge was to combine user interactions not only with typical events but with audio feedback as well, and then to extend the same functionality to app-driven events. In the end, though, the key focus was to make something creepy ;)
Early versions used multi-touch (multiple users on one device); the challenge was to write custom code that extended the multi-touch handling to calculate the midpoint between all current touch events. Next was attaching audio dynamically to user events, and after testing on multiple devices it worked well. This opened up the idea of future apps/games shipping audio libraries for multiple languages, with seamless switching between languages/characters. It also opened the possibility of recording audio, storing it on the device, and swapping it in for the default audio.
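The midpoint calculation described above is essentially a centroid over the active touch points. A minimal sketch, assuming a simple `Point` shape (the original code's structure isn't shown in the post):

```typescript
// Illustrative Point shape; real touch events carry more fields.
interface Point {
  x: number;
  y: number;
}

// Midpoint (centroid) of all currently active touch points.
// Returns null when no touches are active.
function midpointOfTouches(touches: Point[]): Point | null {
  if (touches.length === 0) return null;
  const sum = touches.reduce(
    (acc, t) => ({ x: acc.x + t.x, y: acc.y + t.y }),
    { x: 0, y: 0 }
  );
  return { x: sum.x / touches.length, y: sum.y / touches.length };
}
```

With two touches this gives the point halfway between them, and it generalizes naturally as more fingers (or users) join.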
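The language-switching and recorded-audio ideas could be sketched as a small lookup layer where a user recording, if present, overrides the bundled clip for the current language. All names here are assumptions for illustration; the app's actual audio API isn't described in the post:

```typescript
type ClipId = string;

// Hypothetical audio library: bundled clips per language, plus
// user-recorded overrides stored on the device.
class AudioLibrary {
  private defaults = new Map<string, Map<ClipId, string>>(); // language -> clip -> file path
  private recordings = new Map<ClipId, string>();            // user-recorded overrides
  private language = "en";

  addDefault(language: string, clip: ClipId, path: string): void {
    if (!this.defaults.has(language)) this.defaults.set(language, new Map());
    this.defaults.get(language)!.set(clip, path);
  }

  // Seamless switch: later lookups simply read from the new language map.
  setLanguage(language: string): void {
    this.language = language;
  }

  recordOverride(clip: ClipId, path: string): void {
    this.recordings.set(clip, path);
  }

  // User recordings take priority over the default language pack.
  resolve(clip: ClipId): string | undefined {
    return this.recordings.get(clip) ?? this.defaults.get(this.language)?.get(clip);
  }
}
```

Because the switch only changes which map is consulted, no audio needs to be reloaded when the user changes language or character.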
As the user base increased, I used feedback to improve performance and interactions. I then added random automatic responses that trigger when user interaction has stopped for a short time, along with user customization options. This data is stored on the device and reloaded on startup.
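The idle-triggered responses amount to a resettable countdown that fires a random pick when no interaction arrives in time. A sketch under assumed names, with the scheduler injected so the device's timer mechanism (e.g. `setTimeout`) is swappable:

```typescript
type Cancel = () => void;
type Scheduler = (fn: () => void, ms: number) => Cancel;

// Default scheduler backed by setTimeout; tests can inject a fake one.
const defaultScheduler: Scheduler = (fn, ms) => {
  const id = setTimeout(fn, ms);
  return () => clearTimeout(id);
};

// Hypothetical idle responder: after idleMs without interaction,
// play a randomly chosen automatic response.
class IdleResponder {
  private cancel: Cancel | null = null;

  constructor(
    private responses: string[],
    private idleMs: number,
    private play: (response: string) => void,
    private schedule: Scheduler = defaultScheduler
  ) {}

  // Call on every touch/interaction; restarts the idle countdown.
  notifyInteraction(): void {
    if (this.cancel) this.cancel();
    this.cancel = this.schedule(() => {
      const i = Math.floor(Math.random() * this.responses.length);
      this.play(this.responses[i]);
    }, this.idleMs);
  }
}
```

Each interaction cancels the pending countdown and starts a fresh one, so responses only fire once the user has actually gone quiet.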
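Storing customization on the device and reloading it at startup could look like the following sketch, assuming a minimal key/value store interface (on a device this might wrap local storage or a file; the shape and field names are assumptions):

```typescript
interface KeyValueStore {
  get(key: string): string | null;
  set(key: string, value: string): void;
}

// Illustrative customization fields; the app's real options aren't listed.
interface Customization {
  language: string;
  idleResponsesEnabled: boolean;
}

const DEFAULTS: Customization = { language: "en", idleResponsesEnabled: true };

function saveCustomization(store: KeyValueStore, c: Customization): void {
  store.set("customization", JSON.stringify(c));
}

function loadCustomization(store: KeyValueStore): Customization {
  const raw = store.get("customization");
  if (raw === null) return { ...DEFAULTS }; // first launch: fall back to defaults
  try {
    return { ...DEFAULTS, ...JSON.parse(raw) };
  } catch {
    return { ...DEFAULTS }; // corrupt data: don't crash on startup
  }
}
```

Merging the parsed data over the defaults also means settings added in later versions get sensible values when an older save file is loaded.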