What a joy and how peculiar are the serendipitous errors of predictive texting!
Boris bike becomes virus bike, my cube becomes my vine, etc.
Who is communicating with whom here? Are our devices becoming ineptly articulate beneath our very fingers?
Are they in fact emerging with a kind of half-learned Tourette's syndrome, from an algorithmic, primordial swamp, to roar incoherently in the background of our misspelt, misread communications?
“Swing by my vine on your virus bike and can ketchup on earth thing”.
We start to question whether what we intended to write in the first place is as good as the word the predictive text system thought we had mistyped. We start to wonder whether the digital Tourette's offering isn't in some strange way more poetic, more suggestive than what our banal brains had initially inserted.
We're in danger of becoming over-sensitised to the mistakes of language that our mechanical failings engender. We might almost begin to wonder whether these machine interventions have a superior, almost spiritual dimension beyond their dumb machine origins. These are no longer mere typos, but ontological dilemmas that go deep into the very wiring of our brains and the coding of our algorithms.
This is how we become linguistically more enmeshed in our technologies, just as we are becoming more physically enmeshed in the web, our apps and our devices. Of course the military already has applications that let soldiers kill people remotely using networked devices. Hopefully we can start to use drones and robots to live more positively too. What intrigues me is not so much the big immersive experiences but the more subtle enmeshing.

There is an emerging vision of a prostheticised human, life extended through artificial components inserted into our bodies, remotely monitored or even remotely controlled. This is no longer science fiction; it's already normal and accepted. I'm lucky enough to have my father still with us thanks to phenomenal non-invasive heart surgery. As far as I know, his new heart valve is neither remotely monitored nor controlled, but that's not for lack of available technology. The sight of runners in marathons with foot or leg prosthetics is now commonplace, however much it still commands our respect.
So when the tech starts to offer us plausible alternatives, through our digital camera implants, to what our eyes naturally see, and our verbal choices start to be influenced by predictive speech algorithms designed to make us appear more articulate, what will we end up seeing and saying about our world? And who, or what, will determine our word choices then?