Human Input and AI

As humans, we need coherent things to build mental models around when we interact with interfaces and input. These “boxes” of interaction need to work and react predictably for us.

All interactions sit along a spectrum (1). On one end, there is conscious intention – when we reach out to move something or tap a button. On the other end, we have cases where AI and signal processing predict our intent in ways we don’t consciously understand.

Models at the latter end are excellent at taking noisy aspects of ourselves and our world and synthesizing them into natural interaction. They can make inferences from a wider gamut of signals than we normally have available to us.

The crux is how to package and compose these into localized, conscious interactions that further human agency.

Footnotes

1. There’s a spectrum here, albeit a bit of a diffuse one. Humans are adaptable. Also, what happens as systems make more choices for us and shape how we engage with them in subsequent interactions?