What can’t be automated?

Even though we wish some of this work could be automated, it’s hard to do, and not just for technical reasons: there’s no standard to automate against. We keep trying to capture the algorithms that make us human, and that isn’t really possible. But we can do a lot to make our world easier and less full of toil.

Creativity, logic leaps, beauty

Recently, there has been an explosion in publicly-available AI engines like MidJourney and ChatGPT. As is always true for new technologies, the advocates claim the technology is world-changing, the skeptics declare it’s bunk, and the reality most of us experience is somewhere closer to a National Geographic article about customs in faraway places: not directly relevant to our lives, but we like knowing it’s out there.

What we can’t automate, even if we can simulate and remix things, is originality, creativity, leaps in logic. We can train a computer on what most people find beautiful, and it can remix the elements of beauty into something we find beautiful, but the computer itself cannot perceive beauty. It’s just following the prompts and the corpus its designers have designated as likely to produce ‘beauty’.

As much as we’ve tried, we can’t automate inspiration, and we have trouble teaching systems to think in systems. A computer, on the other hand, is really good at trying every combination of something. Protein folding is something humans can’t do at scale or at speed, and computers can. Thank you, computers, for not dying of boredom while you move a molecule one position at a time.

Caring

It’s really hard to automate the human/machine interface, as anyone who has ever dealt with a phone tree knows. Humans are extremely variable, and machines and computers lack the flexibility of thought and analogy to handle that. Instead, humans learn a specific syntax and method of interacting with a machine interface. I bet you can identify the human side of an automated conversation when you hear it, because the person talking isn’t speaking their own language, they are speaking the Machine-ish version of it. Learning Machine-ish is a life skill, and we don’t even notice that we’re doing it.

That mostly works for humans who understand the machine’s goals and want to cooperate with them. However, that’s a distinct subset of humanity. We haven’t been able to automate care work, as desperately as we wish we could, because our machines aren’t smart enough, and because people don’t thrive without human contact. If you tried to create a diaper-changing robot, the baby would absolutely end up on the floor, because part of changing a diaper is being alert for the baby’s sudden lurch toward the edge, and part of it is cleaning up something sticky, and part of it is making it not-emotionally-traumatic, and part of it is getting the fastenings right by feel, and all of what you learn about one baby in one week may be entirely different the next week. There isn’t an algorithm that can handle that.

We also know that care work is physically and mentally exhausting, but that an emotional connection between the caregiver and the care recipient is important to everyone’s emotional health. When we try to create a robot that can do any caretaking, the first thing we do is try to endow it with friendliness, not a comprehensive understanding of the UV changes in skin that presage a bedsore (although come to think of it, that would be useful). As much as we may use chatbots to “talk” to, most people far prefer to communicate vulnerable, difficult information to another human who can offer sympathy and empathy, not automated responses.

During the pandemic, many of us learned concepts like skin hunger, and the difference between in-person and televisual communication. Automating around that is a really high bar, and although there are some organizations trying for it, for the most part, automation of lovingkindness is a long way from where we are.

Things we don’t understand

We can’t automate what we don’t understand. Doing something manually is always an essential part of automating a process, and sometimes it’s hard for us to see all the parts and elements of a process that we want to automate. 

Think of the common technical writing test to “write the instructions for making a peanut butter and jelly sandwich”. If we tell a human to “take a couple pieces of bread”, we can rely on a lot of pre-existing patterns and common sense. Your average human knows that a couple pieces means two, that they have to open the bread bag, and how to get past the tab or tie at the mouth of the bag, and will generally not take the heel of bread for a sandwich. All of that is something we would have to explain if we were automating it. But a sandwich is trivially simple compared to some of the things that we automate.
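To see just how much hidden knowledge “take a couple pieces of bread” contains, here is a minimal sketch of that one instruction spelled out for a machine. Everything here is hypothetical and illustrative (the bag structure, the function name, the rules encoded), not any real robotics API:

```python
# A hypothetical sketch: the single instruction "take a couple pieces
# of bread" expanded into the rules a human applies without thinking.

def take_bread(bag):
    """Return two usable slices, making the implicit rules explicit."""
    # "A couple" means exactly two.
    slices_needed = 2
    # The bag must be opened first; a human does this without being told.
    if not bag["open"]:
        bag["open"] = True  # undo the tab or twist-tie
    # Skip the heel; humans rarely use it for a sandwich.
    usable = [s for s in bag["slices"] if s != "heel"]
    if len(usable) < slices_needed:
        raise ValueError("not enough usable bread")
    taken = usable[:slices_needed]
    for s in taken:
        bag["slices"].remove(s)
    return taken

bag = {"open": False, "slices": ["heel", "slice1", "slice2", "slice3", "heel"]}
print(take_bread(bag))  # the two non-heel slices, in order
```

Even this leaves things out: how hard to grip a slice, what to do if the bread is moldy, where the bag is in the first place. Every one of those gaps is another rule to write.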

Any automation usually starts as an approximation: we automate the parts we know and understand, then wait to see what breaks so we can find the parts we didn’t know about or understand before we automated.

When an automation is fully finished… no it’s not, because something will almost always change eventually. Even automations we think of as very mature, like those car-welding robots we see in automotive factories, need to be reprogrammable. There will be new body models, or different makes of car. 

If we have to understand something to automate it, and if the automation is always going to change, does it really make sense to automate it? Well, yes, because we don’t want to do it. Because the act of automating has taught us more about the process. Because even if we have to change some parts of the script, most of it stays intact. 

Automation is not set-and-forget, as Mickey’s broom in “The Sorcerer’s Apprentice” demonstrates. It will save us work, but only if we are sure that we have set boundaries, limits, and expectations. Without those, we really will end up hunted down by Cyberdyne/Skynet Terminator robots.

When we seek to understand something so we can automate it, we learn it in different ways than we would if we were going to teach it to a human. For instance, I would teach a human to knead bread by touch, but a bread machine does it by humidity and resistance and other measurements that I don’t have exact senses for. There is always a layer of translation between an expert who knows how to do something the human way and the machine way of doing it. You can smell a peach and say this is ripe, and then tell a machine to use its sensors for ethylene or whatever, but the machine doesn’t understand “ripe” until we tell it what the exemplar is.
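The peach example can be sketched in a few lines. This is a hypothetical illustration, with made-up sensor readings: the machine never learns “ripe” as a concept, only a boundary derived from exemplars a human judged by smell:

```python
# A hypothetical sketch: a machine doesn't understand "ripe" until we
# give it labeled exemplars. Here it learns a simple ethylene cutoff
# from examples a human judged by smell. All readings are invented.

def learn_threshold(exemplars):
    """Pick a cutoff halfway between the two human-labeled classes."""
    ripe = [v for v, label in exemplars if label == "ripe"]
    unripe = [v for v, label in exemplars if label == "unripe"]
    # The machine has no notion of "ripe" beyond this boundary,
    # which exists only because a human supplied the judgments.
    return (max(unripe) + min(ripe)) / 2

def is_ripe(ethylene_ppm, threshold):
    return ethylene_ppm >= threshold

# Human-labeled exemplars: (ethylene reading in ppm, judgment by smell)
exemplars = [(0.2, "unripe"), (0.5, "unripe"), (1.4, "ripe"), (2.0, "ripe")]
threshold = learn_threshold(exemplars)
print(is_ripe(1.8, threshold))  # a new peach, judged only against the exemplars
```

Change the exemplars and the machine’s whole idea of “ripe” changes with them; the expertise lives in the labels, not in the sensor.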

We can’t automate without exemplars, and we can’t get to exemplars without experience, which means getting things wrong a lot. It’s easy to wish we could automate getting things wrong so we didn’t have to experience it, but that’s not how humans learn.