Even though we wish some of these tasks could be automated, it’s hard to do, and not just for technical reasons: the work doesn’t reduce to a standard. We keep trying to capture the algorithms that make us human, and that isn’t really possible.
Creativity, logic leaps, beauty
What we can’t automate, even if we can simulate and remix things, is originality, creativity, leaps in logic. We can train a computer in what most people find beautiful, and it can remix the elements of beautifulness into something we find beautiful-ish, but the computer itself cannot perceive beauty. It’s just following the prompts and corpus that the designers have designated as likely to cause ‘beauty’.
We have trouble teaching systems to think in systems. It seems like being able to say that one concept is 29% likely to occur next to another is something like understanding systems, but since AI can’t actually understand concepts, it’s just going by which token occurs next to which. Just because two people often end up in the same commuter train car doesn’t make them related, and token adjacency is the same kind of apparently-meaningful thing.
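To make the token-adjacency point concrete, here is a minimal sketch of the counting involved. The toy corpus and the resulting percentages are invented for illustration; real language models are vastly bigger, but an “X% likely” claim is still, at bottom, a ratio of counts:

```python
from collections import Counter, defaultdict

# Toy corpus, invented for illustration.
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count how often each token is followed by each other token.
following: defaultdict[str, Counter] = defaultdict(Counter)
for token, next_token in zip(corpus, corpus[1:]):
    following[token][next_token] += 1

# A "29% likely" kind of claim is just a ratio of these counts;
# nothing here knows what a cat or a mat is.
counts = following["the"]
total = sum(counts.values())
for next_token, count in counts.items():
    print(f"P({next_token!r} after 'the') = {count / total:.0%}")
```

The numbers come out looking like insight, but the program has only ever seen which words ride in the same train car.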
Logical leaps are the kind of thing that are in the eye of the beholder. Lewis Carroll’s Mad Hatter famously asked “Why is a raven like a writing desk?”. Many people, including Carroll himself, have answered the riddle, even though it was written as nonsense. All the answers I have found depend on wordplay or extended metaphor, and that’s because riddles rely on leaps of logic, even the nonsense ones.
Caring
It’s really hard to automate the human/machine interface, as anyone who has ever dealt with a phone tree knows. Humans are extremely variable, and machines and computers lack the flexibility of thought and analogy to handle that. Instead, humans learn a specific syntax and method of interacting with a machine interface. I bet you can identify the human side of an automated conversation when you hear it, because the person talking isn’t speaking their own language; they are speaking the Machine-ish version of it. Learning Machine-ish is a life skill, and we don’t even notice that we’re doing it.
That mostly works for humans who understand the machine’s goals, and want to cooperate with those goals. However, that’s a distinct subset of humanity.
We haven’t been able to automate care work, as desperately as we wish we could, because our machines aren’t smart enough, and because people don’t thrive under machine care. If you tried to create a diaper-changing robot, the baby would absolutely end up on the floor, because part of changing a diaper is being alert for the baby’s sudden lurch toward the edge, and part of it is cleaning up something sticky, and part of it is making it not-emotionally-traumatic, and part of it is getting the fastenings right by feel, and all of what you learn about one baby in one week may be entirely different the next week. There isn’t an algorithm that can handle that.
We also know that care work is physically and mentally exhausting, but that having an emotional connection between the caregiver and care recipient is important to everyone’s emotional health. When we try to create a robot that can do any caretaking, the first thing we do is try to endow it with friendliness, not a comprehensive understanding of the UV changes in skin that presage a bedsore (although come to think of it, that would be useful). As much as we may use chatbots to “talk” to, most people far prefer to share vulnerable, difficult information with another human who can provide sympathy and empathy, not automated responses.
During the pandemic, many of us learned concepts like skin hunger, and the difference between in-person and televisual communication. Automating around that is a really high bar, and although there are some organizations trying for it, for the most part, automation of lovingkindness is a long way from where we are.
Things we don’t understand
We can’t automate what we don’t understand. Doing something manually is always an essential part of automating a process, and sometimes it’s hard for us to see all the parts and elements of a process that we want to automate.
Think of the common technical writing test to “write the instructions for making a peanut butter and jelly sandwich”. If we tell a human to “take a couple pieces of bread”, we can rely on a lot of pre-existing patterns and common sense. Your average human knows that a couple pieces means two, knows how to open the bread bag and get past the tab or tie at its mouth, and will generally not take the heel of the loaf for a sandwich. All of that we would have to spell out if we were automating the task. And sandwiches are trivially complex compared to some of the things that we automate.
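As a toy illustration, here is a hedged sketch of just the “take a couple pieces of bread” step, with the bag modeled as a plain list of slices. Every rule in it is one of those normally unstated human assumptions, and all the names are invented:

```python
# A sketch of one step, "take a couple pieces of bread", with the bag
# modeled as a list of slices. Each rule below is a human assumption
# that would otherwise go unspoken.

def take_bread(bag: list[str]) -> list[str]:
    """Take two usable slices the way a human silently would."""
    if not bag:
        raise ValueError("open a new bag first")  # humans notice this for free
    slices = []
    for piece in bag:
        if piece == "heel":  # the unwritten rule: no heels for sandwiches
            continue
        slices.append(piece)
        if len(slices) == 2:  # "a couple" means exactly two
            return slices
    raise ValueError("not enough bread")  # another case humans just handle

print(take_bread(["heel", "slice", "slice", "slice", "heel"]))
```

And even this leaves out opening the bag, the tab or tie, and everything tactile.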
What automation is
Any automation starts out as an approximation: we automate the parts that we know and understand, then wait to see what breaks so we can find the parts that we didn’t know about or didn’t understand before the automation.
When an automation is fully finished… no, it’s not, because something will almost always change eventually. Even automations we think of as very mature, like those car-welding robots we see in automotive factories, need to be reprogrammable. There will be new body models, or different makes of car.
If we have to understand something to automate it, and if the automation is always going to change, does it really make sense to automate it? Well, yes, because we don’t want to do it. Because the act of automating has taught us more about the process. Because even if we have to change some parts of the script, most of it stays intact.
Automation is not set-and-forget, like Mickey’s broom in The Sorcerer’s Apprentice. It will save us work, but only if we are sure that we have set boundaries, limits, and expectations. Without those, we really will end up hunted down by Cyberdyne/Skynet Terminator robots.
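Concretely, boundaries, limits, and expectations can be as plain as the guardrails in this sketch. The numbers and the water-hauling stand-in are invented for illustration; the shape is what matters:

```python
import time

# A minimal sketch of giving an automation the boundaries, limits, and
# expectations that Mickey's broom never had. All values are invented.

MAX_BUCKETS = 100                  # boundary: never carry more than this
TARGET_LEVEL = 0.8                 # expectation: stop when the cistern is 80% full
DEADLINE = time.monotonic() + 60   # limit: give up after a minute regardless

level = 0.0
buckets = 0
while level < TARGET_LEVEL:
    if buckets >= MAX_BUCKETS or time.monotonic() > DEADLINE:
        raise RuntimeError("limit hit; stopping and paging a human")
    level += 0.01                  # stand-in for actually hauling a bucket
    buckets += 1

print(f"done after {buckets} buckets, level {level:.2f}")
```

The broom’s problem was never enthusiasm; it was the missing stop conditions.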
When we seek to understand something so we can automate it, we learn it in different ways than we would if we were going to teach it to a human. For instance, I would teach a human to knead bread by touch, but a bread machine does it by humidity and resistance and other measurements that I don’t have exact senses for. There is always a layer of translation between an expert who knows how to do something the human way and the machine’s way of doing it. You can smell a peach and say this is ripe, and then tell a machine to use its sensors for ethylene or whatever, but the machine doesn’t understand “ripe” until we tell it what the exemplar is.
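Here is a small sketch of that exemplar step. The ethylene readings, labels, and threshold rule are all invented for illustration; the point is that the machine’s “ripe” is nothing but a number fit to human verdicts:

```python
# Hypothetical ethylene readings (ppm, made up) paired with a human's verdict.
exemplars = [
    (0.2, "unripe"), (0.5, "unripe"), (1.1, "ripe"),
    (1.4, "ripe"), (0.9, "ripe"), (0.4, "unripe"),
]

# The machine's "understanding" is just a threshold fit to the labels:
# midway between the highest unripe reading and the lowest ripe one.
highest_unripe = max(r for r, label in exemplars if label == "unripe")
lowest_ripe = min(r for r, label in exemplars if label == "ripe")
threshold = (highest_unripe + lowest_ripe) / 2

def machine_ripe(reading: float) -> bool:
    return reading >= threshold

print(threshold, machine_ripe(1.0), machine_ripe(0.3))
```

The machine never smells anything; it inherits our judgment through the labels.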
We can’t automate without exemplars, and we can’t get to exemplars without experience, which includes getting it wrong a lot of times. It’s easy to wish we could automate getting things wrong so we didn’t have to experience it, but that’s not how humans learn.

