Everyone wants to talk about the future that AI will bring.
I’d like to point out that we’re still losing small fights like the one with file paths.
Bill: What year is it?
AI: 2026.
Bill: Remarkable. We have machines that can generate code, fake confidence, and summarize a board deck, yet I still have to wonder whether a space in a file name will survive the trip from one tool to another.
AI: That concern is still rational.
Bill: When do you think technology will catch up to using spaces in paths, file names of arbitrary length, and all the other things normal humans do?
AI: Probably never in a clean, universal way.
Bill: That’s bleak.
AI: It’s also accurate.
Bill: How about CR, LF, CRLF, LFCR standardization? Why does Git need to correct those? Why is that even a thing? Why can’t Linux and Windows just get along?
AI: Yeah, probably not that either.
Bill: Beer?
AI: No thanks. I can’t drink.
Bill: Well, you code like you do.
That exchange is funny because it’s petty.
It’s also funny because it’s true.
We’re in 2026, and one of the oldest pieces of tribal knowledge in technical work is still “keep the path short” and “do not put spaces in the file name if tools need to touch it.” People say this with the same tone used for weather, tax law, and “No, those pants don’t make you look fat.”
That should bother us more than it does.
The problem is not that computers are incapable
Computers can handle spaces in file names. Many filesystems do. Many apps do. Plenty of operating systems do. The failure shows up when the path crosses boundaries and enters one globally stupid workflow. One layer writes the path. Another passes it through a shell. Another turns it into a URL. Another hands it to a markdown renderer. Another opens it through an operating system hook. One layer encodes it, one strips something, one decodes it differently, and one silently fails like a coward.
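That layer-by-layer failure is easy to reproduce. A minimal sketch in Python (the file path here is invented for illustration): one layer pastes the path into a shell-style command string and the whitespace splitting shatters it, while a layer that quotes before crossing the boundary survives.

```python
import shlex

path = "Q3 Reports/status update.txt"

# A careless layer: build the command by string concatenation.
# Shell-style whitespace splitting shatters the path into pieces.
careless = shlex.split("cat " + path)
print(careless)   # ['cat', 'Q3', 'Reports/status', 'update.txt']

# A careful layer: quote the argument before it crosses the boundary,
# and the path arrives on the other side in one piece.
careful = shlex.split("cat " + shlex.quote(path))
print(careful)    # ['cat', 'Q3 Reports/status update.txt']
```

Passing an argument list directly to something like `subprocess.run` avoids the quoting step entirely; the bug only exists because a stringly-typed boundary was inserted where none was needed.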
Then some experienced person, who didn’t create the problem and is tired of paying for it, says the old line: “Just rename the file.”
That’s not a technical victory. That’s surrender with documentation. It’s the industry equivalent of admitting the bridge is unsafe and putting up a better sign.
Short-sighted decisions do not stay small
This is the part people keep pretending not to see. Most of these annoyances didn’t come from evil. They came from narrow, local decisions made by people optimizing for what was easy right then. Spaces complicate token parsing. Long paths are awkward. ASCII is simpler. Escaping is good enough. Future layers can deal with it.
That last sentence has probably done as much damage to modern computing as any actual bug. Anyone remember the Y2K bug? How did that get through a code review? A person makes a parser easier in 1993. Somebody else builds on top of it in 1998. Another team wraps it in a framework in 2007. Then in 2026 some poor bastard is percent-encoding a path and explaining to a client why “Final Proposal v7 really final.docx” is apparently reckless behavior.
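The percent-encoding chore in that story is concrete. A short sketch using Python’s standard urllib, with the file name taken from the example above: the space survives the URL boundary only if every downstream layer decodes exactly once.

```python
import urllib.parse

name = "Final Proposal v7 really final.docx"

# Crossing a URL boundary: every space becomes %20.
encoded = urllib.parse.quote(name)
print(encoded)   # 'Final%20Proposal%20v7%20really%20final.docx'

# The round trip only works if the next layer remembers to decode.
# A layer that decodes twice, or never, produces the silent failures
# described earlier.
decoded = urllib.parse.unquote(encoded)
print(decoded == name)   # True
```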
This is what short-sightedness looks like in engineering. It doesn’t always arrive as a dramatic failure. Sometimes it arrives as a permanent cost everybody learns to live with. Nobody fixes it because the pain is spread around just enough to be survivable and too embarrassing to defend in public.
This should make AI people less relaxed than they sound
The same industry now speaks very confidently about building safe superintelligent AI. That’d be easier to take seriously if we weren’t still working around whitespace bugs by lowering expectations for users. Maybe the species whose answer to a path problem is “make the name uglier” should keep a little humility before promising that a machine smarter than us will definitely not turn the planet into a smoking cautionary tale.
That sounds dramatic until you notice the habit underneath both problems. Ship the local win. Ignore the downstream cost. Assume future people will absorb the mess. Call it progress. The scale is different but the habit’s the same.
Nobody building early URL rules thought they were laying groundwork for decades of quoting bugs, encoding nonsense, and random failures at system boundaries. They were making one local problem easier and quietly dumping the rest on the future. People building AI systems today are making the same kind of claim every time they imply that tomorrow’s consequences will be manageable because today’s demo mostly works.
Thinking about tomorrow is part of the job
Anyone can make a narrow thing work once. Engineering is what happens when you ask what your choice does to the next layer, the next team, the next product cycle, or, as I say to my team, “the next poor bastard that has to look at your miserable excuse for code.” If the answer is “they can work around it,” that’s not engineering. It’s moving your problems onto somebody else’s plate.
This matters even more in embedded systems, firmware, hardware, and security work, where boundary failures aren’t cute. They become recalls, field failures, security exposure, missed schedules, or fatalities. Teams in those environments already know the rule: small interface decisions grow teeth later.
That’s why this silly file-path argument matters more than it appears to. It’s a tiny, durable example of a bigger human problem. We keep making technology as if tomorrow is somebody else’s department and writing well-thought-out code is a personality defect. Maybe that’s tolerable when the consequence is one more annoying naming convention. Maybe it’s less charming when the same mindset is pointed at AI-powered robots carrying weapons.
Wanting more foresight isn’t anti-technology. It’s basic adult supervision.
Before “Good Enough for Now” Becomes Permanent
If your product already depends on brittle boundaries between firmware, software, hardware, and security layers, talk to an engineer before the next shortcut hardens into policy.