
Open Source is still art

The Craft

There's a persistent myth that software engineering is engineering in the same way that civil engineering is engineering. That writing software is a disciplined application of mathematical principles to produce predictable, reliable outcomes. That if we just had the right process and the right tools, we could turn software development into something as rigorous and repeatable as calculating the load-bearing capacity of a steel beam.

I've been writing software professionally for over two decades, and as a hobby for twice that long. The persistent myth is wrong.

Software engineering is a craft. It always has been. It has far more in common with blacksmithing or carpentry, or more precisely with industrial art and design, than it does with civil engineering or architecture.

Open source is art

A few years ago, when I was still at NearForm, our founder Cian Ó Maidín, Matteo Collina, and I spent a lot of time talking about the nature of open source software development. What we kept coming back to was that the best way to understand it wasn't through the lens of engineering at all. It was through the lens of art.

We called the idea "Open Source is Art." We even had a bunch of extremely popular t-shirts designed.

A software developer, particularly an open source developer, is fundamentally an artist. They have some idea, some concept they can see in their head, and the act of writing software is the act of giving that vision physical form. The code is the medium. The running program is the finished piece. The craft is what bridges the gap between the idea and the artifact.

I don't mean this as a metaphor. When a developer sits down to build something new, they aren't following a blueprint handed to them by an architect the way a construction crew follows structural drawings. They're making creative decisions about structure, about form, about what to include and what to leave out, about how the thing should feel to use. They're shaping material into something that didn't exist before, guided by taste and judgment and experience. The fact that some people do it out of love for the experience and others do it because they are paid doesn't change the equation.

Think about how open source projects actually come into being. Someone has an itch. They see a problem, or they imagine a better way to do something, and they start building. No requirements document, no specification committee. Just an individual with a vision and the skill to realize it. Node.js, Fastify, and Linux all started that way. An artist with a concept in their head and the craft to make it real.

Engineering vs. craft

Civil engineers work from codified bodies of knowledge. There are tables and formulas for how much weight a given cross-section of steel can support, along with building codes, safety factors, and well-understood failure modes. An architect can draw a building and, with high confidence, predict whether it will stand. The math works and the physics is settled. Yes, there is an artistic element, but it is secondary to the engineering.

Software reverses that. There's no 100% reliable way to predict whether a given design will work until you build it, few universal safety factors, and decades of accumulated "best practices" that regularly contradict each other and go in and out of fashion like hemlines. I've been in this industry long enough to watch entire paradigms come and go, only to be reinvented for another round. Software estimation is still a running joke among practitioners. We can't even agree on whether inheritance is good or bad. Semicolons or not? Tabs or spaces?

What we do have is a deep tradition of apprenticeship, of learning by doing, of developing judgment and intuition through years of practice. A senior developer doesn't write better code because they know more formulas. They write better code because they've seen more failure modes. They've developed an instinct for where complexity hides, where abstractions leak, where the next bug is likely to live. That's craft knowledge, the kind that lives in your hands and your gut, not in a textbook.

A master carpenter doesn't consult a formula to know that a particular joint needs reinforcement. They know because they've made and broken a thousand joints. They can feel when the wood is right. A master blacksmith doesn't calculate the precise temperature to start working the steel. They read the color of the metal. A sculptor doesn't measure the proportions of a face with calipers. They've internalized what right looks like through years of studying and shaping and starting over, through long apprenticeship under someone who's already learned the hard lessons.

Software is the same. The best developers I've worked with over the years don't succeed because they have better theoretical knowledge. They succeed because they've built things and broken things and debugged things at 3am and inherited someone else's nightmare codebase and lived to tell the tale. The knowledge that actually matters in this field is experiential, contextual, and stubbornly resistant to formalization.

What happens when the machines arrive

None of this is new, by the way. For centuries now, crafts have been confronted with machinery that can do the work faster and cheaper.

When CNC machines arrived in machine shops, they didn't eliminate machinists, but they fundamentally changed what it meant to be one. Before CNC, a skilled machinist was valued primarily for their ability to manually produce precise parts, to read blueprints, set up the lathe, and cut metal to within thousandths of an inch using hand wheels and their own judgment. That skill took years to develop.

CNC machines automated the cutting. A computer could drive the tool paths with more consistency and precision than all but the most skilled human hands. Physically producing the part, the thing that had defined the craft for generations, was suddenly something a machine could do.

What actually happened is that the machinist's role shifted. They spent less time turning hand wheels and more time programming the machines, selecting the right tooling, optimizing feed rates, designing fixtures, and solving the problems that the machines couldn't solve on their own. The craft knowledge was still essential. It just expressed itself differently. The tool changed what the machinist did with their day but not what made someone a good machinist.

The same pattern played out across every craft that encountered industrial automation. Power tools replaced hand tools in carpentry, but carpenters didn't disappear. The ones who lasted were the ones who understood wood, who could design joints that worked, who could look at a project and see the problems before they materialized.

Fabrication shops today are full of equipment that would be unrecognizable to a blacksmith from a century ago. Laser cutters, plasma tables, hydraulic presses, robotic welders. But the people who run those shops still need to understand metallurgy and know how materials behave under stress and have the judgment to decide when the machine's output is good enough and when it isn't.

The pattern holds in art as well. The camera didn't kill painting, and synthesizers didn't kill musicianship, and desktop publishing didn't kill graphic design. Each time, a tool arrived that could mechanically reproduce some aspect of what the artist did. It changed the work but didn't replace the artist, because the art was never just the mechanical act of production.

AI is a tool

I keep coming back to this because it needs saying more often than it should: AI code generation is a tool. That's all it is.

A power saw cannot build a house on its own. A chisel cannot sculpt a statue on its own. These are obvious statements, but somehow when we talk about AI, the obvious gets lost in the noise. AI cannot write software on its own. It can produce code. Those are different things, and I say that as someone who uses these tools daily.

Whether AI is the right tool for a given situation depends entirely on context, and context always differs. It can implement known patterns, generate boilerplate, translate between languages, scaffold modules. Sometimes that's useful. Sometimes it's the wrong tool entirely. Most of the time the answer is somewhere in between, where it can do part of the work but only if a human with real craft knowledge is guiding it, reviewing its output, and making the decisions that the tool itself is incapable of making.

A tool doesn't have vision. It doesn't understand the problem. It can't look at a system and decide what the right abstraction is, or weigh the tradeoffs between simplicity now and flexibility later, or sense when a design is getting too clever. It can't see the finished piece in its head before it exists and work backward from that to figure out what needs to be built. The tool requires the craftsperson, the artist.

What changes for the individual

If you're a software developer, the practical effect is that the nature of the daily work shifts. The parts of the work that involve translating well-understood requirements into well-understood code are the parts where the tool is most applicable. The parts that involve judgment, taste, vision, system-level thinking, and navigating ambiguity are the parts where it isn't. Both parts still exist. The ratio of time spent on each is what changes.

Personally, I find some of it useful and some of it annoying. That's about as interesting a statement as saying I find some power tools useful and some annoying. It's a tool. The tool doesn't care what I think about it. I can choose to use it or not.

The economic reality

There's a reality that's not worth sugarcoating.

When CNC and automated fabrication became widespread, the number of people employed in machine shops didn't stay the same. Fewer machinists were needed to produce the same volume of output. Some shops got bigger. Many small shops closed or consolidated. The robots-taking-jobs fear that's haunted factory workers for decades wasn't irrational. It was a reasonable response to an observable reality. More efficient tools mean fewer people needed to do the same work. The equation hasn't changed in two hundred years and it doesn't change now.

We should expect the same dynamics in software. Organizations will need fewer developers to produce the same volume of output for routine work. The economics are too compelling for it to go any other way. People will lose jobs. It happened in manufacturing, in agriculture, in printing. It will happen in software. Pretending otherwise helps no one.

This is the terrible downside of technological progress. The standard consolation is that new tools have historically created new categories of work even as they eliminated old ones. The industrial revolution destroyed the cottage weaving industry and created the textile engineering industry. CNC eliminated many manual machining jobs and created CNC programming, tooling design, and automated manufacturing roles. The equilibrium eventually re-established itself.

But AI is different from every previous tool shift in one important way. If the new categories of work that AI creates can also be subsumed by AI, then the historical safety valve stops working. CNC couldn't do CNC programming. The power loom couldn't design textiles. The new jobs those tools created were safe from the tools themselves. There's no guarantee that's true with AI. If AI can automate the routine coding work, and then also automate the work of managing and reviewing and directing the AI that does the routine coding work, then the equilibrium point keeps getting pushed out. Maybe it never arrives. I don't know. But I'm not comfortable with the assumption that it will just because it always has before, and I don't think anyone else should be either.

Economic disparity increases when those who control the tools seek only to maximize value for themselves at the expense of the people whose labor the tools are replacing. This is the oldest pattern in industrial history. When the power loom arrived, the mill owners got rich and the hand weavers starved. When mechanized agriculture arrived, the landowners consolidated and the farmhands were displaced. The technology itself was neutral. The distribution of its benefits was not, and still is not.

The same risk exists with AI, and the recursive nature of the tool makes it worse. If AI can subsume not just the current jobs but also the new jobs that emerge in response, then the people displaced have fewer and fewer places to go. The efficiency gains flow to the organizations that deploy the tools, in the form of reduced headcount, lower labor costs, and higher margins, while the people whose work is being automated bear all the cost of the transition. That's a power problem, not a technology problem, and it's one that every generation of tooling has created. We should be clear-eyed about it rather than hand-waving it away with optimistic talk about "upskilling" and "new opportunities" that may themselves be temporary.

The tool and those who built it

The exploitation problem doesn't stop at economics. It extends to how the tools themselves were built, and this one is personal to me as someone who has spent years contributing to open source.

AI models as they exist today have been built on a foundation that is ethically compromised. Training data scraped from copyrighted works without permission. Code ingested from open source projects whose licenses were never consulted, let alone respected. Entire creative works consumed without the knowledge or consent of the people who made them. This is well-documented and it's a problem.

The ethical issues go deeper than copyright. These models are trained on the internet, and the internet reflects the structural biases of the societies that built it. Racial bias, gender bias, cultural assumptions baked so deep into the training data that they surface in the model's output in ways that are often invisible to the people using it. Political censorship imposed by the organizations that build and deploy the models, reflecting their own institutional priorities and risk calculations rather than any coherent ethical framework. The models carry the biases of their training data and the biases of the people and organizations that curated and filtered and shaped that data.

Then there's the environmental cost. Training and running these models at scale consumes enormous amounts of energy and water. Data centers are being built at a pace that strains regional power grids. The compute required for each new generation of models grows faster than the efficiency gains. This is a present-tense resource consumption problem, and the people building and deploying these models have very little incentive to solve it as long as the costs are externalized to communities and ecosystems that don't get a vote. A table saw uses electricity. It doesn't require its own power plant.

None of this is caused by AI. AI models don't have values. But they are different from simpler tools in one important way: they encode the decisions and compromises of the people who built them. The training data was chosen, the filtering was done, the censorship policies were written, the copyrighted material was used without permission, and the models were shipped with known biases rather than delayed for further work. All of those were human decisions.

These are not AI problems. They are the result of individuals and organizations exploiting the opportunity that AI presents for their own gain without adequate consideration of the social impact. The same pattern that shows up in the economic story shows up in the ethical story: those who control the tools make decisions that benefit themselves, and the costs fall on everyone else. On the artists whose work was used without permission. On the communities whose biases are amplified and reinforced. On the users who trust output from systems whose foundations they have no visibility into.

When I use AI as a tool, and I do use it, I try to be aware that this particular tool comes with baggage that a table saw doesn't. That doesn't mean I don't use it. It means I try to be honest with myself about what I'm using and where it came from and what compromises were made in its construction. A craftsperson should know their tools, including the flaws.

The ethical issues compound the quality issues, too. A tool in unskilled hands produces unskilled output regardless of where the tool came from. But a tool that was built on compromised foundations and then placed in unskilled hands is worse, because it produces output that's not just unskilled but potentially tainted by biases and assumptions that the unskilled user can't even detect. Code produced by AI without skilled guidance is just code produced without skilled guidance, by a tool whose own foundations deserve scrutiny. The craft knowledge needed to use AI well includes understanding not just what the tool can do, but what the tool is.

We must develop an ethical framework around how AI is built and used. It is imperative. The fact that it makes some subset of people more money is not sufficient as a framework when the obvious risks to everyone else are so profound.

The art endures

I've painted a complicated picture here and I'm not going to tidy it up in the closing. The economic risks, the ethical problems, the possibility that AI breaks the historical pattern and the equilibrium never re-establishes itself, all of it is real. I don't know how it resolves. Nobody does, and I immediately distrust anyone who claims they do.

But I do believe this. Open source is art. Software engineering is a craft. Those aren't platitudes. They're descriptions of how this work actually functions. Individual vision drives it. Creative decisions shape it. Experience refines it. Taste judges it as much as any specification does. The art was never in the tools. It was in the people holding them.

That has been true through every previous tool shift. Whether it remains true through this one is an open question, one that depends less on the technology itself and more on the economic and social structures we build around it.

The best machinists I've met can run any machine in the shop, but their real value is that they understand metal. The best carpenters can use any tool, but their real value is that they understand wood and joints and structures. They don't gain that understanding simply by letting the tool do all the work.

The best software engineers understand systems. They've built enough things to know what works and broken enough things to know what doesn't and developed the judgment to tell the difference. They can see the finished thing in their head and know how to make it real. What tools they use to get there is a matter of context.

That's the art and that's the craft. I believe the craft endures, but it won't endure on its own. It endures because people value it, invest in it, and refuse to let it be reduced to something a machine can do. That's a choice, not a prediction.