I don't know for sure, but it seems every software-producing organization has to relearn, once a generation, Fred Brooks' fundamental insight, written down more than half a century ago: adding manpower to a late software project makes it later. Delivering a baby takes nine months, no matter how many mothers are on the payroll. Happy to lend you my copy of The Mythical Man-Month, so you can read it twice as fast. It is the iron law of software engineering.
I have been living and breathing software since I was nine years old and got my hands on the keyboard of a machine with a BASIC interpreter. That is three decades now. I shall be damned if I allow anyone to ruin it for me.
Here's a neat little idea for a git alias that I've picked up in a talk by Andreas and Sebastian at unKonf.
In light of the recent attacks on the npm package ecosystem, here's a bunch of functionality that, in recent Node versions, can be used without installing a dependency.
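For instance, a minimal sketch (run as an ES module; exact minimum versions vary, but argument parsing, fetch and the built-in test runner have all been shipping with Node itself for a while now):

import { parseArgs } from 'node:util';
import test from 'node:test';
import assert from 'node:assert/strict';

// Argument parsing without yargs/commander.
const { values } = parseArgs({
  options: { verbose: { type: 'boolean', short: 'v' } },
});

// HTTP requests without node-fetch/axios: fetch is available as a global.
const res = await fetch('https://example.com');
if (values.verbose) console.log(res.status);

// Testing without mocha/jest: runs with `node --test` or plain `node`.
test('response carries a numeric status code', () => {
  assert.equal(typeof res.status, 'number');
});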
I'm not saying that is enough to support a big application, but compared to a few years ago, when this list would have meant at least a dozen (dev-)dependencies and likely a three-digit number of installed packages, the out-of-the-box experience is quite decent. You can get pretty far with that for a CLI tool or to prototype a REST API. No need to sweat over npm audit.
In some circles of programmers, the field of software architecture has a bad reputation.
To some extent maybe deservedly so, likely caused by overgeneralizing from encounters with a certain type of software architect at work.
In my experience and opinion, the architectural qualities of systems cannot be influenced effectively by command and control, by idealized yet ultimately meaningless diagrams, or by sending out lengthy documents pontificating about how things ought to look from 10,000 meters above the ground.
I believe in the superiority of practice. There is no substitute for spending a significant part of the work week in the weeds, on systems which are actually in use by other people.
Fewer, but better-informed opinions trump all the write-only documents in the world.
In systems theory there is the principle of suboptimization, which states that optimizing each subsystem independently will not in general lead to a system optimum, or more strongly, improvement of a particular subsystem may actually worsen the overall system.
In software development a certain way of dividing labor in an organization leads to a similar problem. When there are specialists for cross-cutting concerns (sometimes also known as quality attributes) who have no end-to-end responsibility for a particular product, yet a mandate to coerce those who do into working on their particular topic, it can bring a whole project down. The reason is that the folks who have both the skills and the interest to steer a system end-to-end will become trapped in a permanent defensive position, dragged down by endless bikeshedding over details of varying significance. And over time it is nearly certain that they will run out of fucks to give. When that point is reached, their choice is to either resign themselves or leave. Software-producing organizations ought to be aware of that.
As much as I sing the praises of choosing boring technology, occasionally even the most boring technologies have their interesting moments. And by that I mean interesting as in the old curse: may you live in interesting times...
For something as mundane as the Java/Jakarta EE equivalent of JSON.parse (or simply writing an object literal in JavaScript), there is a design decision that results in a completely preventable performance nightmare: when you use Json.createObjectBuilder() or Json.createArrayBuilder(), each call does a scan of the class path to look up the implementation of the JSONP interface. This is required to be compliant with the Jakarta EE spec. In theory it enables swapping the JSONP implementation on the application server at runtime, but in practice you never do that, and more importantly: the naive usage of the API comes at a huge cost with regard to performance. In the still unresolved issue that was brought up six years ago, somebody reported a factor of 7200 compared to the relatively simple solution: always assign the JSONP provider to a static final field (maybe in a utility class) and always use that instance to call createObjectBuilder().
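A minimal sketch of that workaround (class and field names are mine; the pattern is the one described above):

import jakarta.json.JsonObject;
import jakarta.json.spi.JsonProvider;

public final class JsonSupport {

    // Resolve the JSONP implementation exactly once, instead of scanning the
    // class path on every call to Json.createObjectBuilder().
    private static final JsonProvider PROVIDER = JsonProvider.provider();

    private JsonSupport() {
    }

    public static JsonObject greeting() {
        return PROVIDER.createObjectBuilder()
                .add("greeting", "hello")
                .build();
    }
}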
I'm in the process of refactoring two command-line tools, and I'm wondering if it is somehow possible to quantify when you enter the territory of diminishing returns. Both tools are essentially working well; the refactoring is for the sake of the maintainer.
In one case (at work) that will be someone who is not me. This codebase follows a rather old-school ES5 style and contains a few idioms that a whole generation of programmers has not been exposed to, so the first order of business is to bring it into a more palatable form. After that, the question is: improve the typing, improve the test suite, write some more documentation, or call it a day and schedule the handover? Time is money, and I cannot spend too much on something that basically already does the job, so what has the most utility per hour invested? No conclusion yet.
In the other case the maintainer is and will remain me, because the other tool is my SSG, which I only touch every once in a while. So now the choice to write plain JS, which was made to avoid dealing too much with toolchain setup (albeit using every nicety that was baseline in 2023), is catching up with me when I occasionally want to add or modify a plugin. As Node 23 can execute TypeScript (or at least a huge subset of it) directly, I still get away without tooling and can add types to help take load off my memory.
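For illustration, a tiny made-up module that recent Node versions run as-is because the types are simply stripped (only erasable syntax, so no enums or namespaces; slightly older versions need the --experimental-strip-types flag):

// posts.ts — run with `node posts.ts`
interface Post {
  title: string;
  draft?: boolean;
}

const posts: Post[] = [{ title: 'Hello' }, { title: 'Work in progress', draft: true }];

export function published(items: Post[]): string[] {
  return items.filter((p) => !p.draft).map((p) => p.title);
}

console.log(published(posts));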
Being easy on the maintainer's memory and mental load seems to be the overarching theme here, and the trait that I might want to optimize for in future projects way earlier than I used to.
I expect some of the software I work on to be in operation for decades, so one question I ponder regularly is how to make its design viable for this assumed lifespan. Experience teaches that it is certain some of the third-party dependencies will see breaking changes and even reach their end of life. Full rewrites are rarely feasible, let alone economically viable, so many applications languish for a long time as long as they still generate revenue (a branch of my employer just recently, without any trace of irony, advertised a position for a VB 6 developer). So how not to get trapped by that?
In the implementation context I state the problem as how to factor code in a way that does not depend on any framework or external library. Beyond my day-to-day issues, in a more principled and generalized phrasing, I think the UI layer needs, but currently lacks, a conceptual equivalent to what an object-relational mapper is for the application layer: a convenient level of abstraction that would make it possible to swap out the underlying implementation with relative ease, just that instead of the RDBMS it would be the UI toolkit/component library that gets changed in a (mostly) transparent way. I think a great step in that direction would be to demonstrate the feasibility for one major framework and the top 10 of its component libraries, but the endgame of such an abstraction layer could also include the framework level.
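To make the idea a tad more concrete, a purely hypothetical sketch of such a seam; none of this exists as a library, all names are invented:

// Toolkit-agnostic contract the application code is written against.
export interface SelectField<T> {
  label: string;
  options: ReadonlyArray<{ value: T; text: string }>;
  value: T | null;
  onChange(value: T): void;
}

// One thin adapter per component library maps the contract onto a concrete
// widget; swapping libraries means rewriting adapters, not application code.
export interface UiAdapter {
  renderSelect<T>(field: SelectField<T>, mount: HTMLElement): void;
}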
An alternative and simpler solution would be a self-constrained approach that deliberately limits itself to platform primitives. Although the proposition looks more plausible and viable year by year as the web platform improves (for example, customizable selects just very recently landed in Chrome), it would be orders of magnitude harder to advocate for in a corporate environment. It would likely take a bit more time and skill to satisfy the whims of your run-of-the-mill Figma jockeys and the exorbitant expectations they tend to raise in other non-technical stakeholders.
Lars-Christian writes in Cool apps are interoperable apps that there are options available for those who are willing to search for them and, when necessary, also self-host. I cannot help noticing that this type of interoperability is based on the most primitive possible integration style (as defined in the seminal Enterprise Integration Patterns by Hohpe/Woolf): File Transfer.
Tangentially related (see also an earlier note): end-user runtime composition and scripting approaches are largely untackled, doubly so on the locked-down consumer devices of today. Via Samir Talwar I found Jeanine Adkisson's work on Pipe-Based Programming, which is proposed as one approach to address what they call the desktop-scripting problem: How should unrelated programs written in different languages be integrated—especially in an ad-hoc manner in a desktop environment?. But I've yet to find a convincing implementation for a desktop environment I'd care to use. For web applications I wonder how far one could go in abusing an end-to-end testing framework like Playwright for some macro-level programming spanning multiple unrelated sites. Certainly still pretty far away from an end user, even a power user.
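A rough sketch of what I have in mind (URLs and selectors are placeholders, and the whole thing is of course wildly brittle):

import { chromium } from 'playwright';

// "Macro" spanning two unrelated sites: read a value from one, type it into the other.
const browser = await chromium.launch();
const page = await browser.newPage();

await page.goto('https://example.com/dashboard');
const total = await page.locator('#total').textContent();

await page.goto('https://example.org/report');
await page.locator('#amount').fill(total ?? '');
await page.locator('button[type=submit]').click();

await browser.close();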
Carl Svensson has a point when he states that Computers were more fun when they weren't for everyone. Sadly, the old Unix adage is true: what is designed to stop its users from doing stupid things also stops them from doing clever things.
Barry O'Reilly has a fantastic piece of professional advice for budding software architects: It's meant to be chaos - enjoy it!. The whole interview in the "Dear Architects" newsletter is worth a read.
In git, command-line options already match if they are unambiguous with regard to the prefix. So instead of passing --amend you can write:
git commit --amen
If you've amended a commit, it of course gets a new hash. So if it was already pushed to a remote, you indeed better pray that nobody else has committed and pushed in the meantime, because you now need to push the amended commit with -f. Or, to add some extra silliness, you can create an alias for the push command, like
git config --global alias.punish push
So you can finally:
git punish --force
Arguably, when you can and need to force-push to the trunk/main/master branch, someone being forcefully punished might be quite adequate...
I am a sceptic when it comes to professional programming supported by LLMs. I think the cognitive limits of the human in the loop will turn out to cap the output in a professional context. And I think it will turn out to be wasteful to burn the mental cycles of experienced developers on reviewing LLM slop.
But there might be an economic tangent to it that could turn out dangerous for unapologetic fossils (like yours truly): a computing analogue of Gresham's law, if you will. The latter states that "bad money drives out good", and so it is conceivable that sloppy-pasta apps will be produced and deployed in a volume and frequency where the potential errors and exploit scenarios are just priced into the total cost of system ownership. It certainly doesn't help that the general public already has every reason to expect the software that governs its everyday life to be dysfunctional anyway...
I'm occasionally taking part in hiring interviews, mostly as a technical screener, but sometimes also for other roles adjacent to my line of work. What I find a bit puzzling is that many candidates do not appear to be curious at all about what they would work on in the role they are interviewing for. The observation holds for both engineering and product positions, although in the latter case it confuses me even more.
A reader may every once in a while encounter a book which shifts their perspective radically. For me, Barry O'Reilly's Residues: Time, Change, and Uncertainty in Software Architecture is such a book. A very slim volume, dense with ideas and a pretty well-presented case that some of the pillars on which my profession rests are irrevocably flawed. Not sure what to make of it yet, except that I now need to read at least the papers that preceded the book as well.
Even though the web might still be one of the best shots we've had yet at creating a dynamic medium, it is stupendously complicated to showcase the dynamic behavior of a process or algorithm. You can show a source listing, but stepping through something, suspending execution, inspecting state, let alone modifying code at runtime, is hard and needs tons of bespoke manual code and tooling around it (which in itself might then not even have the property of being modifiable, inspectable, etc.). A Smalltalk- or Lisp-like environment could help with some of these issues, but likely at the cost of ease of distribution and probably broader understandability. More than sixty years have passed since Douglas Engelbart wanted to augment the human intellect with computers, and metaphorically speaking, we are still writing with pencils that have a brick attached to them.
By the way, one of the ways that I characterized the Dynabook years ago was: "An instrument whose music is ideas"
Nearly two years ago Large Language Models were all the rage. To some extent they still are. Anyway, I wanted to see what the hype was about and did a little experiment with ChatGPT. I tried to use it for a coding exercise. The outcome was a mixed bag, I wrote an article about it, and I have basically ignored LLMs in my professional life ever since. What shall I say: my job has not been automated away just yet.
On the other hand, though, I keep hearing about productivity gains from colleagues, and I also see that the capabilities of the various models enable people who are not professional programmers to build useful things that they would have perceived as out of their reach before.
I remain sceptical, foremost because the necessary presence of hallucinations implies that the output cannot - at least in good conscience - be used in any (business-)critical system without thorough critical review. And human review places an upper cap on productivity gains. An empirical case study from 2006 on the effectiveness of lightweight code reviews reports that review rates faster than 500 SLOC per hour, review sessions longer than 90 minutes and more than 400 SLOC under review all make the process significantly less effective and let critical defects slip through.
I think Fred Brooks' famous assertion from his 1986 paper No Silver Bullet still holds true: There is no single development, in either technology or management technique, which by itself promises even one order of magnitude improvement in productivity, in reliability, in simplicity.
But even if it is not an order of magnitude, if a consistent percentage improvement can be had, language models might become a tool like IDEs, autoformatters, linters and debuggers in the long run. As GitHub just today advertised a free tier for their Copilot product, it might be worth repeating the experiment and gathering a few more anecdata...
The unix philosophy of composing programs by piping streams of bytes from the standard output of one process to the standard input of the next one is well and widely understood. When it comes to having something even vaguely similar for UIs, programs are considerably less "cooperative".
For desktop apps (which turned from being "the norm" into a niche) the points of connection are, in the best case, based on a shared, standardized, application-agnostic file format. A good example, which might even serve as a bridge to the world of the command line, is CSV, which you could think of as the lowest common denominator for spreadsheets; on a higher level of complexity you have ODF for office applications. Composition in this case takes the form of working on the same file with a sequence of different applications. The much more common, much more brittle and much less sophisticated mechanism is the venerable clipboard. Copying from one process to a temporarily globally accessible, shared in-memory location and then into an appropriate input is done ad hoc and manually. With a bit of abstraction it also resembles a pipe.
For web applications it seems to be even bleaker: the default browser security policies are all protections against client-side composition. And at that point I also have to note that most of them have been retrofitted into the web platform because of the abuse scenarios which their absence can enable.
But even if you'd run a bunch of applications behind a reverse proxy on machines under your control, so that same-origin, mixed-origin and HTTP headers were but a small matter of programming, integrating and composing web apps would still be non-trivial and require a ton of bespoke development work.
Yahoo Pipes for a few years enabled tech-savvy users to build data mashups from resources and APIs on the open web with relative ease and comparatively little programming. The service was shut down nearly a decade ago and nothing comparable has emerged since. The open web as a source of data has also dried up considerably over time (at least unless you are a company with an excess of capital for funding an army of scraping bots overseen by a staff of engineers).
But let's cast all that aside for a moment: we are also lacking some fundamental ideas on how macro-level composition mechanisms for UIs ought to look. Applications are generally packaged as black boxes which do not expose a model of their capabilities, their inputs and their outputs in a way that would make scripting (or macro recording?) possible, let alone convenient.
The more I think about it, the more it strikes me as odd: at all times we already have a model of what the application is capable of: its source code. The explicitness of the interface is, through the build process, literally lost in translation. So, since we are incapable of "talking" about applications after putting them into a form that suits the operating system (and in that sense a browser is an operating system as well), I think Dan Ingalls was onto something when he wrote: An operating system is a collection of things that don't fit into a language. There shouldn't be one.
Conducting job interviews properly is not easy, and I stand by what I said earlier about leetcode interviews. Anyway, what one usually wants to achieve is to sample enough information to see if proceeding to another round (or, in the final stage, extending an offer) is a risk worth taking. For this I consider a proper conversation more helpful than checkbox-style knowledge questions. One pair of questions that I've recently read: Tell me about your favorite technology. Followed by: Can you tell me in which ways it sucks? I've never used it myself (as of yet), but I think it could be a good way to create a fruitful dialog in a technical interview.
In the foreword to Richard Gabriel's book Patterns of Software, Christopher Alexander expresses a remarkable take on the level of quality he aspired to in his work.
In my life as an architect, I find that the single thing which inhibits young professionals, new students most severely, is their acceptance of standards that are too low. If I ask a student whether her design is as good as Chartres, she often smiles tolerantly at me as if to say, “Of course not, that isn’t what I am trying to do. I could never do that.”
Then, I express my disagreement, and tell her: “That standard must be our standard. If you are going to be a builder, no other standard is worthwhile. [..]”
Alexander then goes on to ask whether architecture is actually the correct metaphor and whether a parallel can really be drawn between the fields, for he asserts that in his later buildings he arrived at the qualities of aliveness he was seeking (pointing to his late opus "The Nature of Order"). He doesn't ask it rhetorically, and doesn't give an answer, but he notes on the field of software:
I get the impression that road seems harder to software people than maybe it did to me, that the quality software engineers might want to strive for is more elusive because the artifacts—the programs, the code—are more abstract, more intellectual, more soulless than the places we live in every day.
I read a statement on LinkedIn the other day, but as the algorithmic feed is utter garbage, I'm unable to find it in its original context again. Anyway, I very much agree with the notion and it bears repeating: leetcode-style interviews are nothing short of hazing.
Bartosz Korczyński has reimplemented huge parts of the classic Visual Basic 6 interface in C# and made it runnable in the browser. It is not a complete reimplementation, and I don't know how much of the actual language is supported. I managed to trigger a message box from the click handler of a command button, but I got an error when I tried to assign the content of an input to the caption of a label. Still, this recreation from scratch is an impressive technical feat.
It would miss the point to try to rewind the clocks to 1999, but I am sure that there are many lessons to be rediscovered. For example, let's take the UI toolkit:
Measured against today's standards, that is an incredibly small component library. The number of controls amounts to a baker's dozen, and four of them are hardly UI components in a stricter sense. Timers are an abstract concept; that they are made into a component is merely an overstretch of the metaphor, a convenient way to declare an object, maybe a bit of "golden hammer" syndrome. The frame is visible, but really just a tool to introduce a hierarchy, so that you can have multiple groups of radio buttons, for example. Vertical and horizontal scrollbars would in today's toolkits be an implementation detail of some container that happens to overflow. What does that leave us with? Picture, Label, Text input, Command, Checkbox, Option button, Listbox, Combobox and Shape. Most of them can be mapped directly to HTML primitives that have existed since the early days of the web (granted, it took a while for SVG to become widely available, but the rest are just standard controls). So why on earth does every company need to have its very own component library? Why do we consider it necessary to have a three-digit number of components at our disposal to create user interfaces for, let's face it, CRUD APIs persisted in relational databases? (Constrained) input, selection (single and multiple, maybe with a few constraints like: at least one or at most n), textual and graphical output, and a way to invoke commands, and we've got the 10% that cover 90% of the use cases.
Certainly, the world did not stop spinning, and a bit of refinement does not hurt. But I am under the impression that something fundamental in the culture of software design and engineering was lost over the course of the last decades: a focus on the essentials, a loss paid for with excessive complexity. Likely an instance of an effect that Gregor Hohpe has described and eponymously dubbed Gregor's Law: excessive complexity is nature's punishment for organizations that are unable to make decisions.
Anyway, if somebody were to work towards a new software substrate, I think this is among the most fundamental lessons to be relearned.
In the country where the Red Queen reigns supreme, you might encounter a dependency that you cannot get rid of wholesale, that you need to update, and that breaks its interface. Happens more often than anybody cares to admit.
I used to treat internal dependencies as, well, dependable. But experience taught me that a dependency which breaks once will usually break again in the not-too-distant future. So, to manage undependable dependencies, you can create an anti-corruption layer and encapsulate all changes to the interface there. As that comes at a cost and usually just conforms to the initial upstream interface anyway, I tend to skip it the first time around, giving the dependency the benefit of the doubt. But when the interface turns out to be unstable, it is "fool me once, shame on you; fool me twice, shame on me": make sure that the fix also ensures that adapting the next time will be a matter of editing a single module under your own control.
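Sketched out, the second-time fix looks roughly like this (the upstream names are invented):

// customer-gateway.ts — the only module that imports the undependable dependency.
import { fetchCustomerV2 } from 'internal-customer-sdk'; // invented upstream API

// The shape the rest of the codebase relies on; it stays put when upstream changes.
export interface Customer {
  id: string;
  name: string;
}

export async function getCustomer(id: string): Promise<Customer> {
  const raw = await fetchCustomerV2(id);
  // When the upstream interface breaks again, this mapping is the only thing to edit.
  return { id: raw.customerId, name: raw.displayName };
}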
Software is not a product, but rather a medium for the storage of knowledge. [..]
[T]he hard part of building systems is not building them, it's knowing what to build ‐ it's in acquiring the necessary knowledge. This leads us to another observation: if software is not a product but a medium for storing knowledge, then software development is not a product-producing activity, it is a knowledge-acquiring activity.
Although I don't entirely agree with its absoluteness, it's an interesting framing of the matter nonetheless.
Jonathan Edwards delivered a keynote address to the Workshop on Live Programming, in which he talked about software substrates. He defined them as a conceptually unified persistent store for both code and data, with a UI for direct manipulation of that state and live programming within it. In such a substrate there would be no division between development time and runtime. Using the system and programming it would merely be different ends of a spectrum; everything happens in the same "world".
He contrasts the idea of a software substrate with what he calls "the stack", as pars pro toto for conventional software engineering, and concedes that the idea of substrates has been thoroughly beaten by the stack since the 1990s. He asserts that substrates are better suited for a class of problems that lie in between with regard to complexity, for which spreadsheets are not powerful enough, but which do not yet warrant building a conventional software system around them, as that would be too expensive. He sees low-code/no-code competing on that ground as well, but says that these are attempts to "template the stack", and therefore cannot establish themselves successfully and durably.
Edwards proposes a research agenda on data-first software, with the goal of generalizing spreadsheets and making them more powerful. The ideas and problems to be solved include: trying out UI metaphors beyond the grid, end-user-friendly version control for the data, built-in support for changes in types and schemas, treating code as meta-data which should be kept small, inspectable and malleable, interoperability with the stack via HTTP APIs, and providing a subset of the features of conventional database management systems.
I think that is a very interesting problem space. How could software look that empowers the user without coercing them into becoming a software engineer? I think if you want to get there you need a great number of these "in-between" problems and folks from both ends of the spectrum with a desire and willingness to collaborate on the making of that substrate, with their egos so much in check that it does not become a patronizing experience for either side. I think the RAD tools of the 1990s like Visual Basic and Delphi were on to something there; they were certainly erring on the side of "the stack", but one could learn a great deal from them. The WYSIWYG approach to form building, with all its warts and limits, would have enabled designers without any programming knowledge to contribute an actual part of the actual software, which is in stark contrast to today's common approach, in which designers use software to paint (very credible) images of what a piece of software might look like, images that are not executable in any meaningful sense and are then thrown over to engineering to be recreated from scratch.
If software substrates were to enable an intellectual cohabitation, if you will, of end users, domain experts, designers and engineers in the same medium, that would be a huge achievement in my eyes.
An interesting idea: Rebecca, Allen and Jordan Wirfs-Brock have authored a paper, Discovering Your Software Umwelt, in which they reflect on how the influences that surrounded them shaped their approaches to software development. They share the questions they used for their self-reflective narratives. I could imagine that writing such a narrative can indeed uncover interesting insights, both on an individual level and for the field as a whole.
On the C2 Wiki I stumbled over a definition of what patterns in software development are about: We're looking for that common knowledge that's so uncommon. — A really great idea, beautifully condensed into a pithy sentence.