Not Understanding Is Good

At one time, everyone had to understand how to hunt, gather, cook, barter, clean, and build. Everyone knew a little about everything, and specialization was mostly a matter of skill.

With higher technology, skill isn’t nearly as critical as knowledge. It’s nearly impossible to track all the new features and developments that go into your refrigerator, furnace, or computer without spending all day reading about them. To make each object easier to operate, it is intentionally designed to hide its inner workings, and the elements of each object become a world of discovery unto themselves.

A standard consumer automobile is an excellent example. The elaborate mechanisms of valve timing, crankshaft rotation, coolant pumping, gear shifting, battery maintenance, and many other systems require at least some mechanical engineering knowledge to fully appreciate, experience in metalwork to build without a factory, and a certain aptitude with space management and engineering jargon to simply replace parts. Yet even children can operate a car: shift into drive or reverse, accelerate, brake, turn the wheel.

Abstracted

Not understanding, therefore, is a type of luxury that places trust in others instead of requiring complete expertise in a subject. The creator put the messy parts away under a hood or case, and you can simply operate the thing without fear of it breaking (usually). The user doesn’t have to spend months learning its inner workings; they simply must understand the “abstraction” to operate it, without much of the “implementation”. Further, good design means that understanding how to operate [Object] A leaves the user generally familiar with [Object] B and [Object] C.

Computers are many, many layers of these abstracted elements:

  1. Starting with components that implement basic logic (historically, vacuum tubes and transistors), engineers designed more advanced forms of logic.
  2. Other engineers mapped that logic onto math and symbolic representations of language and light, expressed as assembly code.
  3. By using more elaborate logic, programmers built the one-for-one assembly code into high-level languages (see the sketch after this list).
  4. Programmers then used high-level languages to build algorithms and assemble structured data.
  5. From there, designers could create user interfaces that let people work with those computers effortlessly and aesthetically.
  6. Further layers of abstraction allow immersive graphical experiences, degrees of artificial intelligence, and expanded peripherals (e.g., autonomous vehicles, VR).
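To make the boundary between steps 3 and 4 concrete, here is a minimal sketch (in Python, purely for illustration) showing the same tiny function at two levels: the high-level source a programmer writes, and the lower-level instruction stream the interpreter actually executes underneath it.

```python
import dis

def average(numbers):
    """High-level abstraction: the intent is readable at a glance."""
    return sum(numbers) / len(numbers)

# One layer down: the near one-for-one instruction stream (CPython
# bytecode here, standing in for assembly) that the abstraction hides.
dis.dis(average)
```

The point isn’t the bytecode itself; it’s that someone calling `average` never has to look at it, just as someone driving a car never has to look at the valve timing.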

This is mostly elegant, since everyone can ignore the elements upstream and downstream of their craft.

This liberty comes from the recursive nature of computer data versus computer code. All code is data, and all data can be manipulated by code. Therefore, the object which “makes” isn’t entirely separate from the object being “made”: it’s merely a matter of perspective.
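As a toy illustration of that recursion (again in Python, and purely a sketch): a program can hold another program as ordinary data, edit it like any other text, and then run the result as code.

```python
# A snippet of code held as plain data: to Python, it's just a string.
source = "def square(x):\n    return x * x\n"

# Code acting on that data, editing it the way it would edit any text.
patched = source.replace("x * x", "x ** 2")

# The same data turned back into running code.
namespace = {}
exec(patched, namespace)
print(namespace["square"](5))  # -> 25
```

The thing doing the making and the thing being made are the same kind of stuff; only the perspective changes.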

Niche

Like every other engineered thing, this specialized arrangement, which permits general ignorance of other domains, has one specific downside.

First, consider how the tech world treats each specialized discipline. Each one contains a particular niche understanding of a subject:

  • [Input] -> [Trade-Specific Knowledge] -> [Output]
  • e.g., Electrical Engineering -> Logic Gates -> Assembly Code

Placed together, there’s a specific flow of inputs and outputs:

  • [Primitives] -> [KnowledgeA] -> [OutA/InB] -> [KnowledgeB] -> [OutB/InC] -> [KnowledgeC] -> [OutC/InD] -> etc.

This specialization creates a general void of knowledge beyond one’s own domain (e.g., for someone working in Discipline C):

  • ? -> [OutB/InC] -> [KnowledgeC] -> [OutC/InD] -> ?

For something as complex as a computer, learning broad concepts from other domains won’t close the knowledge gap:

  • ? -> [OutA/InB] -> ? -> [OutB/InC] -> [KnowledgeC] -> [OutC/InD] -> ?

The knowledge gap means we often trade away critical core skills for convenience. Everyone can focus on what they feel like, but will be an utter technical idiot about everything else. The more advanced the computer system gets, the more learned ignorance it requires.

This isn’t really a “problem” until something breaks.

Risks

Technology makes us much more powerful at accomplishing our purposes, but also much more fragile. When your computer crashes, to be 100% certain you could fix it yourself, you would need broad knowledge of electrical engineering, network protocols, circuit design, and assembly code, an intimate and arcane understanding of that specific OS, the syntax of whatever language the crashed software uses, and at least some familiarity with graphics. Adding value beyond merely fixing it is another domain altogether.

Thus, things can break more easily, we often won’t know how to fix them, and “edge cases” are harder to reconcile without understanding how the technology works. Why chase an issue that only appears 2% of the time when you can simply reinstall the entire system every 3 months, rip code off an open-source project, or just use an API?

Industry specialists who don’t truly understand what they don’t know will typically make decisions that misread the hard limits of the technology. It’s difficult to prove this to them, but they’re usually listening more to their imagination than to factual understanding.

Since this operates in the domain of imagination rather than facts, they’re also more susceptible to the tech industry’s trends. At present, in 2023, it’s the idea that AI can replace humanity; someday it will be genetic engineering creating immortality or quantum computing answering all of life’s problems.

Asking an expert for details isn’t always easy. As a general rule, there’s a soft tradeoff between competence and communication skill, so many of the most highly skilled developers are also somewhat poor at expressing their thoughts. Adding communication-competent intermediaries all but guarantees that the decision-making managers end up with a variation of Dürer’s Rhinoceros: an impressive picture drawn from a secondhand description.

Risk Maturity: Decay

Often, an abstraction simply works indefinitely. You put in the information, the computer spits out an answer, and all is well.

What many technologists don’t see, however, is that their entire in-computer world floats on a shaky foundation, maintained by endless “error-correcting code” that fights the natural entropy of the physical world like everything else does. On a small scale, developers and designers do yeoman’s work hiding the cracks, but at scale the Unknown becomes very difficult to ignore.
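To make “error-correcting code” a little less abstract, here is a minimal, purely illustrative sketch of the simplest version of the idea: a single parity bit that notices when stored data has silently decayed. Real systems use far stronger codes (Hamming codes, Reed-Solomon, CRCs) and actually repair the damage rather than just detecting it.

```python
def parity_bit(bits):
    """Return 1 if an odd number of bits are set, else 0."""
    return sum(bits) % 2

stored = [1, 0, 1, 1, 0, 0, 1, 0]
check = parity_bit(stored)  # remembered alongside the data

# ... time, heat, and cosmic rays flip a bit ...
corrupted = stored.copy()
corrupted[3] ^= 1

print(parity_bit(corrupted) == check)  # False: the decay was caught
```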

The entire world of computers is maintained by two broad classes of people:

  1. A relatively small group of passionate engineers who design and update free or nearly-free stuff for everyone else to use.
  2. A small army of relatively uneducated professionals, hired by gargantuan companies, who maintain everything.

The problem is that Class 1 is often doing the work of a hacker (which is an inherently artistic endeavor) and Class 2 is merely magnifying Class 1’s work without much understanding, scalability issues and all.

In a sense, it creates a type of “generational technical debt”, where the next worker who picks up the project must re-learn and clean up the long-term constraints of their predecessors, without the intimate knowledge of how it works or the tribal knowledge that comes from being around the original creator. Since most deployment and maintenance requires only a quick web search and running a few commands, aptitude and hands-on experience become exceedingly rare.

Risk Maturity: Religion

At the extreme, technologists become religious idealists. Many of them believe that a machine learning algorithm can predict the stock market, that an immersive VR experience can completely replace reality, and that enough social engineering can render formalized laws obsolete.

Within the domain of computers, this abstraction-minded idealism isn’t too hard to understand (even though it’s wrong). Computers operate on concepts semi-parallel to reality: permission management is applied privacy, data is applied truth, computer processes imitate real-life tasks, and networks expand the way real-world ones do. Viewed abstractly, they’re the same.

The primary constraint of computers is that they’re strictly logic-based, with a straightforward purpose set by their programming. Most of STEM hates to admit this (outside of the psychology world), but people are feelings-based and meaning-based. The phenomenology of our relationship with computers, therefore, is one of extreme obedience but zero understanding.

Risk Maturity: Exploitation

One of the more sinister abuses of non-understanding comes through exploitation by large organizations. Their relationship with open source follows a relatively predictable pattern:

  1. A solo developer (or small team of passionate hobbyists) builds a simple, widely useful tool, then publishes the codebase online.
  2. A large organization adopts or adapts that open-source repository for its own purposes. The original developers often welcome this, because they’re now famous and have often acquired wealth from the increased pedigree.
  3. As the organization keeps using the code, it adds extra complexity, which invariably merges into the main branch over time (since it must accommodate all the edge cases).
  4. After enough months and years, the code is vastly more complicated, creating barriers to entry for developers outside the corporation. It often ends up depending on closed-off modules inside the organization that are necessary to make the code function.

Thus, the software becomes effectively closed off due to its complexity (and often, the proprietary code it references). Either the code gets cleaned up by a few passionate developers inside the organization, or it decays into ineffectiveness, waiting for another tech innovator to (mostly) reinvent the wheel.

Risk Maturity: Impostor Syndrome

On an individual level, not understanding technology can be depressing. All the curiosity and childlike wonder of wanting to learn ever more comes up against a deluge of information, most of which churns enough that it’s only a relevant trend for the next 0.5-20 years.

Most tech industry specialists who aren’t conceited or moronic will consistently feel inadequate and ignorant.

The problem is that younger people walking into the industry set their goalposts wrong. Someone can devote an entire career simply to working with Python or COBOL, never know how to build their own computer, and still get paid fantastically well.

Solution

Often, just by asking an expert questions, you learn the tribal knowledge of what they do. But it’s difficult, because it requires both the humility to admit you don’t understand and a childlike sense of curiosity. Smarter people generally have a harder time admitting they’re wrong, and most of the tech industry requires being at least somewhat smart, so the least experienced (and often the youngest) workers tend to have the most openness to learning new things.

This lack of understanding isn’t really fixable, but it can be mitigated. The best use of a technology enthusiast’s time is to understand primitives (along with reality’s primitives); most of the rest is use-based and volatile.