The Effects of Not Understanding

At one time, everyone had to understand how to hunt, gather, cook, barter, clean, and build. Everyone knew a little bit about something, and their specialization was mostly a matter of skill.

And, for most of written history, skill has defined one’s capacity for mastery. A crane operator’s speed, for example, is determined by how well they can work the controls.

But, as technology has advanced into the Information Age, skill isn’t nearly as critical as knowledge. Knowing how to operate a thing has become more important than the raw dexterity of operating it.

At the same time, specialization becomes more granular. Eventually, knowledge of a thing gives way to knowledge of where that thing can be researched. It’s nearly impossible to track all the new features and developments behind your refrigerator, furnace, or computer without spending all day reading about them, so the object is intentionally designed to hide its inner workings and make itself easier to operate.

Thus, while most modern objects’ internals become a world of discovery unto themselves, knowing about those internals is less relevant than simple skill in operation (e.g., the microwave’s wavelength versus its buttons).

As a case study, look at the standard consumer automobile:

  • The elaborate mechanisms of valve timing, crankshaft rotation, coolant pumping, gear shifting, battery maintenance, and many other systems require at least some mechanical engineering knowledge to fully appreciate.
  • Further, building one without a factory requires experience in metalwork.
  • Simply replacing parts demands a certain aptitude for spatial reasoning and engineering jargon.
  • But it’s easy enough that even children could operate one: shift the gear to drive/reverse, accelerate, brake, turn the wheel.

Abstracted

Not understanding, therefore, is a type of luxury built on trust placed in designers rather than on personal expertise. The creator has tucked the messy parts away under a hood or case, and you’re free to operate the thing with (almost) no fear of breaking it. The user doesn’t have to spend months learning its inner workings; they must simply understand the “abstraction” to operate its more limited scope of “implementation”. Further, good design allows familiarity with [Object] A to translate into general familiarity with [Object] B and [Object] C.
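The hood-and-case idea can be sketched in code. This is a minimal, illustrative sketch (the class and method names are invented for this example): the operator learns one interface, and each implementation hides its own messy parts behind it.

```python
# Abstraction as interface: the user learns the "abstraction" once,
# and every "implementation" keeps its internals under the hood.
from abc import ABC, abstractmethod

class Vehicle(ABC):
    """The abstraction: the only part the operator must understand."""
    @abstractmethod
    def drive(self):
        ...

class Car(Vehicle):
    def drive(self):
        # Implementation details the driver never sees.
        return "engine: valves, crankshaft, coolant..."

class GolfCart(Vehicle):
    def drive(self):
        return "battery: cells, controller, motor..."

# Familiarity with Object A transfers directly to Object B:
# the operator calls drive() the same way on both.
for vehicle in (Car(), GolfCart()):
    vehicle.drive()
```

The design choice here mirrors the essay’s point: the operator’s knowledge lives entirely at the interface, so swapping implementations costs them nothing.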

Computers are many, many layers of these abstracted elements:

  1. Starting with components that implement basic logic (historically, vacuum tubes and transistors), engineers designed more advanced logic.
  2. Other engineers converted that logic into math and symbolic representations of language and light, expressed as assembly code.
  3. By layering more elaborate logic, programmers built the 1-for-1 assembly code up into high-level languages.
  4. Programmers then used those high-level languages to build algorithms and assemble structured data.
  5. From there, designers created user interfaces that let users work with those computers effortlessly and aesthetically.
  6. Further layers of abstraction allow immersive graphical experiences, degrees of artificial “intelligence”, and expanded peripherals (e.g., autonomous vehicles, VR).
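The jump between the lowest layers can be made concrete with a toy sketch (illustrative only, not real hardware): the same addition expressed at the “logic gate” level and at the high-level-language level.

```python
# Toy illustration of abstraction layers: addition built from
# gate-level operations, versus the high-level `+` everyone uses.

def full_adder(a, b, carry_in):
    """One-bit full adder built only from gate-level operations."""
    s = a ^ b ^ carry_in                         # sum bit (XOR gates)
    carry_out = (a & b) | (carry_in & (a ^ b))   # carry logic (AND/OR gates)
    return s, carry_out

def ripple_add(x, y, width=8):
    """Add two integers by chaining one-bit adders, like hardware does."""
    result, carry = 0, 0
    for i in range(width):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result

# Every layer above this one simply writes `x + y` and never looks down.
assert ripple_add(19, 23) == 19 + 23
```

The point of the sketch: once the adder works, nobody upstream ever thinks about carry bits again, which is exactly the layering the list above describes.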

This is mostly elegant, since everyone can ignore the upstream and downstream elements of their craft.

This liberty comes from the recursive nature of computer data versus computer code. All code is data, and all data can be manipulated by code. Therefore, the object that “makes” isn’t entirely separate from the object being “made”: the difference is merely a matter of perspective.
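That recursion can be shown in a few lines of Python (the function names here are invented for illustration): a program held as plain data can be executed as code, and code can in turn rewrite that data into a new program.

```python
# "All code is data": a program held as an ordinary string.
source = "def square(n):\n    return n * n\n"

namespace = {}
exec(compile(source, "<generated>", "exec"), namespace)  # data becomes code
assert namespace["square"](7) == 49

# "All data can be manipulated by code": rewrite the program text itself,
# then execute the rewritten version.
cubed_source = source.replace("square", "cube").replace("n * n", "n * n * n")
exec(compile(cubed_source, "<generated>", "exec"), namespace)
assert namespace["cube"](3) == 27
```

The “maker” (the rewriting code) and the “made” (the rewritten program) are the same kind of object; only perspective separates them.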

Niche

Like any other specialized arrangement, this one breeds general ignorance of other domains. While ignorance can be bad in general, it yields one specific downside worth noting.

Consider how the tech world treats each specialized discipline. Each contains its own niche understanding of a subject:

  • [Input] ⇒ [Trade-Specific Knowledge] ⇒ [Output]
  • e.g., Electrical Engineering ⇒ Logic Gates ⇒ Assembly Code

Placed together, there’s a specific flow of inputs and outputs:

  • [Primitives] ⇒ [KnowledgeA] ⇒ [OutA/InB] ⇒ [KnowledgeB] ⇒ [OutB/InC] ⇒ [KnowledgeC] ⇒ [OutC/InD] ⇒ etc.

This specialization creates a general void of knowledge beyond that domain (e.g., regarding Discipline C):

  • ? ⇒ [OutB/InC] ⇒ [KnowledgeC] ⇒ [OutC/InD] ⇒ ?

For systems as complex as computers, learning broad concepts from other domains won’t fix the gap in knowledge:

  • ? ⇒ [OutA/InB] ⇒ ? ⇒ [OutB/InC] ⇒ [KnowledgeC] ⇒ [OutC/InD] ⇒ ?
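The flow above can be sketched as function composition (the stage names are illustrative, not a real toolchain): each specialist is a function that understands only its own input and output, and the whole system exists only as the composition of every stage.

```python
# Each specialist only knows its own input/output contract.
def knowledge_a(primitives):
    """e.g., electrical engineering: primitives -> logic gates."""
    return f"logic-gates({primitives})"

def knowledge_b(out_a):
    """e.g., assembly programming: logic gates -> assembly."""
    return f"assembly({out_a})"

def knowledge_c(out_b):
    """e.g., compiler writing: assembly -> high-level language."""
    return f"high-level-language({out_b})"

# The pipeline works only as the full composition; remove any one
# stage (the "?" in the diagrams above) and the chain breaks.
output = knowledge_c(knowledge_b(knowledge_a("silicon")))
```

Nothing in `knowledge_c` can substitute for a missing `knowledge_b`; broad familiarity with stage A doesn’t restore the chain, which is the gap the diagrams illustrate.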

The knowledge gap means we’ll often lose critical core skills in exchange for convenience. Everyone can focus on what they feel like, but risks being an utter technical idiot about everything else. The more advanced the computer system gets, the more learned ignorance it requires.

This really isn’t officially a “problem”, until something breaks.

Risks

Technology makes us much more powerful at accomplishing our purposes, but also much more fragile. To be 100% certain you could fix your computer when it crashes, you would need a broad understanding of every abstraction layer described above.

Thus, things break more easily, we frequently won’t know how to fix them, and “edge cases” are harder to reconcile without understanding how the technology works. Why wrestle with a bug that causes only 2% of the issues when you can simply reboot every week, reinstall the entire system every 3 months, rip code off an open-source project, or just use an API?

Industry specialists who don’t truly understand what they don’t know will typically make decisions that misjudge the hard limits of the technology. It’s difficult to prove this to them, because they’re typically listening to their imagination of what they know more than to factual understanding.

Since our domains of understanding lean more on imagination than on facts, most tech enthusiasts are also victims of the industry’s trends. The trend of the early 2020s was the idea that AI can replace humanity; someday it’ll be genetic engineering creating immortality or quantum computing answering all of life’s problems. It’s a fun thought, but actually “doing” it isn’t always part of the discussion.

Asking an expert for details isn’t always easy. As a general rule, there’s a soft tradeoff between competence and communication skill, so many of the highest-skilled people are also somewhat poor at expressing the fullness of their thoughts. And adding communication-competent intermediaries almost guarantees that decision-making managers receive a variation of Dürer’s Rhinoceros: an impressive rendering drawn from a secondhand description rather than from the real thing.

Risk Vector 1: Decay

When programmed correctly, an abstraction can sometimes work indefinitely. You put in the information, the computer spits out an answer, and all is well.

What many technologists don’t see, however, is that their entire in-computer world floats on a shaky foundation, maintained by endless “error-correcting code” that fights the natural entropic chaos of the physical world. On a small scale, developers and designers do yeoman’s work hiding the cracks, but at scale the Unknown becomes very difficult to ignore.

This is a well-veiled risk, hidden by layers of complexity. A restarted process on one level becomes an error message on the next, which becomes a slowdown to the end user.
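The “error-correcting code” mentioned above can be illustrated with the simplest possible scheme, a single even-parity bit. This is a toy sketch; real systems use far stronger codes (Hamming, Reed–Solomon), but the shape is the same: a lower layer silently checks for physical-world corruption so upper layers never see it.

```python
# Minimal even-parity error detection, the simplest ancestor of the
# error-correcting machinery running constantly beneath every layer.

def with_parity(bits):
    """Append a parity bit so the total count of 1s is even."""
    return bits + [sum(bits) % 2]

def is_intact(word):
    """A receiver can detect any single flipped bit."""
    return sum(word) % 2 == 0

word = with_parity([1, 0, 1, 1])
assert is_intact(word)

word[2] ^= 1            # simulate one bit of physical-world entropy
assert not is_intact(word)
```

When the check fails, a lower layer retries or restarts; the user, several abstractions up, experiences nothing but a slight slowdown, exactly the veiling described above.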

In all this delegation, the entire world of computers is maintained by several classes of people:

  1. A relatively small group of passionate engineers who design and update free or nearly free stuff for everyone else to use.
  2. A small army of relatively uneducated professionals hired by gargantuan companies who maintain everything.

The problem is that Class 1 is frequently doing the work of a hacker (which is an inherently artistic endeavor) and Class 2 is merely magnifying Class 1’s work without much understanding, scalability issues and all.

This arrangement creates a type of “generational technical debt”. Each new worker who picks up a project must re-learn and clean up the long-term constraints of their predecessors, but without the tribal knowledge that comes from being around the original creator. Most deployment and maintenance requires only some web searches and running a few commands, so aptitude and hands-on experience become exceedingly rare.

Risk Vector 2: Religion

At its farthest, technologists who don’t understand the nuts and bolts of their tools become religious idealists. Many of them believe a machine learning algorithm can predict the stock market, an immersive VR experience can completely replace reality, and that enough social engineering can render formalized laws obsolete.

Within the domain of computers, it isn’t too difficult to understand how someone could fall into this abstraction-minded fallacy. Computers operate on semi-parallel concepts to reality: permission management is applied privacy, data is applied truth, computer processes imitate real-life habits, and networks expand the same way social networks do. Abstractedly, they’re the same.

The primary constraint of computers, though, is that they’re strictly logic-based, with a straightforward purpose dictated by programming. Most STEM professionals hate to admit this (outside the psychology world), but people are by nature feelings-based and meaning-based, with purpose as the effect of that. The phenomenology of a computer is effectively zero understanding combined with extreme obedience (i.e., a perfect technical idiot).

Risk Vector 3: Exploitation

One of the more sinister abuses of non-understanding comes through exploitation by large organizations. Its relationship with open-source follows a relatively predictable pattern:

  1. A solo developer (or small team of passionate hobbyists) builds a simple, widely useful tool, then publishes the code base online.
  2. A large organization adopts/adapts that open-source repository for its purposes. This is often well-received by the original developers, since they’re now famous and will often acquire wealth from that increased pedigree.
  3. As the organization continues using the code, it adds extra complexity, which invariably merges into the main branch over time (since it accommodates all the edge cases).
  4. After enough months and years, the code is vastly complicated, creating barriers to entry for developers outside the corporation. This will frequently include closed-off modules inside the organization that are necessary to make the code function.

Thus, the software is effectively closed off by its complexity (and frequently by the proprietary code it references). Either the code gets cleaned up by a few passionate developers inside the organization, or it decays into ineffectiveness, waiting for another tech innovator to (mostly) reinvent the wheel.

Risk Vector 4: Impostor Syndrome

On an individual level, not understanding technology can be depressing. All the curiosity and childlike wonder of wanting to learn ever more runs up against a deluge of information, most of which shifts often enough that it’s only relevant for the next 0.5-20 years.

Most tech industry specialists who aren’t conceited or morons will consistently feel inadequate and ignorant.

The problem is that younger people walking into the industry set their goalposts wrong. Someone can devote an entire career simply to working with Python or COBOL, never know how to build their own computer, and still get paid fantastically well.

The Solution to Not Understanding

Often, just by asking questions of an expert in a different domain, you learn the tribal knowledge of what they do. But it’s difficult, because it requires the humility to admit you don’t understand, a childlike sense of curiosity, and the availability of that expert.

This gap is much easier to overcome here than in most other specializations, though. Smarter people generally have a harder time admitting when they’re wrong, and most of the tech industry requires being at least somewhat smart, so the least experienced (and often youngest) tech workers tend to have an above-average openness to learning new things.

And the best use of a technology enthusiast’s time is to understand primitives (along with reality’s primitives); most of the rest is purpose-based and volatile.

Additional Reading

Things that used to be hard and are now easy