At one time, everyone had to understand how to hunt, gather, cook, barter, clean, and build. Everyone knew a little bit about everything, and specialization was mostly a matter of skill.
With higher technology, skill isn’t nearly as critical as knowledge. It’s nearly impossible to track all the new features and developments that go into your refrigerator, furnace, or computer without spending all day reading about them. To make each object easier to operate, it’s intentionally designed to hide its inner workings, and its elements become a world of discovery unto themselves.
A standard consumer automobile is an excellent example. The elaborate mechanisms of valve timing, crankshaft rotation, coolant pumping, gear shifting, battery maintenance, and many other systems require at least some mechanical engineering knowledge to fully appreciate. Further, building one without a factory requires experience in metalwork, and even replacing parts demands a certain aptitude for spatial reasoning and engineering jargon. Yet the result is simple enough that even children can operate it: shift into drive or reverse, accelerate, brake, turn the wheel.
Abstracted
Not understanding, therefore, is a type of luxury: it places trust in others instead of requiring complete expertise in a subject. The creator tucks the messy parts away under a hood or case, and you can simply operate the thing without fear of it breaking (usually). The user doesn’t have to spend months learning its inner workings; they must only understand the “abstraction”, while the “implementation” stays out of scope. Further, good design will translate operating [Object] A into general familiarity with [Object] B and [Object] C.
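To make the hood/case metaphor concrete, here’s a minimal sketch in Python (the `Car` class and its method names are hypothetical, invented purely for illustration): the driver touches only the public interface, while the implementation stays hidden.

```python
class Car:
    """The abstraction: everything a driver needs to know."""

    def drive(self) -> None:
        # The driver just calls drive(); the messy parts stay under the hood.
        self._inject_fuel()
        self._time_valves()
        self._engage_transmission()

    # The implementation: hidden details the driver never sees.
    def _inject_fuel(self) -> None: ...
    def _time_valves(self) -> None: ...
    def _engage_transmission(self) -> None: ...

car = Car()
car.drive()  # Operating the thing, no mechanical engineering required.
```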
Computers are many, many layers of these abstracted elements (sketched in code after this list):
- Starting with components that implement basic logic (historically, vacuum tubes and transistors), engineers designed more advanced forms of logic.
- Other engineers converted that logic into math and into symbolic representations of language and light, expressed as assembly code.
- Using more elaborate logic, programmers built that 1-to-1 assembly code into high-level languages.
- Programmers then used high-level languages to build algorithms and assemble structured data.
- From there, designers created user interfaces that let users work with those computers effortlessly and aesthetically.
- Further layers of abstraction allow immersive graphical experiences, degrees of artificial “intelligence”, and expanded peripherals (e.g., autonomous vehicles, VR).
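A small illustration of these layers in Python: the one-line function below lives at the high-level-language layer, and the standard-library `dis` module lets us peek one layer down at its bytecode. Below that sit the interpreter’s C code, machine instructions, and ultimately the logic gates themselves.

```python
import dis

def add(a: int, b: int) -> int:
    # One line at the high-level-language layer...
    return a + b

# ...decomposes into stack-machine bytecode one layer of abstraction down.
dis.dis(add)
```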
This is mostly elegant, since everyone can ignore the upstream and downstream elements of their craft:
- Someone can work on web design, but doesn’t have to know much about memory management or hardware.
- A hardware designer doesn’t need to understand operating systems.
- A graphic designer doesn’t need to understand the mathematical calculations that frame the human experience they aim to achieve.
This liberty comes from the recursive nature of computer code and computer data. All code is data, and all data can be manipulated by code. Therefore, the object which “makes” isn’t entirely separate from the object being “made”: it’s merely a matter of perspective.
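A minimal sketch of that recursion in Python: a function’s compiled body is an ordinary bytes object (code as data), and an ordinary string can be compiled and executed (data as code).

```python
def greet() -> str:
    return "hello"

# Code is data: the function's compiled body is just a bytes object.
print(greet.__code__.co_code)

# Data is code: an ordinary string, compiled and then executed.
source = "print(greet() + ', world')"
exec(compile(source, "<generated>", "exec"))
```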
Niche
Like all other engineered things, this specialized arrangement creates general ignorance of other domains. While ignorance can be bad in general, it yields one specific downside worth noting.
First, consider how the tech world treats each specialized discipline. Each contains a particular, niche understanding of its subject:
- [Input] ⇒ [Trade-Specific Knowledge] ⇒ [Output]
- e.g., Electrical Engineering ⇒ Logic Gates ⇒ Assembly Code
Placed together, there’s a specific flow of inputs and outputs:
- [Primitives] ⇒ [KnowledgeA] ⇒ [OutA/InB] ⇒ [KnowledgeB] ⇒ [OutB/InC] ⇒ [KnowledgeC] ⇒ [OutC/InD] ⇒ etc.
This specialization creates a general void of knowledge beyond that domain (e.g., regarding Discipline C):
- ? ⇒ [OutB/InC] ⇒ [KnowledgeC] ⇒ [OutC/InD] ⇒ ?
For systems as complex as computers, learning broad concepts from other domains won’t close the gap in knowledge (see the sketch after this diagram):
- ? ⇒ [OutA/InB] ⇒ ? ⇒ [OutB/InC] ⇒ [KnowledgeC] ⇒ [OutC/InD] ⇒ ?
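The same flow can be sketched as function composition; every name below is a hypothetical placeholder for the diagram above. Each discipline is a function whose output type is the next discipline’s input, and a specialist in C can run their own step while treating everything upstream as an opaque box.

```python
# Hypothetical placeholders: each discipline is a function whose
# output feeds the next discipline's input.
def knowledge_a(primitives: str) -> str:   # [Primitives] => [OutA/InB]
    return f"A({primitives})"

def knowledge_b(out_a: str) -> str:        # [OutA/InB] => [OutB/InC]
    return f"B({out_a})"

def knowledge_c(out_b: str) -> str:        # [OutB/InC] => [OutC/InD]
    return f"C({out_b})"

# A specialist in C only ever touches knowledge_c; the upstream
# functions are opaque boxes whose internals they never see.
out_c = knowledge_c(knowledge_b(knowledge_a("primitives")))
print(out_c)  # C(B(A(primitives)))
```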
The knowledge gap means we often trade critical core skills for convenience. Everyone can focus on whatever they like, but will be an utter technical idiot at everything else. The more advanced the computer system, the more learned ignorance it breeds.
This really isn’t a “problem”, officially, until something breaks.
Risks
Technology makes us much more powerful at accomplishing our purposes, but also much more fragile. To be 100% certain you could fix your computer when it crashes, you would need a broad understanding of:
- Electrical engineering
- Network protocols
- Circuit design
- Assembly code
- An intimate, arcane understanding of that specific OS
- The syntax of whatever language the crashed software was written in
- At least some familiarity with graphics
- Plus, adding any value beyond fixing it will likely require still further domains
Thus, things can break more easily, we frequently won’t know how to fix them, and “edge cases” are harder to reconcile without understanding how the technology works. Why wrestle with a bug that causes only 2% of your issues when you can simply reboot every week, reinstall the entire system every 3 months, rip code off an open-source project, or just use an API?
Industry specialists who don’t truly understand what they don’t know will typically make decisions that misjudge the hard limits of the technology. It’s difficult to prove this to them, because they’re typically listening to their imagination more than to factual understanding.
Since our domains of understanding lean more on imagination than on facts, most tech enthusiasts are also victims of the industry’s trends. In 2023 the trend was the idea that AI can replace humanity; someday it’ll be genetic engineering creating immortality or quantum computing answering all of life’s problems. It’s a fun thought, but “doing” it isn’t always part of the discussion.
Asking an expert for details isn’t always easy. As a general rule, there’s a soft tradeoff between competence and communication skill, so many of the highest-skilled people are also somewhat poor at expressing the fullness of their thoughts. And adding communication-competent intermediaries almost guarantees that the decision-making managers receive a variation of Dürer’s Rhinoceros: a confident picture drawn from a secondhand description.
Risk Maturity: Decay
Often, an abstraction simply works indefinitely. You put in the information, the computer spits out an answer, and all is well.
What many technologists don’t see, however, is that their entire in-computer world floats on a shaky foundation, maintained by endless “error-correcting code” that fights the natural entropic chaos of the physical world. At small scale, developers and designers do yeoman’s work hiding the cracks, but at scale the Unknown becomes very difficult to ignore.
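To make “error-correcting code” concrete, here is a minimal sketch of the simplest such scheme, a triple-repetition code (real systems use far denser codes like Hamming or Reed–Solomon, but the principle is the same): redundancy lets the receiver out-vote a bit that the physical world flipped.

```python
def encode(bits: list[int]) -> list[int]:
    # Send each bit three times; redundancy is what fights entropy.
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(received: list[int]) -> list[int]:
    # Majority vote over each triple silently corrects any single flip.
    return [1 if sum(received[i:i + 3]) >= 2 else 0
            for i in range(0, len(received), 3)]

message = [1, 0, 1, 1]
sent = encode(message)
sent[4] ^= 1                    # the physical world flips a bit in transit
assert decode(sent) == message  # ...and the abstraction hides the damage
```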
In all this delegation, the entire world of computers is maintained by two classes of people:
- A relatively small group of passionate engineers who design and update free or nearly free stuff for everyone else to use.
- A small army of relatively uneducated professionals, hired by gargantuan companies, who maintain everything.
The problem is that Class 1 is frequently doing the work of a hacker (which is an inherently artistic endeavor) and Class 2 is merely magnifying Class 1’s work without much understanding, scalability issues and all.
This arrangement creates a type of “generational technical debt”. Each new worker who picks up a project must re-learn and clean up the long-term constraints of their predecessors, but without the tribal knowledge that comes from being around the original creator. Most deployment and maintenance requires only a few web searches and commands, so genuine aptitude and hands-on experience become exceedingly rare.
Risk Maturity: Religion
At its furthest extent, technologists become religious idealists. Many of them believe that a machine learning algorithm can predict the stock market, that an immersive VR experience can completely replace reality, and that enough social engineering can render formalized laws obsolete.
Within the domain of computers, this abstraction-minded idealism isn’t too difficult to understand (even though it’s wrong). Computers operate on semi-parallel concepts to reality: permission management is applied privacy, data is applied truth, computer processes imitate real-life tasks, networks expand the same way social networks do. Viewed abstractly, they’re the same.
The primary constraint of computers is that they’re strictly logic-based, with a straightforward purpose dictated by their programming. Most STEM professionals hate to admit this (outside the psychology world), but people are feelings-based and meaning-based. The phenomenology of a computer in this arrangement is effectively zero understanding combined with extreme obedience (i.e., a perfect technical idiot).
Risk Maturity: Exploitation
One of the more sinister abuses of non-understanding comes through exploitation by large organizations. Their relationship with open source follows a relatively predictable pattern:
- A solo developer (or small team of passionate hobbyists) builds a simple, widely useful tool, then publishes their code base online.
- A large organization adopts/adapts that open-source repository for its purposes. The original developers often welcome this, since they’re now famous and will frequently acquire wealth from the increased pedigree.
- As the organization continues using the code, it adds extra complexity, which invariably merges into the main branch over time (since it must accommodate all the edge cases).
- After enough months and years, the code is vastly more complicated, creating barriers to entry for developers outside the corporation. Frequently, it now depends on closed-off modules within the organization just to function.
Thus, the software is effectively closed off by its complexity (and frequently, by the proprietary code it references). Either a few passionate developers inside the organization clean it up, or it decays into ineffectiveness, waiting for another tech innovator to (mostly) reinvent the wheel.
Risk Maturity: Impostor Syndrome
On an individual level, not understanding technology can be depressing. All the curiosity and childlike wonder that drives the desire to keep learning runs up against a deluge of information. Most of that information shifts quickly enough to stay relevant for only the next 0.5-20 years.
Most tech industry specialists who aren’t conceited or morons will consistently feel inadequate and ignorant.
The problem is that younger people entering the industry set their goalposts wrong. Someone can devote an entire career simply to working with Python or COBOL, never know how to build their own computer, and get paid fantastically well.
Solution
Often, just by asking questions of an expert, you learn the tribal knowledge of what they do. But it’s difficult, because it requires both the humility to admit you don’t understand and a childlike sense of curiosity. Smarter people generally have a harder time admitting when they’re wrong, and most of the tech industry requires being at least somewhat smart, so the least experienced (and often the youngest) workers tend to have the most openness to learning new things.
This lack of understanding isn’t really fixable, but it can be mitigated. The best use of a technology enthusiast’s time is to understand primitives (along with reality’s primitives); most of the rest is use-based and volatile.