Tech Trends Suck

Since Silicon Valley’s founding, novelty has reigned over computer technology, mostly because of how exponentially scalable new technology is. Between 1985 and 2000, for example, a standard personal computer became thousands of times faster and gained support for many new functions, like a nicer GUI and better games.

The tech industry is filled with young people. While there are older outliers, the industry leans toward people in their 20s, and there are a few reasons for this.

Firstly, by their very nature, young people have a few demographic features:

  • They’re often more concerned with money than with a career, simply as a product of their stage in life, which means they’re expendable.
  • They have less life experience, so they’re easier to exploit, will typically have a more positive attitude about most things, and are more likely to be loyal to their company.
  • A diminished understanding of the world frequently makes them think they’re more right than they are, including about the career attitudes and beliefs stated above.

Second, new technologies are constantly being developed, used, and outmoded. Trends that would normally take 10–20 years in the rest of the world shift within the tech industry in about 2–5 years, for several reasons:

  • Because of the abstraction-based nature of computer technology, most software is fire-and-forget once it runs well, so an established technology becomes the starting point for other things. It literally moves at the speed of obsessively precise thought.
  • There’s lots of money in the media/information complex.
  • All the above-stated young people simply like novelty for novelty’s sake, and their inexperience means they’re less afraid of breaking a known-good system.

So, because of this, the industry runs (and will always run) on cheap, naive, ambitious labor, at least until the money dries up.

Neophiliac Culture

Anything dominated by young people that’s validated by prior success is almost guaranteed to push heavily against proven things that work. The founding of the USA is a similar story, which proved that a constitutional republic can work (at least for a few hundred years).

The history of Silicon Valley is a standard story of human nature pushing against limits. William Shockley created the Shockley Semiconductor Laboratory in 1955. Only two years later, the eight leading scientists there (dubbed the “traitorous eight”) decided to create Fairchild Semiconductor (a division of Fairchild Camera and Instrument) in Mountain View, California. Their efforts, combined with those of Bell Labs and a few other large players in the industry, created the Silicon Valley culture that’s still at least somewhat present in most tech culture across the world.

Shortly after computers’ development, the philosophy around them adopted a distinctly postmodern angle, with the idea of human-computer synthesis creating an altogether new way of approaching life. The aspirations were partially correct, but somewhat grandiose. The ideas can get lofty, and tend to borrow heavily from science fiction.

In the face of most criticism, their attitude is mostly “we haven’t gotten there, but we will soon!”, and most of them are die-hard advocates of the Idiot Ancestor Theory (i.e., the people who came before us were morons).

This new-is-better philosophy hasn’t really changed, either. In 2013, Mark Zuckerberg was famously quoted as saying the motto of Facebook was “move fast and break things”. While he walked it back a year later, the attitude still permeates the culture.

One of the dominant reasons for this idealism comes from the natural design of the computer versus the natural design of nature:

  • Nature is inherently messy, with obscure redundancies everywhere, hidden features, additional components, and endless permutations. We don’t understand all of it, and there’s no guarantee we ever will.
  • Computers, by design, are built upon logic, which is always perfectly ordered, often well-organized, clean, conspicuous when problems arise, and resistant to arcane changes. Someone is an expert in every part of it, so the work is the art of finding them or their documentation.
  • Most computer-based work involves fighting back against the randomness of nature with things like error-correcting code, and only specific implementations of the chaos of nature have any advantageous use (e.g., random number generators, AI). By contrast, nature is essentially chaos, with splashes of order.
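The error-correcting codes mentioned above can be illustrated with the simplest possible scheme (a sketch, not any production algorithm): a triple-repetition code. Each bit is transmitted three times, and a majority vote on the receiving end corrects any single flipped bit per triple — a tiny, deliberate fight against nature’s randomness.

```python
# Minimal sketch of error correction: the triple-repetition code.
# Each bit is sent three times; majority voting corrects any single
# flipped bit within a group of three.

def encode(bits):
    """Repeat each bit three times."""
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    """Majority-vote each group of three received bits."""
    return [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded), 3)]

message = [1, 0, 1, 1]
sent = encode(message)

sent[4] ^= 1  # simulate nature's randomness: flip one bit in transit

assert decode(sent) == message  # the flipped bit is corrected
```

Real systems use far denser codes (Hamming, Reed-Solomon), but the principle is the same: spend extra, perfectly ordered bits to absorb disorder.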

Downsides

To note, I’m not saying technology itself is bad. Among other things, technology empowers us to perform things that were considered miracles a few short decades ago.

The downside of all the above-stated neophilia, though, is that proven practices and sturdy, reliable systems are often overlooked:

  • COBOL is a very fast programming language, though it has other downsides that make it unwieldy for most uses. As of 2020, around 80% of financial transactions ran on systems written in it, even though the language was created in 1959.
  • RSS is a reliable, decentralized, free protocol for sending intermittently updated feeds of public information. It’s also not on the radar of many tech people because it was released in 1999.
  • There have been efforts to transform all interfaces into GUIs, and even into VR, but nothing will ever fully replace the simplicity and straightforward nature of plain text and typed commands.
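To see how simple the overlooked RSS protocol really is: a feed is nothing but plain XML, readable with a standard library and no framework at all. A minimal sketch (the feed content below is made up for illustration):

```python
# RSS's simplicity in practice: a feed is plain XML, parseable with
# nothing beyond the Python standard library. (Hypothetical feed data.)
import xml.etree.ElementTree as ET

feed = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title><link>https://example.com/1</link></item>
    <item><title>Second post</title><link>https://example.com/2</link></item>
  </channel>
</rss>"""

root = ET.fromstring(feed)

# Pull every item's title out of the feed.
titles = [item.findtext("title") for item in root.iter("item")]
print(titles)  # ['First post', 'Second post']
```

In a real reader, the same few lines would run against the bytes fetched from a feed URL; the decentralization comes from the fact that any server can host that XML file.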

Since the nature of new things means they’re untested, highly influential speakers can grab the industry’s attention, even when they have no credibility to back up what they’re saying or when they promise the impossible. Evidence would take too long to acquire (and the audience would miss out on the trend), so people can be drawn in either by hope (in an investment) or by fear (in a risk).

Some of these influencers are literally on drugs. From Atari’s famous games onward through the executives who run current tech companies, the inspiration for the Next Big Thing often comes through copious amounts of substances that disconnect us from practical thinking. Most people don’t know about it because there’s very little incentive for anyone to talk about it, for various reasons.

This “newer is better” attitude misses a key detail as well: non-tech people are more naturally resistant to change, so trends move more slowly for everyone else:

  • People were quick to adopt the keyboard and mouse. A touchscreen is a logical step forward, but people will keep using the keyboard and mouse a long time after I write this in the 2020s. When a VR revolution arrives, a flat display screen won’t disappear for a long time after that.
  • People still use printers, and most tech people imagined they’d be gone by now. The same goes for tape drives, vinyl records, and camera film.
  • “Smart” cars are unwieldy and awkward to use. One of the most egregious UX failures is the removal of control knobs in favor of buttons on a touchscreen. The design gets worse with a proprietary, stripped-down, garbage-quality OS. To add insult to injury, there’s often no way to downgrade your vehicle to include knobs again.

This neophilia isn’t new to computers, either. Since the 1920s, people have wrongly forecast humanity would be rendered obsolete by the rise of horseless carriages, mass production, touch-button panels, and robots.

It’s perfectly reasonable to assume that AI or self-driving cars will yield a similar result to the grand trends of yesteryear, and that the future of DNA programming will be much of the same.

Long-Term Downsides

A best-is-new attitude creates constant long-term implications. Developers very frequently reinvent the wheel. It’s not uncommon to hear this pattern in descriptions of new technologies on GitHub and Product Hunt:

  • “[Older Technology], but faster.”
  • “Like [Older Technology], but has [Newer Technology Feature] in it.”
  • “A simpler solution for [Problem With An Existing Known-Good Solution].”

Those technologies are typically better, but developers frequently perform tons of rework without considering the best use of their time. Most of their motivation is to become the Next Big Thing, but if they had about a decade of life experience, they’d see the statistical unlikelihood of that endeavor and plan their limited time on this planet more wisely.

The age-old axioms of “slow is smooth, smooth is fast” and Chesterton’s Fence (don’t remove things without understanding why they were there) have limited cultural relevance in the tech world. As a result, the churn of potentially useless information in tech blogs and guides is worse than the stock market’s, and things frequently break because someone didn’t think it was worth extensively testing before shipping.

The irony of this is that many technologies do have tremendous utility, but most often long after everyone stopped talking about them. For example, tape drives are still great at storing lots of information when you don’t care too much about losing some of it (e.g., farm data).

People will spend exorbitant amounts for the latest/greatest/newest/shiniest products, which will last approximately 6–12 months before they must buy more. It may be worth the money for people inside the industry, but it’s an absurdly expensive balance sheet item or hobby for everyone else.

Expectations

Obsession with novelty, along with perpetually working intimately with computers, can distort a person’s view of reality.

Computers are a unique world unto themselves:

  1. Updating a computer simply requires running pre-made software the developer already tested; the code is effectively identical to what the developer ran, and the process is frequently trivial or invisible to the user. Most updates are presumably good.
  2. A GUI can look obsessively neat and tidy, so everything can satisfy the obsessive preferences of a computer user. If you don’t like the color of something, that’s often a setting or line of code away from changing, and it’s a rewarding experience to explore it.
  3. Everything in a computer is logic-based. If something breaks, there’s always a logical reason for it, and the code/hardware has a predictable answer if you look hard enough. Reading documentation is geeky and technical, but effective.
  4. Every aspect of a computer is clean-cut. Language is articulated, computerized physics are simplified reproductions of reality, and distortions of perception are overlaid on top of the absolute information the computer already understands.
  5. Everything that’s “default” can be changed with the right programming.

By contrast, reality is messy:

  1. Updating something isn’t always easy. The very act of updating is a violation of previously formed habits, and the changes are frequently as destructive as they are helpful. You can’t always verify that an update comes from a trustworthy source.
  2. Obsessively organizing and managing life is almost more trouble than it’s worth. There are always sporks, and it takes lots of time to establish and maintain an organization system. Even then, you might not have room or resources to keep everything immaculately categorized.
  3. Everything in life is perception-based. When things break, that’s often only a matter of perception. Even the atomized form of reality is bound up in uncertainty, and the primitives of perception itself are bound together with sentiment. We have no manual beyond whatever religion we use, and the various types of documentation frequently contradict each other.
  4. The physics and sociology that tie to absurdly mundane things (such as boiling water or having a conversation about the weather) are vastly more complicated than most people realize, so predicting precisely is far harder than it sounds.
  5. Some things are programmed automatically, and can’t be redefined. Death and taxes, for example.

This can create remarkable delusions when tech-minded people try expanding their worldview into the space beyond their computers, especially since Silicon Valley is heavily subject to the Cupertino Effect.

One clear consequence of all this is that most tech people are politically progressive:

  1. Naturally, if the old-fashioned way of things is inherently inferior, there’s no reason that a simple modern solution can’t fix what everyone’s been complicating for thousands of years.
  2. And, more importantly, that would mean any new political solution we haven’t tried yet can work to fix humanity, as demonstrated by the models.

Most of them miss the fact that nothing whatsoever under the sun can technically fix the human condition. People will use technology to make life easier in a general sense, but some of them will use that same technology to cheat, break laws, violate the boundaries of other people, and kill them. Alfred Nobel and Albert Einstein asserted that dynamite and atomic weapons, respectively, would end all war, and the view that any new technology will bring new morality with it is just as misguided.

Most prominently, the fields of AI and VR attract the most delusional sets of expectations. AI is an attempt to create life, and VR is an attempt to recreate creation itself. A well-trained human-like machine learning algorithm will have all the defects of humanity, and a complete virtual world will have all the defects of the world we presently live in.

Power

Since the industry works with information, its gatekeepers are the most powerful information brokers on the planet. Information is the key to understanding anything, so the manager of information technology is the de facto gatekeeper to understanding. Every CEO of a large tech company has more knowledge power than the collective entirety of Ancient Rome, and can wield it far more efficiently, without the breakdowns in information transfer that existed before messages were sent electrically.

In practice, there are only a few classes of individuals and groups who maintain all the power:

  1. Corporate executives who approve gigantic and revolutionary projects, with the intent to make lots of money through being the pioneer of an industry.
  2. Large corporations who capitalize on established, reliable technologies, with the intent to make lots of money through mass-produced distribution of those technologies.
  3. Small, individual developers who are lucky enough to become #1 or #2, with their own political agendas and management style changing as they ascend to power.
  4. Independent open-source developers who are typically too geeky to climb to social power, and often value complete software freedom enough that they’re not making a ton of money but contribute to an immense public good.

As long as young people obsess about trends, they’ll likely never see those power dynamics at play, and the cycle of power changes will repeat endlessly.

The only redemption of tech is that the trends move so fast that no single corporation can corner the market for long, since their technology will become outdated as soon as they blink. There’s also enough pressure from the younger generation to advocate for open-source code, so those large entities must constantly shed at least some of their power to the masses to avoid all the smart kids condemning their business practices.

Like any popular form of power, the technologies of today will probably grow until people feel threatened by them. Then, other forms of power (like governments) will subdue and regulate them, and everyone will move their attention to yet another technology that grants a new form of power. The internet and AI are going through this right now in the 2020s, and it may be augmented reality or biohacking in the future.

Counter-Culture

Typically, an open-source implementation arises once a company fails spectacularly at providing all the features and conveniences they pioneered. Some people in the industry have severe trust issues with the powerful movers and shakers of the technologies and want a free, open society. They range from libertarians to communists, but have a shared hatred of centralized control under the organizations presently running those systems.

These vigilante-style programmers tend to find solace in passion projects directed at things like open-source OS development (e.g., GNU/Linux) that work directly against the interests of FAANG. They tend to build free versions of what already exists, but continually give power to people within the public who are nerdy enough to read the documentation.

Their innovation is often a response to power plays (e.g., making a video hosting alternative when the primary hosting solution becomes Orwellian), so FLOSS tends to follow the for-profit actions of FAANG companies. Their work is therefore rarely on the bleeding edge of trends, but can come mere months afterward. And sometimes, companies screw up the paperwork surrounding the complexities of intellectual property and accidentally release something open-source that plants the seeds for a future competitor.

Many in the open-source community imagine closed-source will be overtaken by open-source (e.g., Facebook made React), but that reasoning doesn’t match reality. People like to own things when they can profit off them, and companies still find a type of profit in open-source through free marketing and free debugging.

After all, young people are willing to volunteer for a cause they believe in, even when it’s silly.


Further Reading

Awesome Falsehoods Programmers Believe in