The internet is a fascinating system, but people often take it for granted without knowing how it works.
Like most other aspects of computers, web development encompasses many sub-disciplines.
Every single “thing” on the internet is given a sequence of characters called a “uniform resource identifier” (URI) to indicate its location. Sometimes multiple URIs point to the same thing, and other times one URI simply routes to several others. Either way, it’s a standardized means of telling a computer where to look.
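As a sketch, a URI’s parts can be pulled apart with Python’s standard library (the URL here is a made-up example, not a real site):

```python
# Splitting a URI into its named components.
from urllib.parse import urlsplit

parts = urlsplit("https://somewebsite.com/articles/42?lang=en#intro")

print(parts.scheme)    # https
print(parts.netloc)    # somewebsite.com
print(parts.path)      # /articles/42
print(parts.query)     # lang=en
print(parts.fragment)  # intro
```

Each piece plays a different role: the scheme picks the protocol, the netloc names the host, and the rest narrows down to a specific resource.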
Many network protocols operate over the internet. Since it’s a worldwide network, those various protocols and standards need to talk with each other, and their interoperability can be an extremely complicated endeavor.
To add to that complexity, there are also cybersecurity concerns from literally the entire planet. Every website essentially needs a signed certificate to be trustworthy. Even then, it’s absolutely crucial to develop healthy web browsing habits.
While computers appear constantly connected to each other (by human standards), the systems involved rely on constant asynchronous data transfer:
- Sometimes, website data is preloaded to ensure it presents itself immediately.
- Other times, a resource first appears as a placeholder (e.g., an image), and is then replaced by a more memory-consuming resource (e.g., a video).
Every computer on the internet has an “IP address”: a numeric code (IPv4 and/or IPv6) that distinguishes it from all the others. Since the “Internet of Things”, that includes far more devices than just traditional computers.
The “public IP address” is usually the internet-facing “router” or “gateway device” that the computer is connecting through, though it can be a VPN’s IP if you have one. Underneath that, there are multiple “private IP addresses” assigned to every computer on that network.
There are only about 3.7 billion usable public IPv4 addresses, and IPv6 adoption has had technical hangups. Part of it comes from the fact that IPv4’s dotted-quad notation is on the upper threshold of standard human memorization, while IPv6 is absurdly hard to memorize. To that effect, IPv4 addresses have become somewhat of a tradable commodity due to scarcity.
However, IP management has become more complex in light of increased VPN usage. With a VPN, there’s another VPN IP to consider as well, meaning that IP blocking isn’t necessarily a good solution anymore to deal with people trying to access a website.
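The public/private distinction above can be checked programmatically. Here’s a small sketch using Python’s `ipaddress` module (192.168.1.25 is a typical private LAN address, and 8.8.8.8 is Google’s public resolver):

```python
# Distinguishing private from public addresses with the stdlib.
import ipaddress

lan_host = ipaddress.ip_address("192.168.1.25")  # typical home-network address
dns_host = ipaddress.ip_address("8.8.8.8")       # Google's public DNS resolver

print(lan_host.is_private)  # True: RFC 1918 range, not routable on the public internet
print(dns_host.is_global)   # True: a public, globally routable address

# IPv6 addresses are far longer, which is part of why they're so hard to memorize.
print(ipaddress.ip_address("2001:4860:4860::8888").exploded)
# 2001:4860:4860:0000:0000:0000:0000:8888
```

The `exploded` form shows why the shorthand `::` notation exists at all.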
Most of the internet’s backbone for daily use is the HTTP protocol (Hypertext Transfer Protocol) and HTTPS (its secure version). Historically, it only carried HTML, but it now carries CSS, JavaScript, images, and all sorts of more advanced content.
The entire HTTP system is designed as a client/server relationship, with the client being the requester and the server (the “host”) being the responder.
Client computers submit a request to a host/server computer with a URL, an endpoint path off that URL, and one of several possible methods:
- GET – ask for information from a specified location (sent in the URL directly and therefore not securely sent)
- POST – send data to a server to create or update a resource (sending the same request twice may create duplicates)
- PUT – send data to a server to create or update a resource, but the information is “idempotent” (i.e., it won’t create multiple instances of the resource if sent multiple times)
- HEAD – same as GET, but only returns the headers without a body, great for testing
- DELETE – deletes a specified resource
- PATCH – like PUT, but only applies partial modifications to a resource
- OPTIONS – describes communication options for a specified resource
- CONNECT – starts a two-way communication (a “tunnel”) with a resource
- TRACE – performs a message loop-back test to test the path to the target resource, also great for testing
The request may optionally also include a body, headers, query strings, and the HTTP version.
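To make that format concrete, here’s a minimal sketch of what an HTTP/1.1 request actually looks like as plain text on the wire (somewebsite.com and the path are stand-ins, and real requests usually carry more headers):

```python
# What an HTTP/1.1 request looks like before it's sent over the socket.

def build_request(method: str, host: str, path: str, body: str = "") -> str:
    """Assemble a minimal HTTP/1.1 request string."""
    lines = [
        f"{method} {path} HTTP/1.1",  # request line: method, path, protocol version
        f"Host: {host}",              # the Host header is required in HTTP/1.1
        "Connection: close",
    ]
    if body:
        lines.append(f"Content-Length: {len(body)}")
    # Headers end with a blank line; the optional body follows.
    return "\r\n".join(lines) + "\r\n\r\n" + body

request = build_request("GET", "somewebsite.com", "/index.html?lang=en")
print(request)
```

Note the query string riding along in the path itself, which is why GET parameters aren’t considered securely sent.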
The server/host computer interprets the information, then sends a response back with its protocol version, headers, a status code, and status text. The status codes/text have mostly been around since HTTP/1.0, but some have been added since HTTP/1.1:
- 100s – informational: the host received and understood the request so far, and tells the client computer to continue or wait.
- 200s – the request was received, understood, and accepted
- 200 OK, the standard response; a GET request returns the requested resource, while a POST returns something that describes or contains the result of the action
- 201 Created a new resource
- 202 Accepted, but hasn’t been fulfilled yet, meaning it may still be disallowed when it’s actually processed
- 203 Non-Authoritative Information: the server is a proxy (used to speed up data transfer) returning a modified version of the origin server’s response, new to HTTP/1.1
- 204 No Content to return
- 205 Reset Content: the client should reset its view of the document (e.g., clear a submitted form)
- 206 Partial Content: the server is delivering only the portion of the resource the client’s “range header” asked for (e.g., resuming interrupted downloads, splitting a download into multiple streams).
- 207 Multi-Status for complex WebDAV needs that require multiple separate response codes (RFC 4918)
- 208 Already Reported for WebDAV, so not including again (RFC 5842)
- 226 IM Used, as in an “instance manipulation” for delta encoding (RFC 3229)
- 300s – the request is being redirected
- 300 Multiple Choices for the client to choose (typically the user)
- 301 Moved Permanently somewhere else, and everything should redirect to the new URI.
- 302 Found, once called Moved Temporarily, indicating the resource temporarily lives at another URI (RFC 1945 originally, with the ambiguity split into 303 and 307 in HTTP/1.1)
- 303 See Other: the response can be found at another URI using the GET method; after a POST, it means the server received the data and the client should issue a new GET to the new URI
- 304 Not Modified from the client’s cached copy, so there’s no need to re-transmit the resource
- 305 Use Proxy, since only a proxy has the information; defined in HTTP/1.1, but many web browsers don’t obey this code because it’s a wide-open security risk
- 306 Switch Proxy, which originally meant that subsequent requests should use the specified proxy, but is no longer used
- 307 Temporary Redirect: repeat the request at another URI, but future requests should still use the original URI
- 308 Permanent Redirect to a given URI, which is the same thing as 301 but doesn’t allow the client’s HTTP method to change.
- 400s – the client computer has an error
- 400 Bad Request: an apparent client error that the server can’t or won’t classify more specifically
- 401 Unauthorized is similar to 403 but authentication wasn’t provided or has failed
- 402 Payment Required was reserved for future use, but hasn’t been implemented
- 403 Forbidden means it was a valid request, but the host is refusing to act on it
- 404 Not Found means the requested resource couldn’t be found, with no indication of whether that’s temporary or permanent
- 405 Method Not Allowed because the request method isn’t supported for the requested resource
- 406 Not Acceptable because the resource can’t be delivered in a form matching the request’s Accept headers
- 407 Proxy Authentication Required before accessing the resource
- 408 Request Timeout because the server waited too long for the client
- 409 Conflict because of the current state of the resource (e.g., being edited by 2 different computers)
- 410 Gone, meaning it was previously present but is no longer available and won’t be available again
- 411 Length Required because the request didn’t specify a length
- 412 Precondition Failed because the host doesn’t fulfill one of the request header preconditions
- 413 Payload Too Large for the server to process
- 414 URI Too Long for the server to process
- 415 Unsupported Media Type for the server or resource to support
- 416 Range Not Satisfiable for the portion requested in the range header
- 417 Expectation Failed within the request header’s Expect field
- 418 I’m a teapot was an April Fools’ joke; a teapot should theoretically return it when asked to brew coffee (RFC 2324 and RFC 7168)
- 421 Misdirected Request toward a server that can’t produce a response
- 422 Unprocessable Entity because the request was well-formed but semantic errors made it impossible to follow
- 423 Locked from access
- 424 Failed Dependency in a WebDAV configuration because it depended on another request that had failed (RFC 4918)
- 425 Too Early for the server to risk processing a request that might have to be replayed (RFC 8470)
- 426 Upgrade Required: the client should switch to a different protocol (e.g., TLS/1.3), specified in the Upgrade header field
- 428 Precondition Required because the server requires the request to be conditional (RFC 6585)
- 429 Too Many Requests by the user in a given amount of time, typically used for “rate-limiting” (RFC 6585)
- 431 Request Header Fields Too Large because the server won’t process that much at once (RFC 6585)
- 451 Unavailable For Legal Reasons because the server operator can’t legally permit it (RFC 7725)
- 500s – the host computer has an error
- 500 Internal Server Error because there was an apparent, but uncertain, host error
- 501 Not Implemented because either the host doesn’t recognize the request method or can’t fulfill the request
- 502 Bad Gateway because the server received a bad response from its upstream server
- 503 Service Unavailable because the server can’t handle the request
- 504 Gateway Timeout because the server was a gateway or proxy and didn’t receive a timely response from its upstream server
- 505 HTTP Version Not Supported because the HTTP version in the request wasn’t supported
- 506 Variant Also Negotiates because the “content negotiation” between multiple variations creates a circular reference (RFC 2295)
- 507 Insufficient Storage on the server to complete a WebDAV request (RFC 4918)
- 508 Loop Detected while processing a WebDAV request (RFC 5842)
- 510 Not Extended: the server needs further extensions to the request to fulfill it (RFC 2774)
- 511 Network Authentication Required by the client to gain network access, typically from intercepting proxies that control access to the network (e.g., “captive portals” to require agreement to Terms of Service) (RFC 6585)
- Further, there are plenty of other unofficial 400 and 500 codes that aren’t supported by any standard.
Optionally, the response may also include a body, which can sometimes represent a lot of information, depending on context.
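Python’s standard library already encodes the code/phrase pairs listed above, which makes them easy to explore:

```python
# Exploring HTTP status codes with the stdlib's HTTPStatus enum.
from http import HTTPStatus

print(HTTPStatus.NOT_FOUND.value)   # 404
print(HTTPStatus.NOT_FOUND.phrase)  # Not Found
print(HTTPStatus(301).phrase)       # Moved Permanently

# A response's status line carries the protocol version, code, and text:
status_line = "HTTP/1.1 404 Not Found"
version, code, reason = status_line.split(" ", 2)
print(version, code, reason)  # HTTP/1.1 404 Not Found
```

This is handy when writing server code: comparing against named constants beats sprinkling bare numbers around.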
The standards for HTTP have shifted over time and are constantly improving. HTTP/1.1 was a great idea but had issues, and HTTP/2 and HTTP/3 improve on it further.
Web Domains & DNS
When you type in SomeWebsite.com, the end of the domain is its “top level domain” (TLD):
- .com is the classic, alongside .net, .org, and country codes like .us and .de
- Many, many, many more
These TLDs are authorized by ICANN, a nonprofit-in-name-only that oversees the domain system. Different organizations can operate a TLD, then sell second-level domains under it. Some TLDs (like .edu or .gov) are only granted to specific authorities, but most of them are free-for-all, first-come first-serve.
Below that, there are second-level domains (e.g. somewebsite.com). Further than that, there are also subdomains.
DNS configurations define a huge part of whether a website works or not. The DNS system has “resource records” located in “zone files” that specify where resources are located:
- There are many, many DNS records, but the bare-bones specifications mostly exist as standards in RFC 1035.
- These records can be for specific resources, or “wildcard” across many possible situations.
- A – what IPv4 address to look for.
- AAAA – what IPv6 address to look for, specified by RFC 3596.
- CAA – Certification Authority Authorization, specified by RFC 6844.
- CNAME – canonical name record, which indicates an alias of one name to another.
- DNAME – delegation name record, which is like CNAME but includes all subnames as well.
- MX – mail exchange record, specifies the servers that accept email for a domain, specified by both RFC 1035 and RFC 7505.
- NS – name server record, which indicates which name servers are “authoritative” for the domain.
- The domain is registered at a registrar, which is often not where the website’s files are hosted.
- PTR – pointer record, most often used for “reverse DNS” lookups that map an IP address back to a hostname.
- Once the browser reaches the web server itself, it will typically look for an index.html or index.php file at that location.
- TXT – was originally designed for human-readable text, but has devolved into a miscellaneous junk drawer for machine-readable data.
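As a rough sketch of how specific records and “wildcard” records behave, here’s a toy zone lookup in Python (every name and record value here is hypothetical):

```python
# A toy model of DNS resource records in a zone, with wildcard fallback.
ZONE = {
    ("somewebsite.com",     "A"):     "203.0.113.10",
    ("somewebsite.com",     "AAAA"):  "2001:db8::10",
    ("somewebsite.com",     "MX"):    "mail.somewebsite.com",
    ("www.somewebsite.com", "CNAME"): "somewebsite.com",
    ("*.somewebsite.com",   "A"):     "203.0.113.20",  # wildcard record
}

def lookup(name, rtype):
    """Return a record for the name, falling back to a wildcard match."""
    if (name, rtype) in ZONE:
        return ZONE[(name, rtype)]
    # Wildcard matching: replace the leftmost label with '*'.
    labels = name.split(".")
    wildcard = ".".join(["*"] + labels[1:])
    return ZONE.get((wildcard, rtype))

print(lookup("somewebsite.com", "A"))           # 203.0.113.10
print(lookup("anything.somewebsite.com", "A"))  # 203.0.113.20 (wildcard)
```

Real zone files add TTLs, record classes, and much subtler wildcard rules, but the exact-match-then-wildcard shape is the core idea.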
The software (usually the web browser) makes DNS requests, which are answered with DNS records. The “fully qualified domain name” (FQDN), which is often the “uniform resource locator” (URL) (e.g., https://somewebsite.com), should eventually connect to an IP address somewhere.
Sequentially, the computer will access several computers that sit potentially across the world:
- Since it’s very quick compared to anything else, the web browser first looks within cache memory locally stored on its own computer. Later, it’ll also typically store more information to the cache.
- Next, it asks the name server of the internet service provider (who provides the paid-for internet service) about the DNS request.
- If a name server happens to be recursive, it’ll find the answer for the client, starting with its cache. Otherwise, it’ll redirect the query to somewhere else.
- If the name server doesn’t have the information, it’ll ask a root server, which points to the top-level domain (TLD) server responsible for that ending (e.g., there’s a gTLD server cluster for all .com domains).
- The TLD server will send back information that routes to the “authoritative” name servers (see below) or a DNS resolver.
- The client accesses the DNS record held at a DNS resolver (e.g., Cloudflare at 1.1.1.1, Google at 8.8.8.8, Quad9 at 9.9.9.9).
- Most internet service providers have sluggish built-in resolvers that keep obsolete DNS records, so it often makes sense to use a public one instead (e.g., Cloudflare, Quad9).
- It’s not too hard to run your own DNS resolver, but most DNS traffic flows through resolvers run by gigantic FAANG-scale corporations.
- The DNS resolver will point to a registrar, which has the authoritative DNS record.
- When there are only a few large DNS resolvers, small failures can quickly knock out huge chunks of the internet, because the computers will return errors (typically 500s) with no re-routing.
- The registrar will either have resource records, or an NS record that points somewhere else, typically to a host (e.g., ns1.actualsite.com and ns2.actualsite.com).
- If the name servers live under the domain itself, they need “glue records” that specify their IP addresses directly (e.g., ns1.adomain.com can’t be resolved without first resolving adomain.com, a chicken-and-egg problem).
- Typically, most of the specific DNS records will be wherever a website is hosted, with the corresponding IP addresses or connected resources at least somewhat referenced there.
- Further, there are extra security layers, such as SSL/TLS and DNSSEC, that make malicious hacking more difficult.
- The difference expresses most simply on the domain as either http:// or https://.
- SSL is the older, now-deprecated protocol; TLS is its successor, and recent versions like TLS 1.3 have significantly reduced the connection overhead.
The DNS might be a simple reference from a domain to a specific IP, but it can (and often does) get much more complicated:
- somewebsite.com – the www is removed, mostly for cosmetic reasons
- www.news.somewebsite.com – a subdomain of somewebsite.com
- www.awebsite.otherwebsite.com – here, awebsite is a subdomain of otherwebsite.com, even if awebsite.com also stands on its own as a separate site
- http://somewebsite.com may send the browser somewhere differently than https://somewebsite.com
Also, not all domains are equal:
- Some of them require registrants to meet existing conditions to qualify (e.g., .us, and historically .org).
- They all have a maintenance fee:
- Most common ones, like .com and .net, are ~$10-20 annually.
- Others cost more (.ai is ~$90, .inc is >$1,000 and climbing).
- Some are cheaper or free (.xyz is often heavily discounted, .tk has been free).
- To that end, many people who scam, hack, and host pirated content like using cheap/free, disposable domains, which can hurt a TLD’s value as a marketable address when legitimate organizations block the entire TLD.
- Some domains (e.g., .de) will validate before registration is approved.
- Some domains require more information (e.g., .us requires documenting a connection to a U.S. legal entity).
The entire DNS system has had some improvements as well:
- Core DNS was made in the 1980s when the internet was smaller, so the 1990s saw the DNS Security Extensions (DNSSEC), which add digital signatures based on public key cryptography so DNS answers can be verified (authentication, not encryption).
- VPNs route traffic to a given website through their IP address, but if DNS queries aren’t also tunneled, the resolver can still see the original client’s raw IP address (a “DNS leak”); DNSSEC doesn’t help here, since it verifies answers rather than hiding them.
- DNS over HTTPS (DoH) and DNS over TLS (DoT) encrypt the information sent to the DNS resolver, but the resolver can still see the site the client is trying to visit.
- Oblivious DNS over HTTP (ODoH) obscures what the DNS resolver sees.
Obviously, not everyone knows about SomeWebsite.com. The information on that site may be critically important to someone, but how would they know? The internet is a big place, and there are a lot of possible domain names!
To fill in the gap, some tech people created a “search engine”, which is software designed to organize and categorize everything. Search engines are a relatively simple concept, but very powerful:
- Use a “webcrawler” to dig through websites and find “keywords” that the user may want.
- Sort and process the information to create database associations between the keywords and “hyperlinks”.
- Run a search algorithm whenever the user requests a keyword or phrase.
- Collect data off the users’ interactions with that website:
- Downgrade sites when people simply “bounce” onto the page and off again.
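The crawl-index-search loop above boils down to an “inverted index” mapping keywords to pages. Here’s a toy sketch with made-up pages (real engines add ranking, stemming, and vastly more scale):

```python
# A toy inverted index: the core data structure behind keyword search.
from collections import defaultdict

pages = {
    "https://somewebsite.com/bikes": "best bicycle tires for commuting",
    "https://somewebsite.com/cars":  "best car tires and wheels",
}

# 1. "Crawl": split each page's text into keywords.
# 2. Index: map each keyword to the set of pages containing it.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.split():
        index[word].add(url)

# 3. Search: look up each keyword, intersect the results for multi-word queries.
def search(*keywords):
    results = [index.get(k, set()) for k in keywords]
    return set.intersection(*results) if results else set()

print(search("tires"))             # both pages
print(search("bicycle", "tires"))  # only the bikes page
```

The intersection step is why adding more words to a query narrows the results.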
Most search engines are so powerful that they tend to “web scrape” entire websites as well. This is particularly egregious in the case of large groups like Google and Facebook.
Since businesses want to make lots of money, there’s an entire division of marketing that specializes in nothing but “search engine optimization” (SEO). It involves a lot of technical back-end work:
- Add your domain to all the major webmaster consoles (e.g., Google, Yandex, Bing, etc.).
- Generate an XML site map and submit it on those consoles.
- Add an SSL certificate and make sure everything is served over HTTPS.
- Add links all over the site that link to other parts of the site (~3-4 links every 1500 words).
- Do absolutely everything you can to improve web/app accessibility.
- Make sure your UI is properly color-matched.
- Verify the input fields match up (e.g., make the “phone number” box only accept numbers and show a (xxx)xxx-xxxx format)
- Describe all images with alt attributes (the “alternative” text shown if the image doesn’t load).
- Use short URLs that don’t have dates in them and are never more than 2 subfolders deep.
- Set the social media image, title, and description (e.g., the Open Graph protocol for Facebook), and the viewport meta tag for mobile rendering.
- Do absolutely everything you can to speed up the website, both for users and for webcrawlers:
- Test everything with emulators and real hardware for all sorts of weird “edge cases”.
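As one small example of the input-validation point above, here’s a hedged sketch of checking the “(xxx)xxx-xxxx” phone format server-side (the exact pattern is an assumption for illustration, not a standard):

```python
# Validating a "(xxx)xxx-xxxx" phone field with a regular expression.
import re

# Three digits in parentheses, three digits, a hyphen, four digits.
PHONE = re.compile(r"\(\d{3}\)\d{3}-\d{4}")

def is_valid_phone(value: str) -> bool:
    """Return True only if the whole string matches the expected format."""
    return PHONE.fullmatch(value) is not None

print(is_valid_phone("(555)123-4567"))  # True
print(is_valid_phone("555-123-4567"))   # False: missing parentheses
```

Client-side hints (like an input mask) improve UX, but the server should re-check anyway, since anyone can bypass the browser.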
Obviously, some less-than-legitimately-motivated people have wanted to exploit the situation. If you’ve ever clicked through to a weird webpage that says “Here’s how you can have the best bicycle tires that are tires for biking, of which bicycles are the best you can bicycle to and from work for bicycling”, you’re looking at SEO gone horrifically wrong. Thankfully, the engineers behind the search algorithms usually punish sites that hack their algorithm in the long-term, so we can still get high-quality results.
While some web browsers emphasize cybersecurity and others emphasize speed, they almost always do just about the same thing: give a safe and fast web-browsing experience that also doesn’t download bloated, broken or malicious code.
To keep track of a browser (such as logging in), the “host” will often send over files called “cookies” to save specific information (like your login or shopping cart) inside the browser cache.
- This is very convenient, since it can allow someone to stay logged-in, tailor the website to specific types of users, or keep track of information about the web browser to make the browsing experience more seamless.
- However, stored cookies can also lead to privacy issues, especially when third-parties (like advertising companies) can track behavior across websites or when hackers track where you’re browsing.
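Cookies travel as plain Set-Cookie headers, which the standard library can parse. A small sketch (the cookie name and value are made up):

```python
# Parsing a Set-Cookie header with the stdlib.
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie.load("session_id=abc123; Path=/; HttpOnly; Secure")

morsel = cookie["session_id"]
print(morsel.value)        # abc123
print(morsel["path"])      # /
print(morsel["httponly"])  # True: flag that hides the cookie from page JavaScript
```

The HttpOnly and Secure flags are two of the main defenses against the privacy and hijacking issues mentioned above.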
The Tor protocol adds layers of anonymity to the web-browsing experience, which means more layers of complexity.
Websites have degrees of complexity:
- The most basic website simply conveys information (e.g., this site)
- When the site creates a public forum for ideas, it becomes user-driven (e.g., a comments thread)
- Some sites are almost entirely user-made content (e.g., Wikipedia)
- The pinnacle of complexity comes when the site creates complicated rules and permissions for viewing other users’ content (e.g., most social networks)
While the user experience of mobile devices often separates internet browsing into a wide variety of “apps”, most of those apps are simply specialized web browsers inside the operating system (e.g., Electron apps).
Differently-sized screens and inputs are not trivial issues! The wide variety of screen implementations means the user could be using a mouse and keyboard on an office machine, their cell phone or tablet with their fingers, or an interactive VR headset. To make it simpler, developers sidestep pixel measurements in favor of a root em (or “rem”) measurement based on the root font size (typically 16 pixels).
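The rem arithmetic is simple enough to sketch directly (16 pixels is the common browser default, not a guarantee):

```python
# rem-to-pixel conversion: rem units scale from the root font size.

def rem_to_px(rem: float, root_font_size: float = 16.0) -> float:
    """1rem equals the root element's font size in pixels."""
    return rem * root_font_size

print(rem_to_px(1))                     # 16.0 at the default root size
print(rem_to_px(1.5))                   # 24.0
print(rem_to_px(2, root_font_size=20))  # 40: everything scales if the user enlarges the base font
```

That last line is the whole point: a user who bumps their base font size gets a proportionally larger layout for free.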
At one time, near the year 2000, almost every internet-enabled computer was working on about a 1024×768 screen. Now, they can range from 480×320 through to 3840×1080 (and growing), with all sorts of odd rectangular shapes (with VR and driverless cars adding even more variety). This adds a layer of challenge to good web design, but has a few simple tricks:
- Make elements move and resize relative to the screen edges, such as with percentage widths or the CSS float property.
- Use fluid grids to keep everything in place as things move around.
- Drop out or replace elements with “media queries” as the screen size hits a “breakpoint” when it gets too small or large.
- Focus on mobile-first or desktop-first design (depending on what you’re designing), then work toward filling it in on the other.
- Program a “CSS reset”, where the browser’s default styles get cleared so the developer’s designs apply consistently.
To maintain a workable UX across the wide variety of screens a web browser may use, front-end web developers use “media queries”, which are instructions to change or add/remove elements depending on the size of the screen. This is called “responsive web design”, which you can see simply by going to a web page with media query code (like this one) and scrolling in and out (either by pinch-zooming or CTRL and +/-).
Visual web design is an art form of its own. Besides all the conventions of user experience, the site must be SEO-friendly (i.e., lightweight enough to quickly transfer across the internet). This means all aspects of programming web graphics are a Zen art: enough to get the point across with as little data transfer and distraction as possible.
One of the easiest ways to design motion into websites is by using a CSS feature called “keyframes”. Instead of getting in the weeds with graphics, developers can indicate multiple states with percentages, then the language has enough logical power to deduce the shifts:
- State A is 50 pixels wide and red at 0%.
- State B is 100 pixels wide and blue at 50%.
- State C is 300 pixels wide and green at 100%.
- The entire element will shift from red, double its size and shift to blue, triple its size and shift to green, then cycle back in reverse, over and over.
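The interpolation the browser performs between those keyframes can be sketched as simple linear blending (the widths mirror the example above; real browsers also support easing curves, not just linear):

```python
# How a browser interpolates between keyframe states:
# values are linearly blended between the surrounding keyframes.
KEYFRAMES = [(0, 50), (50, 100), (100, 300)]  # (percent, width in px)

def width_at(percent: float) -> float:
    """Linearly interpolate the width at any point in the animation."""
    for (p0, w0), (p1, w1) in zip(KEYFRAMES, KEYFRAMES[1:]):
        if p0 <= percent <= p1:
            t = (percent - p0) / (p1 - p0)  # how far between the two keyframes
            return w0 + t * (w1 - w0)
    raise ValueError("percent must be between 0 and 100")

print(width_at(0))    # 50.0  state A
print(width_at(25))   # 75.0  halfway between A and B
print(width_at(50))   # 100.0 state B
print(width_at(100))  # 300.0 state C
```

This is why the developer only declares a few states: the in-between frames are deduced, not drawn.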
It’s worth noting that CSS is not the only way to design graphics. For certain use cases, other 2-dimensional options like the Canvas API are better. And, for 3-dimensional graphics, game engines like Unity (exported for the web) or graphics APIs like WebGL work better.
While it’s barely detectable to most people, web browser pages have a “favicon” logo sitting in the web browser tab (or in the Task Manager tray in the operating system). This little square logo ranges from 16 to 64 pixels wide and shows up in a lot of places in a web browser including web searches, history, and dropdown menus.
To make apps more “cross-platform”, they’re often SPAs (“single page apps”) to cut down on various interface differences. SPAs are literally only 1 webpage (this other site I made is an example). They can interchangeably be an app or webpage, since you won’t need to worry about linking it to any other pages.
In fact, Electron apps are effectively carrying all the inefficiencies of a web browser. While they’re great for building a quick cross-platform solution, the greatest efficiencies come from rewriting the software natively for each particular operating system.
The internet is constantly changing and adapting, and each technology is constantly adapting to other technology improvements. In effect, that means this summarized essay may be mostly obsolete in 10-20 years.