6. Minified
Minifying is the removal of unnecessary content to minimise transmission and browser-processing times.
The web is largely text that is interpreted on demand. Any text that is not actually required is best not transmitted, so that it neither clogs up the connection nor costs the browser time to parse it only to ignore it. This is minifying, and it can be done in many ways.
HTML, CSS and JavaScript all live in text files, and usually include comments that help developers understand and debug them. Developers also add plenty of spaces and other whitespace to make the content easier to read. None of this is necessary for the browser's job, so it doesn't need to be transmitted. It is usually filtered out, or should be.
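The stripping of comments and whitespace described above can be sketched as a few regular-expression passes. This is a deliberately naive illustration for CSS-like text; real minifiers must also respect string literals and other edge cases:

```javascript
// Minimal sketch of comment and whitespace stripping for CSS-like text.
// Naive regexes for illustration only; not a production minifier.
function stripCss(css) {
  return css
    .replace(/\/\*[\s\S]*?\*\//g, '')   // remove /* ... */ comments
    .replace(/\s+/g, ' ')               // collapse runs of whitespace
    .replace(/\s*([{}:;,])\s*/g, '$1')  // drop spaces around punctuation
    .trim();
}

const css = `
/* main heading */
h1 {
  color : navy ;
  margin : 0 ;
}
`;
console.log(stripCss(css)); // → "h1{color:navy;margin:0}"... roughly "h1{color:navy;margin:0;}"
```

Every character removed is one the browser never has to download or scan past.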
The usual programming practice is to use descriptive names that are often quite long. Ideally, these should be shortened before transmission. While this is common for JavaScript, CSS is often left with very long class names, such as one of 114 characters on the YouTube home page. I prefer to use short names but describe them in comments, along with minimal whitespace, though I have the advantage of not having a team that must adapt to my way of working.
The next step is requiring only what's needed for the type of page. Commonly, CSS and JavaScript are split into multiple files and only those required are loaded, though pages used for multiple views end up loading all the files regardless of whether a particular view needs them. One only has to open the browser developer tools on Google's very plain home page to see that a lot is loaded that isn't initially used. While those files may be cached, they still require parsing with each request.
One problem with keeping CSS and JavaScript in files separate from the HTML is that delays in delivering them can cause an unformatted page to be shown briefly until they arrive. The product instead loads these as part of the page, avoiding that flash of unstyled content, while also allowing the inline CSS and JavaScript to be hashed, which prevents many injection attacks.
Loading with the HTML means that their contents can be filtered according to what is actually to be rendered, rather than what might be, which can be significant for tables in CSS or sequences in JavaScript. However, it is a tradeoff between granularity on one hand and processing time and complexity on the other. For example, the CSS for tables could be tailored down to the number of columns used on a page, but that is unlikely to provide significant time benefits.
Many pages share much of these files in common. Typically, that is handled by having many files, each covering some common functionality, with others covering what is unique. While that arrangement works well for teams in which each member looks after the files they are responsible for, it would be too confusing for me to keep jumping between them. I chose to use a master file for each type of content, such as CSS, JavaScript, PHP or XSL, with specially-formatted comment lines containing regular expressions that signify the page types each block is used with.
This allows me to have an overview of all the content while micromanaging where each block is used, down to individual lines. A JavaScript file then processes all the files, including calling VB functions in the Excel file, by splitting them at the regular expressions, then filtering out comments and excess whitespace. This is like conditional compilation, in that it allows customisation of content, and it is done offline rather than at runtime, streamlining request-time processing and performance.
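The master-file filtering could look something like the sketch below. The marker syntax `/*@ regex */` is hypothetical, as the author's actual comment format isn't shown; the idea is that each marker's regular expression names the page types the following block applies to:

```javascript
// Sketch of master-file filtering, assuming a hypothetical marker format
// "/*@ regex */" where the regex names the page types a block applies to.
// Blocks whose regex matches the page type are kept; the markers
// themselves are never emitted, and whitespace is crudely squeezed.
function filterMaster(source, pageType) {
  const kept = [];
  let keep = true; // text before the first marker applies everywhere
  for (const line of source.split('\n')) {
    const marker = line.match(/^\/\*@\s*(.*?)\s*\*\/$/);
    if (marker) {
      keep = new RegExp(marker[1]).test(pageType);
      continue;
    }
    if (keep) kept.push(line);
  }
  return kept.join('\n').replace(/\s+/g, ' ').trim();
}

const master = `
body { margin: 0; }
/*@ ^(home|search)$ */
.banner { color: red; }
/*@ ^admin$ */
.debug { display: block; }
`;
console.log(filterMaster(master, 'home'));
// → "body { margin: 0; } .banner { color: red; }"
```

Run once per page type at build time, this produces the tailored, minified content each page embeds, with no filtering work left for request time.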