Minified

Minifying is the removal of unnecessary information from what is transmitted, to minimise transmission and browser-processing times.

The web is largely text that is interpreted when needed. Any text that is not actually required is best not transmitted, so that it neither occupies bandwidth nor costs browsers time to parse it just to ignore it. This is minifying, and it can be done in many ways.

HTML, CSS and JavaScript all live in text files, and usually include comments that help developers understand and debug them. Developers also add plenty of spaces and other whitespace to make the content easier for them to read. None of this is necessary for the browser's job, so none of it needs to be transmitted. It is usually filtered out, or should be.
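The principle of filtering out comments and whitespace can be sketched in a few lines of JavaScript. This is a minimal illustration only; real minifiers also handle strings, nested syntax and other edge cases.

```javascript
// Minimal sketch of stripping comments and redundant whitespace
// from CSS before transmission. Illustrative only: production
// minifiers must also cope with strings and other edge cases.
function minifyCss(css) {
  return css
    .replace(/\/\*[\s\S]*?\*\//g, '')   // drop /* ... */ comments
    .replace(/\s+/g, ' ')               // collapse runs of whitespace
    .replace(/\s*([{}:;,])\s*/g, '$1')  // drop spaces around punctuation
    .trim();
}

const src = `
/* layout helpers */
.page {
  margin: 0 auto;
  width: 60rem;
}
`;
console.log(minifyCss(src)); // .page{margin:0 auto;width:60rem;}
```

The source remains readable for the developer, while the transmitted form carries only what the browser's parser needs.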

The recommended programming practice is to use long, descriptive names. Ideally, these should be reduced to minimal sizes before transmission. While this is common for JavaScript, CSS is often left with very long class names, such as one of 114 characters on the YouTube home page. The problem with separately minified files is that the names in them are usually quite different from those in the source files, making it hard to reconcile runtime errors back to the original code. When an error points to the problem code, I can just copy that code and search for the exact text in the source.

So, instead of having a myriad of steps on the round trip of debugging, I learnt to use very short names, document them in comments that are stripped out anyway, and simply not put in the whitespace to begin with. Having too many steps creates more opportunities for errors. However, I do have the advantage of not having a team that would have to learn my way of doing things, or that I would have to reach a compromise with, either of which would likely result in more errors being made.

The next step is only requiring what is needed for the type of page. Commonly, CSS and JavaScript are split into multiple files and only what is required is loaded, though pages used for multiple views end up loading all the files regardless of whether a particular view needs them. One only has to use the browser developer tools on Google's very plain home page to see that a lot is loaded that is not initially used. While those files may be cached, they still require parsing with each request.

One problem with keeping CSS and JavaScript in files separate from the HTML is that delays in those files reaching the browser can result in unformatted pages being shown briefly. The product instead loads them as part of the page, bypassing that flash of unstyled content, while also allowing the CSS and JavaScript to be hashed to prevent many attacks.

Many pages have much of these files in common. Typically, that is handled by having lots of files, each covering some common functionality, with others covering unique functionality. While that arrangement works well for teams whose members look after the files they are responsible for, it would be too confusing for me to keep jumping around between them. I chose to use a master file for each type of content, like CSS, JavaScript, PHP or XSL, and then use specially-formatted comment lines containing regular expressions that signify the page types each block is to be used with.

This allows me to have an overview of all the content while micromanaging where each block is used, down to individual lines. A JavaScript file then processes all the master files, including calling VB functions in the Excel file, by splitting them up at the regular expressions, then filtering out comments, indents and line feeds. This is like conditional compilation in that it allows customisation of content, and it is done offline rather than at runtime, streamlining request-time processing and performance. These processed files are what is included in a version of the product.
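The master-file idea can be sketched as follows. The marker syntax here is invented for illustration; the product's own markers and build script will differ, but the shape is the same: a comment line carries a regular expression naming the page types the following block is for, and the build tool keeps or drops each block accordingly while stripping comments and indents.

```javascript
// Hypothetical sketch of a master file processed per page type.
// A marker comment such as  /*@ ^article$ */  carries a regular
// expression; the following block is kept only for matching page types.
function extractFor(pageType, master) {
  let include = true;            // blocks before any marker go to all pages
  const out = [];
  for (const line of master.split('\n')) {
    const m = line.match(/^\/\*@\s*(.*?)\s*\*\/$/);
    if (m) {
      include = new RegExp(m[1]).test(pageType);
      continue;                  // markers never reach the output
    }
    if (!include) continue;
    const stripped = line.replace(/\/\*[\s\S]*?\*\//g, '').trim();
    if (stripped) out.push(stripped);  // drop comments, indents, blank lines
  }
  return out.join('');
}

const master = [
  'body{margin:0}',
  '/*@ ^article$ */',
  '.footnote{font-size:.8em}',
  '/*@ .* */',
  'a{color:inherit}',
].join('\n');

console.log(extractFor('home', master));    // body{margin:0}a{color:inherit}
console.log(extractFor('article', master)); // body{margin:0}.footnote{font-size:.8em}a{color:inherit}
```

Because this runs offline, the per-page-type output can be generated once per product version rather than on every request.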

Content-dependent processing

Including the CSS and JavaScript with the page allows conditional inclusion of element-specific statements, depending upon whether those elements exist in the article.

When CSS and JavaScript are in separate files, they are sent in later requests, after the article content is no longer available. That means there is no straightforward way to modify either file to cater for whether particular elements exist in the article, or to cater for individual elements.

Incorporating the CSS and JavaScript into the page itself means that the article content is available for determining whether particular elements are used in the article, and thus whether to include the CSS and JavaScript they require. PHP is used to implement the logic required, in exactly the same way it is used for processing the HTML of the page. It is as simple as testing for an element's existence and, if it is present, including the statements it requires.
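The product implements this in PHP; this JavaScript sketch just shows the shape of the test-and-include logic, with an invented element-to-CSS map standing in for the real rules.

```javascript
// Sketch of including only the CSS for elements the article uses.
// The product does this in PHP; the map and names here are illustrative.
const elementCss = {
  table:  'table{border-collapse:collapse}td{padding:.3em}',
  figure: 'figure{margin:1em 0}figcaption{font-size:.9em}',
};

function cssForArticle(articleHtml) {
  let css = '';
  for (const [element, rules] of Object.entries(elementCss)) {
    // Crude presence test for illustration: look for the opening tag.
    if (new RegExp(`<${element}[\\s>]`).test(articleHtml)) {
      css += rules;              // include only what the article uses
    }
  }
  return css;
}

console.log(cssForArticle('<p>Hi</p><table><tr><td>1</td></tr></table>'));
// table{border-collapse:collapse}td{padding:.3em}
```

An article with no tables or figures gets none of those rules at all, so the inline block stays as small as the content allows.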

However, there is a tradeoff between granularity on one hand, and processing time and complexity on the other. For example, the CSS for tables could be tailored down to how many columns are used on a page, but that is unlikely to provide significant enough time benefits.

Catering for individual elements is more involved. Each element needs a unique id attribute, but that requires coordination between the CSS or JavaScript processing and the XSLT processing done after them, as some elements may be disabled. The iteration routines processing the CSS or JavaScript and those in the XSLT must apply exactly the same filtering, so that the same elements in each pass get the same ids, ensuring they only receive the CSS or JavaScript meant for them. This is done by using PHP functions in the XSLT.
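The coordination requirement can be sketched as a single filtering function that both passes share: if each pass filters out disabled elements by the same rule and numbers the survivors in order, the nth surviving element gets the same generated id in each. The data shapes and id format here are illustrative, not the product's.

```javascript
// Sketch of keeping two processing passes in step: both filter
// disabled elements identically, so surviving elements get matching
// generated ids in each pass. Shapes and id format are illustrative.
function idsForEnabled(elements) {
  return elements
    .filter(e => !e.disabled)              // identical filter in both passes
    .map((e, i) => ({ ...e, id: `el-${i}` }));
}

const article = [
  { name: 'video', disabled: false },
  { name: 'audio', disabled: true },       // skipped by both passes
  { name: 'video', disabled: false },
];

// Both the CSS/JS generator and the XSLT pipeline would derive
// the same ids: el-0 for the first video, el-1 for the second.
console.log(idsForEnabled(article).map(e => `${e.name}#${e.id}`).join(','));
// video#el-0,video#el-1
```

If the two passes ever filtered differently, the ids would drift apart and elements would receive CSS or JavaScript meant for their neighbours, which is why sharing the logic via PHP functions callable from the XSLT matters.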

Technically, it could all have been done in the XSLT processing by conditionally incorporating the required CSS and JavaScript statements, but those statements would then have been distributed all throughout the XSL file. A CSS rule can be used by many elements, but those elements may be processed by different XSLT templates, making it a many-to-many situation requiring many more XSLT templates and code. Generating the hash codes would also have been more complicated, and everything less flexible to implement. It is about choosing the least complicated places to do processing. Horses for courses!

See Resource usage for how all this affects what is loaded with a page.

Smallsite Design © Patanjali Sokaris