- 1. Software is a poor organizing principle for digital production.
“What program do you use?” is a question I often get about the slides I use to present my work. I have concluded that the proper answer to the question is to counter-suggest the asking of a different question, “What principle do you use?”
John Maeda, The Laws of Simplicity
It is alarming when software, commercial or otherwise, comes to signify entire digital genres. Compare the number of results in a search engine for “PowerPoint best practices” versus “slideshow best practices.” The results suggest that vendor lock-in has as much of a grip on how people talk about production as it does on what actually gets produced.
Lo-fi systems like S5 and Reveal.js are well suited to the rhetorical situation of the slideshow, which shares with all digital productions a defining characteristic: uncertainty. Slideshows are commonly projected on unfamiliar computers (often, it seems, with dubious maintenance records) that a speaker might have access to only shortly before speaking. Will that computer have PowerPoint installed? The right version of PowerPoint, at that? If not, will the logged-in user have sufficient privileges and a network connection to download the PowerPoint viewer? If all else fails, will a reasonably competent IT person be present to step in and help?
Such problems, rooted in the inflexible digital materiality of the PowerPoint file itself, are easily avoided by lo-fi alternatives like S5: Even if the computer runs an outmoded browser (and what computer doesn’t have a browser installed?), S5, Reveal.js, and other well constructed lo-fi slideshows will operate more or less as planned while remaining editable in any text editor available. Speakers can even keep their slideshow and a portable version of Firefox on a USB drive, should Internet access be sketchy or fail outright.
As that simple example shows, looking beyond the apparent inevitability of software like PowerPoint brings the aims of the digital genre itself into focus. It invites a more flexible, rhetorical approach to production than a focus on the features and limitations of a given piece of software allows.
In the classroom, software should therefore not be selected for its high-end features or the size of its installed user base in corporate settings. Instructors should resist the assumption that teaching only the most commonly used word processor or page-design software better prepares students for the workforce; such interfaces are wildly unstable, and they intrude upon thinking deeply about production problems. Those who teach should instead lead students in working through approaches and technologies that foreground the rhetorical situation of digital production, especially the uncertainty that software like PowerPoint attempts to paper over. Rhetorically focused instruction establishes familiarity with the affordances and constraints of open standards and formats, and it admits the many uncertain and unknowable factors that determine how a digital artifact will be accessed and displayed. Inspiring examples like Reveal.js demonstrate how rewarding and liberating it is to learn to command open technologies through written language.
- 2. Expression should not be trapped by production technologies.
Every platform tells you that it’s the best, that it is worthy of your time and attention. But there’s always another platform.
Karen McGrane, Content Strategy for Mobile
Too many software programs create roach motels for content and information: The data checks in via File > Import, or a file-upload dialogue box on a web application, but it never checks out. Such digital artifacts—the PowerPoint, the PDF, the word-processor document—are only marginal improvements over the entrapped quality of analog, print information. In some ways, such as their non-negotiable dependence on a specific piece of software to view the artifact, these formats are actually steps backward from the comparatively open access that books and other printed matter provide.
The author-privileged focus of closed, roach-motel formats and WYSIWYG software is explicit in the latter's acronym: What YOU See is What YOU Get. As though YOU, the author, were the only one who mattered in the digital rhetorical situation. If it looks good for me in Dreamweaver or my desktop browser, so the logic goes, it must look good everywhere for everyone. At a time when screens range between postage-stamp-sized wearables and 88-inch ultra-high-definition televisions, it is lunacy to assume that what the creator of a work sees is what everyone, or really anyone, sees. The tireless pursuit of a 1:1 match between what appears on screen for an author and what's received, eventually, by a reader is a pernicious artifact of print culture deeply embedded into the interfaces of even early page-design software.
People creating digital work for others should be far more concerned about what the audience gets than what it sees. What audiences should get is flexible, open formats; in writing, that means the luxury of well-crafted source code that the reader’s own device will render to the greatest extent of its capability. The Web and even the Creative Commons are efforts steeped in the promise of openness. But a Creative Commons (CC) license that allows for derivative works of, say, a Web-available PDF is an oxymoron at best: just try to extract an archive-quality image from a PDF file, or to listen for coherence as an audio screen-reader meanders unpredictably through a multi-column document. In those cases, the CC license emphasizes gestures of openness over careful preparation of digital artifacts with a genuine capacity to support derivative works, or even basic device- and ability-neutral access.
To be genuinely friendly to derivative works, a digital project needs to be maximally flexible (cut and paste does not count). A version-control repository containing the lo-fi elements of plain-text and single-media files, and their history, is the most generous expression of flexibility. It recognizes that an unknowable group of users and their devices should one day be able to rework the content of its different media elements, and that there may be other platforms and production approaches in the future. The digital creator’s responsibility is to reference and orchestrate elements that can be accessed in combined or piecemeal fashion: only then is a CC derivative-works license viable, or even honest.
Any given digital artifact needs to be constructed not as a final resting place for an idea or some information, but as a pause in a stream of further, unfettered access and revision. A web page listing an organization’s members’ names and email addresses, for example, can be made more open through the use of microformats. Rather than cutting and pasting the contents of the page, or returning each time the page’s information is needed, a user can detect the h-card microformat with a microformats parser and import some or all of the membership’s contact information directly into her own email address book. Should electronic address books become microformat-aware, the address book could query the URL containing the contact information and update its entries automatically.
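A membership page marked up with h-card might look like the following sketch. The class names `h-card`, `p-name`, and `u-email` come from the microformats2 vocabulary; the names and addresses are invented:

```html
<!-- Each list item is an independent h-card that a microformats
     parser can extract as a structured contact record -->
<ul>
  <li class="h-card">
    <span class="p-name">Ada Example</span>
    <a class="u-email" href="mailto:ada@example.org">ada@example.org</a>
  </li>
  <li class="h-card">
    <span class="p-name">Grace Example</span>
    <a class="u-email" href="mailto:grace@example.org">grace@example.org</a>
  </li>
</ul>
```

The markup remains an ordinary, readable list for human visitors; the added class names cost nothing for devices that ignore them.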
Digital works should long outlast the software that played a role in their creation. Insisting on open standards and formats, not software packages, from the moment of authorship to the moment of reader access is the only way to make that happen. People creating digital work should value the command of lo-fi technologies at the code level: not in service to machines, but in kindness to other human beings whose specific technology access and physical ability are ultimately unknowable.
But few people doing lo-fi production will need to consult the specifications directly in the normal course of production. Community-maintained documentation, such as the Mozilla Developer Network and WebPlatform.org, is better organized and presents essential information relevant to many production problems.
- 3. Value research and learning over intuition and reflex.
When one steps back from the marketplace, things can be seen in a different light. While time passes on the surface, we may dive to a calmer, more fundamental place. There, the urgency of commerce is swept away by the rapture of the deep.... Form, structure, ideas, and materials become the object of study.
Brenda Laurel, Design Research: Methods and Perspectives
Expertise is the price to be paid for intuition and reflex, the two central benefits of well-designed GUI-driven production software. Together, intuition and reflex make for easy software that’s fun to use. There is no question about that. And when it comes to apps for personal use, from email and messaging to social networking and gaming, software absolutely should be intuitive, providing the kinds of interfaces and visual cues that make for reflexive, unstudied use by people from more than a handful of particular cultural and socioeconomic backgrounds.
The problem is that the market for GUI-driven software has conditioned people to expect the same ease everywhere. To be sure, there should be no need to write source code just to check email or to book a restaurant reservation online. But that does not mean that no occasions exist when writing source code is absolutely necessary. A lo-fi approach rejects intuition and reflex in exchange for the uncomfortable uncertainty and time-consuming struggles of research and learning. Intuition and reflex are only for today, for the person making something. What is being made, and for whom, are always different problems, project to project. Research-driven lo-fi production is not just about investigating the how of production that visual interfaces embed and make intuitive. Lo-fi production directly addresses essential audience-driven concerns of digital creation that are also the most stable and sustainable: the what and the why, under the human and technological constraints of for whom.
Lo-fi methods open access to the languages and methods of production obscured by and embedded in visual interfaces. Production approaches anchored to open, standardized languages have a longer shelf life than those embedded in GUI-driven software. The essential properties of HTML 4.01 in 1997 are identical to those of HTML5 in 2016, and there is no penalty or accessibility cost associated with writing HTML 4.01 today. The same cannot be said for Microsoft Office 97 and its current version: As of December 2015, there were over five million Google results for how to open a Word 97 document. Word 97 knowledge is as defunct as the objects it produced, which more than a handful of people have struggled to access.
Although languages, like software, are subject to change in future releases, languages retain their essential character version to version. So too do the essential text-based interfaces of command-line applications on Unix-like operating systems: `cd; ls` will always change to your home directory and then list its contents. The markup languages SGML, HTML, and XML look and behave very similarly, for another example, despite the fact that SGML was developed in the 1960s and standardized in 1986, and XML in 1998. To learn any one of those languages is to have learned the others.
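The family resemblance is easy to see on the page. Below, the same membership information appears in an invented XML vocabulary and in HTML’s standardized one; both follow the same angle-bracket grammar of nested, paired elements (the XML element names are made up for illustration):

```xml
<!-- An invented XML vocabulary -->
<members>
  <member>
    <name>Ada Example</name>
  </member>
</members>

<!-- The same information in HTML's standardized vocabulary -->
<ul>
  <li>Ada Example</li>
</ul>
```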
Or more accurately, to learn any one markup language is to learn about the general idea of markup languages. It is foolish and certainly difficult to confidently write more than a few lines of HTML without referring to a solid reference, such as the HTML element reference maintained by the Mozilla Developer Network. Consulting and researching an element reference does more than explain what to type: It opens up ways of thinking about individual elements and their histories as well as their ongoing development. Research transforms production, making it as much an object worthy of study as the content it’s meant to convey.
Learning builds on research, but the deeper learning of greatest value requires stability. The stability of computer languages is due, in part, to common ancestors. For example, there are few scripting or programming languages that are not at least influenced by C. Learning one language on a family tree is inherently preparation to learn others. Even languages that are essentially unrelated (say, CSS and PHP, or HTML and Ruby) share much of the same meta vocabulary and concepts: style declarations in CSS are terminated with a semicolon, as are statements in PHP; nested tags in HTML resemble nested blocks in Ruby. Prepared with that sort of vocabulary, people engaged in lo-fi production can develop mental models for how languages operate in conveying a particular idea in service to diverse audiences. They can leverage exacting Google searches to research and solve a wide range of production problems. They can, with time and patience, achieve the highest levels of thinking valued in the humanities and other academic disciplines: theory, reached via abstraction and contemplation based on studied, deliberate experience.
- 4. Design first for the most constrained users and devices.
Use progressive enhancement so people can access your site’s content even on a device that doesn’t support certain features. Optimize so it downloads fast. Insert media query breakpoints where it’s appropriate for the content, rather than based on widths of common devices.
Anna Debenham, “Testing Websites in Game Console Browsers”
There is no better way to lose the good will of audience members than to bombard them with a series of messages demanding the installation or upgrade of software and plugins or, worse, to announce that their equipment (and, perhaps by extension, financial status or physical ability) is wholly inadequate and beyond toleration. Worse still is no message or warning at all: just a blank screen or hopelessly malfunctioning digital artifact.
A poor technological choice that denies access to anyone, for any reason, is ultimately a rhetorical problem—particularly when there are lo-fi technologies, like web standards, that address issues of access by design. Lo-fi production approaches afford an opportunity to raise our expectations of one another and to research and assume responsibility for all of the rhetorical concerns that make up the digital medium—not just those that are easy, obvious, or convenient.
Lo-fi production technologies provide a foundation for delivering artifacts that are editable everywhere, and accessible everywhere, too. But they still require a thoughtful approach: designing first for the most constrained users and devices. Without exception. Accessibility is not some drudgery to be filled in only after the rest of the work has been done.
People with the most sophisticated whiz-bang production knowledge, or the most expensive GUI-driven software, are also typically privileged to enjoy the fastest computers, the most recent generation of smartphones, the highest-resolution displays, the speediest network connections, and the most generous mobile data plans. But that is not the way most of the world is equipped. In acts of production, it’s better to assume that none of the rest of the world is equipped that way.
Make a habit of producing a single artifact across as many different computers and devices as you can get access to. Nothing will make you rethink your production approach more than 30 minutes on a dilapidated hotel-lobby computer used mostly for printing boarding passes. Throw a different operating system into the mix by running Linux off of an external hard drive on your primary computer. Make sure it doesn't have your usual typefaces or software, then get to work. Make every word and every line of source code count. Make every byte of a media file work hard to justify the time and resources necessary to download it. And if it cannot, get rid of it.
Once you have a world-accessible draft, test it everywhere: the public library, the mobile-phone store, the big-box retailer’s electronics department. And not on the really expensive stuff, either. Choose the cheapest laptop loaded with the most awful bloatware. The mobile phone with the smallest, ugliest little display. Disable, if you can, the LTE internet connection to see the 2G world of the people who’ve already burned through their pitiful three-gigabyte LTE data allotment for the month, perhaps thanks to a monstrous PDF file someone sent as an email attachment without thinking for a moment what consequences that might have. Get a real, lived sense of just what it is that other people might see when they access the thing that you’ve created.
It is from that solid baseline that additional features and functionality can be added, in an unobtrusive way that benefits those who are able and can afford to experience them, without penalizing those who cannot. Readers of accessible, lo-fi artifacts will appreciate not being told what they must do (even if they are left blissfully, mercifully ignorant of the enhanced coolness they may be missing out on); and people producing well-researched digital work can develop content and ideas with far greater confidence in ethical audience access than WYSIWYG software will ever provide.
- 5. If a hi-fi element seems necessary, keep researching until you conclude that it isn’t.
We do not have an interoperable Web. What we have is a glut of proprietary, closed, and protected stuff. While it’s sophisticated and interesting sometimes, it goes against the heart of what we came here to build in the first place: an accessible, interoperable Web for all.
Molly Holzschlag, “Web Standards 2008: Three Circles of Hell”
It used to be necessary to employ Flash to handle audio and video or to present web typography beyond commonly installed system fonts. But that’s no longer the case. HTML5’s `<audio>` and `<video>` elements are now widely supported, as is the CSS `@font-face` rule for loading custom typefaces.
Of course, that those technologies exist is very different from understanding their features and limitations, not to mention exactly how widely supported they really are on current and legacy browsers. (And if you want to lose all hope and an afternoon, read up on the state of codecs and media containers for delivering video files across all browsers. It’s depressing. But the thing that will most help the situation improve is further research and involvement from a larger, more diverse group of people.)
If you’re using a hi-fi piece of production software that embeds videos in HTML, it may do nothing more than ask for the location of your video file. It won’t necessarily alert you to issues users might encounter, or provide fallbacks for users with older or less capable browsers. And if the software does provide a fallback, it might not be the kind you want to present: it might be just another error message and a notice to upgrade.
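By contrast, a hand-written `<video>` element makes the fallback chain explicit and keeps it under the author’s control (the file names and dimensions here are illustrative):

```html
<!-- Browsers try each <source> in order until they find one they
     can play; browsers too old to understand <video> display the
     markup inside the element instead -->
<video controls width="640" height="360">
  <source src="talk.webm" type="video/webm">
  <source src="talk.mp4" type="video/mp4">
  <p>
    This browser cannot play HTML5 video.
    <a href="talk.mp4">Download the video file</a> instead.
  </p>
</video>
```

The final `<p>` is the author’s own fallback: an explanation and a download link, rather than a generic demand to upgrade.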
Those kinds of concerns illustrate why lo-fi production is so dependent on research, and why GUI-driven software that promises to deliver one-click solutions to those kinds of problems should be treated with suspicion.
It doesn’t take much research to find hi-fi production technologies. They’re well marketed and have plenty of brand-name recognition. They come pre-installed, often in broken or incomplete form, on consumer PCs, and are likely on many of the machines in the computer labs at schools and universities around the world. They’re also on the computers found in most office cubicles, which more than any other scene of computing seems to be the primary inspiration for both campus and personal computing.
Ask someone why they chose a particular technology for a project, and you will often find one little feature driving the decision. It’s astounding, for example, to discover that people choose to set up WordPress to run a small website simply because they want a way to repeat the navigation across the four or five pages that make up the site. For that one feature, they pay the tax of securing a database connection and applying software updates for the life of the project, lest the infamous pharma hack or one of its many variants compromise the site. Had such a small site been built with basic HTML, or with a static site generator like Jekyll or Wintersmith, no updates beyond those routine to the web server itself would likely be needed.
On its face, something like WordPress looks lo-fi. WordPress is all open source, all built on a simple setup of lo-fi technologies: the LAMP stack, or Linux, Apache, MySQL, and PHP. But it’s actually the MySQL database that invites a closer look and further research. A database might be lo-fi on its surface, but a database is best employed only when one, and usually both, of two conditions hold. First, there are far more records than can reasonably be handled by flat files (that is, a database record per page of a website, rather than an HTML file per page). Second, database-like things must routinely be done to those records: sorting, counting, joining, and so on, in the context of more read–write operations than flat files can handle. A five-page website that’s infrequently updated does not fit that bill.
The kind of research required for lo-fi production is always aimed at a particular problem. Maybe it’s how to handle templating in a lo-fi way: If a solution includes a database or an oversized code library, more research is needed. Or maybe, as in the earlier CSS `@font-face` example, it’s the problem of loading a custom typeface onto a web page. Which opens up questions like Why that typeface? And then How to load the custom face? And that in turn should open up an investigation into the consequences, both in terms of legality (font licensing is a particularly thorny issue) and user experience. Typefaces eat up bandwidth like any other media asset. Is it worth the potential expense, on metered connections, or the wait to load the typeface? Especially when adequate, if not perfect, typefaces may be readily available on the reader’s device? Then there are other considerations: Certain typefaces, particularly icon fonts, map letters and numbers to particular icons for ease of use, which may result in weird accessibility issues for users of screen-reading software. On certain displays, typefaces that have not been manually hinted may look just terrible, undercutting the very aesthetic that motivated loading a custom typeface to begin with.
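As a minimal sketch, an `@font-face` declaration can pair the custom face with adequate fallbacks already on the reader’s device; the family name and file URLs below are invented for illustration:

```css
/* Load a custom face; readers whose devices cannot or should not
   download it fall back to Georgia or a generic serif */
@font-face {
  font-family: "Example Serif";
  src: url("fonts/example-serif.woff2") format("woff2"),
       url("fonts/example-serif.woff") format("woff");
}

body {
  font-family: "Example Serif", Georgia, serif;
}
```

The fallback stack is the crucial part: it is what keeps the page readable for readers who never receive, or never wait for, the custom file.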
In lo-fi production, every single feature and consequence is a potential avenue for research. Nothing, not even something as low-level as a typeface, should be mindlessly dropped in and glossed over. And particularly when only a hi-fi solution seems viable, there is always the need to push a little harder on the actual production problem, and research accordingly.
- 6. Version control. Always. Everywhere. For everything.
Having the entire history of your project available to you is the key benefit to any version control system.
Travis Swicegood, Pragmatic Version Control Using Git
Next to the plain-text editor, there is no more important piece of software in a lo-fi stack than a version control system. It is the piece that makes experimentation possible, reduces the friction of collaboration across time, space, and platforms, and makes learning, along with the sorely lacking practice of revision, central to digital production.
With minor variations, version control systems (VCSs) organize projects into repositories. A repository is both the files that make up a project and their history. In some VCSs, that history is limited to a certain number of the most recent changes; in others, the repository’s history goes back to the very beginning of the project.
Git is probably the most widely known VCS, thanks in no small part to GitHub, a code-hosting site based on Git that is in no way required for using Git itself. But there are many other version control systems available. The best of them share with Git one primary feature: they are fully distributed. Any one copy of the repository is independent of any other copy. That means work can go on uninterrupted even if you’re without an Internet connection, and it frees you to work however you please without having to make all of your work public. But when work is ready to be shared publicly or with a team of collaborators, the VCS steps in to assist in sharing that work, rather than inviting the clumsy intrusion of email attachments or generic cloud-storage services.
Ad hoc version control through file naming (index.html, index-old.html, index-old-02.html, and so on) quickly falls apart when files need to reference one another through URLs or load or include statements. A good version control system takes that burden away from the file system, while having no problem at all with recording a single change across multiple files.
But version control isn’t just for recording changes. Many version control systems act as development platforms, not only recording changes but acting on them. Git, for example, includes the ability to run scripts before and after certain actions. Pushing changes to a remote server can trigger a script that moves the updated files into place on a world-viewable web server. Rather than messing around with error-prone, bandwidth-hungry FTP software, a simple `git push` from the command line is all it takes to make the latest version of the site world-available. Projects like Capistrano add their own advanced functionality on top of Git to handle more complex development stacks, which might include databases and other services that require configuration, maintenance, and restarts as part of deploying a project to a live web server.
Version control is also a huge asset when learning new languages and frameworks. Habitually creating repositories for working through examples in books and tutorials makes it much easier to spot changes that might not be directly mentioned by the author. The use of branches also supports exploratory changes that deviate from the book or tutorial’s advice. And that’s often where deeper learning happens.
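As a sketch, setting up a throwaway repository for tutorial work takes only a few commands. The directory, branch, and identity values below are invented, and the local `git config` lines matter only on machines without a global Git identity:

```shell
# Create a sandbox repository for working through a tutorial
mkdir tutorial-sandbox && cd tutorial-sandbox
git init -q
git config user.name  "Tutorial Reader"      # only needed without a global identity
git config user.email "reader@example.org"

# Commit the example exactly as the book presents it
echo '<p>Hello</p>' > index.html
git add index.html
git commit -q -m 'Baseline: example as written in the tutorial'

# Note the baseline commit, then branch off to experiment
# without disturbing the working version
base=$(git rev-parse HEAD)
git checkout -q -b experiment
echo '<p>Hello, world</p>' > index.html
git commit -q -am 'Experiment: deviate from the tutorial'

# Compare the experiment against the baseline at any time
git diff "$base"
```

Because the baseline commit is recorded, the diff against it reveals every deviation from the tutorial, even changes too small to notice by eye.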
For those who teach, version control represents an essential, missing part of digital pedagogy. What matters in student work is not what the project is at any given moment from a first draft to a final project submission. What matters is what it was, and what it next became. In between those two points in time is where learning should take place. A student coming for help with a broken project that used to be working can, with the assistance of version control, trace the exact moment in time it ceased to function. The instructor in turn learns of a key piece of teaching that might have failed, or an object lesson to teach from in the future.