This post was provoked by New Aesthetics, New Anxieties, a book which was itself provoked by the New Aesthetic movement.
I will start with a few comments on the book (which I highly recommend BTW).
One point the book clarified for me, even though the New Aesthetic isn't explicitly mentioned there, is that the "algorithmic approach"/"what the machine sees" criterion of NA (which has always seemed off base), as used in practice, highlights what the algorithms have selected for human consumption/human review rather than what the algorithms "see" themselves.
You won’t be presented with the features and feature extractors that comprise the algorithm, only with the sections of the images (or particular segments of the video stream) which the algorithms have selected as being potentially interesting for human review.
As you would guess, I don't give much credence to the idea that

The New Aesthetic, in this case, can supposedly "help us imagine the inner lives of our digital objects", picking up on the "pidgin language" that takes place "between their inaccessible inner lives and ours" (Borenstein, G. (2012) 'What's it like to be a Thing?', The Creators Project)
I can't quite imagine where this comes from, since the images appearing on the new-aesthetic tumblr consist primarily of "human consumable images": not arrays of numbers (as numbers), not feature vectors, not screenshots of logic analyzers, etc. There's a fundamental failure of imagination here. A failure not shared by Object Oriented Ontology (OOO), even though at first glance OOO shares a similar goal. Whatever its flaws, OOO does try to push down into the (difficult to imagine) experience of the object, as a material thing.
Now, on to the thoughts provoked, which revolve around the term: DIGITAL.
“Digital” can be
- a way of coding information/data
- a way of transmitting information/data
- a way of analyzing, operating upon or compressing information/data
- a source and process of decoding data into a form comprehensible to our senses.
It is clear that digital involves some form of encoding, and most definitions involve discretization. Digitization may involve a loss of information (if, and only if, something interesting is happening at a finer grain than the discretization can capture). Even though many of our current digital technologies can resolve changes far finer grained than those characterized as just noticeable differences (JNDs), the perception of discretization resulting in loss is common, and might be due to the old sampling rates for audio CDs.
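The claim above — that discretization loses information if, and only if, something varies at a finer grain than the grid — can be sketched in a few lines. This is an illustrative toy, not real audio processing; the signals and quantization step are made up:

```python
# A minimal sketch of discretization. Quantizing to a fixed step loses
# information only when the signal has detail finer than the step.

def quantize(samples, step):
    """Round each sample to the nearest multiple of `step`."""
    return [round(s / step) * step for s in samples]

coarse_signal = [0.0, 0.5, 1.0, 1.5]       # already lies on a 0.5 grid
fine_signal   = [0.0, 0.26, 0.51, 0.77]    # detail finer than the grid

print(quantize(coarse_signal, 0.5))  # unchanged: nothing was lost
print(quantize(fine_signal, 0.5))    # rounded: information was lost
```

The coarse signal survives quantization untouched; the fine one does not, which is the "if and only if" in miniature.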
However, once the encoding has taken place, perfect reproduction from there on out is readily achievable (which is the source of RIAA panic).
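The "perfect reproduction" point is easy to demonstrate: a tenth-generation copy of encoded data is bit-for-bit identical to the original, verifiable with a hash. The bytes below are a stand-in for real encoded audio:

```python
# Sketch: once data is digitally encoded, copying loses nothing.
# Ten generations of copies still hash identically to the original.
import hashlib

original = b"a few seconds of encoded audio, say"
copy = original
for _ in range(10):       # ten generations of copying
    copy = bytes(copy)    # each copy is bit-for-bit identical

assert hashlib.sha256(copy).digest() == hashlib.sha256(original).digest()
```

Contrast this with analog duplication, where each generation degrades — which is exactly why lossless digital copying worried the RIAA.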
The book draws a useful distinction between software and code(d objects). Having written a lot of software over the years, I tend to use the terms software and code interchangeably.
On reflection, though, it is useful to distinguish the two: software as an executing program, and code as either a chunk of instructions or the digital (discretized) encoding of a "thing"/"object" which may or may not have an existence apart from its coded form. 1 Either way, it can then be transmitted with complete fidelity, compressed with various compromises on fidelity, or used as the basis for an operation that is only loosely related to the input.
In colloquial use, "digital" primarily denotes newer, more abstract forms. We can see why this happens: discretization and manipulation, followed by (essentially) error-free transmission, give this arbitrarily abstracted form a life seemingly distinct from space and time, which encourages a sense of "not being in the moment".
Digital, then, is the combination of encoding, manipulation, and decoding. The encoding and decoding steps can suffer from discretization/sampling effects (which are arbitrary or technology dependent, but usually not fixed for all time, given the increasing resolution of image sensors and displays).
The encoding allows for faster transmission of the digital objects (at least for resolutions commonly in use), while the manipulation can be arbitrary and governed by interfaces that are less intuitive but allow more control.
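The encode/manipulate/decode cycle described above can be sketched end to end. Text stands in here for the digitized "thing", and uppercasing stands in for the arbitrary manipulation; both are illustrative choices, not anything from the original post:

```python
# A minimal sketch of the digital cycle: encode, manipulate, decode.
text = "digital object"

encoded = text.encode("utf-8")         # encoding: a discrete byte sequence
manipulated = encoded.upper()          # an arbitrary operation on the code
decoded = manipulated.decode("utf-8")  # decoding, on demand, for our senses

print(decoded)  # DIGITAL OBJECT
```

Note that the middle step operates on the coded form, never on the "thing" itself — the decoded result exists only when we ask for it.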
Since the decoding is performed on demand, and tends to be ephemeral, the decoded object does not exist for us without computer mediation. Couple this with the difficulty of knowing where your digitized, coded object is actually stored, and the feeling of ephemerality becomes acute (in general it is very difficult to determine how many copies of a particular piece of data exist, or where they are stored). 2
I think this is part of why "Netflix doesn't count as watching TV": if you "watch TV" in the most conventional sense, you're sitting in front of a TV at the particular time that show is "broadcast". It doesn't matter whether the show is delivered via a digital cable box (hence literally digital, and also suspect in its "broadcast" properties), since you ostensibly don't have much control over the timing (conveniently ignoring things like TiVo).
Similarly, people generally don't think of their car engines as being digital, partly because the engine computers don't have a user interface, partly because they usually don't interfere with "normal operation" (unless you're reviewing cars for Top Gear) and don't go all wonky on you at arbitrary moments (although the Toyota acceleration problems did bring attention to car computers).
Addendum: One interesting corollary is that digitally encoded information which isn't accessible isn't in any way digital in the colloquial sense. E.g., data on an 8-inch floppy written by a PDP-11 running RSX-11D isn't digital (not that it couldn't be made digital, but it likely won't be, given what most people have ready-to-hand).
- For example, an uncompressed image (discrete value(s) for each pixel) could have been captured from an image sensor or generated by an executing program. ↩
- For example, if you recorded a new piece of audio that never existed before and stored it on a cloud service, a number of copies would start to be created as you accessed it from various places, as backup and redundancy algorithms created copies, etc. On the other hand, if you uploaded an existing piece of audio (let's say purchased from Amazon or Apple), it is likely that no extra copies of it would be created in the cloud service, and what you see as "the file" would just be a reference to a pre-existing copy. ↩
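The deduplication described in that footnote can be sketched as a content-addressed store: one physical copy per distinct content, any number of user-visible references to it. This is a hypothetical toy (the paths and data are made up), not any real cloud service's design:

```python
# Hypothetical sketch of cloud-side deduplication: a content-addressed
# store keeps one physical copy per distinct content; "uploading" an
# already-known file just adds a reference to the pre-existing bytes.
import hashlib

store = {}  # sha256 digest -> bytes      (one physical copy per content)
refs = {}   # user-visible path -> digest (many references, no new copies)

def put(path, data):
    digest = hashlib.sha256(data).hexdigest()
    store.setdefault(digest, data)  # stored only if genuinely new content
    refs[path] = digest

put("alice/new_recording.wav", b"never-before-heard audio")
put("bob/purchased_track.mp3", b"popular track")
put("carol/purchased_track.mp3", b"popular track")  # no new physical copy

print(len(refs), len(store))  # 3 references, 2 physical copies
```

This is also why "how many copies of my data exist?" is so hard to answer: the reference count and the physical copy count come apart, in both directions.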