Hello again, welcome back. A little treat to spur on 2012 now, with an animated interactive poem by agency Studio Juice, written by singer Laura Marling and illustrated by artist collective Shynola, entitled The Beast. Taken from the song of the same name, from her latest album A Creature I Don’t Know, it describes the narrator’s affair with a character both alluring and sinister – a haunting tale of forlorn love. Marling is an amazing songwriter and poet, as shown in both her previous work and the verses she has penned for this project. These duet with the expressionistic, scratchy illustrations and the narration, conjuring dreamlike spectres which course through the poem and the reader’s mind.
Projects like this, which bring poetry into digital media and new distribution channels, will hook new audiences used to more than just the written word. Despite my belief that pure text should be enough for interested readers, when done in this harmonious manner it works brilliantly. Kudos!
I focused on how technology can enhance and change our engagement with narratives in a previous post, so I’m going to step back and look at the highly immersive nature of text-based books as a medium.
After recently finishing a book and scanning my shelves for my next literary foray, my eyes settled on a fairly large book. Although initially daunted by its length, knowing that it would take me a fair while to finish even if engrossed, I soon started to relish the idea. I realised I would have a portable, episodic experience that I could dip into for the next few weeks, becoming instantly immersed as I did so – the narrative spurring ever more interest and giving heightened importance to the outcome (as I discovered more about the characters and invested in their stories), and possibly even gaining relevance to external events as I progressed.
Being able to burn through an entire book in one go makes the experience rather like watching a film; reading it in parts is more akin to a TV series, or a video game with a story that is revealed as the player moves ahead. It could be suggested that the latter two allow a greater level of expectation and intrigue to build between narrative points (due to the real-world time elapsed), but all three mediums still dictate visual messages to the audience, albeit being open to multiple interpretations. Books allow the reader to paint their own visuals in their mind, forming structures within, giving characters familiar faces from their own lives, and grasping unique meanings from what is said and done, filtered through their own past and ideologies. In short, they are dictated by readers as well as authors, leading to individual, self-contained experiences which change as they are reread later on in life.
It will be interesting to see how this experience might be preserved as technology constantly moves forward and the standard way of presenting stories evolves beyond text and the spoken word. Might it even be mimicked, through bespoke forms of virtual reality systems, or audiobooks where the choice of narrator is tailored to the listener?
After my previous post speculating on the ways touchscreen devices will change the way readers engage with books and other texts in the future, I recalled an interesting example in the present.
The iPhone and iPad ‘A Visit from the Goon Squad’ e-book app provides an option to re-order the hectic, back-and-forth narrative into chronological order, or even shuffle the chapters at random. However, these options are only available once the book has been read in its original order, meaning Jennifer Egan’s intended meaning won’t be lost.
This reminds me of cheat functions in video games, often unlocked once players complete the main body of the game (commonly known as “story mode”), granting them new ways to play and the ability to revisit past levels. This parallel seems like it could develop in the future – we might see e-books that reward readers for their time, or even for their ways of interpreting the text, perhaps via intelligently recognised digital annotations, which could conceivably be used in an educational context.
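For the technically curious, the unlock-after-first-read mechanic behind such re-ordering is simple to sketch. This is a toy model with hypothetical names, not the Goon Squad app’s actual code:

```python
import random

class EBook:
    """Toy e-book whose chapters can be re-ordered, but only after a
    complete first read in the published order (the 'unlock')."""

    def __init__(self, chapters):
        # chapters: list of (title, in-story year) pairs;
        # the year drives the chronological re-ordering
        self.chapters = list(chapters)
        self.completed_first_read = False

    def finish_first_read(self):
        self.completed_first_read = True

    def _require_unlock(self):
        if not self.completed_first_read:
            raise PermissionError("finish the book in its original order first")

    def chronological(self):
        self._require_unlock()
        return sorted(self.chapters, key=lambda chapter: chapter[1])

    def shuffled(self):
        self._require_unlock()
        reordered = list(self.chapters)
        random.shuffle(reordered)
        return reordered

book = EBook([("A", 2006), ("B", 1979), ("C", 1995)])
book.finish_first_read()
print(book.chronological())  # chapters sorted by their internal chronology
```

The interesting design choice is that the re-orderings are rewards rather than defaults – the gate preserves the author’s intended first reading.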
I suspect that being able to automate our interpretations and responses to literature and other art forms isn’t an entirely good idea, however. I think technology should facilitate and enhance engagement with them, but not instrumentalise the human element – our spontaneous, inspired, and unique reactions to works of art.
Yesterday I watched a video on YouTube of a child attempting to manipulate a magazine as if it were an iPad.
Eh? Bear with me.
As expected, the futile motions and the child’s baffled reactions are pretty funny, but it also made me ponder once again how touchscreen devices and future developments in technology will influence children’s perception of and attitude towards books, but more importantly, the act of reading itself.
Whilst digital content currently co-exists alongside traditional printed media, it’s quite conceivable that in a decade’s time, when it has the potential to overshadow its paper kin (rather than outright replace it), a child might live throughout their early years – before they have the opportunity to venture into the world alone and discover alternatives – rarely, if ever, reading “old” books and magazines.
If children only know books and applications that can employ videos, music, games and reader interactivity in a wide variety of ways, will paper and ink still be fulfilling? Will classic literature need to be remade in new digital dimensions to be valid for the next generation? There will certainly be very interesting and immersive techniques that will enable readers to connect with stories in unique ways, but I fear that older works might be neglected.
However, there’s also the possibility they will turn to printed books, and the contemplative, often passive manner of reading they foster, as an antidote to a constantly active, sometimes overloaded medium. It seems context plays a large part here – how would a reader focus on and engage with a multitude of different media whilst braving a packed rush-hour train journey, with all the physical constraints and stressful stimuli that entails?
I apologise in advance for any work put off due to random YouTube video tangents as a result of this post.
“3AM is the dark heart of the city, when the carefully repressed anxieties, aspirations and dreams of its emotionally parched inhabitants can no longer be contained”
Elena, who is with us at the Proboscis studio under the Leonardo Da Vinci scheme, used a very eloquent excerpt from Night Haunts: A Journey Through the London Night by Sukhdev Sandhu, in her post accompanying the visual essay she is currently composing, Mapping The Streets. The book runs parallel with some of the themes we’ve been exploring for City As Material, particularly the notion of an outsider’s forays into a hidden landscape – in this case, ironically, a world normally veiled by the light of day.
I immediately set out to buy it, but soon discovered it was available in full online, as it was originally commissioned by Artangel Interaction as a web project, with chapters, or “episodes”, released monthly. The website uses ever-shifting, distorted pixels and visuals as a backdrop and ambient sound paired with the text, both emanating an eerie nocturnal resonance, as the reader delves deeper into this insightful and poetic work.
Before the event, we were asked to devise walking routes to create individual cubes, each side featuring a QR code linking to a particular geographic spot on an online mapping service (Google Maps, OpenStreetMap, etc) – a start point, four waypoints, and a destination. Using an API Gordon coded, and the bookleteer API, entering the six location URLs automatically generated a StoryCube. My route, based around memorials and tributes in different forms, is available here.
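The face-to-URL mapping at the heart of this is simple enough to sketch in a few lines of Python. Hypothetical names and example URLs throughout – this is not the actual bookleteer or StoryCube code; a real implementation would also render each URL as a QR image (e.g. with the third-party `qrcode` library) before laying the faces out on the cube net:

```python
# The six faces of a StoryCube route, in walking order.
CUBE_FACES = ["start", "waypoint 1", "waypoint 2",
              "waypoint 3", "waypoint 4", "destination"]

def build_storycube(urls):
    """Map exactly six map-link URLs onto the six faces of a cube."""
    if len(urls) != 6:
        raise ValueError("a route needs a start, four waypoints and a destination")
    return dict(zip(CUBE_FACES, urls))

# Example route with placeholder map links.
route = build_storycube(
    [f"https://maps.example.com/?q=point{i}" for i in range(6)]
)
print(route["start"])
```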
We met just after 2.00pm, and Simon and Gordon gave a summary of the project and a recap of the development process so far. We talked about the current limitations of Google Maps when creating the cubes, particularly the inability to share manually added, user-designated routes with other people (they require two waypoints to locate the route), and had some interesting ideas regarding the next stage of the project. What about a mix of map links, audio files and videos – an interactive tour, scanning QR codes near points of interest to access audio descriptions and related videos? Or a quasi treasure hunt, requiring players to obtain QR code stickers for the cubes (discouraging them from scanning all the codes at once – cheating!) from certain spots to get the next destination?
We decided to use Simon’s cube for our first trial, his route focusing on locations acted on by “centrifugal and centripetal” forces – each point “acting as an attractor of sorts, which in some instances cannot be reached, yet which pulls the walker towards it”.
After departing from the studio, Giles scanned the first code to get our start point – the ramp under West Smithfield. Once there, we scanned the next spot, the middle of Charterhouse Square. All was going smoothly. However, after reaching the third spot, the ominous brick circle in the Golden Lane estate (the “Unplace” we featured in the City As Material: Streetscapes event), we were unable to load the next, despite trying with numerous phones – bad signal, or bad omen? Despite this, we were afforded time to ponder its unusual acoustic properties once again, and plot a cunning plan to subvert this synchronised failing of technologies… cheat!
Simon told us his next waypoint, the Curve Gallery in the Barbican Centre, which we arrived at via its winding walkways (after ceremonially scanning the code we missed). Another hurdle faced us here, as to gain entry to the exhibition, we were expected to don quarantine-esque shoe covers, and couldn’t enter as a group. Bah. The penultimate spot, another circle, on Monkwell Street, beckoned.
From there we were awarded our destination, the Museum of London, or more specifically, outside its entrance. Here, we asked if we were able to get into the recently renovated green space below, and were told “perhaps, but you might not be able to get back up!”. Rather than risk it, we retired to the pub right next door, content in a mostly successful first run of a StoryCube Cairn route.
We’re brimming with ideas for what might be possible next. Until then, view all our routes, and download the cubes yourself here.
On the fabulous The Literary Platform I came across this video Ideo have produced showing three concepts they have created around the future of the book. I love Ideo; they consistently come up with inventive and imaginative technological developments that take account of social factors and personal practices. However, I have to say, I am disappointed with their ideas for the future of the book, and I’m surprised that they appear to have overlooked so many of the interesting questions around books as objects, the challenges of e-Readers, and the augmented reading experience that are currently being considered in so much detail by others.
All three of the concept designs (called Newton, Coupland and Alice) are shown as prototypes for the iPad. This suggests to me that the idea that a book might be a souvenir of an experience (e.g. James Bridle) or an object for sharing (e.g. Bookcrossing) does not appear to have been considered in the design process. In my exploration of augmented reading over the past few months I have come to think of a book as the amalgamation of object, content, design, distribution method, author and reader. It might be getting a little pedantic but I would say that what Ideo have produced are prototypes for the Future of Reading rather than the Future of the Book.
So what will this future reading experience be? We are offered three versions.
Newton might best be described as an application for managing material already published on the Internet. It allows you to collate, compare and contrast different sources and materials around a particular topic.
Coupland is a form of book-related user-generated content and social network. Reading lists and recommendations can be compiled and shared allowing everyone to see and comment on the most popular books within a professional network. Individuals can contribute book reviews and content can be shared between different organisations and networks.
Alice combines hypertext, hypermedia and location-based services to create an augmented, reader-created narrative path through a story. Primarily presented as text-based, Alice suggests that readers’ actions (in the example, tilting the iPad in a particular direction) might open up new branches to the story. Other actions might include being in a specific location, where a particular set of GPS co-ordinates would trigger more of the story.
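The location-triggered part of the Alice concept boils down to a proximity check against each branch’s trigger point. A minimal sketch, with made-up branch names and co-ordinates (not Ideo’s implementation), using the standard haversine great-circle distance:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * asin(sqrt(a))  # Earth radius ~6371 km

def unlocked_branches(reader_pos, branches, radius_m=50):
    """Return the story branches whose trigger point the reader is within."""
    return [name for name, (lat, lon) in branches.items()
            if haversine_m(reader_pos[0], reader_pos[1], lat, lon) <= radius_m]

# Hypothetical trigger points for two story branches.
branches = {"rooftop scene": (51.5194, -0.0961),
            "river scene": (51.5081, -0.0759)}
print(unlocked_branches((51.5195, -0.0960), branches))
```

A real app would poll the device’s GPS and re-run this check as the reader moves, revealing each branch only once its trigger radius is entered.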
One of the most interesting aspects to me is how these future ‘books’ conceive of authors. While all three concepts require authors for the ‘book’ to be complete, they each have a different model. Newton relies on writers who are producing content elsewhere on the Internet, and Coupland relies on people within an organisation creating content for the ‘book’. Only Alice has bespoke writing and a dedicated author at the heart of the project, which is then augmented by existing content. These approaches to authorship are not new of course, but I find it fascinating that Ideo consider all of them to be examples of ‘books’, and I wonder how these fit with my concept of book-as-object-plus-content-plus-design-plus-distribution-method-plus-reader. I can’t help feeling that the ecology of books is broader and more diverse than these concept designs acknowledge.
Andy demonstrating Tales of Things at Be2Camp Brum 2010; via Meshed Media
Today’s post is another presentation I heard at Be2camp Brum 2010 last week. (It was truly an inspiring and thought-provoking day!) Tales of Things was presented by Andy Hudson-Smith from the Centre for Advanced Spatial Analysis, UCL. Tales of Things explores social memory and asks what happens if we can tag objects in our everyday environment and track these objects – even after we’ve passed them on to someone else.
Entering details of an object into the Tales of Things website allows you to generate a unique QR code for that item, which can be printed out and attached to the object. When the QR code is ‘read’ by a camera, the web page for that object is triggered. Because Be2Camp Brum was loosely focused around the theme of libraries, Andy used tagging books as an example, suggesting that tagged books would be able to use Twitter to keep previous owners up to date with the book’s current location and status.
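The register-and-scan loop behind this is easy to picture in code. A sketch with an invented URL scheme – this is not Tales of Things’ actual API – where registering an object yields a unique ID and a stable web address, which is what the printed QR code would encode:

```python
import uuid

# In-memory stand-in for the object database.
registry = {}

def register_object(name, description):
    """Register an object and return the URL its QR code would encode."""
    object_id = uuid.uuid4().hex
    registry[object_id] = {"name": name, "description": description, "scans": 0}
    return f"https://tales.example.org/objects/{object_id}"

def scan(url):
    """Simulate a camera scan: resolve the URL back to the object's record."""
    object_id = url.rsplit("/", 1)[-1]
    record = registry[object_id]
    record["scans"] += 1  # each scan is a new sighting of the object
    return record

url = register_object("Dog-eared paperback", "Bought in Brighton, 2004")
print(scan(url)["name"])
```

Because the URL stays fixed while the object travels, every later scan lands on the same growing record – which is what makes the tweet-your-previous-owners idea possible.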
The Tales of Things website suggests that:
“The project will offer a new way for people to place more value on their own objects in an increasingly disposable economy. As more importance is placed on the objects that are already parts of people’s lives it is hoped that family or friends may find new uses for old objects and encourage people to think twice before throwing something away.”
Promoting the sharing and exchange of objects in this way is obviously interesting in the context of bookleteer, and I did actually tag a couple of eBooks with QR codes generated by Tales of Things for Pitch Up & Publish 10: Augmented Reading a few weeks back. Perhaps it’s time for me to go back and revisit that and see where it might lead…
If you want to read more about the project see here, or if you just want to get on and tag your stuff then look here…
While bookleteer works to make publishing accessible to everyone regardless of skill, software or money, Pesky People are working to make online reading accessible to everyone. For Pesky People accessibility is about highlighting and campaigning for equal access to the internet for deaf and disabled people.
Alison Smith, the founder of Pesky People, spoke at Be2camp Brum 2010 last week and gave us a sense of the difficulties faced by deaf and disabled people every day as they access the web. For example, very few online videos are subtitled, making them often inaccessible to deaf people. As this was an ‘unconference’ about where the built environment meets Web 2.0, architects didn’t get off the hook either: she pointed out that fire alarm systems that rely purely on sound can easily be missed by deaf people, and illustrated the difficulties that even supposedly accessible toilets raise for disabled people. She also showed this short film imagining equal access for deaf criminals…
I found this a powerful presentation and it certainly made me realise once again how much I take for granted and how easy it is for this to slip into design decisions that unintentionally marginalise deaf and disabled people. And if you’re a web designer and a warm and fuzzy feeling of being good to fellow humans isn’t enough to persuade you that we should work towards accessibility for everyone then Alison pointed out there is also a legal responsibility to make your website accessible…
Design for Library of Birmingham by Mecanoo architects
Be2camp Brum 2010 was loosely themed around libraries. A new building for Birmingham Central Library (where Be2camp Brum 2010 was held) is currently under construction and due to open in 2013 and the first three presentations at Be2camp Brum were concerned with how digital technologies are being integrated into the planning and construction process as well as into the library services and building itself.
Brian Gambles speaking at Be2camp Brum 2010 via Meshed Media
Brian Gambles, head of BCC Library Services, outlined the broad approach being taken to the use of digital technologies, emphasising that they are designing for maximum flexibility and adaptability and aiming not to be platform-specific, as they assume that digital infrastructures and technologies will change over the lifetime of the building. Brian emphasised that the aim is to redefine and reimagine the relationship between library services, the library building and library users through digital technologies.
Tom Epps then spoke about one of the ways this is taking place. Alongside the construction of the new building, a virtual model of the new Library of Birmingham building is being built in Second Life. This model is to scale, and Tom spoke about how this is providing a better sense of the relationships between different elements of the building than it’s possible to get from architects’ plans or a non-interactive 3-D model. Once the Second Life Library goes live it will also be used for public consultations to gather people’s opinions on the new design via polls and feedback points, and possibly to host events paralleling the physical Library building and services. (And it was so impressive that the whole presentation was done while we were being expertly navigated live around the Second Life model!)
We then heard a little about the role of mobile technologies in re-imagining library services (I’m afraid I didn’t get the speaker’s name) and a description of how library services and activities will be augmented by mobile personal devices and applications.
All in all it was great to hear that the Library are taking such an imaginative approach to the integration of digital technologies and working on platform neutrality and personalised services that open up great library resources – such as their archive of photographs – to city residents and library visitors. I really hope that this emphasis on the experience people have in the library will continue to inform all of their decisions. And I was only slightly disturbed that their Second Life model which professes to show how the library will be doesn’t actually have any people in it yet…