Interactive chat as a new content type


Two weeks ago, after well over six months of incredible effort, Al Jazeera rolled out the public beta of a set of sophisticated tools for composing and publishing an entirely new type of content—a scripted interactive chat.

The goal of the entire endeavour was to create a more engaging and inclusive storytelling experience around the interview format. We worked under the premise that we could achieve that by letting readers directly interact with the characters at the heart of a story. All through one-on-one message exchanges.

As the user experience designer and front-end developer on the project, I had the pleasure of helping design and build this ground-breaking product. Here’s my take on it.

Welcome InterviewJS

InterviewJS puts readers at the heart of a story allowing them to directly engage with the characters involved via a chat-like app. Of course, chats have been around for a while but the possibility to script a conversation, then edit, publish and share it—like you’d normally do with a blog post—is completely new.

Al Jazeera’s InterviewJS is a workflow tool that lets you create and distribute scripted interactive chats the same way WordPress does with plain articles. It’s both an editor and a publishing platform. You can sign in, compose your piece and publish it as you would a blog post.

Access to the platform is entirely free, which enables anyone to start composing scripted chats. Best of all: Al Jazeera is completely opening up the source code, which means you can easily fork the code repository and contribute back with pull requests, or simply set up your own instance of InterviewJS.

The scripted chat

InterviewJS Scripted Chat Example

A scripted chat is an interactive chat based on a real interview transcript or a script designed by the storyteller. End-readers interact with the interviewees directly by making comments and asking questions as if they were leading the conversation when, in fact, they’re following a path set out for them by the creator of the story. Readers’ choices affect only the order in which the content is served.

Such chats allow any kind of content: the reader can request and receive not only text, but also links, images, videos, maps and other embeds. And although this makes it possible to script questionable story scenarios—where, for example, Barack Obama sends you an animated gif—it empowers storytellers to craft their stories as they see fit.
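
As a rough illustration (this is an invented data shape, not InterviewJS’s actual schema), a scripted chat can be thought of as an ordered list of typed bubbles that the reader merely walks through:

```javascript
// Hypothetical storyline: an ordered list of typed bubbles.
// Each item can carry text, a link, an image, a map or another embed.
const storyline = [
  { speaker: "interviewee", type: "text", content: "Ask me anything." },
  { speaker: "reader", type: "question", content: "Where did it happen?" },
  { speaker: "interviewee", type: "map", content: "https://example.com/map-embed" },
  { speaker: "interviewee", type: "image", content: "https://example.com/photo.jpg" }
];

// Readers' choices only affect the order in which items are served,
// so "playing" the story boils down to walking this array.
function nextItem(items, index) {
  return index < items.length - 1 ? items[index + 1] : null;
}
```

The key property is that nothing is generated at read time: every bubble, whatever its media type, was authored up front.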

What it’s best for

InterviewJS works best for stories with a few characters, preferably—though not exclusively—with contrasting views. In fact, one of InterviewJS’s pilot stories is a one-on-one interview with Snowden. Stories where interviewees share multimedia, maps, videos and links are likely to be more engaging for the end-reader.

And although InterviewJS is a tool created by journalists for journalists, that doesn’t mean it won’t work for anyone else. Quite the opposite. I’m interested in seeing the non-journalistic stories people may create with it: I see it being useful in education, as an innovative take on FAQs, as an element of an escape room experience or… you name it!

InterviewJS ecosystem

InterviewJS Packages

Our work on InterviewJS involved creating four different packages, each living its own life in a dedicated environment:

  1. The Story Composer — the only area protected by an authentication provider where story creators can sign in to manage, compose, edit and publish their stories.

  2. The Story Viewer — used to render published stories. It takes the dataset of a story created by the journalist and renders it as a navigable, interactive chat application. Each story has a unique URL where it can be accessed. And—as stories are not publicly listed anywhere—your piece remains secret unless you share it.

  3. The Style-guide running on Catalog — a “living” design documentation and front-end architecture reference. It lists the library of custom-made reusable React components we developed and used to assemble InterviewJS’ UIs.

  4. The public-facing website: https://interviewjs.io

The process

InterviewJS is the fruit of the work of an incredible team of journalists, producers, designers and developers spread across five countries and just as many time zones. We did occasionally meet in person—though not all at once—but most of our collaboration was remote. Aside from the occasional team gatherings in London and Doha, we mainly used Slack and email to communicate, and appear.in for our meetings and remote usability testing sessions. It all worked wonders, with only a few occasional glitches.

InterviewJS Team Meeting

Design

Our design phase went on for a little over two months. We worked off early sketches created internally at Al Jazeera, which I took as a base for subsequent design iterations. They were not “prescriptive”—as Juliana Ruhfus, the coordinator on the project, continually stressed from the early days—but I found them visionary, and they greatly shaped the direction we took with the polished designs.

InterviewJS Early Sketches

From then onwards, I’d use Paper to draw rough sketches and early wireframes, Sketch to design the user interface and InVision to build an interactive prototype. While we’re at it, it’s worth mentioning how incredibly frustrating and counter-productive I found prototyping message exchanges. After a few trials I quickly gave up on prototyping the editorial elements of the product and focused on the core UI.

Altogether, we spent roughly 30 days working on the final designs. The only way I know this is that each time I did revisions, I’d duplicate the previous version of the Sketch file and name it by the date of the edit.

Once I had the final designs ready to be tested, I quickly linked the views together with the Craft plugin for Sketch and pushed everything to InVision. A couple of days later, we already had fellow designers and storytellers playing with the prototype and feeding back invaluable insights.

InterviewJS Invision Prototype

Development

We kicked off development by creating a mono-repository holding all the required packages. Our living style-guide, running on Catalog, was the first to see the light of day. We needed it in order to feed all the other packages with a set of custom-made React & styled-components that we’d later use to assemble the UIs of both apps: the Composer and the Viewer.

We then moved on to building the Composer views and started feeding them with dummy .json data using React-Redux. This may have been the most problematic part for me, as I hadn’t really done much Redux beforehand. Enter Wes Bos and his thorough “Learn Redux” online intro course—after watching the thing a couple of times I was ready to bring the Composer to life.
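
For flavour, here is a dependency-free sketch of the reducer pattern that React-Redux builds on; the action types and store shape are made up for illustration and are not the actual Composer store:

```javascript
// Minimal reducer in the Redux style: a pure function of (state, action).
// Field and action names here are hypothetical.
const initialState = { stories: [] };

function storiesReducer(state = initialState, action) {
  switch (action.type) {
    case "LOAD_DUMMY_DATA":
      // seed the store with static .json fixtures during development
      return { ...state, stories: action.payload };
    case "UPDATE_STORY":
      // merge edits into the matching story, leaving others untouched
      return {
        ...state,
        stories: state.stories.map((s) =>
          s.id === action.payload.id ? { ...s, ...action.payload } : s
        ),
      };
    default:
      return state;
  }
}

// With react-redux, the same reducer would be handed to createStore()
// and exposed to components via <Provider> and connect().
let state = storiesReducer(undefined, {
  type: "LOAD_DUMMY_DATA",
  payload: [{ id: "s1", title: "Draft story" }],
});
state = storiesReducer(state, {
  type: "UPDATE_STORY",
  payload: { id: "s1", title: "Pilot story" },
});
```

The appeal of the pattern is exactly what made dummy data easy to swap in: the views only ever read from the store, so the data source behind it can change without touching a single component.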

What happened next was probably the hardest dev sprint I’ve been subjected to in my entire career. Of my own free will, that is. Being the sole front-end dev on the project, I obviously wanted the thing to shine. And the perfectionist in me was convinced I was building a cathedral. This translated into many sleepless nights and short weekends obsessing over tiny details very few would notice. One of the lessons I’d like to take away from this endeavour is to adhere to the 80/20 rule more in the future 🤔.

Two months in, I handed over the Composer’s front-end to @gridinoc and switched to implementing the Viewer. That was fairly straightforward, with just a couple of exceptions:

  1. We wanted the stories to be playable without needing to log in, yet we needed to save readers’ progress so they could restart conversations from where they left off. We therefore had to rely on localStorage, which has obvious size limitations depending on the device you’re accessing the site from. We chose to create a history array holding the reader’s path through the chat, referencing items from the source storyline array. Although it wasn’t the easiest to debug later on, it spared us the hassle of inline base64 assets quickly filling up localStorage.

  2. When the reader is presented with a binary choice, which interviewee bubble should the Viewer display after a tap on either of the CTAs? What happens if there are several interviewee bubbles in a row? Can the interviewee start a chat, or the user, or both? What happens when readers reach the end of a chat? Although the answers to these questions may seem obvious now, we really had to chew on them for a while and refactor our Chat.js multiple times to get this right.
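
The history-as-indices idea from point 1 and the “several interviewee bubbles in a row” question from point 2 can be sketched together. The item shape and function names below are hypothetical, not the actual Chat.js code:

```javascript
// Hypothetical storyline; heavy media would live here, never in history.
const storyline = [
  { speaker: "interviewee", text: "Hello." },
  { speaker: "interviewee", text: "Ready when you are." },
  { speaker: "reader", text: "What happened first?" },
  { speaker: "interviewee", text: "It started in 2013." },
];

// Play every consecutive interviewee bubble until the next reader
// action (or the end of the chat), recording only indices into the
// source storyline array.
function advance(storyline, history) {
  let i = history.length ? history[history.length - 1] + 1 : 0;
  while (i < storyline.length && storyline[i].speaker === "interviewee") {
    history.push(i);
    i += 1;
  }
  return history;
}

const history = advance(storyline, []);
// Serializing indices keeps the saved state tiny, whatever the media.
const saved = JSON.stringify(history);
```

Because history holds indices rather than copies of the bubbles, an image or map embed of any size costs the same two or three bytes in the saved state.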

Testing

I had the pleasure of conducting just a couple of the many usability testing sessions AJ ran on InterviewJS. In the early days we tested remotely, via appear.in, on an InVision prototype. Once the project reached alpha in late March, we set up a collaborative testing workshop in London. Unfortunately, I had to miss out on the subsequent testing sessions in London and Doha. Even after all these years, I still find such sessions to be the most gratifying and joyful learning experience for the designer in me.

The Story Composer

InterviewJS App Tree

The Story Composer is a fairly complex beast, with quite a few views, independent user flows, a bunch of modals and other contextual items. The figure above illustrates all the incorporated states of the app which, once you’re past the authentication screens, can easily be narrowed down to: the story library, the story creation wizard, the actual story editor and the story publish wizard.

InterviewJS Composer

The editor is where most of the magic happens: storytellers create new interviewees, store interview transcripts, add user actions and interviewees’ responses. It’s also where they get to preview their chats. The central area of the editor serves as a storyline canvas where story creators add interviewees’ speech bubbles (left panel) and end-readers’ actions (right panel).

The Story Viewer

At the core of the Story Viewer is the actual chat with an interviewee. It’s a one-on-one conversation which comes down to: a) the end-reader asking questions, each user action becoming a speech bubble appearing from the right; b) the interviewee replying with text or media bubbles appearing from the left.

Aside from the actual chat experience, InterviewJS gives the author the means to ease readers in and out of the chat with elegant introduction and outro screens—all to guarantee a continuous storytelling experience.

InterviewJS Viewer Intro InterviewJS Viewer Chat InterviewJS Viewer Outro

Each InterviewJS story has its own unique ID, used to generate its public URL and to reference the data saved to the browser’s localStorage. We use localStorage to save the end-reader’s chat history and poll choices. We need the former to allow readers to return to unfinished chats and pick up where they left off, as well as to calculate a score of how much information a single reader consumed. We use the latter to block successive poll submissions.
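
A minimal sketch of that persistence scheme, assuming hypothetical key names and using an in-memory stand-in for the browser’s localStorage so the snippet runs anywhere:

```javascript
// In-memory stand-in for window.localStorage (same getItem/setItem API).
const localStorage = {
  _data: {},
  getItem(k) { return k in this._data ? this._data[k] : null; },
  setItem(k, v) { this._data[k] = String(v); },
};

// Persist the reader's path, namespaced by the story's unique ID.
function saveHistory(storyId, history) {
  localStorage.setItem(`story-${storyId}-history`, JSON.stringify(history));
}

// Restore it so the reader can pick up where they left off.
function loadHistory(storyId) {
  const raw = localStorage.getItem(`story-${storyId}-history`);
  return raw ? JSON.parse(raw) : [];
}

// A poll can only be answered once per reader: the first submit wins.
function submitPoll(storyId, choice) {
  const key = `story-${storyId}-poll`;
  if (localStorage.getItem(key) !== null) return false; // already voted
  localStorage.setItem(key, choice);
  return true;
}

saveHistory("abc123", [0, 1, 4]);
const resumed = loadHistory("abc123");
const first = submitPoll("abc123", "yes");
const second = submitPoll("abc123", "no"); // blocked
```

Namespacing every key by story ID is what lets many stories coexist in the same browser without their saved states colliding.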

The navigation is linear and straightforward: there’s usually only one way to move forward. At any point, readers can go back a step in the flow, all the way to the intro screens. Story elements—such as the title and byline—and information about the platform itself are also available at all times. These details populate the story’s meta tags, which social networks rely on to generate link previews.

The fine details

Having spent an enormous amount of time polishing up InterviewJS designs and tweaking micro-interactions, I thought I’d share a small compilation of the “little big details” we thought about and implemented to make the product POP.

Responsive Viewer

Emojis

Handy shortcuts in the Chat

Dancing bubbles when editing previously added chat nodes

Intro tour to the Composer

Tablet-friendly Composer

Drag & Drop Composer

The possibility to re-order interviewees

Intelligent speech bubble colour coding

Simple opinion poll

The limitations

When we talk about “scripted chats”, we’re already discarding a subset of features one would normally expect from a chat app. There isn’t much sophisticated AI behind InterviewJS, but rather a very simple ruleset that enforces certain conversation scenarios. Which is why end-readers can’t really type in their own messages or respond with selfies.

During our early tests, we found that even simple scripting logic can become complex quite easily. And although we did explore the possibility of scripting simple “explore” loops and nested threads, it soon became evident that storytellers struggle to script more complex storylines. Which is why we settled on the simple branched narratives outlined below.

InterviewJS Script Scenarios II

“Simple branched narratives” is a fancy way of saying that end-readers, when presented with binary choices, can go either one way or the other. As such, unless the script has been cleverly structured, readers may never consume 100% of the content. We’re absolutely cool with that.
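
A toy model of a binary-branch script shows why a single read-through can cover less than 100% of the content (the node shape here is invented for illustration):

```javascript
// Hypothetical branch node: one bubble, then an optional binary choice.
const script = {
  bubble: "intro",
  choice: {
    a: { bubble: "branch A", choice: null },
    b: { bubble: "branch B", choice: null },
  },
};

// Total bubbles the storyteller authored, across all branches.
function countBubbles(node) {
  if (!node) return 0;
  const below = node.choice
    ? countBubbles(node.choice.a) + countBubbles(node.choice.b)
    : 0;
  return 1 + below;
}

// Bubbles one reader sees, given the sequence of choices they pick.
function countOnePath(node, picks) {
  if (!node) return 0;
  if (!node.choice) return 1;
  return 1 + countOnePath(node.choice[picks.shift()], picks);
}

const total = countBubbles(script);
const seen = countOnePath(script, ["a"]);
```

With one binary fork, a reader sees two of the three authored bubbles; every additional fork widens the gap between authored and consumed content.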

The value

The release of InterviewJS is important for several reasons. Here are a few that matter to me the most:

  1. By launching InterviewJS, Al Jazeera enables storytellers to craft an entirely new type of story without having to worry about the technical side of things.
  2. By relying on web technologies AJ allows for scripted chats to be widely available for a complete range of platforms and devices. In fact, InterviewJS stories perform great on phones, tablets and desktops no matter the operating system.
  3. By designing and developing the product in the open we hope to create a community of web technologists willing to contribute as well as inspire and educate young designers and developers working in the field of online storytelling.
  4. By completely open-sourcing the code, AJ enables publishers to integrate the tool into their workflow, get involved and help evolve the product once it’s out of beta.
  5. By releasing it for free AJ wants everyone to start creating their own scripted chats.

Next steps

These are early days for InterviewJS. While we’re testing the product ourselves, we’re also interested in comments, suggestions and bug reports from the community. We have already identified a bunch of issues we’re trying to fix, and enhancements that, we hope, will make their way into the next public release before the product goes out of beta. Our roadmap is open and available on GitHub, where anyone is welcome to join the conversation.

Closing remarks

InterviewJS is special to me in many ways. I enjoyed working with a talented team, I was thrilled to be involved in shaping a groundbreaking product, I found it incredibly exciting to create a new storytelling tool, and I loved that we could deliver such an outstanding piece of software in such a short amount of time—and do so 100% remotely. But most of all, I took great pleasure in working openly on an open-source product. For all that, I’m very grateful to be part of the InterviewJS team.

Where from here?

If you’re interested in learning more about the product, make sure to visit interviewjs.io. I warmly invite you to run through the pilot stories too—they’re great editorial pieces as well as fantastic examples of the full potential of the platform.

First InterviewJS Stories

Share your comments and/or suggestions with the team on Twitter or via email at interviewjs@aljazeera.net. If you’d like to look through the source code, you’ll find it on GitHub. For all things server-side and infrastructure, hit up @gridinoc, while I’ll be happy to address any design and front-end questions at @presentday. Thanks for reading!