Hack Jam - Storytellers United

Details of this event were sent to us through our student email by Peter McKenna. It was promoted as a "BBC Digital Cities Hackathon" with the following description:

We will be hosting a two-day hackathon in the Shed as part of the BBC Digital Cities program. The event runs from 27th to 28th February and offers valuable hands-on experience of object-based media production, including the use of video and HTML5/WebGL media processing JavaScript libraries such as videocontext.js and seriously.js.

Since it was a creative hackathon, I was eager to attend, although the only tickets available were "Expressions of interest": you applied to attend and would receive an email at some point confirming whether you could take part.

Luckily, I was among the roughly 60 people who received an email from EventBrite containing the following sentence:

Storytellers United, BBC R&D, University of York are very happy to announce you have been selected to take part in the Storytellers United Hackjam event from 27th - 28th Feb.

A few days later, we received another email with the schedule, recommendations, and the Slack channel link.

Day One

I arrived at the Shed early at 9 AM and waited for the others to arrive. Philo then welcomed us to Storytellers United and gathered us in a circle to introduce ourselves, with a twist: "If you were an emoji, which emoji would you be?" Obviously, I chose 🧙 (a Wizard, Web Wizard).

After a few talks, we were split into our allocated teams, which had been put together before the event to balance the skills in each one. Unfortunately, a few members did not attend, leaving some teams lacking certain skill sets.

Luckily, my team comprised Marsha Courneya and Donna Wood, both writers who had done some video and editing work before. We seemed pretty balanced and ready to start generating ideas.

Ideas

Initially, we discussed surveillance and how data reigns over us, how dystopian the future could become, and how data is often taken out of context, so that it defines us without really describing us.

Donna gave the example of how she researches a lot of politically right-wing websites for her journalism. To advertisers, though, that browsing history could suggest she is right-wing herself, even though that may not be the case, since the searches lack context.

From there, we had a few rough ideas on how to use this concept to tell a story. We explored ideas involving slot machines, crystal balls, and deities.

Workshops

After a short time to get familiar with our team members and start generating ideas, we were introduced, by the people who build them, to a few tools that could help us produce our ideas.

Two tools of particular interest to me were videocontext.js, a JavaScript library for sequencing videos and applying effects to them in a canvas, and Cutting Room, a visual editing interface built on top of the Video Context library.
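For context, videocontext.js builds a small processing graph that renders into a canvas. A minimal sketch, based on my reading of the documentation (the canvas ID and video path here are placeholders):

var canvas = document.getElementById("canvas");
var videoCtx = new VideoContext(canvas);       // processing graph rendering into the canvas
var videoNode = videoCtx.video("./clip.mp4");  // source node for a video file
videoNode.connect(videoCtx.destination);       // route it straight to the output
videoNode.start(0);                            // play from 0s on the context timeline...
videoNode.stop(4);                             // ...until 4s
videoCtx.play();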

I attended the Cutting Room workshop, presented by Davy Smith, reasoning that I could read the videocontext.js documentation on GitHub later and hopefully pick it up without much difficulty if needed.

After the workshop, we had a small lunch break and then returned to our teams to refine our ideas and develop a more concrete plan.

Idea Refinement / Development

Eventually, we settled on the idea of the "Data God" as a figure to narrate our story. From there, we discussed what data the Data God would use, how it would form the story, and how it would work technically.

We chose YouTube videos as our data source after some discussion. I suggested Twitter initially since I knew the API was easy to work with, but Jonathan Hook helped me find a way to interact with the YouTube API without requiring OAuth tokens and such.
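In practice, that just meant calling the YouTube Data API v3 with an API key. A minimal sketch of the request, assuming a fetchVideo helper and a YOUTUBE_API_KEY placeholder of my own naming:

// Fetch metadata for a single video using only an API key (no OAuth).
// The response has the shape { items: [{ snippet: { title, tags, categoryId, ... } }] }.
async function fetchVideo(videoId) {
    const url = "https://www.googleapis.com/youtube/v3/videos" +
        "?part=snippet&id=" + videoId + "&key=" + YOUTUBE_API_KEY;
    const response = await fetch(url);
    return response.json();
}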

I started writing code to interact with YouTube, aiming for it to be easy to plug into Cutting Room, which Marsha was working on. Donna and our new team member Robin Moore from the BBC were producing the video content, with Donna handling the audio and Robin the CGI.

Throughout the development process, I continued working closely with my team, identifying useful aspects of the videos we could use as data and figuring out how to respond to them. Initially, we thought of using the video tags to determine the outcome, but without doing NLP (Natural Language Processing), it would be quite challenging given the time constraints.

After further discussion, we decided that the "Data God" would ask for a video that made the user feel good, one that they didn't like, and a guilty pleasure video. We categorized the videos into music, food, and animal videos (implemented for the demo) and provided generic responses like "that music hurts my ears."
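A simple lookup keyed by category and reaction was enough for those canned lines. A sketch of the idea, where only the first line below is one we actually used and the rest are illustrative:

// Illustrative sketch of the canned-response lookup; apart from the music
// line, the wording here is made up rather than our exact copy.
const responses = {
    Music:   { Good: "That music hurts my ears.", Bad: "Finally, some taste." },
    Food:    { Good: "All that grease will clog my circuits.", Bad: "Probably for the best." },
    Animals: { Good: "Cute, I suppose.", Bad: "How could you dislike that?" }
};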

During development, it took me longer than expected to realize that the YouTube API provided a category ID property for each video. It was a numeric string, and I found a mapping for it online.
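The mapping itself is just a lookup from the numeric ID string to a readable name; a trimmed-down sketch with only the IDs relevant to our demo:

// Partial map from YouTube's numeric categoryId values to readable names.
const categories = {
    1:  "Film & Animation",
    10: "Music",
    15: "Pets & Animals",
    23: "Comedy",
    24: "Entertainment"
};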

As it was getting close to dinner time, I was working on the website and needed a relevant category for food videos. However, the API doesn't have a specific category for food, so I had to add my own check for it.

After dinner (we had around 40 pizzas for about the same number of attendees), I wrote the following function to get a category, including food, before calling it a night in preparation for day 2.

// Map a YouTube API video resource to a readable category name. The API has
// no dedicated "Food" category, so any video tagged "food" gets one; otherwise
// the numeric categoryId is looked up in the `categories` mapping.
function getCategory(video) {
    let snippet = video.items[0].snippet,
        isFood = (snippet.tags || []).some(tag => tag.toLowerCase().includes("food"));
    return isFood ? "Food" : categories[parseInt(snippet.categoryId, 10)];
}

Day Two

Arriving at the Shed again at 9 AM, we had only three hours left until the deadline, so the pressure was on. We had come up with ideas overnight, and one we implemented was having the "Data God" suggest a video based on the one given. For example, if you provided a food video, it might suggest an exercise video instead.

From a development perspective, this wasn't too difficult: I created another object of suggested videos for each category, along with a function that returns a video ID which could then be passed to the functions I had written previously.

// Pick a video to suggest back: a "Good" video gets a suggestion from the
// opposing "Bad" pool and vice versa. Categories without a curated list fall
// back to the generic `random` pools, and blank entries fall back to a
// placeholder video ID.
function getSuggestedId(category, type) {
    let reverseType = (type === "Good") ? "Bad" : "Good",
        pool = suggested.hasOwnProperty(category) ? suggested[category][reverseType] : random[reverseType],
        video = pool[Math.floor(Math.random() * pool.length)];
    return (video === "") ? "NpEaa2P7qZI" : video;
}

This detail isn't crucial to understand, but because I hadn't fully populated the arrays, the function falls back to a placeholder video whenever it picks a blank string/ID.
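For reference, the lookup tables the function reads from look roughly like this (real video IDs omitted; the blank strings are exactly those unfilled slots):

// Sketch of the suggestion pools getSuggestedId reads from; blank strings
// are the unfilled slots that trigger the placeholder fallback above.
const suggested = {
    Food:  { Good: [""], Bad: [""] },
    Music: { Good: [""], Bad: [""] }
};
const random = { Good: [""], Bad: [""] };   // generic fallback for other categories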

From there, it was time to combine my script with what Marsha was working on in Cutting Room, while Robin and Donna produced an example video to use in the demo if we couldn't make it work in time. I briefly discussed it with Davy, who suggested converting my code into a class on a new branch so it could be instantiated in Cutting Room more easily. However, I ran into issues with async functions not being detected within the class, even after spending roughly half an hour on it.
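For illustration, the shape I was aiming for was roughly the following, reusing the helpers sketched earlier; the DataGod and analyse names are mine for this post, not what ended up in the repo:

// Hypothetical sketch of the class wrapper Davy suggested.
class DataGod {
    async analyse(videoId, reaction) {
        const video = await fetchVideo(videoId);    // YouTube helper from earlier
        const category = getCategory(video);        // e.g. "Music", "Food", "Pets & Animals"
        return { category, suggestedId: getSuggestedId(category, reaction) };
    }
}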

While working on the class, I essentially rubber-duck debugged with Alexa Steinbrück as she mapped out the general user interactions. Davy came over and took a look, but he initially misunderstood our approach, suggesting that Cutting Room suited what we were doing when it didn't. It would have been easier for me to code it myself, which we had the option to do initially but decided against so that the work was spread across the team.

With only one hour left, I attempted to get the video playback working with the user flow and interactions on a new branch. Theoretically, what I produced in that time would cover the first half of the demo, relaying the "Data God's" response to a music, food, or animal video. However, it didn't work in time.

Demo

Luckily, Robin had created a finished example interaction video with Donna, which we played at the beginning of our presentation. Then, I demoed what I had completed for the website that morning: it allowed users to give the "Data God" a video ID, which it would analyze before suggesting an alternative video to watch.

We were the first team to present, followed by the other teams, whose demos can be seen on Hack Dash. Each team's project was interesting in its own right. After all the teams had finished, we voted for the best projects in three categories.

Our team won the "Most Original Concept" award, and we hope to continue working on the project together. We also plan to keep in touch with everyone, especially the amazing organizers and mentors of the event.
